Brent Lees, Senior Product Marketing Manager, EMEA, Riverbed Technology
Global enterprises and growing businesses are harnessing IT to add branch offices and remote sites and to enable mobile users. Simultaneously, they are shrinking IT administrative footprints, boosting productivity, working more efficiently, and improving the bottom line in other ways.
IT initiatives such as data centre consolidation, cloud computing, virtualisation, and new application deployments can help meet all of these objectives, but in many cases, these projects are major undertakings that consume significant time and resources, and are fraught with risk. Planning for these IT changes can be a long and error-prone process, but an accurate and comprehensive understanding of IT infrastructure is a necessity for successful implementations and smoother transitions.
To avoid problems and make good decisions, organisations must create high-quality project plans with accurate and detailed information available before, during, and after migration activities. For example, if a business were to move one of its Oracle servers, it would be critical to understand all the databases and components that connect to it so that every connection could be re-established after the move. Without precise knowledge of the end-to-end service delivery path, it would be easy to misconfigure the new environment and possibly cause an outage. Some organisations have come to realise that documentation and historical asset inventories are rarely up to date, and that the people who built them have invariably moved on, leaving an unreliable basis for planning.
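The dependency question behind that Oracle example can be made concrete. The sketch below, using purely illustrative addresses and a made-up flow-record format (real deployments would draw on NetFlow/IPFIX-style data), shows how observed traffic can enumerate every host that connects to a server slated for migration:

```python
from collections import defaultdict

# Hypothetical flow records observed on the network, as
# (source_ip, destination_ip, destination_port) tuples.
# All addresses and ports here are illustrative assumptions.
flows = [
    ("10.1.1.20", "10.1.1.5", 1521),  # app host -> Oracle listener
    ("10.1.1.21", "10.1.1.5", 1521),  # reporting host -> Oracle listener
    ("10.1.1.20", "10.1.1.8", 443),   # app host -> web tier (unrelated)
]

def dependents_of(server_ip, flows):
    """Map each host seen connecting to server_ip to the ports it used."""
    deps = defaultdict(set)
    for src, dst, dport in flows:
        if dst == server_ip:
            deps[src].add(dport)
    return dict(deps)

# Before migrating 10.1.1.5, every entry returned here represents a
# connection that must be re-established after the move.
print(dependents_of("10.1.1.5", flows))
```

Even this toy version captures the planning value: the dependency list comes from what the network actually carried, not from a stale inventory document.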
To improve planning, organisations have taken to approaches like the “clipboard method” of manually inventorying IT assets and mapping physical components, or by using client/server-based discovery agents or scan-based discovery tools. However, these approaches can be time-consuming, expensive and limited in functionality. More importantly, these methods can leave gaps and introduce new problems, such as network performance degradation, and they lack the on-going visibility needed to identify and troubleshoot performance issues.
A better approach combines thorough up-front planning with the ability to identify and fix problems quickly. To accomplish this, organisations use network performance management tools for end-to-end visibility into a new environment and to spot early signs that performance is degrading. Fortunately, these tools are becoming more sophisticated with the addition of discovery and dependency mapping capabilities, going far beyond providing a basic collection of packet and flow data.
Network performance management is also no longer just about capacity planning, collecting and analysing application performance on the network, watching for trouble spots, and the myriad other tasks associated with keeping the network running smoothly. Network performance management is evolving into service performance management, the ability to monitor the performance of networked application services as a whole, rather than in piecemeal components.
In order for IT to monitor service health across a global organisation, it must first understand what is involved in delivering the service to end users. That requires an understanding of all the individual components that work together to provide the complete service: the application delivery controllers (ADCs), Web servers, authentication servers, application servers, and databases.
This same level of visibility is also critical to minimising the risks associated with IT change projects. The sooner all assets and dependencies are identified, the faster the change can be implemented, but speed is not the only concern. The solution must also ensure that critical dependencies are not overlooked; otherwise an essential server that nobody thought was still in use, such as the server under a developer's desk that turns out to be part of the production environment, could be shut down. This can be accomplished efficiently with a network flow monitoring solution that recognises when a host is acting as a client and when it is acting as a server.
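That client-versus-server distinction falls out of flow data quite naturally. A minimal sketch, again assuming an illustrative record format where each flow is already oriented from client to server (real flow collectors must infer this direction themselves, for example from which side holds the well-known port):

```python
from collections import defaultdict

# Hypothetical client-oriented flow records:
# (client_ip, client_port, server_ip, server_port).
# Addresses and ports below are illustrative assumptions.
flows = [
    ("10.0.0.20", 51200, "10.0.0.9", 1521),  # app host -> database listener
    ("10.0.0.9",  40311, "10.0.0.12", 389),  # database host -> LDAP directory
    ("10.0.0.21", 50514, "10.0.0.9", 1521),  # second app host -> database
]

def classify_roles(flows):
    """Return, per host, the set of roles ('client', 'server') observed."""
    roles = defaultdict(set)
    for client_ip, _cport, server_ip, _sport in flows:
        roles[client_ip].add("client")
        roles[server_ip].add("server")
    return dict(roles)

roles = classify_roles(flows)
# 10.0.0.9 shows up in both roles: it serves the app hosts while
# depending on the directory server itself, a chain a static asset
# inventory could easily miss.
```

A host that appears in both roles is exactly the kind of hidden dependency the article warns about: decommissioning it based on its "server" inventory entry alone would also sever the services it consumes.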
Some network performance management solutions have evolved to equip IT with the sophistication to ensure the network can deliver and sustain the performance required to optimally run the business. The emergence of application discovery and dependency mapping (ADDM) capabilities within traditional network performance management solutions can now accelerate complex IT initiatives by enabling fast, accurate and complete discovery of IT assets and their dependencies. ADDM solutions also validate performance and troubleshoot issues before, during and after an IT change event, providing significant benefits over traditional scan or agent-based discovery solutions.
IT departments are starting to use these tools to create agile infrastructures even as they grow more complex with new technologies, such as cloud computing and data centre consolidation. Ultimately, ADDM solutions can enable successful IT deployments for now and well into the future as organisations continue to make technology-based adjustments to align with their business goals.