With the evolution of IT, the network has undoubtedly changed from what it was a decade ago. Enterprise organisations need to look at upgrading their network management and monitoring tools to keep up with more demanding network activities.
Step back and imagine the world of technology 10 years ago. YouTube was in its infancy, the iPhone was more than a year away from release, BlackBerry was the smartest phone on the market and Twitter was barely making a peep.
While the masses are now glued to their iPhones watching cat videos and pontificating 140 characters at a time, the backend infrastructure that supports all of that watching and tweeting – not to mention electronic health records, industrial sensors, e-commerce, and thousands of other serious activities – has also undergone a massive evolution. Unfortunately, the tools tasked with monitoring and managing the performance, availability, and security of those infrastructures have not kept up with the scale of data or with the speed at which insight is required today.
There is no nice way to say this: What worked 10 years ago will not work now. Today, exponentially more data is moving exponentially faster. IT organisations that cling to the old models of monitoring and managing will be at a significant disadvantage to their counterparts that adapt by embracing new technologies.
Take Ethernet, for example. It’s been less than 20 years since the standard for 1Gbps was established, and less than 10 years since 10Gbps started to gain a meaningful foothold. Now, we’re looking at 40Gbps and 100Gbps speeds. It is a different world, and it’s not slowing down. According to the Global Cloud Index, “global IP traffic has increased fivefold over the past five years, and will increase threefold over the next five years.” Between 2014 and 2019, IP traffic is expected to grow 23 percent annually.
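Those two projections are consistent with each other, which is easy to verify with a quick compound-growth check. The sketch below uses only the 23 percent annual figure quoted above; it is illustrative arithmetic, not data from the Global Cloud Index itself.

```python
# Back-of-envelope check: does 23% annual growth really compound
# to roughly a threefold increase over five years?
cagr = 0.23    # annual IP traffic growth rate cited for 2014-2019
years = 5

growth = (1 + cagr) ** years
print(f"{cagr:.0%} annual growth over {years} years = {growth:.2f}x")
```

A 23 percent compound annual growth rate works out to about a 2.8x increase over five years, which squares with the "increase threefold" projection.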
Speeds and feeds are not the only forces at work. Server and application virtualisation, software-defined networking and cloud computing are also catalysts for IT change, reshaping how infrastructures are architected and resources are delivered.
Increasingly complex, dynamic and distributed, the network is a different place today than it was 10 years ago. Some view that as a problem to be solved. On the contrary, it’s an opportunity to be seized by forward-thinking network professionals.
The reality is that traditional network performance monitoring (NPM) technologies like packet sniffers and flow analysers can’t scale or evolve to meet this new demand. Capturing, storing and sniffing packets was relatively straightforward for ‘fast’ Ethernet supporting 100Mbps of throughput. At 100Gbps, capturing and storing terabytes’ worth of packets would require massive time and infrastructure investments, not to mention hours of a person’s life just to sniff a small subset of those packets.
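To put "terabytes" in perspective, the sketch below estimates how much full-packet capture would write to disk per hour at various link speeds. The 50 percent utilisation figure is an assumption chosen for illustration, not a number from the article.

```python
# Rough estimate of full-packet-capture storage at various link speeds.
# Assumes a sustained 50% link utilisation (illustrative assumption only,
# ignoring capture-file overhead).
def capture_bytes_per_hour(link_gbps: float, utilisation: float = 0.5) -> float:
    """Bytes written per hour when capturing every packet on the link."""
    bits_per_hour = link_gbps * 1e9 * utilisation * 3600
    return bits_per_hour / 8  # convert bits to bytes

for speed in (0.1, 1, 10, 100):  # 'fast' Ethernet through 100GbE
    tb = capture_bytes_per_hour(speed) / 1e12
    print(f"{speed:>5} Gbps -> {tb:,.2f} TB per hour at 50% utilisation")
```

Even at half utilisation, a 100Gbps link produces on the order of 22TB of raw packets per hour, while the 100Mbps links of the ‘fast’ Ethernet era produced a comparatively manageable 22GB or so.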
The market is starting to take notice. In its most recent Magic Quadrant and corresponding critical capabilities report for NPM and diagnostics, Gartner placed a high emphasis on operational analytics functionality capable of elevating network data beyond the realm of the network and even IT. At the same time, the analyst firm also noted stagnating innovation in the space, the result of legacy frameworks built for speeds and architectures that were already being phased out more than a decade ago. Put simply, these legacy architectures are ill-equipped to address the new realities that modern IT delivers.
While legacy architectures are stymying technological innovation, marketing innovation abounds. The monitoring and analytics markets have long been fraught with misleading statements, and vendors in these sectors are growing more and more adept at applying the latest buzzwords to antiquated technologies in the hopes of extending their lifespans by a few short years.
Compounding this problem is the lack of transparency in these markets. Reasonable comparisons of competitive offerings are nearly impossible because so few vendors publish their performance numbers. Even when they do, definitions are often fluid, confusing, or outright misleading, making it a massive challenge to put those numbers in context.
As enterprise customers increasingly look for next-generation solutions, it will be critical for them to understand the nuances of vendor terminology and architecture in order to separate and effectively assess what is actual functionality versus what is marketing gloss.
IT buyers deserve a fair, real, honest comparison of vendor offerings – something the current climate of obscured performance numbers and loosely defined terms does not allow.
It is time for every vendor in the network performance monitoring sector – and frankly, every vendor in the IT operations management sector – to take action. Customers deserve the opportunity to make an apples-to-apples comparison of claims around performance, scale, and deployment.
Every IT person should be able to get a real answer to these and many other questions:
- How are services discovered and classified? Are they automatic, or do they rely on manual tagging and configuration?
- Can the solution be deployed in hybrid and cloud environments?
- Can it scale to 40Gbps, and does it have a path forward to 100Gbps?
- Does it decrypt SSL traffic at line rate?
- Does it provide visibility into Internet of Things devices?
- Is there a way for users to easily program the analysis or extend functionality?
- Does the product store packets before analysis?
- What are the storage requirements associated with data captured by the product?
Information technology is a different world than it was 10 years ago, and the demands a typical organisation experiences are increasing. Over the next 18 to 24 months, the massive shift IT is undergoing will start to meaningfully separate the wheat from the chaff in the NPM market, if for no other reason than that the solutions that cannot evolve will start to fail in deployment.