Opinion

Expanding horizons

Taj ElKhayat, Managing Director, METNA, Riverbed Technology

Taj ElKhayat, Managing Director, Middle East, Turkey, North, West, and Central Africa at Riverbed Technology, explains how to overcome challenges in location-independent computing.

The explosion of mobile, cloud and social technologies has dramatically changed the business landscape. Today, distance is no longer a barrier to business success. An organisation's applications, data centre and offices can be on opposite sides of the world, yet the business can still achieve the same reliability and performance that it has come to expect.

This is the definition of location-independent computing: the ability to turn distance and location into a competitive advantage by hosting applications and data in optimal locations while ensuring flawless application performance and the best user experience. MENA's appetite for applications and services delivered to devices irrespective of geographic location is evident from the region's growing interest in cloud computing, which enables precisely this. Gartner has, in fact, predicted that $3.8 billion will be spent on cloud services in the region from 2013 through 2017.

Achieving true location-independent computing requires an application performance platform that maintains visibility no matter where applications are hosted or when they are accessed. But IT leaders face challenges that include the growing popularity of BYOD, the proliferation of business-critical applications, the need to extend virtualisation beyond servers, the shift to ubiquitous wireless networks and, of course, stagnant or at best marginally growing IT budgets.

Providing visibility into critical services in the face of all these challenges is a daunting task. Organisations must be nimble and invest in monitoring solutions that can adapt to the new world of location-independent computing. So, what should organisations look for when evaluating a visibility solution? Here are a few requirements to take into account.

Measure performance where it matters

An end user's experience with an application can mean the difference between success and failure. So, wherever possible, measuring the actual performance experienced by the end user, on their system and from their browser, is the best determinant of the quality of their experience. A measurement tool that captures detailed data about individual transactions while also presenting high-level data for all users by country or browser type is ideal. This addresses one of the most important issues IT faces: examining how well an application performs regardless of the end user's location or platform.
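To make this concrete, here is a minimal sketch in Python of rolling individual real-user timing records up into high-level views by country or browser. The record fields and sample values are hypothetical, purely for illustration:

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import median

@dataclass
class Beacon:
    """One real-user measurement, e.g. reported from the end user's browser."""
    url: str
    country: str      # hypothetical: derived from the client IP
    browser: str      # hypothetical: parsed from the User-Agent header
    load_ms: float    # page-load time as experienced by the end user

def summarise(beacons, key):
    """Roll individual transactions up into a high-level view (median load time)."""
    groups = defaultdict(list)
    for b in beacons:
        groups[getattr(b, key)].append(b.load_ms)
    return {k: median(v) for k, v in groups.items()}

# Fabricated sample beacons, for illustration only.
beacons = [
    Beacon("/checkout", "AE", "Chrome", 820.0),
    Beacon("/checkout", "AE", "Firefox", 990.0),
    Beacon("/checkout", "TR", "Chrome", 1430.0),
]
print(summarise(beacons, "country"))   # {'AE': 905.0, 'TR': 1430.0}
print(summarise(beacons, "browser"))   # {'Chrome': 1125.0, 'Firefox': 990.0}
```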

Visibility across the enterprise

As applications are shifted around the data centre or to the cloud as a result of consolidation, cost savings or virtualisation, it is not always practical to quickly relocate monitoring tools to avoid a loss of visibility. Organisations should therefore leverage monitoring solutions that are an embedded part of the infrastructure. For example, solutions that use flow data such as NetFlow, collected throughout the environment, will provide the much-needed application insight no matter where the application is moved.
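To illustrate why flow data travels well, here is a minimal sketch of a NetFlow v5 collector in Python. It attributes traffic to applications by server port rather than server address, so the totals survive the application being rehosted; the port-to-application map is an assumption for illustration:

```python
import socket
import struct
from collections import defaultdict

HEADER = struct.Struct("!HHIIIIBBH")                # NetFlow v5 header (24 bytes)
RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")  # NetFlow v5 flow record (48 bytes)

# Hypothetical mapping of well-known server ports to application names.
APP_BY_PORT = {443: "web-portal", 1521: "oracle-db", 8080: "intranet"}

bytes_by_app = defaultdict(int)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))                        # standard NetFlow export port

while True:
    datagram, exporter = sock.recvfrom(65535)
    if len(datagram) < HEADER.size:
        continue
    version, count, *_ = HEADER.unpack_from(datagram, 0)
    if version != 5:
        continue                                    # this sketch only handles v5
    for i in range(count):
        rec = RECORD.unpack_from(datagram, HEADER.size + i * RECORD.size)
        octets, srcport, dstport = rec[6], rec[9], rec[10]
        # Attribute traffic to the application by server port, not server IP,
        # so visibility follows the app wherever it is rehosted.
        app = APP_BY_PORT.get(dstport) or APP_BY_PORT.get(srcport)
        if app:
            bytes_by_app[app] += octets
            print(f"{app}: {bytes_by_app[app]} bytes total (from {exporter[0]})")
```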

Flexible deployment

With applications increasingly virtualised and run in the cloud, appliance-based performance management solutions aren't always practical. That's why it is critical to implement a visibility solution that is as flexible as the application itself. If a virtualised application is migrated to a different set of servers, the monitoring components must be easy to relocate as well, without loss of visibility. In fact, many organisations include the virtual monitoring solution as part of the overall application deployment definition to ensure that visibility is always available and not an afterthought.
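One way to keep monitoring from becoming an afterthought is to encode it in the deployment definition itself and validate it at deploy time. A hypothetical sketch in Python, with invented image names:

```python
# Hypothetical deployment definition in which the virtual monitoring
# component travels with the application, so migrating the app to new
# servers automatically brings its visibility along.
deployment = {
    "app": {"image": "erp-frontend:2.1", "replicas": 4},
    "monitoring": {"image": "apm-agent:1.8", "mode": "sidecar"},  # assumed agent
}

def validate(spec: dict) -> None:
    """Reject any deployment that does not declare its monitoring component."""
    if "monitoring" not in spec:
        raise ValueError("deployment must include a monitoring component")

validate(deployment)  # passes; a spec without "monitoring" would be rejected
```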

Performance matters

Quickly resolving complex application problems requires access to a wealth of detailed metrics. A solution that can efficiently store and quickly retrieve the relevant data can make the difference between solving a problem in minutes versus hours or even days. The product must have user-friendly workflows that enable IT teams to drill down quickly from summary-level views to low-level metrics. A solution should be evaluated in scenarios as close to production as possible; only by simulating real-world scenarios can an IT organisation be confident that the monitoring solution will hold up when it really matters.
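As a toy illustration of that summary-to-detail workflow, here is a sketch in Python of a store that answers high-level queries from pre-computed rollups while retaining raw samples for drill-down (all values fabricated):

```python
from collections import defaultdict

class MetricStore:
    """Keeps raw samples for drill-down plus per-minute rollups for fast summaries."""
    def __init__(self):
        self.raw = defaultdict(list)        # (metric, minute) -> [(ts, value), ...]
        self.rollup = defaultdict(float)    # (metric, minute) -> running max

    def record(self, metric: str, ts: float, value: float) -> None:
        minute = int(ts // 60)
        self.raw[(metric, minute)].append((ts, value))
        self.rollup[(metric, minute)] = max(self.rollup[(metric, minute)], value)

    def summary(self, metric: str, minute: int) -> float:
        return self.rollup[(metric, minute)]   # fast: no scan of raw data

    def drill_down(self, metric: str, minute: int):
        return self.raw[(metric, minute)]      # detailed samples on demand

store = MetricStore()
store.record("checkout.latency_ms", 120.0, 340.0)
store.record("checkout.latency_ms", 150.0, 2900.0)  # the spike a summary reveals
print(store.summary("checkout.latency_ms", 2))      # 2900.0
print(store.drill_down("checkout.latency_ms", 2))   # the individual transactions
```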

Measuring performance into the code

Being able to instrument and measure the performance of a running application is an important aspect of any performance management solution. For modern applications, this means supporting the development environments, such as Java and .NET, commonly used to build enterprise applications. It also means having instrumentation that provides comprehensive metrics, measured each second, with overhead low enough not to introduce performance problems of its own.
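By way of analogy (sketched in Python rather than Java or .NET), low-overhead instrumentation with per-second aggregation might look like the following, which measures each call and folds it into one-second buckets:

```python
import time
from collections import defaultdict
from functools import wraps

# second -> [call_count, total_ms]; aggregating per second keeps overhead low
# compared with shipping every individual measurement.
buckets = defaultdict(lambda: [0, 0.0])

def instrument(func):
    """Measure each call and fold it into the current one-second bucket."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000.0
            bucket = buckets[int(time.time())]
            bucket[0] += 1
            bucket[1] += elapsed_ms
    return wrapper

@instrument
def handle_request():
    time.sleep(0.01)          # stand-in for real application work

for _ in range(5):
    handle_request()
for second, (count, total_ms) in buckets.items():
    print(f"{second}: {count} calls, avg {total_ms / count:.1f} ms")
```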

A high-performance big data repository is a must. It should be coupled with a powerful user interface that makes it easy to pivot from a high-level view of all transactions to the problematic areas of the code. Once developers spot a transaction of interest, they must be able to view the source code in context by jumping directly into their development environment at the exact place where the problem was detected. Finally, the flexibility to monitor applications both in development and in production with the same solution is essential.

Testing critical application capabilities

One of the best ways to ensure a critical application function is working is to test it frequently. Implementing automated tests that validate an application, whether in pre-production (test) or in production, will confirm it is operating as expected. These tests can be simple or arbitrarily complex, and if one fails, IT can address the problem, ideally before end users notice.
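A minimal synthetic check along these lines, sketched in Python using only the standard library; the endpoint and thresholds are placeholders:

```python
import time
import urllib.request

def check(url: str, timeout_s: float = 5.0, max_ms: float = 2000.0) -> bool:
    """Exercise one critical application function and validate the result."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return ok and elapsed_ms <= max_ms

# Run frequently (e.g. from a scheduler) so IT hears about a failure
# before end users do.
if not check("https://example.com/healthz"):   # placeholder endpoint
    print("ALERT: critical function failing or too slow")
```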

Scalability and analytics

As you might expect, collecting detailed data about transactions in a running application, or capturing all of the network traffic for that application, requires highly scalable data repositories that can be searched quickly. Unfortunately, too many organisations try to identify the cause of a problem by blindly searching large amounts of data, an inefficient and often hopeless exercise. Instead, comprehensive analytics that continuously monitor incoming data for signs of trouble provide the needed visibility and scalability.
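One common analytics technique, used here purely as an illustration and not necessarily what any given product implements, is to compare each incoming measurement against an exponentially weighted moving baseline:

```python
class Baseline:
    """Flags incoming measurements that drift far from a running baseline."""
    def __init__(self, alpha: float = 0.1, threshold: float = 3.0):
        self.alpha = alpha            # how quickly the baseline adapts
        self.threshold = threshold    # multiple of baseline that counts as trouble
        self.mean = None

    def observe(self, value: float) -> bool:
        if self.mean is None:
            self.mean = value
            return False
        anomalous = value > self.threshold * self.mean
        # Update the exponentially weighted moving average either way.
        self.mean = self.alpha * value + (1 - self.alpha) * self.mean
        return anomalous

b = Baseline()
for latency in [100, 105, 98, 110, 620, 102]:    # fabricated latency stream
    prev = b.mean
    if b.observe(latency):
        print(f"sign of trouble: {latency} ms vs baseline {prev:.0f} ms")
```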

Support for SDN

Software-defined networking (SDN) is a rapidly emerging trend. As network virtualisation becomes more commonplace, tools that understand both the logical network and how it relates to the physical infrastructure are critically important.
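As a simple illustration of correlating the logical and physical views, here is a sketch in Python that resolves a virtual machine on an overlay network to the physical host and switch port beneath it (all identifiers hypothetical):

```python
# Hypothetical inventory linking the logical (overlay) view to the physical one.
overlay = {"vm-web-01": {"logical_net": "tenant-a-vni-5001", "host": "hv-03"}}
underlay = {"hv-03": {"switch": "leaf-12", "port": "Ethernet1/7"}}

def locate(vm: str) -> str:
    """Map a logical endpoint down to the physical infrastructure beneath it."""
    logical = overlay[vm]
    physical = underlay[logical["host"]]
    return (f"{vm} on {logical['logical_net']} runs on {logical['host']} "
            f"via {physical['switch']} {physical['port']}")

print(locate("vm-web-01"))
# With both views joined, a slow logical network can be traced to the
# physical link actually carrying the packets.
```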

Mobility has brought tremendous freedom to today's workers and the businesses they serve, with the flexibility to access applications from anywhere. We are seeing similar freedom in the network, through cloud platforms and virtualisation, and in applications, through SaaS and new development platforms that spin up apps almost on demand. To drive success in this new landscape, CIOs need full visibility and control across an application performance infrastructure that is itself location-independent.

 
