Sacha Giese, Head Geek at SolarWinds, has penned an exclusive op-ed for February’s edition of CNME, in which he outlines how a multi-cloud approach and strategy can be a driver for digital transformation.
In most regions of the world, cloud adoption is slower than in the USA or China, for various reasons.
But since the beginning of the pandemic, and the measures necessary to enable remote work, we have seen an increase.
Plans for digital transformation that spanned the next few years have been accelerated and become reality within just a few months.
And even though the knowledge and expertise of IT professionals is continuously increasing, we receive loads of questions about managing the service level of a cloud environment, and of more complex multi-cloud environments in particular.
There are various and valid reasons why a modern organisation would use multiple cloud providers instead of just one.
In some cases, the driver behind multi-cloud is to increase the availability of an application.
Using more than one provider allows active/active and active/passive designs. A multi-cloud active/active design can be used to prevent not only outages of a single provider but also of a specific region as a whole, with DNS typically directing traffic between the providers.
In an active/passive design, the second cloud provider acts as a failover in a disaster recovery scenario, covering not only provider outages but also temporary attacks, such as a DDoS.
Both approaches are crucial for any business that plans to get serious about offering digital services. Customers want access from any device, any location, and at any time, so availability and reliability are among the most important pillars.
In most cases, on the other hand, the intention is to use both—or multiple—providers based on the current business need, or to split a process between them. The most common model is to use one provider for the production environment, and a different one for preprod/dev. This prevents accidental changes on the production system that could feel like an act of self-inflicted sabotage to the business.
It could also help save money, considering a development environment doesn't need to be available 24/7 and could be suspended automatically outside development business hours. Yes, even developers would like to go home after work. At least once a week.
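To make the idea concrete, the suspend/resume decision boils down to a simple schedule check. This is a minimal sketch: the weekday and hour boundaries are assumptions for illustration, and the result would feed a scheduled job that calls the provider's stop/start API.

```python
from datetime import datetime, time

# Hypothetical dev-environment schedule: weekdays, 07:00-19:00.
BUSINESS_START = time(7, 0)
BUSINESS_END = time(19, 0)

def dev_env_should_run(now: datetime) -> bool:
    """Return True if the dev environment should be up at `now`."""
    if now.weekday() >= 5:  # Saturday/Sunday: keep it suspended
        return False
    return BUSINESS_START <= now.time() < BUSINESS_END
```

A cron-style job evaluating this every few minutes could then stop or start the instances accordingly, for example via boto3's `ec2.stop_instances` on AWS.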
A clear separation of platforms based on projects also helps with billing in general, as there might be a specific budget for a project and it’s easier to keep control if the bills from vendor A go into the budget pool of project A, etc.
For international organisations concerned about losing sovereignty over their data, the situation is a bit different.
Multi-cloud can be a perfect way to stay compliant. Sensitive data could be stored in “local” private clouds, which are basically datacentres inside the region, where controlling data and access is feasible.
The data stays encrypted, while the latest innovations and theoretically endless resources of the major public cloud providers are used for compute. Decryption happens only on the end devices, the corporate laptops used by employees wherever they are, with an enforced two-factor authentication policy.
Whatever the reasons for a multi-cloud deployment, management and transparency are important parts of the digital strategy.
As performance data looks different on each platform, for example the proprietary CloudWatch on AWS versus PowerShell-based collection on Azure, an independent monitoring system helps with collecting and visualising key performance indicators.
Solutions like the SolarWinds Orion Platform support monitoring of AWS and Azure environments out of the box and allow you to compare data from both cloud providers, on top of managing on-premises entities or other cloud providers using standard protocols.
What Data Should Be Collected?
Short answer: A lot. But it’s a bit more complicated than that.
In general, the API calls retrieve information about the underlying infrastructure, starting with regions/locations; network information such as IPs, DNS, and connections; storage details, including attached volumes; more specific information such as placement groups and availability zones; more general information such as resource use; and some security bits.
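Once collected, those provider-specific payloads have to be folded into one common shape before they can be compared side by side. A minimal sketch, assuming simplified stand-in field names rather than the exact CloudWatch or Azure Monitor response schemas:

```python
def normalise_metric(provider: str, raw: dict) -> dict:
    """Map a provider-specific metric sample onto one common KPI shape.

    The input field names are simplified assumptions for illustration,
    not the real CloudWatch or Azure Monitor schemas.
    """
    if provider == "aws":
        return {"resource": raw["InstanceId"],
                "metric": raw["MetricName"],
                "value": raw["Average"],
                "region": raw["Region"]}
    if provider == "azure":
        return {"resource": raw["resourceId"],
                "metric": raw["name"],
                "value": raw["average"],
                "region": raw["location"]}
    raise ValueError(f"unknown provider: {provider}")
```

With every sample in the same shape, dashboards and alert thresholds can be defined once and applied across both clouds.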
Azure is a bit more “accessible” using PowerShell, so it’s easier to collect specific information like site-to-site connections, or subscription information.
To get information from the OS or application level, an agent on the VM/instance is required. The agent is available for Windows and various flavours of Linux, and works in push mode, automatically sending metrics to the monitoring platform. Other collection methods retrieve Amazon Route 53 and Azure DNS zones and their records, as well as VNets and their gateways, or storage.
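The push model itself is straightforward: the agent periodically serialises a sample and sends it to the platform. A minimal sketch of the serialisation step, with a payload schema that is an assumption for illustration, not the actual SolarWinds agent protocol:

```python
import json
import time
from typing import Optional

def build_payload(hostname: str, metrics: dict,
                  timestamp: Optional[int] = None) -> str:
    """Serialise one metrics sample as JSON, the way a push-mode agent might."""
    return json.dumps({
        "host": hostname,
        # Default to "now" so callers normally omit the timestamp.
        "timestamp": timestamp if timestamp is not None else int(time.time()),
        "metrics": metrics,
    })
```

A scheduled loop on the VM would then POST this payload to the monitoring platform's ingestion endpoint over TLS.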
What About the Connections?
Connectivity between resources is often overlooked in monitoring in general. What's easy in an old-school environment becomes a bit of a challenge between one cloud provider and another.
A common and simple use case could be an application sitting in AWS, maybe a SharePoint Server, but its database is hosted in Azure because of the attractive SQL licensing.
Automated application dependency mapping can discover the relationship between them and even plot it on a map for dynamic visualisation.
We've already seen that it's not a big deal to monitor both the machines and the application independently of the actual location or deployment type, but the IT team needs to ensure they play nicely together.
This is where a TCP-based hop-by-hop analysis comes in handy, showing every single node in the path between the app and its database.
Using an independent monitoring system is a huge advantage, as it allows you to merge the individual layers, metrics, and variables collected from a hybrid multi-cloud environment into what matters in the end: the IT-based business processes keeping the organisation alive, helping to advance digital transformation even in difficult times.