
Red Hat’s virtualisation road map

Red Hat has been around for a while now, and you've been with the company for almost eight years. How important do you think the company has been in the development of open source software in the enterprise space?

I think we've been the driver of it, frankly. When I joined the company almost eight years ago the big milestone was finishing up a release in early October so we could get it out and on the shelves in time for the Christmas retail market. One of the things that we did back then was recognize that open source was a great new and emerging development model.

But from a commercial perspective, customers were having trouble consuming it, because open source moves very fast from one release to the next. There wasn't really any notion of stability, of building an ecosystem around it, or of building a subscription around it where customers could actually help drive where it was going. So that is where we came up with RHEL, and frankly I think that was the major driver in making open source a viable alternative in the commercial market. When you look at it today we're running most of Wall Street, we're in most if not all the major verticals, and we're approaching three million subscriptions in the commercial space, so we've made a pretty good dent in the enterprise space.

With Red Hat's Linux-based hypervisor, what is the company aiming to achieve in the virtualization space and why does the market need an open source management platform?

Ubiquity. Virtualization really is the next generation operating system. If you look at what virtualization does and what the lower half of an operating system does, it's one and the same: that's the part that interfaces with the hardware. Right now a RHEL offering of some sort is used in almost all of the other hypervisors as the hardware enablement layer, so it really is the next generation OS. We think that virtualization is the foundation of one of the next big waves in computing: cloud computing. Cloud computing is the ability to run across private networks, semi-private networks and public networks. Doing that in a proprietary way will limit how far you can go; it will limit what types of networks, and which public networks, you can run across. So you have to do it in an open source way, with open interfaces, in order to get that interoperability. So we think that open source virtualization will really drive the next wave, which is cloud computing.

With Red Hat being the main driver of commercial open source operating systems, we think we're in a great position to help make virtualization ubiquitous across all hardware platforms. If you look at it now, virtualization runs on somewhere between six and eight percent of servers, and we think that in the next two years that will be 90 percent of servers.

We think that Linux in the commercial space, and RHEL specifically, opened up and really showed the viability and advantages of having open source in the enterprise. Since virtualization is the bottom half of the OS, I think it would be a step backwards for the world to go to a proprietary virtualization layer, because we would be back to one or two companies dictating when vendors can ship their hardware, what they can support, and how they perform. I think people are starting to see that and don't want to lose the advantages they've gained with open source operating systems.

Red Hat CEO Jim Whitehurst said that the value in virtualization from Red Hat's perspective is less about server consolidation and more about what new functionality or architectures virtualization can enable. Could you elaborate on this a little more?

Server consolidation is one use case for virtualization, and unfortunately when virtualization came out everybody sort of pounced on the server consolidation thing. Server consolidation certainly is a use case, but there are other very important ones: with the ability to split the hardware layer from the layer that interacts with the ISVs, you can now run multiple versions of a particular operating system on one set of hardware. So if you're not ready to move an application from one version to the next, you can run it right next to the new version of the operating system, and not have to spread out your applications and buy new boxes while you migrate them. That is an important use case. We also think high availability and fail-over is a very important application.
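To make that side-by-side use case concrete, here is a minimal sketch using the libvirt Python bindings, assuming a KVM host on which two guests have already been defined; the guest names and versions are hypothetical:

```python
# Minimal sketch with the libvirt Python bindings: two versions of the same
# OS running side by side on one KVM host. The guest names are hypothetical
# and assumed to be already defined on the host.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor

for name in ("rhel4-legacy-app", "rhel5-migration-target"):
    dom = conn.lookupByName(name)
    if not dom.isActive():
        dom.create()  # boot the guest if it is not already running
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print(f"{name}: {vcpus} vCPUs, {mem // 1024} MiB")  # memory is reported in KiB

conn.close()
```

Each guest carries its own kernel and userspace, so the legacy application keeps running untouched while its replacement is validated on the same hardware.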

One of the next big things coming is the virtual desktop: the ability to have a very, very inexpensive Linux thin client on the desktop, with a Windows OS or a Linux OS served from a remote server back in the data center to desktops scattered around the enterprise. We think that's a huge use case that solves a lot of management and scalability problems.

So we think server consolidation is a use case, but there are many other use cases out there that we've only just started to scratch the surface of. And that's not to mention the cloud computing you hear so much about, which is probably the biggest use case for virtualization.

Rather than following Microsoft and others in building its own infrastructure to host its software, Red Hat is using Amazon's EC2. Can you tell me about the company's approach to cloud computing?

Our position, or what cloud computing means to us, is that you will see at least three kinds of networks: private networks, as an enterprise may have today, where it controls all the resources on the network; semi-private networks, where you may have a data center whose resources are shared with a trusted partner or a series of trusted partners; and public networks such as Amazon EC2. We think that customers will want to run any combination or permutation of those three, and they will want to manage it as if all the resources were in front of them – the ability to spin up, spin down, provision and monitor those resources even when they sit in the public network. So that's how we think it's going to happen; we think there will be lots of public networks out there.
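As a rough illustration of that "spin up, spin down, provision, monitor" model against a public network, here is a minimal sketch using the boto3 AWS SDK; the image ID, region and instance type are placeholder assumptions, not values from the interview:

```python
# Minimal sketch of spin up / monitor / spin down against a public network,
# using the boto3 AWS SDK. The AMI ID, region and instance type are
# placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Spin up: provision one instance from a (hypothetical) RHEL image.
resp = ec2.run_instances(ImageId="ami-00000000", InstanceType="t2.micro",
                         MinCount=1, MaxCount=1)
instance_id = resp["Instances"][0]["InstanceId"]

# Monitor: block until the instance reports the "running" state.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

# Spin down: release the resource when the workload is finished.
ec2.terminate_instances(InstanceIds=[instance_id])
```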

I can't tell for sure, but the path Microsoft looks to be going down is similar to what they do with their operating systems and applications. You could play in that game, but you'll be locked into their cost, their pricing and their terms. What we're doing instead is enabling enterprises, as well as the other people running these public networks, with the software to work within that structure. That's where open interfaces and open source come in, as I think there will be many, many types of these networks. You shouldn't care where you're getting the service from; you should just care that you can work within that service, and you can't do that in a proprietary, closed way. It has to be open source.

I think that's also one of the reasons why you're seeing so much in infrastructure software being done from an open source perspective. Our goal is not to see ourselves as just a Linux vendor – that's certainly important and where we cut our teeth – but we see ourselves as pushing open source into viable commercial solutions across the entire infrastructure.
