
IBM eyes the future of virtualization

The future of technology always has roots in the past. And the past is indeed long in the case of virtualization, a technology that is reshaping today's IT industry and will likely play a huge role in the building of next-generation data centers. Few people are more aware of that history than Jim Rymarczyk, who joined IBM as a programmer in the 1960s just as the mainframe giant was inventing virtualization.

Rymarczyk, still at Big Blue today as an IBM fellow and chief virtualization technologist, recalls using CP-67 software, one of IBM's first attempts at virtualizing mainframe operating systems. CP-67 and its follow-ups launched the virtualization market, giving customers the ability to greatly increase hardware utilization by running many applications at once. The partitioning concepts IBM developed for the mainframe eventually served as inspiration for VMware, which brought virtualization to x86 servers in 1999.

“Back in the mid-60s, everyone was using key punches and submitting batch jobs,” Rymarczyk says in a recent interview with Network World. “It was very inefficient and machines were quite expensive.”

The problem of implementing a time-sharing system that would let multiple users access the same computer simultaneously was not an easy one to solve. Most engineers were taking traditional batch operating systems and making them more interactive to let multiple users come into the system, but the operating system itself became extremely complex, Rymarczyk explains. IBM's engineering team in Cambridge, Mass., came up with a novel approach that gave each user a virtual machine (VM), with an operating system that doesn't have to be complex because it only has to support one user, he says.

The first stake in the ground was CP-40, an operating system for the System/360 mainframe that IBM's Robert Creasy and Les Comeau started developing in 1964 to create VMs within the mainframe. It was quickly replaced by CP-67, the second version of IBM's hypervisor, which Rymarczyk began using upon joining IBM's Cambridge operations in 1968. The early hypervisor gave each mainframe user what was called a conversational monitor system (CMS), essentially a single-user operating system. The hypervisor provided the resources while the CMS supported the time-sharing capabilities. CP-67 enabled memory sharing across VMs while giving each user his own virtual memory space.

Rymarczyk says he got to know several of the CP-67 developers and describes himself as one of their “guinea pigs.” But even in these early days of virtualization, the technology's benefits were clear.

“What was most impressive was how well it worked and how powerful it was,” Rymarczyk says. “It let you provide test platforms for software testing and development so that now all of that activity could be done so much more efficiently. It could be interactive too. You could be running a test operating system. When it failed you could look in virtual memory at exactly what was happening. It made debugging and testing much more effective.”

IBM's first hypervisors were used internally and made available publicly in a quasi-open source model. Virtualization was “an internal research project, experimental engineering and design,” Rymarczyk says. “It wasn't originally planned as a product.”

The hypervisor did become a commercially available product in 1972 with VM technology for the mainframe. But it was an important technology even before its commercial release, Rymarczyk says.

“In the late 1960s it very quickly became a critical piece of IT technology,” he says. “People were using it heavily to do interactive computing, to develop programs. It was a far more productive way to do it, rather than submit batch jobs.”

When Rymarczyk joined IBM on a full-time basis he was working on an experimental time-sharing system, a separate project that was phased out in favor of the CP-67 code base. CP-67 was more flexible and efficient in terms of deploying VMs for all kinds of development scenarios, and for consolidating physical hardware, he says.

While Rymarczyk didn't invent virtualization, he has played a key role in advancing the technology over the past four decades. A graduate of the Massachusetts Institute of Technology in electrical engineering and computer science, Rymarczyk worked for IBM in Cambridge until 1974, when he transferred to the Poughkeepsie, N.Y., lab, where he stayed for two decades.

In the early 1990s, Rymarczyk helped develop Parallel Sysplex, an IBM technology that lets customers build clusters of as many as 32 mainframe systems to share workloads and ensure high availability. He was also one of the lead designers of Processor Resource/Systems Manager (PR/SM), which lets users logically slice a single processor into multiple partitions.

In 1994, Rymarczyk transferred to IBM's lab in Austin, Texas, as part of an effort to bring mainframe technology and expertise to IBM Power systems. This helped spur the creation of a hypervisor for IBM's Power-based servers in 1999. Rymarczyk is still based in Austin, and has no plans to leave IBM.

As chief virtualization technologist, “my main focus now is looking at the bigger picture of IT complexity and cost, and how we can leverage virtualization as well as other technologies to get cost and complexity under control,” he says. “We just can't afford to keep doing IT the way we do it today.”

Rymarczyk watched with interest as VMware adapted the concepts behind IBM's virtualization technology to x86 systems. In some ways, VMware's task was more difficult than IBM's because the Intel and AMD x86 processors used in most corporate data centers were not built with virtualization in mind. With the mainframe, IBM has total control over both the hardware and virtualization software, but VMware had to overcome the idiosyncrasies of x86 hardware developed by other vendors.

Like IBM, “VMware is creating a virtual machine for every user. But they started before there was any hardware assist. It turns out the x86 architecture has some nasty characteristics,” Rymarczyk says. To run Windows in a VM on an x86 platform, VMware had to intercept “difficult” instructions and replace them, he says.

“The x86 architecture had some things that computer scientists would really frown upon,” he says. “Intel now has put in some hardware features to make it easier. They have started going down a similar path to what we did in the 1960s.”
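One of those “nasty characteristics” is that certain x86 instructions misbehave quietly in user mode rather than faulting, so a classic trap-and-emulate hypervisor never gets a chance to intercept them. The short C sketch below (an illustration, not VMware's code) shows the textbook example on an x86-64 Linux host: in user mode, POPF silently drops an attempt to change the interrupt-enable flag instead of trapping.

```c
/* popf_demo.c — illustration only, not VMware's code.  In user mode, POPF
 * silently ignores an attempt to change the interrupt-enable flag (IF)
 * instead of trapping, so a trap-and-emulate hypervisor never sees it.
 * Build on x86-64 Linux with: gcc -O0 -o popf_demo popf_demo.c
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t before, flags, after;

    __asm__ volatile ("pushfq; popq %0" : "=r"(before));    /* read RFLAGS             */

    flags = before & ~(1ULL << 9);                           /* try to clear IF (bit 9) */
    __asm__ volatile ("pushq %0; popfq" :: "r"(flags) : "cc", "memory");

    __asm__ volatile ("pushfq; popq %0" : "=r"(after));     /* read RFLAGS again       */

    printf("IF before: %llu, IF after attempted clear: %llu\n",
           (unsigned long long)((before >> 9) & 1),
           (unsigned long long)((after  >> 9) & 1));
    /* Both values print 1: the write was dropped with no fault, which is
     * why VMware had to find and rewrite such instructions in software
     * before Intel VT-x and AMD-V added hardware assists. */
    return 0;
}
```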

While there was a clear need for virtualization on the mainframe in the 1960s, the idea of building hypervisors for new platforms was “effectively abandoned during the 1980s and 1990s when client-server applications and inexpensive x86 servers and desktops led to distributed computing,” according to a short history of virtualization written by VMware.

In the 1980s and early 1990s, x86 servers lacked the horsepower to run multiple operating systems, and they were so inexpensive that enterprises would deploy dedicated hardware for each application without a second thought, Rymarczyk says. But chip performance has increased so dramatically that the typical Windows machine needs less than 10% of the processing power actually delivered by a server today, he says.

That's one of the reasons x86 virtualization has become so important, but it still lags significantly behind the technology available on IBM's mainframes and Power systems, in Rymarczyk's opinion. One reason is that with mainframes and Power servers, virtualization isn't an optional add-on – it's part of the system's firmware. “It's sort of routine for customers on our Power servers to be running 40 or 50 virtual machines or LPARs [logical partitions] concurrently, and many of these virtual machines may be mission critical,” he says.

Simply creating VMs is just the tip of the iceberg, though. Rymarczyk says tomorrow's data center “needs robust I/O virtualization, which we've had on the mainframe for decades.” But he does credit VMware with being the first to introduce live migration, the ability to move a VM from one physical host to another without suffering downtime. Live migration is a key enabler of cloud computing because it helps ensure high availability and gives IT pros extra flexibility in the deployment of VMs.
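The mechanics behind live migration are worth a quick illustration. A common approach (not necessarily VMware's exact implementation) is iterative “pre-copy”: memory pages are copied to the destination host while the VM keeps running, pages the guest dirties are re-sent in later rounds, and only a small final set is copied during a brief pause. The toy C simulation below sketches that loop; the page counts and dirty rate are made-up numbers for illustration.

```c
/* precopy_sketch.c — a toy simulation of the iterative pre-copy scheme
 * commonly used for live migration.  Pages are copied while the VM runs;
 * pages the guest dirties during a round are re-sent in the next round,
 * until the remainder is small enough for a brief stop-and-copy switchover.
 * Build with: gcc -O2 -o precopy_sketch precopy_sketch.c
 */
#include <stdio.h>

int main(void)
{
    int dirty_pages = 100000;               /* pages still out of sync on the target  */
    const double dirty_rate = 0.15;         /* assumed fraction re-dirtied per round  */
    const int switchover_threshold = 1000;  /* small enough to pause and finish       */
    int round = 0;

    while (dirty_pages > switchover_threshold) {
        printf("round %d: copying %d pages while the VM keeps running\n",
               ++round, dirty_pages);
        dirty_pages = (int)(dirty_pages * dirty_rate);  /* guest re-dirties some pages */
    }
    printf("switchover: pause VM, copy final %d pages, resume on the target host\n",
           dirty_pages);
    return 0;
}
```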

While IBM is a major producer of x86 servers, Big Blue has no plans to develop its own x86 hypervisor. But IBM is trying to position itself as one of the leaders in using virtualization technology to make tomorrow's data center more scalable and efficient.

“You're going to see the hypervisor on x86 essentially become free and there will be multiple choices,” Rymarczyk says. “Open source, VMware, Microsoft, maybe even something from Intel that comes with the platform. There's little reason [for IBM] to invest in trying to make money by building a better [x86] hypervisor. Where the real opportunity exists in adding value for data centers is much higher up the stack.”

IBM and VMware have advanced similar concepts that leverage virtualization technologies to aggregate data center resources into small numbers of logical computing pools that can be managed from single consoles. VMware just announced vSphere, which it calls a “cloud operating system,” while Rymarczyk at IBM came up with “Ensembles.” Similar to Parallel Sysplex on the mainframe, Ensembles seeks to pool together compatible servers and automatically move virtual resources around the pool as needs change.

Rymarczyk is working with IBM's Tivoli software team to develop architectures that will lead to more dynamic and responsive data centers.

“Today's data center tends to be ad hoc and rigid, with lots of constraints,” Rymarczyk says. “We are working on development of architectures that will make the entire data center much simpler. It's largely management software that is going to make the difference.”
