Emulation is what we do when we try to make one system behave like, or imitate, a different system. We want to take System A (something we already have), feed it the inputs we would normally use for System B (which we may not have), and have System A produce the same results that System B would.
What's involved is more than a simple translation of commands or machine instructions; compilers and interpreters have done that for years. No, we're taking complete operating systems, APIs and functions and having them work on a machine they were never designed for, one that may use totally different methods and commands. That this is even possible can seem miraculous, but it nearly always carries a high performance price: emulation entails substantial overhead and significantly reduces throughput.
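To see where that overhead comes from, here is a minimal sketch in Python of the fetch-decode-execute loop at the heart of a software emulator. The three-instruction “guest” machine is invented for this illustration and corresponds to no real processor; the point is that every single guest instruction costs many host operations to fetch, decode and carry out.

    # A toy emulator: a made-up "guest" CPU with three instructions,
    # interpreted one at a time by the host. Each guest instruction
    # costs many host operations -- hence emulation's overhead.
    def emulate(program):
        regs = {"A": 0, "B": 0}      # the guest's software-based registers
        pc = 0                       # guest program counter
        while pc < len(program):
            op, *args = program[pc]  # fetch and decode
            if op == "LOAD":         # LOAD reg, value
                regs[args[0]] = args[1]
            elif op == "ADD":        # ADD dest, src
                regs[args[0]] += regs[args[1]]
            elif op == "PRINT":      # PRINT reg
                print(regs[args[0]])
            pc += 1                  # instruction done; advance

    # Guest program: load 2 and 3, add them, print 5.
    emulate([("LOAD", "A", 2), ("LOAD", "B", 3),
             ("ADD", "A", "B"), ("PRINT", "A")])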
If emulation takes such a toll, why bother? Because we might want to do one of the following:
Run an OS on a hardware platform for which it was not designed.
Run an application on a device other than the one it was developed for (e.g., run a Windows program on a Mac).
Read data that was written onto storage media by a device we no longer have or that no longer works.
Emulation is important in fighting obsolescence and keeping data available. Emulation lets us model older hardware and software and re-create them using current technology. Emulation lets us use a current platform to access an older application, operating system or data while the older software still thinks it's running in its original environment.
The term emulator was coined at IBM in 1957. Before 1980, it referred only to hardware; the term simulation was preferred when talking of software. For example, a computer built specifically to run programs designed for a different architecture would be called an emulator, whereas we'd use the word simulator to describe a PC program that lets us run an older program (designed for a different platform) on a modern machine. Today emulation refers to both hardware and software.
Virtualization
Virtualization is a technique for using computing resources and devices as fully functional units, regardless of their physical layout or location. This includes splitting a single physical computer into multiple “virtual” servers, making it appear as though each virtual machine is running on its own dedicated hardware and allowing each to be rebooted independently. Storage virtualization works in the other direction: the server regards multiple physical devices as a single logical unit.
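As a rough sketch of the storage side (the class and block size below are invented for illustration, not drawn from any real product), several small “physical” devices can be stitched together so that the server addresses them as one logical unit with a single, continuous range of block numbers:

    # Storage virtualization sketch: three "physical" devices, each a
    # small byte buffer, presented to the server as one logical unit.
    BLOCK_SIZE = 4

    class LogicalVolume:
        def __init__(self, devices):
            self.devices = devices           # list of bytearrays

        def read_block(self, logical_block):
            # Map a logical block number to (device, block within device).
            for dev in self.devices:
                blocks_here = len(dev) // BLOCK_SIZE
                if logical_block < blocks_here:
                    start = logical_block * BLOCK_SIZE
                    return bytes(dev[start:start + BLOCK_SIZE])
                logical_block -= blocks_here
            raise IndexError("block beyond end of logical volume")

    # Two 2-block devices and one 3-block device appear as one 7-block unit.
    volume = LogicalVolume([bytearray(b"AAAABBBB"),
                            bytearray(b"CCCCDDDD"),
                            bytearray(b"EEEEFFFFGGGG")])
    print(volume.read_block(5))   # b'FFFF' -- actually lives on the third device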
A virtual server is a carefully isolated software “container” with its own software-based CPU, RAM, hard disk and network connection. Neither the operating system and applications running inside it nor other computers on the same network can tell the difference between a virtual machine and a physical machine.
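As a loose illustration of that “container” idea, with names and numbers invented for this sketch rather than taken from any real hypervisor, a host's physical resources can be carved into per-machine shares of CPU, memory, disk and network identity:

    # Sketch: carving one physical host into isolated virtual servers.
    # Each VM gets its own software-defined CPU count, RAM, disk image
    # and network identity; this toy host refuses to hand out more than it has.
    class Host:
        def __init__(self, cpus, ram_mb):
            self.free_cpus = cpus
            self.free_ram_mb = ram_mb
            self.vms = []

        def create_vm(self, name, cpus, ram_mb, disk_image, mac):
            if cpus > self.free_cpus or ram_mb > self.free_ram_mb:
                raise RuntimeError("not enough physical resources left")
            self.free_cpus -= cpus
            self.free_ram_mb -= ram_mb
            vm = {"name": name, "cpus": cpus, "ram_mb": ram_mb,
                  "disk": disk_image, "mac": mac}
            self.vms.append(vm)
            return vm

    host = Host(cpus=8, ram_mb=16384)
    host.create_vm("web", 2, 4096, "web.img", "52:54:00:00:00:01")
    host.create_vm("db", 4, 8192, "db.img", "52:54:00:00:00:02")
    print(host.free_cpus, host.free_ram_mb, "left for more virtual servers")

A real hypervisor does far more, of course, scheduling the virtual CPUs onto physical ones and often overcommitting memory, but the basic bookkeeping idea is the same.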
Virtual machines offer the following advantages:
They're compatible with all Intel x86 computers.
They're isolated from one another, just as if they were physically separate.
Each is a complete, encapsulated computing environment.
They're essentially independent of the underlying hardware.
They're created using existing hardware.
IBM developed virtualization in the 1960s so big, expensive mainframes could run multiple applications and processes concurrently. During the 1980s and '90s, virtualization was largely abandoned in favor of client/server applications and distributed computing. Today's servers and PCs, however, face many of the same underutilization problems as those 1960s mainframes.
VMware Inc. invented virtualization for the x86 platform in the late 1990s. Recently it introduced a product called Fusion, which lets Windows applications run alongside Mac applications on Intel-based Macintosh computers running OS X.