A deep dive into the world of virtualization would teleport you back to 1968, when IBM laid down the foundations of the CP-67 system in response to requirements from MIT & Bell Labs, running on the first mainframe to support virtualization. The ancient precursor that defined the early ecosystem behind today's as-a-service revolution was the CP (Control Program), which ran on the mainframe and created multiple virtual machines, each running CMS (the Cambridge Monitor System, later renamed the Conversational Monitor System) to support user interaction.

Today, almost half a century after those initial foundations, the market has matured to the extent that the entire data center is becoming software-defined. Let's dive into where it began, where we go from here, and how VDIworks has been a concrete part of it all.

Prelude: Anyone Remember Terminal Services?

Quite the rage in the late 90s and through the early 2000s, terminal services made it possible to host multiple simultaneous client sessions on a single Windows Server OS instance. Looking at its pros, it was easy to deliver and cost-effective; and with a proven user base of around 80 million, it had passed the test of the market.

However, its limited use cases restricted its viability. More critically, everyone shared the same application instances, so if one user's session crashed MS Office, it went down for everyone (this changed in Windows Server 2008). Moreover, the user experience wasn't exactly up to standard. The need for a more innovative and advanced alternative was very much there, and server virtualization would go on to provide it.

Terminal services deployments typically ran on HDD arrays, albeit with performance that met only the needs of the day, built around a dual-controller or scale-up architecture. With both storage controllers active, data services and I/O functions were split between them; if one failed, the other took over and held things together until the failed controller was replaced.
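
To make the dual-controller idea concrete, here is a minimal Python sketch, purely illustrative and with hypothetical names, of two active controllers splitting LUN ownership and the survivor absorbing the failed controller's load:

```python
class Controller:
    """Toy storage controller that owns a set of LUNs (illustrative only)."""
    def __init__(self, name, luns):
        self.name = name
        self.luns = set(luns)
        self.healthy = True

class DualControllerArray:
    """Both controllers are active and each serves its own LUNs.
    If one fails, the survivor absorbs its LUNs until it is replaced."""
    def __init__(self, ctrl_a, ctrl_b):
        self.controllers = [ctrl_a, ctrl_b]

    def route_io(self, lun):
        for ctrl in self.controllers:
            if ctrl.healthy and lun in ctrl.luns:
                return ctrl.name
        raise RuntimeError(f"No healthy controller owns LUN {lun}")

    def fail(self, name):
        failed = next(c for c in self.controllers if c.name == name)
        survivor = next(c for c in self.controllers if c.name != name)
        failed.healthy = False
        survivor.luns |= failed.luns   # survivor takes over the failed controller's LUNs
        failed.luns.clear()

array = DualControllerArray(Controller("A", ["lun0", "lun1"]),
                            Controller("B", ["lun2", "lun3"]))
print(array.route_io("lun2"))  # -> "B" while both controllers are healthy
array.fail("B")
print(array.route_io("lun2"))  # -> "A" after takeover
```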

Hypervisors: The Nucleus of Modern VDI

Hypervisors date back to 1966, when the IBM System/360 line of mainframes provided hardware-level support for preserving & restoring the machine’s state, a function central to the way hypervisors work.

Fast-forward to the modern x86 era, and VMware Workstation was the first hypervisor for x86 systems, circa 1999. The hypervisors of the early 2000s were software-based and used primarily for development and testing. VMware's initial version made use of a technique known as dynamic binary translation to intercept privileged operations issued by a guest OS. When the guest accessed hardware, VMware rewrote the instructions in real time to protect itself from the guest and to isolate guests from one another.
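
As a rough illustration of the idea (and emphatically not VMware's actual translation engine), the toy Python sketch below scans a block of hypothetical guest instructions and rewrites the privileged ones so they call into the virtual machine monitor instead of touching hardware directly:

```python
# Toy binary-translation sketch: privileged guest instructions are rewritten
# on the fly so they trap into the virtual machine monitor (VMM) rather than
# executing against real hardware. Instruction names are made up.
PRIVILEGED = {"cli", "sti", "out", "in", "mov_cr3"}

def translate_block(guest_instructions):
    """Rewrite a basic block of (hypothetical) guest instructions."""
    translated = []
    for op, *args in guest_instructions:
        if op in PRIVILEGED:
            # Replace the privileged op with a call into the monitor's emulation
            translated.append(("call_vmm_emulate", op, *args))
        else:
            translated.append((op, *args))   # safe instruction runs natively
    return translated

block = [("mov", "eax", "1"), ("cli",), ("out", "0x20", "eax")]
print(translate_block(block))
# [('mov', 'eax', '1'), ('call_vmm_emulate', 'cli'), ('call_vmm_emulate', 'out', '0x20', 'eax')]
```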

Soon enough, the open-source Xen Project, born out of the University of Cambridge's Computer Laboratory, came along and popularized the term paravirtualization (PV). In this approach, PV guests are modified to run on a PV-aware host, asking the hypervisor to execute privileged instructions on their behalf instead of attempting to do so themselves.
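
The contrast with binary translation can be sketched the same way. In this toy Python example (the class and method names are hypothetical, not the Xen hypercall API), the modified guest kernel never issues the privileged operation itself; it makes an explicit call to the hypervisor:

```python
class Hypervisor:
    """Stand-in for the host; validates and performs privileged work."""
    def hypercall(self, name, **kwargs):
        # The hypervisor performs the privileged operation on the guest's behalf.
        print(f"hypervisor executing {name} with {kwargs}")

class ParavirtualizedGuest:
    """A guest kernel modified to cooperate with the hypervisor."""
    def __init__(self, hypervisor):
        self.hv = hypervisor

    def update_page_table(self, virtual_addr, physical_addr):
        # Instead of writing page tables directly (a privileged action),
        # the modified guest kernel issues a hypercall.
        self.hv.hypercall("mmu_update", va=virtual_addr, pa=physical_addr)

guest = ParavirtualizedGuest(Hypervisor())
guest.update_page_table(0x1000, 0x8000)
```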

Read About: Microsoft Licensing for Virtual Desktops


A Basket full of Virtualized Benefits

The rise of hypervisors as a mature technology propelled virtualization into a full-fledged domain, and enterprises kept joining the bandwagon because of key benefits that made even more sense as the industry reeled from the effects of the dot-com crash.

Real-Estate Optimization

With rents soaring by the square foot, virtualization offered companies the ability to make more of the space they already had and optimize their real-estate budget. Being able to run multiple applications and operating systems independently on a single server translated into less hardware and, in turn, better use of the same office space.

Centralization, Consolidation & Productivity

In addition to the reduced CapEx, centralizing assets in a single data center gave IT staff more insight into operations and, consequently, more productivity. Having everything in one place made it much easier to apply patches and provide support, which in turn reduced operating expenses (OpEx).

Energy + HVAC Cost Reduction

Having fewer servers and less hardware meant an overall improvement in managing energy and HVAC costs. A plethora of running servers meant not only the energy to keep them going, but also the additional requirement of keeping temperatures low enough to keep the hardware cool. Virtualization presented a simple alternative: less equipment means less heat for mechanical cooling systems to dissipate, and ultimately lower costs.

Connection Brokers: The Hub of Virtualized Environments

The centralization of desktops meant that the unit sitting on the desk was replaced by a solid-state device that connected back to some type of computer, whether a 1U server, a blade, or even a virtual machine. This is where connection brokering came into play, allowing IT administrators to simplify management of the centralized environment by setting policies that determined which client device would connect to which server resource.

The first generally available Connection Broker was released in 2003 by the team behind desktop virtualization pioneer ClearCube Technologies. The software allowed administrators to create mappings between back-end host hardware & edge devices. When users logged on to their client devices, they would find themselves automatically connected to the right data center resource.
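
Conceptually, the brokering logic boils down to policy-driven mapping. The Python sketch below is a minimal illustration under assumed names (the pools, policies and function are hypothetical, not ClearCube's or VDIworks' actual software) of how a broker might assign a client device to a back-end resource and reconnect it to the same one later:

```python
# Hypothetical policy table: which back-end pool each user group may use.
POLICIES = {
    "finance":    {"pool": ["blade-01", "blade-02"]},
    "callcenter": {"pool": ["vm-101", "vm-102", "vm-103"]},
}

ASSIGNMENTS = {}  # client device -> back-end resource currently in use

def broker_connect(device_id, user_group):
    """Return the resource this device should connect to, reusing an
    existing assignment if one exists, otherwise picking a free host."""
    if device_id in ASSIGNMENTS:
        return ASSIGNMENTS[device_id]
    pool = POLICIES[user_group]["pool"]
    free = [host for host in pool if host not in ASSIGNMENTS.values()]
    if not free:
        raise RuntimeError("No free resources in pool for group " + user_group)
    ASSIGNMENTS[device_id] = free[0]
    return free[0]

print(broker_connect("thin-client-17", "callcenter"))  # -> vm-101
print(broker_connect("thin-client-17", "callcenter"))  # same resource on reconnect
```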

It is worth noting that the same team behind the world's first connection broker went on to form VDIworks, well before the release of Citrix's Desktop Broker in October 2006 and VMware's 2007 acquisition of Propero, which led to the release of its VDM 2.0 connection broker.


Enter VDI 1.0, Non-Persistent Virtual Desktops

As server virtualization matured, desktop virtualization became the next area of focus in the late 2000s, with the introduction of non-persistent virtual desktops. These did not maintain personalized settings: the moment a user began a new session after logging off, they got a fresh, generic virtual desktop image. During a session, all differential user changes were written in real time to a unique data space for that specific desktop.
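
A minimal sketch of that model, with hypothetical names and no real storage plumbing, shows the essential behavior: reads fall through to a shared master image, writes land in a per-session delta, and logging off discards the delta:

```python
# Shared, read-only master image that every non-persistent desktop boots from.
MASTER_IMAGE = {"os": "Windows 7", "office_version": "2010"}

class NonPersistentSession:
    def __init__(self, user):
        self.user = user
        self.delta = {}            # per-session differential data space

    def read(self, key):
        # Reads prefer the session's own changes, then fall back to the master.
        return self.delta.get(key, MASTER_IMAGE.get(key))

    def write(self, key, value):
        self.delta[key] = value    # the master image is never modified

    def logoff(self):
        self.delta.clear()         # next logon starts from a clean image again

s = NonPersistentSession("agent42")
s.write("wallpaper", "blue")
print(s.read("wallpaper"))   # "blue" during the session
s.logoff()
print(s.read("wallpaper"))   # None: personalization is gone after logoff
```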

This was brilliant from a security perspective: users couldn't install anything onto the master image, making it far more secure. And if something like a hacking incident occurred, all desktops simply needed to be rebooted back to a clean state.

From a storage perspective, they were welcomed because they required less capacity, and the industry needed that at the time, since data-center-grade storage came with serious costs. The trade-off, of course, was losing the element of personalization. A typical use case for non-persistent virtual desktops was the call center, where users begin fresh with each shift.

The Sub-Par User Experience Dilemma & A Market-Disruptive Solution

While VDI technology slowly progressed, a key bottleneck remained in the user experience. Many remoting protocols of the time were unsuited for multimedia playback, others were tied to specific thin clients or server hardware, and many more had technical limitations on the kinds of video they could accelerate.

VDIworks took this challenge head-on and addressed the market need in 2009 with VideoOverIP, a multimedia- and multi-monitor-capable protocol. The product soon became a problem-solver for scores of organizations that worried about compromising user experience as they transitioned towards VDI, not least because it worked flexibly with all major hypervisors: Xen, Microsoft Hyper-V and VMware ESX.

That market acceptance and popularity enabled VDIworks to take technology made in America and expand into global markets such as the oil-rich Middle East, where government and private-sector entities were keen to expand their technology footprints while going green to reduce power consumption; something VDIworks integrates very well into its products.


We’re Ready to Go Virtual But Where Do We Start?

In 2010, VDIworks introduced another ground-breaking technology that made it easier, simpler and quicker for IT admins and professional-services engineers to get their virtual ecosystem going. The Protocol Inspector made it possible to conduct site surveys prior to deploying VDI, let IT admins perform network audits effortlessly, and handled tasks such as uptime reporting by checking whether all physical and virtual machines were up and running the right protocol set.
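
The sort of check such a site survey performs can be illustrated with a few lines of Python; this is only a sketch of the general idea (host reachability plus a listening protocol port, with hypothetical hostnames), not the Protocol Inspector's actual implementation:

```python
import socket

# Hypothetical hosts and the remoting-protocol port each is expected to serve.
EXPECTED = {
    "host-01.corp.local": 3389,   # e.g. RDP
    "host-02.corp.local": 3389,
}

def audit(hosts, timeout=2.0):
    """Report whether each host is up and listening on its expected port."""
    report = {}
    for host, port in hosts.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                report[host] = "up, protocol port open"
        except OSError:
            report[host] = "unreachable or protocol not listening"
    return report

for host, status in audit(EXPECTED).items():
    print(host, "->", status)
```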

Going Mobile With Virtualization When Only a Few Dared to

Right when the Apple iPad launched, VDIworks knew a revolution was afoot. Capitalizing on the oceanic growth of the tablet, VDIworks released the Fast Remote Desktop app, which allowed the iPad to connect to a home or office Windows PC, bringing more productivity to remote workforces. The app soon peaked at number 4 on the list of most popular iPad utilities.

Just a few months after launching a series of innovative technologies, VDIworks was named to Everything Channel’s CRN Virtualization 100 List, featuring the most compelling virtualization vendors.

Virtualized & Personalized, Welcome to Persistent VDI

Come 2012-2013, persistent virtual desktops were widespread and, as the name suggests, they achieved what non-persistent desktops couldn't. You could take your personalized desktop, the OS, settings, applications, and data, and morph it into a virtual machine. As simple as that. Not only did this satisfy users, but the fact that desktop management tools worked exactly the same in each instance made admins happier as well.

However, because persistent VDI entails an individual, customized disk image per user, it warranted far more storage capacity than the many-to-one non-persistent model. Persistent desktops also generated higher IOPS (Input/Output Operations Per Second), placing significantly heavier demands on the storage subsystem.
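
Some back-of-the-envelope arithmetic, using entirely hypothetical numbers, shows why that storage trade-off matters:

```python
# Illustrative sizing only; every figure below is an assumption, not a benchmark.
desktops              = 500
image_size_gb         = 40    # each persistent desktop keeps its own full image
delta_size_gb         = 2     # typical per-session delta in a non-persistent pool
master_image_gb       = 40
steady_state_iops     = 10    # assumed average IOPS per desktop

persistent_capacity_gb    = desktops * image_size_gb
nonpersistent_capacity_gb = master_image_gb + desktops * delta_size_gb
total_iops                = desktops * steady_state_iops

print(f"Persistent:     {persistent_capacity_gb:,} GB")      # 20,000 GB
print(f"Non-persistent: {nonpersistent_capacity_gb:,} GB")   # 1,040 GB
print(f"Storage must sustain roughly {total_iops:,} IOPS")   # 5,000 IOPS
```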

Scaling to Newer Heights

As the industry grew, so did VDIworks' keenness to keep introducing bleeding-edge technologies that defined the market, the company's DaaS platform unveiled at Interop Las Vegas 2012 being one of them. VDIworks DaaSManager provided a holistic end-to-end solution just as the Desktops-as-a-Service movement was beginning to gain steam.

The solution was available for free and solved critical end-user issues such as the high cost of proprietary infrastructure and vendor lock-in. In no time, VDIworks received the Best of Business 2012 award from the Small Business Community Association (SBCA) and was named a 2012 Tech Innovator for Cloud by UBM Channel's CRN.

Read About: Virtual Desktop Platform Benefits

Covering Industry Needs All-Inclusively

Taking the promise of solving one of VDI's key adoption barriers, namely preserving that true PC-like experience, to the next level, the VDIworks Virtual Desktop Platform (VDP) became an award-winning VDI solution that also proved to be the most powerful. It was the earliest to encompass every single VDI component in one suite: connection brokering, health monitoring, alerting, VM management, inventory, physical management and much more.

Further, by offering support for NVIDIA GRID and Microsoft RemoteFX, VDP became the go-to solution for enterprises seeking mid- to high-end VDI, offering the ability to offload graphical processing from the CPU to the GPU, which enables a true PC-like experience and enhances the density of the back-end infrastructure. Once again, such innovations led to VDIworks being recognized by CRN as a 2014 Emerging Vendor.

The Current Paradigm Shift, Software-Defined Data-Centers (SDDC)

Just as desktop virtualization became the logical candidate after server virtualization, the market is now starting to extend virtualization to every area of the data center. Instead of the traditional scenario, where infrastructure is an amalgam of disparate hardware and devices, the data center is governed by intelligent software systems. This converged infrastructure is made possible through the union of network, storage and server virtualization. As with desktop virtualization, the central theme in each area is centralization, making everything more manageable and efficient from the enterprise perspective.


Take network virtualization, for example. It essentially decouples the virtual network from the underlying network hardware, just as virtual desktops reduced the need for desktop hardware. Software-defined networking (SDN) allows administrators to simply specify which servers need to be connected, while software figures out the most efficient way of fulfilling those requirements, ensuring rapid provisioning without the typically intensive configuration process.
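
A toy Python sketch of that intent-based idea, with a made-up topology and names, shows the controller turning "connect server A to server B" into a concrete path over the available links:

```python
from collections import deque

# Hypothetical physical topology as discovered by an SDN controller.
LINKS = {
    "srv-a": ["sw-1"],
    "srv-b": ["sw-2"],
    "sw-1":  ["srv-a", "sw-2"],
    "sw-2":  ["sw-1", "srv-b"],
}

def provision(src, dst):
    """Breadth-first search for the shortest path the controller would program."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None   # no connectivity between the two endpoints

print(provision("srv-a", "srv-b"))   # ['srv-a', 'sw-1', 'sw-2', 'srv-b']
```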

Individual transitions across the network, storage, compute and memory domains are all intertwining to make the grand vision of IT as a Service (ITaaS), infrastructure in a box, an operable and pervasive reality. Add a hypervisor to that virtualization layer and you effectively have what is known as hyper-converged infrastructure, something very apparent from the growing number of partnerships between software providers and hardware vendors.

Based on years of experience and innovation that is always ahead of the industry, VDIworks is fully poised to capitalize on the hyperconverged infrastructure revolution, and deliver the same level of excellence across this domain as it has in other areas of the industry.