In the busy run-up to VMware’s VMworld, a number of technology suppliers have introduced me to new products. My friends at Parallels showed me their take on what users of Windows XP should do. Rather than migrating to Windows Vista and then on to Windows 7, Parallels thinks that folks might be happier moving to a Mac. They’re now offering Parallels Desktop® Switch to Mac Edition™ to facilitate that process. If I were a CIO facing a move of thousands or perhaps tens of thousands of people to newer-generation systems and from Windows XP to Windows 7, this might be an interesting option.
Here’s what Parallels has to say about their Switch to Mac Edition:
Parallels Desktop Switch to Mac Edition includes everything you need to get up and running on your new Mac:
Comprehensive learning tutorials that teach the ins and outs of the Mac
Easy to use migration tools that make moving your old PC as easy as plug and click
The fastest way to run your PC on a Mac — without rebooting. The award-winning Parallels Desktop 4.0 for Mac runs Windows-on-Mac so you can enjoy the best of both worlds.
Parallels Desktop Switch to Mac Edition makes the move to Mac as easy as ready, set, switch.
Snapshot analysis
First of all, let me point out that my rather long desk supports several laptops: a Windows XP system, a MacBook Pro and a Linux system. I use all of these environments and have found that each has benefits and challenges. I use the Mac most of all after a forced migration from a broken Dell laptop to the Mac (see The old coffee-in-the-keyboard trick for all of the embarrassing details). I found it workable enough that I never migrated back.
Although there are those who are proponents of following Microsoft’s migration path from Windows XP to Windows Vista to Windows 7, others have considered what Microsoft is offering and have decided to go another way. They’ve looked at Mac OS X and decided that it is a better next logical step. After all, if they have to go to all of that trouble, it might be better to consider an entirely different platform.
Parallels is more than happy to help these people migrate their Windows licenses and licenses for Windows-supported software into virtual machines that run on a Mac OS X host operating system.
Parallels knows that others, such as VMware, could offer similar migration paths. So, they’ve decided to add an easy-to-use, easy-to-understand computer-based training module that provides context-based tutorials that will, they hope, show Windows XP users that Mac OS X isn’t all that frightening.
The demo I was shown was quite impressive. Windows XP applications seemed responsive and easy to access. The tutorial appeared straightforward and useful. The only thing missing is one of David Pogue’s Switching to the Mac books.
One thing I’ve been shown in past demonstrations, but not in this one, was how documents can easily be created in either the Windows XP or the Mac OS X environment and then updated in the other.
Is your organization thinking about or planning the migration from Windows XP to Windows 7? If so, does the availability of this type of approach look like a viable alternative?
Thursday, August 27, 2009
Waking up to the full extent of virtualization options
While doing my morning expedition through the wilds of news sites, blogs and, of course, comic strip sites, I came across a piece by Laura McCabe, a partner at Hurwitz and Associates, titled “What is Virtualization, and Why Should You Care?” While Laura’s comments are useful, she didn’t attempt to present a comprehensive model of virtualization technology.
Several important categories, such as storage virtualization, security for virtualized environments and management for virtualized environments, were not mentioned at all. Processing virtualization, which is far more than merely virtual machine technology, was only lightly touched upon. I suspect this is more related to space limitations than to a limitation in Laura’s understanding of the environment. Let’s look a bit deeper, shall we?
A more comprehensive view of Virtualization
Virtualization has been around for quite some time, first appearing in the world of the mainframe in the late 1960s. It appeared once again in the world of midrange systems in the early 1980s and again in the world of industry-standard systems hosting Windows, UNIX and Linux in the 1990s. Virtualization is usually defined as using excess computing power, storage, memory or some other system resource to place a function into an artificial, illusory environment that offers enhanced characteristics.
Kusnetzky Group Model
There are many layers of technology that virtualize some portion of a computing environment depending upon whether the organization is seeking performance, reliability/availability, scalability, consolidation, agility, a unified management domain or some other goal. Let’s look at each of them in turn.
Access Virtualization — hardware and software technology that allows nearly any device to access any application without either having to know too much about the other. The application sees a device it’s used to working with. The device sees an application it knows how to display. In some cases, special purpose hardware is used on each side of the network connection to increase performance, allow many users to share a single client system or allow a single individual to see multiple displays. This is part of the notion of “desktop” or “user” virtualization.
Application Virtualization — software technology allowing applications to run on many different operating systems and hardware platforms. This usually means that the application has been written to use an application framework. It also means that applications running on the same system that do not use this framework do not get the benefits of application virtualization. More advanced forms of this technology offer the ability to restart an application in case of a failure, start another instance of an application if the application is not meeting service-level objectives, or provide workload balancing among multiple instances of an application to achieve high levels of scalability. Some really sophisticated approaches to application virtualization can do this magical feat without requiring that the application be re-architected or rewritten using some special application framework. This is another part of what is sometimes called “desktop” or “user” virtualization.
Processing Virtualization — hardware and software technology that hides the physical hardware configuration from system services, operating systems or applications. This type of virtualization technology can make one system appear to be many, or many systems appear to be a single computing resource, to achieve goals ranging from raw performance and high levels of scalability to reliability/availability, agility and consolidation of multiple environments onto a single system. One type of processing virtualization, virtual machine technology (sometimes referred to as a “hypervisor”), is also part of the notion of “desktop” or “user” virtualization. Another type, clustering and availability software, can be used to increase reliability and availability.
Storage Virtualization — hardware and software technology that hides where storage systems are and what type of device is actually storing applications and data. This technology makes it possible for many systems to share the same storage devices without knowing that others are also accessing them. It also makes it possible to take a snapshot of a live system so that it can be backed up without hindering online or transactional applications. (A minimal code sketch of this indirection idea appears after the list.)
Network Virtualization — hardware and software technology that presents a view of the network that differs from the physical view. So, a personal computer may be allowed to only “see” systems it is allowed to access. Another common use is making multiple network links appear to be a single link.
Security for virtualized environments — technology that protects all of the layers of virtualization technology. This layer is designed to assure that the organization’s IT resources are used in the proper way only by authorized individuals, from authorized locations, at authorized times.
Management of virtualized environments — software technology that makes it possible for multiple systems to be provisioned and managed as if they were a single computing resource.
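To make the flavor of these layers concrete, consider storage virtualization, which at its core is a layer of indirection. The Python sketch below is purely illustrative (the class and names are hypothetical, not any vendor’s API): applications address a stable virtual volume name while a mapping layer decides, and may silently change, which physical device actually holds the data.

```python
# Purely illustrative sketch of storage virtualization as indirection.
# The class and names are hypothetical, not any vendor's API.

class VirtualVolumeManager:
    def __init__(self):
        # virtual volume name -> (physical device, location on that device)
        self._backing = {}

    def provision(self, volume, device, path):
        """Bind a virtual volume to a physical location; applications never see this."""
        self._backing[volume] = (device, path)

    def migrate(self, volume, new_device, new_path):
        """Move the backing store; the virtual name applications use is unchanged."""
        self._backing[volume] = (new_device, new_path)

    def resolve(self, volume):
        """Used internally by the I/O path, not by applications."""
        return self._backing[volume]


mgr = VirtualVolumeManager()
mgr.provision("vol-finance", "array-A", "/lun/17")
mgr.migrate("vol-finance", "array-B", "/lun/3")  # transparent to applications
print(mgr.resolve("vol-finance"))                # ('array-B', '/lun/3')
```

The same pattern, a stable virtual name resolved late to a changeable physical resource, recurs in each of the layers above.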
Nissan halts server sprawl with Microsoft virtualization software, cuts energy use 34%
Using Microsoft's Hyper-V software, Nissan virtualized most servers, reduced the number of manufacturing operations servers from 159 to 28, decreased energy usage by 34 percent, simplified systems at each plant, and reduced physical space requirements.
Smyrna, TN - Nissan North America Inc. (NNA) reduced the number of computer servers for its manufacturing operations from 159 to 28 at its Smyrna and Decherd, TN, plants. The consolidation increased NNA's production efficiency and has cut energy usage by 34 percent, ultimately helping to create a "greener" Nissan, Microsoft reported on Aug. 19.
"Over the past two to three years, our server population had exploded to almost 160 and was continuing to grow," said Phil D'Antonio, manager of Conveyors and Controls Engineering, NNA. "It was extremely difficult to manage and it consumed numerous labor hours that could be used on other initiatives that add value to our operation."
Nissan conducted a thorough inventory of its servers and defined a refresh strategy for its system infrastructure. NNA used Microsoft Hyper-V software, which allows multiple virtual machines to operate on one physical machine. The virtualization technology helped to create a smaller and less complex system at the Smyrna and Decherd plants in less than 12 months. The smaller system improved manageability and reduced the amount of space and energy needed to operate, which also helped Nissan reduce its impact on the environment.
"The Hyper-V technology was designed to create a more efficient system and help reduce environmental impact," said David Graff, U.S. automotive industry solutions director at Microsoft. "That has helped Nissan achieve its main objectives."
D'Antonio said that as a result of the refresh, Nissan has realized the following benefits:
• Increased reliability with minimal system downtime;
• Reduced expenses for running the system;
• Standard disaster recovery plan;
• An efficient setup for redundancy;
• Improved manageability;
• Compatibility with new systems;
• Smaller footprint to house fewer servers; and
• Reduced energy costs by 34 percent.
"We were able to reduce the growing cost associated with a sprawling system as well as cut energy usage by a third," D'Antonio said. "As an Energy Star partner, Nissan is committed to improving the energy efficiency of our business and protecting the environment for future generations."
Nissan's Smyrna plant has seen its energy efficiency improve by as much as 32 percent since it began aggressively pursuing environmental initiatives in 2005. These energy-saving practices are currently saving the company more than $3.5 million per year.
Nissan Green Program 2010 aims to reduce CO2 and other emissions and increase recycling.
Microsoft's Automotive and Industrial Equipment vertical works with industry partners to develop solutions based on Microsoft technologies that enable original equipment manufacturers (OEMs), suppliers and customers to improve efficiency, effectiveness and knowledge across the business.
REVIEW: Microsoft System Center Virtual Machine Manager 2008 R2 Bolsters Hyper-V
Any organization that goes beyond dabbling with Hyper-V should use System Center Virtual Machine Manager 2008 R2 to manage virtual resources in the Hyper-V component of Windows Server 2008 R2.
Hot on the heels of the release of Windows Server 2008 R2 is System Center Virtual Machine Manager 2008 R2, an essential management companion for the Hyper-V component of Microsoft’s server platform.
Any organization that goes beyond dabbling with Hyper-V should use System Center Virtual Machine Manager 2008 R2, or SC VMM, to manage virtual resources in Microsoft's revamped Hyper-V--including Hyper-V’s new ability to move running virtual machines from one physical host to another.
SC VMM has also gained the ability to manage both Microsoft Hyper-V and VMware environments, a feature not found in VMware's management tools. All told, the advances in SC VMM are significant but are not yet enough to dislodge frontrunner VMware from the leading position in server virtualization.
The most important new capability in Hyper-V is live migration. During tests conducted by eWEEK Labs’ Executive Editor Jason Brooks, running virtual machines could be "live migrated" with barely noticeable impact on application performance. During those tests, Microsoft's Failover Cluster Manager was used to initiate the live migration.
Using SC VMM, I was similarly able to orchestrate the live migration of virtual machines. But SC VMM goes further and also centralizes myriad virtual machine management tasks such as VM creation and teardown, as well as physical-to-virtual and virtual-to-physical machine conversions.
SC VMM also provides basic up/down status reporting on VM state and barebones information about VM utilization. For greater depth on VM utilization and reporting, SC VMM can be integrated with Microsoft's System Center Operations Manager.
When I reviewed SC VMM 2008 in January, I noted that one of its most important features was cross-platform support for Hyper-V and VMware environments.
Since that time, VMware released the current virtualization platform champion, vSphere 4. eWEEK Labs is running a vSphere 4 environment on a pair of Dell R710 servers, each equipped with 24GB of RAM. My tests showed that SC VMM was able to work just fine with vSphere 4, although Microsoft officially supports only VMware Infrastructure 3 environments at this time.
This cross-platform support is still one of the most attractive features of SC VMM from an IT operations point of view. SC VMM proxies the desired action, such as VM startup/shutdown or VMotion calls, to VMware's management console and reports the status in the SC VMM administrative console.
Overall, the interaction between SC VMM and both versions of VMware's management tools worked without a flaw in my tests. The end result was the centralized management of seven Intel Xeon 5500-based physical host systems running more than 20 virtual machines across both VMware and Microsoft Hyper-V environments.
As mentioned earlier, SC VMM can orchestrate the migration of virtual machines between physical hosts with similar but not identical processors. VMware also has this capability. In both cases, the hypervisor presents a processor to the virtual machine that represents the CPU capability of the lowest common denominator in the migration group. Neither product can yet migrate virtual machines to physical systems running processors from different manufacturers.
With SC VMM, a configuration check box presented during VM creation allows the system to move to physical hosts with similar physical processors. The primary consideration here is that, before enabling this feature, IT managers must ensure that applications running in the VM don't rely on instructions provided only by a more advanced chip.
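The lowest-common-denominator behavior both products describe is easy to picture as a set intersection. The Python sketch below uses made-up CPUID-style feature flags (hypothetical data, not Microsoft's or VMware's API) to show why extensions present on only some hosts get masked away from a migratable VM.

```python
# Hypothetical feature flags; not Microsoft's or VMware's API. The feature set
# presented to a migratable VM is the intersection across all hosts in the group.

host_features = {
    "host1": {"sse2", "sse3", "ssse3", "sse4_1", "sse4_2", "aes"},
    "host2": {"sse2", "sse3", "ssse3", "sse4_1"},
    "host3": {"sse2", "sse3", "ssse3", "sse4_1", "sse4_2"},
}

def common_cpu_features(hosts):
    """Features a VM may safely rely on if it can land on any host in the group."""
    return set.intersection(*hosts.values())

print(sorted(common_cpu_features(host_features)))
# ['sse2', 'sse3', 'sse4_1', 'ssse3'] -- aes and sse4_2 are masked off, which is
# why applications must not depend on instructions only the newer chips provide.
```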
Microsoft also added storage enhancements to SC VMM to accommodate changes in the way that VMs can now use Cluster Shared Volumes (CSV), as well as provisioning changes to speed up VM deployments.
These features--along with a variety of convenience features, including a library to store resources such as virtual machines, virtual hard drives and other profile settings for hardware and guest OS settings--make SC VMM a workable complement to the improved Hyper-V role in Windows Server 2008 R2.
Xangati Releases Software for Monitoring Virtualization Applications
Xangati's software allows IT administrators to monitor activity of virtualized and cloud applications connected to their network, allowing potential problems to be located and recorded. This added transparency may assist help desks in addressing problems in enterprise networks with a combination of on-premises and cloud applications and services running simultaneously.
Xangati announced the rollout of AppMonitor for Virtualization Management, an addition to its AppMonitor suite that gives users the ability to monitor activity of virtualized and cloud applications associated with their network. That ability, combined with the company’s Virtual Trouble Ticket portal, allows either a help desk or cloud provider to view and assess any performance issues with a particular virtualized or cloud application, and resolve the issue.
Specifically, those IT administrators at a help desk or with the cloud provider can view the past 15 minutes of end-user activity, allowing them to pinpoint the specific performance issue affecting an application. Executives at Xangati refer to this 15 minutes of playback as a "DVR-style recording." The added transparency could potentially boost the problem-solving efficiency of help desks for SMBs (small and midsize businesses) and enterprises that support a variety of on-premises and cloud-based solutions.
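The "DVR-style recording" concept is essentially a rolling time window over an event stream. Here is a minimal, hypothetical Python sketch of that idea (an illustration, not Xangati's implementation): events older than the window are evicted, and whatever remains can be replayed when a ticket comes in.

```python
# Hypothetical sketch of a "DVR-style recording": keep only the most recent
# 15 minutes of end-user events so a help desk can replay them on demand.

import time
from collections import deque

WINDOW_SECONDS = 15 * 60  # the 15-minute playback window described above

class ActivityRecorder:
    def __init__(self, window=WINDOW_SECONDS):
        self.window = window
        self.events = deque()  # (timestamp, event) pairs, oldest first

    def record(self, event, now=None):
        now = time.time() if now is None else now
        self.events.append((now, event))
        # Evict anything that has aged out of the rolling window.
        while self.events and self.events[0][0] < now - self.window:
            self.events.popleft()

    def replay(self):
        """Return the retained events, oldest first, for help-desk playback."""
        return list(self.events)


rec = ActivityRecorder()
rec.record("user42: opened CRM app", now=1000.0)
rec.record("user42: CRM response time 8.2s", now=1500.0)
print(rec.replay())  # both events are still inside the 900-second window
```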
"The new AppMonitor we’ve designed for the virtual world not only allows a power user to do their own self-help," Alan Robin, CEO of Xangati, said in a statement, "but shrinks the IT visibility gap by giving a more complete understanding of the interactions of applications and resources – seeing straight through to where a performance problem resides, helping to bound the problem and resolve issues 50 percent faster."
The AppMonitor offers two dashboards: one that displays individual end-user application activity and performance, and one that shows bandwidth usage by application. The latter, apparently, can be used to record cloud application usage for billing purposes.
Rising IT incident volumes within the enterprise can often lead to inefficiencies, as they force IT administrators to spend increased amounts of time on issues ranging from resetting passwords to fixing device blunders. Extending simple tips to workers – such as exercising caution when opening email attachments and being proactive about managing inboxes – can save IT much wasted effort and time, as can introducing tools that streamline the ticket process for IT help-desk workers.
In an effort to stem the amount of help-desk resources being wasted on everyday issues, many IT administrators will enact practices such as having all customized enterprise applications run on a single “approved” browser rather than risk potential issues with alternates. However, such restrictions may prevent the enterprise from adopting more effective browsers and other tools.
Embotics Eases Management of Virtual Environments
Embotics is rolling out Version 3.0 of its V-Commander offering for the automation and management of virtual environments, with the idea of driving down operational costs and increasing automation of the infrastructure. V-Commander also comes in three modules, enabling enterprises to pick and choose which features to buy when they need them.
Embotics is looking to reduce the operational costs associated with server virtualization in the data center.
The company Aug. 25 rolled out V-Commander 3.0, which is aimed at increasing automation and management in virtualized environments.
V-Commander 3.0 gives IT professionals deep visibility into their environments, offering a historical view of events in their virtualized environments along with a host of reporting capabilities. In addition, V-Commander 3.0 can establish and enforce policies, suspend virtual machines that don’t comply with policies, assign policy attributes at various levels throughout the virtual infrastructure and alert IT managers via e-mail.
The enhanced software also includes better role-based access control, support for mixed VMware environments and better compatibility with VMware’s VirtualCenter.
Embotics is offering V-Commander in three modules, enabling businesses to pick and choose what they need, giving them greater control over their virtual infrastructure deployments and an easier way to pay for them.
The modules include Federated Inventory Management, a real-time inventory and reporting system; Resource and Cost Management: Automated, which offers resource management and cost containment features, improving accountability, reducing administrative time and optimizing resource utilization; and the Operational and Risk Management module, which offers process automation and control, offering a more consistent environment and improved oversight.
How to Implement Green Data Centers with IT Virtualization
The use of virtualization technology is usually the first and most important step companies can take to create energy-efficient and green data centers. Virtualization is the most promising technology to address both the issues of IT resource utilization and facilities space, power and cooling utilization. IT virtualization, along with cloud computing, is the key to energy-efficient, flexible and green data centers. Here, Knowledge Center contributor John Lamb describes the concept of IT virtualization and indicates the significant impact that IT virtualization has on improving data center energy efficiency.
The most significant step most organizations can take in moving to green data centers is to implement virtualization for their IT data center devices. The IT devices include servers, data storage, and the clients or desktops used to support the data center. There is also a virtual IT world of the future—via private cloud computing—for most of our data centers.
Although the use of cloud computing in your company's data center for mainstream computing may be off in the future, some steps towards private cloud computing for mainstream computing within your company are currently available. Server clusters are here now and are being used in many corporate data centers.
Although cost reduction usually drives the path to virtualization, often the most important reason to use virtualization is IT flexibility. The cost and energy savings due to consolidating hardware and software are very significant benefits and nicely complement the flexibility benefits. The use of virtualization technologies is usually the first and most important step we can take in creating energy-efficient and green data centers.
Reasons for creating virtual servers
Consider this basic scenario: You're in charge of procuring additional server capacity at your company's data center. You have two identical servers, each running different Windows applications for your company. The first server—let's call it "Server A"—is lightly used, reaching a peak of only five percent of its CPU capacity and using only five percent of its internal hard disk. The second server—let's call it "Server B"—is using all of its CPU (averaging 95 percent CPU utilization) and has basically run out of hard disk capacity (that is, the hard disk is 95 percent full).
So, you have a real problem with Server B. However, if you consider Server A and Server B together, on average the combined servers are using only 50 percent of their CPU capacity and 50 percent of their hard disk capacity. If the two servers were actually virtual servers on a large physical server, the problem would be immediately solved, since each server could quickly be allocated the resources it needs.
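The arithmetic in this scenario is worth making explicit. The short Python sketch below (illustrative numbers only) works through it: averaged across the two workloads, the combined demand is half of one host's capacity, which is exactly the headroom a hypervisor can reassign on demand.

```python
# Worked version of the Server A / Server B arithmetic (illustrative numbers only).

servers = {
    "Server A": {"cpu_pct": 5, "disk_pct": 5},
    "Server B": {"cpu_pct": 95, "disk_pct": 95},
}

n = len(servers)
avg_cpu = sum(s["cpu_pct"] for s in servers.values()) / n
avg_disk = sum(s["disk_pct"] for s in servers.values()) / n

print(f"Average CPU use across both workloads:  {avg_cpu:.0f}%")   # 50%
print(f"Average disk use across both workloads: {avg_disk:.0f}%")  # 50%

# Hosted as two virtual servers on one physical machine, the hypervisor can
# hand Server B the CPU and disk that Server A is not using -- which is the
# point of the scenario.
```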
In newer virtual server technologies—for example, Unix Logical Partitions (LPARs) with micro-partitioning—each virtual server can dynamically (instantaneously) increase the number of CPUs available to it by utilizing the CPUs currently not in use by other virtual servers on the large physical machine. The idea is that each virtual server gets the resources it requires based on its immediate need.
Cloud computing: exciting future for IT virtualization
Cloud computing is a relatively new (circa late 2007) label for the subset of grid computing that includes utility computing and other approaches to the use of shared computing resources. Cloud computing is an alternative to having local servers or personal devices handle users' applications. Essentially, it is the idea that technological capabilities should "hover" over everything and be available whenever a user wants them.
Although the early publicity on cloud computing was for public offerings over the public Internet by companies such as Amazon and Google, private cloud computing is starting to come of age. A private cloud is a smaller, cloudlike IT system within a corporate firewall that offers shared services to a closed internal network. Consumers of such a cloud would include the employees across various divisions and departments, business partners, suppliers, resellers and other organizations.
Shared services on the infrastructure side such as computing power or data storage services (or on the application side such as a single customer information application shared across the organization) are suitable candidates for such an approach. Of course, IT virtualization would be the basis of the infrastructure design for the shared services, and this will help drive energy efficiency for our green data centers of the future.
Because a private cloud is exclusive in nature and limited in access to a set of participants, it has inherent strengths with respect to security aspects and control over data. Also, the approach can provide advantages with respect to adherence to corporate and regulatory compliance guidelines. These considerations for a private cloud are very significant for most large organizations.
Cluster architecture for virtual servers
There are now many IT vendors offering virtual servers and other virtual systems. Cluster architecture for these virtual systems provides another significant step forward in data center flexibility and provides an infrastructure for very efficient private cloud computing. By completely virtualizing servers, storage and networking, an entire running virtual machine can be moved from one server to another with virtually no interruption.
Tuesday, August 25, 2009
VirtenSys Announces General Availability of Its I/O Virtualization Switches
VirtenSys Ltd., a leader in next-generation I/O solutions for data centers, today announced the general availability of its flagship I/O virtualization switches, the VirtenSys VIO 4000 Series, based on its Virtual Connectivity Cloud platform. With this announcement, VirtenSys became the first company to release production units of I/O virtualization switches based on PCI Express standards. The VIO 4000 switches are the first products on the market to consolidate, virtualize and share the major types of server networking and storage connectivity, including Ethernet, Fibre Channel over Ethernet (FCoE), SAS/SATA and Fibre Channel, without requiring any changes to the servers, networks or I/O adapters. The switches are available through the VirtenSys worldwide partner network, and the products’ full capabilities will be demonstrated at VMworld 2009, booth #2231, in San Francisco, Calif.
The VIO 4000 switches reduce rack and blade server management complexity and costs by more than 60 percent, improve I/O utilization to greater than 80 percent, deliver full connectivity bandwidth to servers, halve equipment cost, and reduce I/O power consumption by more than 60 percent. This results in providing servers with the best price/performance and lowest energy consumption for accessing not only the local area networks (LAN), but also the storage infrastructures, including direct-attached storage (DAS) and storage area networks (SAN).
What Industry Analysts Are Saying
“With budgets under pressure and the need to quickly migrate to a greener IT infrastructure, CIOs continuously need to improve data center management efficiency and reduce capital expenditures and power consumption without disrupting their existing processes,” said Joe Skorupa, research VP at Gartner, Inc. “I/O virtualization has the potential to enhance data center operations and help speed the transition to web-based applications by reducing costs, enabling consolidated and remote I/O management without physical reconfiguration, and removing the I/O bottlenecks that may result from increased server utilization.”
“The I/O infrastructure in many of today's datacenters is over-provisioned, underutilized, complex, inflexible, expensive and power-hungry,” commented Matt Eastwood, group vice president at IDC. “The VirtenSys approach to virtualize and share the I/O resources will help remove these I/O bottlenecks and allow IT organizations to run traditional and cloud-based applications more efficiently and increase their servers’ workloads while reducing costs and power requirements.”
What Customers and Partners Are Saying
“The VirtenSys VIO switches enable us to provide a complete, end-to-end virtualization solution to our customers," stated Dale Foster, president at Promark Technology. "The switches are totally transparent to the existing servers and data centers, and can be deployed immediately. The VirtenSys solution is a great fit to all of our customers, from SMBs to large enterprises, and enables them to realize tremendous benefits of cost reduction and power savings.”
“VirtenSys I/O virtualization technology and products are unique, non-disruptive, and preserve investments in our IT infrastructure,” said Kevin Cantoni, vice president of product development at PayPal. “I/O virtualization is an important part of our green IT strategy.”
“The current I/O bottleneck is constraining our capability to respond to the explosion of services and applications and to take full advantage of other data center optimization initiatives,” said Ajay Srivastava, VP OnDemand Platform at Oracle. “I/O virtualization gives us a better flexibility to migrate applications within a grid.”
Virtualized operating systems can leverage the capabilities of I/O virtualization to further improve data center operations. “The VirtenSys approach to I/O virtualization complements VMware vSphere™ 4 to help customers achieve greater flexibility, manageability and efficiency in IT infrastructure deployments,” said Shekar Ayyar, vice president of infrastructure alliances, VMware. “VirtenSys solutions help customers create an easily managed converged infrastructure to cut down on operational costs while increasing staff productivity and improving service-level agreements.”
VirtenSys Statement
“I am very excited to launch our products in the channel, following our recent funding announcement at the beginning of the month,” said Ahmet Houssein, president and CEO at VirtenSys. “There is strong customer demand for I/O virtualization to solve I/O bottleneck issues. Our products are introduced at the perfect time – the beginning of a new cycle of IT infrastructure optimization – and will become a catalyst to the next-generation green IT data centers. We are in an excellent position to continue building on the traction we have generated at key OEM customers and end-users.”
About VirtenSys IOV Products
VirtenSys I/O virtualization products create virtualized I/O clouds where servers’ I/O resources are pooled, consolidated and dynamically allocated on demand based on the applications’ needs. The VIO 4000 switches connect directly to multiple physical servers and support Intel® 10GbE NICs, Neterion® 10GbE NICs, QLogic® Fibre Channel HBAs and LSI® SAS/SATA MegaRAID Controllers. The switches can be deployed immediately and provide additional key benefits to customers, including:
Extend the data center life cycle by eliminating multiple layers of aggregation switches, I/O adapters, disk drives, and cables
Reduce management expenses by minimizing human intervention and removing the need for physical reconfiguration
Protect organizations’ investments in their IT infrastructure by being totally transparent to servers, networks, and management processes
Enable new usage models by seamlessly integrating with traditional or virtualized data centers
Enhance data center reliability by supporting multiple failover schemes, including active-active, active-passive and N+1 redundancy
The VIO 4000 switches reduce rack and blade server management complexity and costs by more than 60 percent, improve I/O utilization to greater than 80 percent, deliver full connectivity bandwidth to servers, halve equipment cost, and reduce I/O power consumption by more than 60 percent. This results in providing servers with the best price/performance and lowest energy consumption for accessing not only the local area networks (LAN), but also the storage infrastructures, including direct-attached storage (DAS) and storage area networks (SAN).
What Industry Analysts Are Saying
“With budgets under pressure and the need to quickly migrate to a greener IT infrastructure, CIOs continuously need to improve data center management efficiency and reduce capital expenditures and power consumption without disrupting their existing processes,” said Joe Skorupa, research VP at Gartner, Inc. “I/O virtualization has the potential to enhance data center operations and help speed the transition to web-based applications by reducing costs, enabling consolidated and remote I/O management without physical reconfiguration, and removing the I/O bottlenecks that may result from increased server utilization.”
“The I/O infrastructure in many of today's datacenters is over-provisioned, underutilized, complex, inflexible, expensive and power-hungry,” commented Matt Eastwood, group vice president at IDC. “The VirtenSys approach to virtualize and share the I/O resources will help remove these I/O bottlenecks and allow IT organizations to run traditional and cloud-based applications more efficiently and increase their servers’ workloads while reducing costs and power requirements.”
What Customers and Partners Are Saying
“The VirtenSys VIO switches enable us to provide a complete, end-to-end virtualization solution to our customers," stated Dale Foster, president at Promark Technology. "The switches are totally transparent to the existing servers and data centers, and can be deployed immediately. The VirtenSys solution is a great fit to all of our customers, from SMBs to large enterprises, and enables them to realize tremendous benefits of cost reduction and power savings.”
“VirtenSys I/O virtualization technology and products are unique, non-disruptive, and preserve investments in our IT infrastructure,” said Kevin Cantoni, vice president of product development at PayPal. “I/O virtualization is an important part of our green IT strategy.”
“The current I/O bottleneck is constraining our capability to respond to the explosion of services and applications and to take full advantage of other data center optimization initiatives,” said Ajay Srivastava, VP OnDemand Platform at Oracle. “I/O virtualization gives us a better flexibility to migrate applications within a grid.”
Virtualized operating systems can leverage the capabilities of I/O virtualization to further improve data centers operations. “The VirtenSys approach to I/O virtualization complements VMware vSphere™ 4 to help customers achieve greater flexibility, manageability and efficiency in IT infrastructure deployments,” said Shekar Ayyar, vice president of infrastructure alliances, VMware. “VirtenSys solutions help customers create an easily managed converged infrastructure to cut down on operational costs while increasing staff productivity and improving service-level agreements.”
VirtenSys Statement
“I am very excited to launch our products in the channel, following our recent funding announcement at the beginning of the month,” said Ahmet Houssein, president and CEO at VirtenSys. “There is strong customer demand for I/O virtualization to solve I/O bottleneck issues. Our products are introduced at the perfect time – the beginning of a new cycle of IT infrastructure optimization – and will become a catalyst to the next-generation green IT data centers. We are in an excellent position to continue building on the traction we have generated at key OEM customers and end-users.”
About VirtenSys IOV Products
VirtenSys I/O virtualization products create virtualized I/O clouds where servers’ I/O resources are pooled, consolidated, and dynamically allocated on demand based on application needs. The VIO 4000 switches connect directly to multiple physical servers and support Intel® 10 GE NICs, Neterion® 10GE NICs, QLogic® Fibre Channel HBAs and LSI® SAS/SATA MegaRAID Controllers. The switches can be deployed immediately and provide additional key benefits to customers (a toy sketch of the pooling idea follows the list below), including:
Extend the data center life cycle by eliminating multiple layers of aggregation switches, I/O adapters, disk drives, and cables
Reduce management expenses by minimizing human intervention and removing the need for physical reconfiguration
Protect organizations’ investments in their IT infrastructure by being totally transparent to servers, networks, and management processes
Enable new usage models by seamlessly integrating with traditional or virtualized data centers
Enhance data center reliability by supporting multiple failover schemes, including active-active, active-passive and N+1 redundancy
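To make the pooling concept concrete, here is a minimal, purely hypothetical sketch of the core idea: a shared pool of I/O bandwidth allocated to servers on demand and released when no longer needed. Nothing here reflects VirtenSys's actual implementation; the class, figures and names are invented for illustration.

```python
# Toy model of pooled, on-demand I/O allocation (hypothetical, not VirtenSys code).
class IOPool:
    def __init__(self, total_gbps):
        self.total_gbps = total_gbps
        self.allocations = {}              # server name -> allocated Gb/s

    def allocate(self, server, gbps):
        used = sum(self.allocations.values())
        if used + gbps > self.total_gbps:
            raise RuntimeError("pool exhausted; rebalance or add capacity")
        self.allocations[server] = self.allocations.get(server, 0) + gbps

    def release(self, server):
        self.allocations.pop(server, None)

    def utilization(self):
        return sum(self.allocations.values()) / self.total_gbps

pool = IOPool(total_gbps=80)               # e.g., shared 10GE NICs and FC HBAs
for server in ("web-01", "web-02", "db-01"):
    pool.allocate(server, gbps=20)
print(f"pool utilization: {pool.utilization():.0%}")   # 75%
```

The point of the model is that utilization is measured against one shared pool rather than per-server adapters, which is what lets a virtualized I/O layer push utilization well above what dedicated NICs and HBAs typically achieve.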
Dell Enhances Virtualization, Consolidation for Next-Generation Datacenters
Dell today announced new high-speed storage and networking options -- including availability of next-generation 10 Gigabit Ethernet (10GbE) -- to help customers consolidate and virtualize their datacenters by addressing network bottleneck challenges, reducing complexity and lowering costs.
Dell Point of View:
Consolidation allows customers to reduce the number of physical servers and storage pools and to lower overall operating costs by cutting management time, power consumption and floor space used. This explains the rapid adoption of blade servers and storage area networks (SANs). But with greater compute density come greater input/output (I/O) needs and more scalable storage requirements.
Virtualization is instrumental in enabling consolidation and in improving both the availability of enterprise applications and server utilization by creating virtual machines within a server. Just like with consolidation, virtualization can bring increased network traffic, including dedicated traffic for management of the virtual machines. As the number of virtual machines per physical server increases, the physical network layer must be expanded to accommodate the increased network traffic.
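A back-of-the-envelope calculation makes the scaling pressure clear. The figures below are illustrative assumptions, not Dell's numbers:

```python
# Rough sizing of per-server network capacity as VM density grows (illustrative).
import math

vms_per_server = 12
gbps_per_vm = 0.5        # assumed average traffic per VM
mgmt_gbps = 1.0          # assumed dedicated VM-management traffic

required = vms_per_server * gbps_per_vm + mgmt_gbps
print(f"required bandwidth: {required:.1f} Gb/s per server")
print(f"1GbE ports needed:  {math.ceil(required / 1)}")    # 7
print(f"10GbE ports needed: {math.ceil(required / 10)}")   # 1
```

Even at these modest assumptions, a dense virtualized host outgrows a pair of gigabit NICs, which is the bottleneck the 10GbE options below are meant to relieve.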
The News:
New Features for Dell/EMC CX4 Storage Arrays
Dell introduced upgrades to its Dell/EMC CX4 storage line, continuing the history of innovation that has been part of Dell/EMC arrays for four generations. The new features enhance the ability of the Dell/EMC CX4 arrays to provide a full-featured storage platform that helps customers manage and consolidate their storage environments.
Dell expects availability of these additions next month:
10Gb iSCSI is the next generation of the iSCSI protocol, extending performance beyond the 1Gb connectivity available today. UltraFlex™ Modular I/O on the CX4 arrays allows customers to easily add ports when needed. Since the arrays are dual-mode, supporting 8Gb and 4Gb Fibre Channel and 10Gb and 1Gb iSCSI ports, customers can choose the interconnect that best suits their needs. Adding 10Gb iSCSI enables customers to affordably consolidate stranded servers onto an existing SAN, support more virtual servers and aggregate multiple 1Gb iSCSI connections into fewer 10Gb ones.
Virtualization-aware Navisphere® management software helps simplify storage in virtual environments with automation that has the potential to drastically reduce reporting time. Navisphere enhancements for virtualized environments provide automatic discovery of virtual machines and VMware ESX servers, end-to-end virtual-to-physical machine mapping, and advanced search for instant virtual machine discovery.
Drive spin-down has been added as a standard feature of the CX4 series arrays, further extending the CX4's ability to reduce storage power and cooling requirements, which began with features such as low-power SATA and flash drives, virtual provisioning and variable-speed fans. Drive spin-down lets customers easily set policies for drives to power down when not in use.
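The policy mechanics are simple to picture. Here is a hypothetical sketch of a spin-down policy loop; it is not Dell/EMC's implementation, just the general idea of powering down drives after a configurable idle period:

```python
# Hypothetical drive spin-down policy: drives idle past a threshold are powered down.
import time

SPIN_DOWN_AFTER = 30 * 60                # 30 minutes of inactivity (assumed)

def apply_spin_down(drives, now=None):
    now = now if now is not None else time.time()
    for drive in drives:
        idle_seconds = now - drive["last_io"]
        if drive["state"] == "active" and idle_seconds > SPIN_DOWN_AFTER:
            drive["state"] = "spun_down"     # a real array issues a drive command here
    return drives

drives = [
    {"id": "archive-0", "last_io": time.time() - 3600, "state": "active"},
    {"id": "oltp-0",    "last_io": time.time() - 5,    "state": "active"},
]
for d in apply_spin_down(drives):
    print(d["id"], d["state"])           # archive-0 spun_down, oltp-0 active
```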
New Storage Consolidation Consulting Services
In order to help customers deal with growing data requirements, reduce complexity and prevent storage sprawl, Dell/EMC ProConsult services provide customers with action-oriented plans with specific, predictable and measurable outcomes to help optimize their existing storage investments. With Dell ProConsult’s Workshop, Assess, Deploy and Implement (WADI) model, customers can understand the potential of consolidation, select the right technological components and integrate them into the data center with the appropriate level of performance and data protection.
The new ProConsult Storage Consolidation Solutions include:
Dell/EMC SAN Solution Design Services: To help ensure performance, service level and TCO goals are met, Dell helps customers design and build an optimized storage architecture that makes the most of current and future investments. This includes optional features for back-up protection:
Dell/EMC Local Data Protection Design: Reducing downtime is a key concern for many customers today. By utilizing the EMC SAN software snapshot capability, Dell can offer services for data protection that map to specific service level requirements and help customers employ a sound replication strategy.
Dell/EMC Remote Data Protection Design: Customers often need to protect data across one or more SANs. This solution helps ensure that customers can easily protect data no matter where it is located.
Dell/EMC Back-Up Integration Design: By helping customers integrate back-up software, Dell can assist customers in selecting the right strategy and solutions for their environment.
In addition to ProConsult, Dell offers comprehensive, customizable support via ProSupport solutions, while Dell’s robust, modular ProManage solutions help ease the configuration, deployment, operation, management and protection of IT environments.
New Server Blade Networking Options
Dell’s High Port Count Gigabit Ethernet Solution, featuring the PowerConnect M6348 48-port Gigabit Ethernet Blade Switch, offers the highest port count of any single-switch blade I/O module among top-tier x86 server vendors. Combined with the current four-port integrated network interface cards (NICs) available on PowerEdge M710, M805 and M905 full-height blade servers and new quad-port network interface mezzanine cards from Broadcom and Intel, customers can get more lanes of Gigabit Ethernet, ideal for today’s virtual server environments. Dell expects to begin shipping next month. New quad-port blade mezzanine cards include:
Broadcom NetExtreme II™ 5709 Quad Port Mezzanine
Intel Gigabit ET Quad Port Ethernet Mezzanine Card
The Dell 10GbE Pass-Through Blade I/O Module directly connects blade servers, using Dell dual-port 10Gb blade mezzanine NICs, to external 10Gb resources such as servers or switches. It provides scalable connectivity for a few to many blade servers and uses Broadcom’s dual-port 10Gb mezzanine card, the new Broadcom NetExtreme II™ 57711 Dual Port 10GbE Mezzanine Card, or future converged network adapters. Dell plans to make this available in the fourth quarter of the year.
“Ethernet is increasingly being chosen as the networking technology for storage as customers look to consolidate and virtualize their data centers. With a 10 gigabit option and its inherent advantages in virtualized environments, Ethernet’s case gets even stronger as the most simple and capable networking fabric.” -- Praveen Asthana, vice president, Dell Enterprise storage and networking.
Burton Group Analysts Promote Virtualization as the Keystone to Cloud Computing at VMworld 2009
Burton Group, a research and consulting firm focused on in-depth analysis of enterprise information technologies, will join virtualization enthusiasts at VMworld 2009 to present vendor hypervisor evaluation scorecards, cloud computing models, and opportunities with virtualization.
The growing reality of “IT externalization” and the transition to a dynamic, service-oriented data center are causing IT architects to reevaluate capital expenditures, traditional IT operations, support, and management methods. Virtualization is a logical piece of many organizations’ externalization and dynamic data center visions. Burton Group’s presentations at VMworld are geared to assisting IT organizations through sticking points with virtualization technologies, licensing, and cloud computing implementation.
Burton Group VP and service director Richard Jones said, “This year Burton Group’s presentations include a strong Cloud computing theme promoting virtualization as the foundation for cloud delivery.” Burton Group’s Cloud computing overview report has quickly become the de facto standard for cloud definition and terminology. But successful clouds still require the right building blocks, including virtualization hypervisors. However, finding the right virtualization capabilities requires looking beyond the vendor data sheets.
Analysts Chris Wolf and Richard Jones will present an updated Burton Group Catalyst Conference presentation highlighting detailed requirements for production-worthy hypervisors and rankings for VMware vSphere 4, Citrix XenServer 5.5, and Microsoft Hyper-V R2 on Tuesday at 4:00 p.m. in Esplanade 305. Come to the Burton Group booth to see how the major hypervisor products rank against Burton Group’s enterprise production-class hypervisor evaluation criteria.
Follow Burton Group’s Data Center Strategies analysts’ thoughts and conference insights on the latest industry announcements via Twitter and the Burton Group Data Center Blog. Chris Wolf has a complimentary VMworld preview webcast and blog post.
Gear6 Memcached Distribution Now Available on VMware
Gear6 (http://www.gear6.com/) today announced that Gear6 Web Cache, the company's popular distribution for Memcached, is available as a Virtual Appliance download. Customers can register for and download the Gear6 Web Cache Virtual Appliance and easily experience the high availability, scalability and manageability of Gear6 Web Cache.
Released in April and deployed by leading top-50 high-traffic web sites, Gear6 Web Cache helps fast-growing media, social networking, content aggregation, and other dynamic websites cost-effectively scale their Memcached infrastructure. The results are improved site performance, increased visibility into application and infrastructure tiers, and more than a 60% reduction in typical capital and operational expenses.
"By making the Gear6 Web Cache available as a Virtual Appliance, customers can rapidly experience our enhanced Memcached distribution within their environment," said Joaquin Ruiz, Executive Vice President of Products for Gear6. "The fact is, today's dynamic websites absolutely require a robust distributed caching tier to provide millisecond responsiveness or they face losing users and customers. Memcached delivers this responsiveness, and only the Gear6 distribution for Memcached provides the scalable high availability services that sites architected around Memcached need."
Gear6 Web Cache Virtual Appliance is available immediately for operations teams to test out for themselves. To register for a virtual Gear6 Web Cache experience, please visit http://www.gear6.com/vmapplp1. Web Cache Virtual Appliance supports VMware Player version 2.5. In addition, Gear6 has developed enhanced open source tools for managing, monitoring, and reporting on Memcached services. Download them at http://dev.gear6.com.
To get started on Memcached, Gear6 offers an educational series archived at http://www.gear6.com/learn-more/webinar-archive. For the latest from Gear6, follow us on Twitter at http://www.twitter.com/g6memcached.
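For readers new to Memcached, the caching tier described above usually follows a simple cache-aside pattern. Here is a minimal sketch using the widely available python-memcached client; the server address, key scheme and load_from_database() helper are placeholders, and a Gear6 deployment would present the same Memcached protocol endpoint:

```python
# Cache-aside lookup against a Memcached endpoint (illustrative sketch).
import memcache

mc = memcache.Client(["127.0.0.1:11211"])      # placeholder server address

def load_from_database(user_id):               # stand-in for the slow backing store
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = f"user:{user_id}"
    user = mc.get(key)                         # fast path: millisecond-level on a hit
    if user is None:                           # miss: fetch, then populate the cache
        user = load_from_database(user_id)
        mc.set(key, user, time=300)            # expire after five minutes
    return user

print(get_user(42))
```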
Parallels Desktop Switch to Mac Edition Offers Lifeline to Frustrated PC Users
Parallels, a worldwide leader in virtualization and automation software, today unveiled a complete solution designed to simplify the process of "switching" from a PC to Mac. Parallels Desktop Switch to Mac Edition empowers users to effortlessly make the move to Mac without the risk of losing familiar and important data and applications on their Microsoft Windows-based PCs. The product combines a set of easy-to-use tools and interactive tutorials with the industry leading Parallels Desktop 4.0 for Mac to help "Switchers" understand how to operate Mac OS X, transfer all PC data and applications, and seamlessly run their Windows applications on their new Macs.
"The growth in switching is partially due to the ease-of-use and cool capabilities of the Mac," said Serguei Beloussov, CEO of Parallels. "However, users don’t want to lose the data they have accumulated and the applications they are already familiar with. Building on our proven track record of Mac innovation, we have addressed this concern and made learning the new operating system even simpler through interactive on-demand tutorials. These are combined with intelligent moving tools and our industry-leading Parallels Desktop for Mac, which offers the greatest performance and stability for running Windows seamlessly on Mac."
Switching from PC to Mac is on the rise: analyst reports on operating system market share show that Mac OS X market growth comes at the expense of Windows’ market share. While the overall PC industry saw declines of 3% for the quarter ending in June 2009, Apple sales were up 4% year over year¹. According to Apple’s Q309 report, half of the Macs sold were to customers who had never owned a Mac before².
"Parallels is an industry pioneer in the development and delivery of Mac virtualization solutions," according to Laura DiDio, principal at Information Technology Intelligence Corp., a Boston-based research firm. ITIC’s research indicates that three out of 10 corporations (30%) use Mac hardware and software in conjunction with Windows to dual boot and virtualize their desktop and server environments. "Now more than ever, businesses require products and tools like Parallels Desktop Switch to Mac Edition that will assist IT managers and support integration and interoperability among heterogeneous environments," DiDio added.
Parallels Desktop Switch to Mac Edition is an industry-first solution that addresses the challenges facing prospective PC-to-Mac Switchers:
Learning Mac OS X — The Switch to Mac learning tools are designed to specifically address any questions or concerns associated with the transition from Windows to Mac. More than two hours of interactive video tutorials help users learn the new interface and functionality of the Mac platform step-by-step, starting with the Mac equivalent of tasks performed on Windows. A quick reference card identifies the most common Windows and Mac command/function differences and puts the correct keystrokes at users’ fingertips.
Making the Move — Also important to new Mac users is getting files and media from their old computer to their Mac. Parallels recognizes that many people need help with this process, and developed a "plug and click" method that moves the entire PC (licensed operating system, applications, files and data) to the new Mac. This includes the Parallels High Speed USB Transfer Cable that connects the two machines and the Enhanced Parallels Transporter: simple, wizard-driven software that walks the user through the move in a few easy clicks. The seamless Mac user experience now starts on the PC side.
Running Windows and Mac side-by-side — Parallels Desktop Switch to Mac Edition includes Parallels Desktop for Mac 4.0, the number one Mac system utility, currently used by more than two million people to run Windows side-by-side with Mac applications. This award-winning virtualization software provides a fully integrated seamless experience, offering users the greatest stability and performance available for running Windows on a Mac, as recognized in third-party industry benchmarks³. Parallels Desktop 4.0 for Mac incorporates a range of security, backup and power saving features to give Mac users the easiest way to run Windows on a Mac.
"For years I have worked with switchers coming into Apple stores with questions about how to use their new Mac," says Saied Ghaffari, Switch to Mac Advocate. "Parallels Desktop Switch to Mac Edition thoroughly addresses the concerns switchers have and the product is designed to make moving to Mac as fast and simple as possible, regardless of the level of technical knowledge of the switcher. Innovative learning features like Click to Learn, Watch Saied, and You Try shorten the time switchers need to become comfortable with their Mac from two weeks down to about two hours. It’s like a friend teaching you the Mac at your own pace."
Availability and Pricing
Parallels Desktop 4.0 Switch to Mac Edition is available today at Apple stores, at Apple.com and through other preferred retail partners, in English, German and French. The suggested retail price (SRP) of the product is $99.99.
Parallels offers a range of free support options for customers of Parallels Desktop Switch to Mac Edition, from knowledge bases and forums to phone and email support. For more information, visit www.parallels.com/support/.
In depth information, video demonstrations and screenshots of Parallels Desktop 4.0 Switch to Mac Edition are available at www.parallels.com/products/desktop/stm.
PrimaCloud Breaks Cloud Computing Performance Barriers Using Xsigo Virtual I/O
PrimaCloud, a pioneering managed cloud and virtual IT services provider, announced today that it has deployed the Xsigo I/O Director as the foundation of its data center interconnect strategy. The Xsigo I/O Director enables PrimaCloud to break the barriers of I/O throughput seen in existing cloud computing offerings, allowing end customers to experience application performance levels that would previously have been achievable only in purpose-built private datacenters. Additionally, Xsigo's virtual I/O infrastructure allows PrimaCloud to automatically provision cost-effective virtual private datacenters for its customers within minutes.
"Before we found Xsigo, we searched for a long time to find a solution that would allow us to provision high performance virtual private datacenters for our customers," said David Durkee, CEO of PrimaCloud. "We looked at 10GigE and physical InfiniBand interconnects and found they provided insufficient manageability and throughput to guarantee our customers the fast and reliable service we needed to compete with existing purpose-built data centers. With Xsigo's high-speed 20Gb/sec connections to our servers, we can finally utilize the full I/O bandwidth capacity of our VMware ESX clients."
With three years of experience providing cloud computing under the ENKI name, PrimaCloud management was seeing an increasing number of enterprise clients running database and transaction-intensive applications -- such as Oracle -- which required multi-gigabit connections to the vLAN and NAS storage to avoid I/O contention. In these applications the bandwidth required per virtual machine exceeded 3Gb/sec, which meant that a single cloud server running ten virtual machines required over 30Gb/sec total I/O bandwidth. Only Xsigo virtual I/O, with dual redundant 20Gb/sec I/O connections per server, provided the required performance. The Xsigo I/O Director's low-latency bandwidth also takes maximum advantage of PrimaCloud's SSD-cached NAS systems to deliver outstanding application throughput.
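The arithmetic in that paragraph is worth checking against the link capacity. A small sketch, using the figures from the text plus an assumed active-active failover policy:

```python
# Does per-server VM I/O demand fit the dual 20 Gb/s links? (figures from the text)
vms = 10
gbps_per_vm = 3.0
links, gbps_per_link = 2, 20.0

demand = vms * gbps_per_vm                      # 30 Gb/s aggregate
normal_capacity = links * gbps_per_link         # 40 Gb/s with both links active
degraded_capacity = gbps_per_link               # 20 Gb/s if one link fails

print(f"demand {demand} Gb/s vs normal capacity {normal_capacity} Gb/s")
print("full demand survives a single-link failure:", demand <= degraded_capacity)
```

The demand fits comfortably when both links are active, though the sketch also shows why redundancy planning matters: a single surviving 20 Gb/s link would not carry the full 30 Gb/s load.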
PrimaCloud's automatically managed, hypervisor-agnostic cloud architecture requires a high level of automation to create and manage virtual private datacenters (VPDCs). The Xsigo I/O Director is able to automatically provision and manage virtual I/O and VLANs associated with virtual instances running VMware ESX, Citrix XenServer, Microsoft Hyper-V, and 3Tera's AppLogic, under the control of PrimaCloud's implementation of Enigmatec's EMS, a cross-platform, policy-based automation engine. Using EMS to configure the I/O Director eliminates manual labor in managing VPDCs and permits automatic scaling of VPDCs in response to changes in load, resulting in significant end-customer cost savings.
"Our customers expect to take a data center Visio diagram and make it real within minutes," added Durkee. "We could not have done this without the rapid scalability and management flexibility that Xsigo enables. Virtual I/O allowed us to realize the 'virtual' part of the on-demand virtual private data center as well as provision only the resources the customer requests."
Xsigo virtual I/O is a critical element of the reference datacenter architecture PrimaCloud uses to deliver on the promise of cloud computing: cost-effective, on-demand computing delivered on a pay-as-you-go basis while meeting enterprise requirements for performance and uptime. The simplicity of deploying Xsigo Virtual I/O has enabled PrimaCloud's reference architecture to be deployed in any one of its 65 datacenters worldwide for public or hosted private cloud computing, as well as at customer sites.
Fortisphere hires Siki Giunta as CEO
Fortisphere Inc. has replaced Michael Harper as CEO with the hiring of Siki Giunta. According to the Baltimore Business Journal, Giunta joined the virtualization company a month ago.
The Journal says she was hired for her background at Managed Objects, a Northern Virginia firm bought by Novell last year. Giunta built Managed Objects from a pre-revenue startup 10 years ago to its $50 million sale. Prior to that, she was senior vice president of marketing for OS/390 solutions at Computer Associates. Giunta also sits on the boards of Layer 7 Technologies and BEZ Systems.
Virtual Computer Challenges Desktop IT Administrators and VMworld 2009 Attendees to 'Get Smart About Desktop Virtualization'
Virtual Computer Inc., the company redefining PC lifecycle management through virtualization, today announced its ‘Get Smart About Desktop Virtualization’ initiative, encouraging IT professionals to take a deeper look at the economics of desktop virtualization and in the process become eligible to win a 2009 smart car. The program, to kick off August 31 at the VMworld 2009 show in San Francisco, will continue via webinars through the end of September. A cornerstone of the initiative is Virtual Computer’s new Total Cost of Ownership (TCO) calculator, designed to help IT decision-makers move beyond the hype and look at the actual costs of the various proposed solutions for desktop management.
“Desktop virtualization based on a distributed computing model will dramatically reduce the cost of deploying and managing PCs,” said Dan McCall, president and CEO, Virtual Computer. “Until now, the only option for desktop virtualization was a server-based approach that suffered from lack of mobility and a poor user experience. We look forward to introducing IT teams to a smarter approach to desktop virtualization backed by hard cost savings numbers and a deployment model that end-users will embrace.”
The Road to Smarter Computing Begins at VMworld 2009
Virtual Computer will coordinate a series of activities at VMworld 2009 to demonstrate how NxTop, its award-winning PC management platform, applies virtualization technology to make it as easy to manage thousands of PCs as it is to manage one. VMworld attendees can visit Virtual Computer at booth #1940 to see a demo of NxTop and pick up their ‘Get Smart About Desktop Virtualization’ report card. The report card will lead show attendees through a series of activities where they can interact with Virtual Computer and its partners to amass points towards a grand prize giveaway of a brand new smart fortwo automobile.
Exploring the New Economics of PC Management
As part of the ‘Get Smart About Desktop Virtualization’ program, Virtual Computer has launched a new online community site featuring an interactive TCO calculator that compares three distinct approaches to desktop management (a toy version of such a comparison follows the list):
Traditional agent-based PC management tools
Server-centric virtual desktop infrastructure (VDI) approaches
NxTop’s approach of centralized virtual desktop management with distributed execution
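As promised above, here is a toy version of a three-way TCO comparison. It is emphatically not Virtual Computer's model: the cost function and every figure are invented placeholders, included only to show the structure of such a calculation:

```python
# Toy three-way desktop TCO comparison (all figures are made-up placeholders).
def tco(desktops, capex_per_desktop, server_capex, desktops_per_admin, admin_cost):
    admins = desktops / desktops_per_admin
    return desktops * capex_per_desktop + server_capex + admins * admin_cost

scenarios = {
    "agent-based PC management": tco(1000, 900,      0,  75, 90000),
    "server-centric VDI":        tco(1000, 300, 600000, 150, 90000),
    "distributed-execution VDI": tco(1000, 900,  80000, 250, 90000),
}
for name, cost in sorted(scenarios.items(), key=lambda kv: kv[1]):
    print(f"{name:28s} ${cost:,.0f}")
```

With these invented inputs the distributed-execution model wins on the strength of its admin leverage; a real calculator would, of course, let you substitute your own numbers.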
The cost-saving benefits of NxTop, as well as the Virtual Computer TCO calculator, will also be demonstrated on a series of webinars titled “The New Economics of PC Management.”
Sharing online feedback on the TCO tool, as well as attending one of the webinars, will increase VMworld attendees’ chances of winning the smart car grand prize and also provide non-attendees with a chance to win.
Further details about the ‘Get Smart About Desktop Virtualization’ program will be revealed on the Virtual Computer blog throughout the VMworld show. To follow the blog, try the TCO tool, or to register for one of the upcoming webinars, visit Virtual Computer’s new online community at: http://orbit.virtualcomputer.com
New VKernel Software Optimizes Virtual Infrastructures, Enabling Organizations to Realize Promised ROI of Virtualization
As IT organizations look to implement the second wave of server virtualization programs, many are finding they have still not achieved the full cost savings they had anticipated. VKernel Corporation, a provider of powerful, easy-to-use, and affordable virtualization management and optimization software, today announced the new VKernel Optimization Pack to help organizations achieve maximum ROI from their virtualization projects.
VKernel's new Optimization Pack includes three powerful applets, Wastefinder, Rightsizer and Inventory, that help users improve the efficiency of their virtual infrastructures. The new applets allow organizations to run more virtual machines (VMs) with the same hardware, maximize the utilization of infrastructure resources, reclaim terabytes of wasted storage, reduce VM sprawl and assure optimal VM performance.
"Based on our research, the average organization runs five to seven VMs per processor while the real ROI from virtualization occurs at densities of 10 to 12 VMs," said Alex Bakman, founder and CEO of VKernel. "Our products help users achieve the optimal balance of resource utilization and VM performance so they can safely achieve these ROI goals. Beta users of our Wastefinder applet, for example, reclaimed between 15 and 40 percent of expensive wasted storage resources. That's tens of thousands of dollars in savings for an average $5,000 investment."
Optimization Pack Details
A successful virtualization project balances proper resource allocation and utilization with VM performance and cost per VM. With the Optimization Pack, VKernel enables organizations to reach these goals rapidly with an affordable, simple-to-use toolset. Delivered as a virtual appliance, the pack deploys instantly, so users can immediately begin addressing their critical needs. The VKernel Optimization Pack includes three powerful management applets (a toy sketch of the waste-finding idea follows the list):
Wastefinder - quickly finds where resource capacity (CPU, memory, and storage) is being wasted in the virtual infrastructure. By identifying zombie VMs, expired snapshots, and other wasteful consumers, users can reclaim expensive capacity to optimize virtual environments and achieve a better, faster ROI.
VM Rightsizer - a simple tool for tuning VMs with the right amount of resources (CPU, memory, and storage) to drive maximum VM densities without impacting performance. Rightsizer is unique in its ability to find improperly allocated resources, make recommendations, and automatically implement the changes to optimally configure VMs.
Inventory - automatically collects important information about all VMs in the virtual infrastructure and creates a detailed inventory report showing VM name, creator, creation date, resource allocations, and much more. The inventory is continually updated to match the dynamic environment and is searchable by different criteria to quickly find specific information.
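As flagged above, the zombie-hunting logic behind a tool like Wastefinder can be pictured in a few lines. This is a hypothetical sketch, not VKernel's implementation; the inventory format and the 90-day threshold are invented:

```python
# Toy zombie-VM finder: powered-off VMs idle past a threshold, still holding storage.
from datetime import datetime, timedelta

ZOMBIE_AFTER = timedelta(days=90)            # assumed idleness threshold

inventory = [
    {"name": "test-build-07", "powered_on": False,
     "last_seen_on": datetime(2009, 4, 1),  "storage_gb": 120},
    {"name": "erp-prod-01",   "powered_on": True,
     "last_seen_on": datetime(2009, 8, 26), "storage_gb": 500},
]

def find_zombies(vms, now):
    return [vm for vm in vms
            if not vm["powered_on"] and now - vm["last_seen_on"] > ZOMBIE_AFTER]

for vm in find_zombies(inventory, now=datetime(2009, 8, 27)):
    print(f"{vm['name']}: candidate to reclaim {vm['storage_gb']} GB")
```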
"EMA is finding that a majority of enterprises have virtualized about 25 percent of their environment, and have not achieved their full ROI potential," said Andi Mann, VP of Research, Systems & Storage Management at EMA. "Our research shows that by using capacity management to optimize the virtual infrastructure customers can achieve greater VM density, and higher resource utilization while achieving SLAs. This is why EMA believes that virtual infrastructure optimization is an important discipline to ensuring achievement of key virtualization goals."
VKernel currently supports VMware ESX and vSphere and plans to support Microsoft Hyper-V (later this year) as well as Citrix XenServer. The company believes that a heterogeneous capacity management and optimization offering will be increasingly important as the enterprise virtual infrastructure becomes a mix of hypervisor platforms.
Pricing and Availability
The VKernel Optimization Pack is currently available in a bundle with Capacity Analyzer 4.1 for $399 per CPU socket, including the first year of maintenance and support. Subscription pricing is also offered at $179 annually per CPU socket, including maintenance and support. A fully functional 14-day free trial of Capacity Analyzer and the Optimization Pack is available for download at www.vkernel.com. For additional questions, contact VKernel by phone at 1.866.370.2733 (603-610-4300) or email sales@vkernel.com.
VKernel will be exhibiting in Booth #1832 at VMworld taking place Aug. 31 - Sept. 3 in the Moscone Center, San Francisco.
Powerful SysTrack VDI Assessment & Planning Software Tool Suite on Display at VMworld 2009
Lakeside Software announces it will be showing its SysTrack® Virtual Machine Planner (VMP), the industry’s most powerful tool suite for helping organizations migrate to virtual desktop infrastructure (VDI), at VMworld 2009.
“SysTrack VMP shows you a comprehensive, real-world picture of the activity taking place across your current desktop computing environment,” states Mike Schumacher, Chief Technical Officer at Lakeside Software. “With our tool, you will not only have the most accurate assessment information to plan and implement a successful virtual desktop platform, but it will be packaged in a report format that provides easy-to-understand answers to the various stakeholders involved in the process.”
SysTrack VMP identifies good and bad candidates for VDI based on many criteria, including resource demands (CPU, memory, disk, network), graphics use, mobility, latency, private devices (USB, printers, etc.) and other considerations. It helps you plan for capacity by dealing with mixed target hardware -- automatically benchmarking the current environment and projecting loads onto target platforms.
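The screening step is easy to picture. Below is a simplified sketch of scoring desktops against criteria like those above; the thresholds and fields are invented for illustration and are not SysTrack VMP's actual model:

```python
# Hypothetical VDI candidate screening against a few of the criteria above.
def vdi_fit(desktop):
    reasons = []
    if desktop["avg_cpu_pct"] > 60:
        reasons.append("heavy CPU use")
    if desktop["gpu_intensive"]:
        reasons.append("graphics-heavy workload")
    if desktop["mobile_user"]:
        reasons.append("needs offline mobility")
    if desktop["usb_devices"] > 2:
        reasons.append("many private devices")
    return "poor candidate: " + ", ".join(reasons) if reasons else "good candidate"

print(vdi_fit({"avg_cpu_pct": 15, "gpu_intensive": False,
               "mobile_user": False, "usb_devices": 0}))   # good candidate
print(vdi_fit({"avg_cpu_pct": 80, "gpu_intensive": True,
               "mobile_user": True,  "usb_devices": 1}))   # poor candidate
```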
The virtualization modeling and predictive analysis feature constructs models based on measured probability, statistical analysis, overhead projection and calculation, and real user and application behaviors. It automatically accounts for hypervisor type, desktop model (pooled, assigned, etc.), security model, user-desired model confidence and VM state management.
For migration planning, SysTrack VMP automatically constructs a POC plan that documents current environmental concerns, suggested clients, provisioning plan and projected POC results -- allowing validation of POC results through objective criteria to determine POC success/failure.
“Without question the hardest information for organizations to obtain in the VDI planning process is accurate user and application behavior,” continued Schumacher. “Our tool analyzes real usage of applications by users and then automatically constructs pool layouts to minimize application costs. It delivers everything necessary to optimize pool design.”
SysTrack VMP also provides storage throughput and space planning using a detailed probability model, power planning that projects power demand and potential energy savings, and firewall and latency analysis.
“VDI is a game-changer that significantly drives down the cost of ownership,” concluded Schumacher. “Our goal is to make it easier for more people to pursue VDI and get from where they are today to a virtual desktop environment that optimizes savings opportunities available in their unique computing environments.”
Sunday, August 23, 2009
New IDC Viewpoint Research "Removing Storage-Related Barriers to Server and Desktop Virtualization" - Now Available for Download at DataCore Software
DataCore Software, a leading provider of storage virtualization, business continuity and disaster recovery software solutions, today announced that a new IDC Viewpoint research paper titled “Removing Storage-Related Barriers to Server and Desktop Virtualization” is now available for free download.
IDC Viewpoint Report Availability
The IDC Viewpoint report “Removing Storage-Related Barriers to Server and Desktop Virtualization” is available now and may be downloaded by going to: http://www.datacore.com/forms/form_request.asp?id=IDCview
The IDC Viewpoint discusses “an alternative to costly investments in high-end storage systems. It proposes using storage virtualization software to create scalable, robust SANs using equipment already in place. This hardware-independent approach complements server and desktop virtualization without compromising availability, speed, or project schedules…Just as importantly, it can significantly lower capital and operational expenditure for physical and virtual environments alike, making such transitional initiatives viable.” *
Extending Virtualization to the SAN
“In addition to server virtualization, industry analysts are now grasping the real benefits of storage virtualization,” states George Teixeira, president and CEO, DataCore Software. “Software-based storage virtualization is important because it helps IT organizations get more out of their existing hardware investments – and it does so by enabling IT organizations to turn existing storage arrays from multiple vendors into a shared pool of disk storage. Creating virtual storage pools out of existing storage investments, which easily marry with virtual servers, represents the real value that storage virtualization software delivers.”
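For readers new to the concept, the pooling idea Teixeira describes can be sketched in a few lines of Python. This is only a toy model of the abstraction - carving virtual volumes out of a pool built from dissimilar arrays - and the array names, sizes and first-fit placement are all invented; DataCore's actual software layers mirroring, caching and much more on top of this.

```python
# Toy model of software-based storage pooling: heterogeneous arrays from
# different vendors are presented as one logical pool, and virtual
# volumes are carved from whatever capacity happens to be free.
class StoragePool:
    def __init__(self):
        self.arrays = {}   # array name -> free capacity (GB)
        self.volumes = {}  # volume name -> (backing array, size in GB)

    def add_array(self, name: str, capacity_gb: int) -> None:
        """Pool an existing array, regardless of its vendor."""
        self.arrays[name] = capacity_gb

    def create_volume(self, name: str, size_gb: int) -> str:
        """First-fit placement of a virtual volume; returns the backing array."""
        for array, free_gb in self.arrays.items():
            if free_gb >= size_gb:
                self.arrays[array] -= size_gb
                self.volumes[name] = (array, size_gb)
                return array
        raise RuntimeError("pool exhausted")

pool = StoragePool()
pool.add_array("vendor-a-array", 2000)  # invented names and sizes
pool.add_array("vendor-b-array", 1000)
print(pool.create_volume("vm-datastore-01", 1500))  # placed on vendor-a-array
```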
The research report covers the following topics:
What makes server, desktop, and storage virtualization attractive?
What are the challenges to implementing virtualization?
Extending Virtualization to the SAN
Key Considerations When Choosing a Storage Virtualization Software Solution
Key IDC recommendations included in the report:
Choose storage virtualization software that is not tied to any one hardware vendor so that you will have the most latitude when selecting future devices.
Ensure that the storage virtualization software you pick for virtual systems also addresses your physical servers and competing server virtualization platforms. Otherwise, you may end up fragmenting the IT environment that you are eager to consolidate.
Adds Teixeira, “It is nice to see that after all the rush to embrace server virtualization there is now an increasing interest in storage virtualization. Most storage hardware vendors require customers to buy new storage arrays that support storage virtualization. But in these difficult economic times, it's hard to make an argument for capital expenditures that are so dear.”
* Source: “Removing Storage-Related Barriers to Server and Desktop Virtualization,” an IDC Viewpoint research document published as part of an IDC continuous intelligence service. Author: Carla Arend, European Storage Software and Services, IDC EMEA. Publication date: July 2009.
Free 30-day trial – Try DataCore Today!
For a free 30-day test drive, please visit: http://www.datacore.com/trialsoftware.
GlassHouse Technologies and Splunk Outline Steps to Secure Virtual Environments
GlassHouse Technologies, the leading independent IT infrastructure consulting and services firm, today announced the availability of a whitepaper that provides insight on securing virtual environments. Co-authored by Splunk, the foremost IT Search company, the paper, entitled “Does Virtualization Change Your Approach to Enterprise Security?”, focuses on how enterprises can mitigate security risks in their virtual settings in an efficient and cost-effective manner. Consultants from GlassHouse Technologies and representatives from Splunk will also be available to discuss these findings and other emerging virtualization trends at the VMworld conference on August 31 – September 3.
While organizations have rushed to implement virtualization and achieve its promised benefits, many have overlooked the strategy necessary to properly secure this environment. To help enterprises combat growing concerns over virtual security, this research focuses on best practices that should be implemented to ensure virtual components meet all organizational security protocols without hindering the performance of the infrastructure.
Specifically, the whitepaper explores the following components:
Aligning security strategy with business risk tolerance
Securing virtual machines in the same way as physical machines
Security monitoring of virtual environments, including the administrative virtualization management interface and access to virtual machine files
These specific strategies will be discussed in greater detail by Splunk and GlassHouse consultants at VMworld. This year’s conference will bring together attendees from across the globe to discuss trends and challenges in the virtualization space. Make sure to look for the GlassHouse “Conversation Cloud” at the show to hear more about virtual security as well as the consultants’ views on emerging cloud trends in storage, security and data center management. GlassHouse will also host an event at the show bringing together customers, partners and industry experts to continue VMworld discussions.
VMware vSphere Training Video Now Available from TrainSignal
Great news! For those of you looking at, moving to or already running VMware's latest virtualization platform, vSphere 4.0, TrainSignal has announced the launch of its latest virtualization training video, VMware vSphere Training.
Like other virtualization training series from TrainSignal, this one was created and presented by David Davis. This particular series contains 17 hours of video training delivered in multiple formats - AVI, WMV, iPod/iPhone and MP3 - so there should be a format to please almost everyone. The training starts with the planning and implementation of vSphere 4 and moves all the way into advanced features like Fault Tolerance (FT), Data Recovery and the vNetwork Distributed Switch (vDS).
I've been at this virtualization game now for more than 10 years. And I must say, few people in this industry can create and pull off this type of training video as well as David Davis and TrainSignal. These TrainSignal videos are put together extremely well - top notch in my mind. And David Davis has a unique way of explaining his topics in a single video series that reaches across a wide audience: beginner, novice and advanced users alike. No matter what path you find yourself on in your virtualization journey, I believe there is something for everyone in these videos, and I highly recommend them.
You can find out more information and purchase the new TrainSignal VMware vSphere Training video now.
TrainSignal will also be at VMworld this year.
Verizon Business Helps Customers Unlock the Power of Virtualization
With virtualization in high demand by enterprises looking to boost efficiency and flexibility while controlling costs, Verizon Business is offering a series of tips for effectively planning and organizing the often-complex task of implementing virtualization technology.
Virtualization uses technology to remove the physical barriers associated with servers and applications, enabling the consolidation or replacement of servers, storage, network and other physical devices. As a result, companies can better use computing capacity and drive more value from IT resources as well as consolidate data centers and lower energy consumption.
According to analysts at IDC, virtualization is one of the most sought-after IT technologies today, with services aimed at delivering virtualization projected to grow to nearly $16 billion by 2013, up from $8.7 billion in 2008.
Zeus Highlights Results of VMware vSphere 4 Test
Zeus Technology, the only software-based application traffic management company, today announced the results of a performance test on VMware vSphere™ 4.
Compared to the performance of Zeus Traffic Manager software running directly on standard hardware, the Zeus Virtual Appliance delivered outstanding results. The Zeus software on VMware vSphere™ 4 outperformed the native hardware by 15-20% in some tests, while achieving at least 85-90% of native performance in every test case.
The tests considered network-limited activities (requests-per-second, bandwidth and caching performance) and CPU-limited activities (Secure Socket Layer performance). Compared to VMware ESX 3.5, VMware vSphere™ 4 was on average 25% faster in all network tests.
David Day, CTO, Zeus Technology, comments: “We have recently undertaken some rigorous testing on VMware vSphere™ 4 and have achieved outstanding results. These tests demonstrate that the Zeus Virtual Appliance software on VMware vSphere™ 4 can deliver much higher performance than is required by the vast majority of websites, even during peak periods. The analysis provides further evidence that using Zeus in a virtualized environment to handle load balancing and application traffic management is achievable without the need to compromise on performance.”
“VMware provides the ideal infrastructure for customers to efficiently run their business-critical applications and for technology partners like Zeus, to deploy complementary solutions for application traffic management in the form of virtual appliances,” said Shekar Ayyar, vice president, infrastructure alliances, VMware. “This new benchmark from Zeus further validates that applications can run with superior performance in VMware Virtualized environments.”
The performance figures were obtained using Zeus software running on a Dell PowerEdge 2950 server equipped with an Intel® quad-core Xeon® E5450 processor.
Further information, including the full performance figures the Zeus Virtual Appliance software achieved on VMware vSphere™ 4, is available from Zeus Technology.
New Distributed Desktop Virtualization to Transform Enterprise Desktop Management
Wanova, Inc. today announced Distributed Desktop Virtualization (DDV) - an entirely new architecture that transforms how companies manage, support and protect desktops and laptops, particularly remote and mobile endpoints. The Wanova DDV solution centralizes the entire desktop contents in the data center for management and protection purposes while distributing the execution of desktop workloads to the endpoints for superior user experience. In related news, the company has emerged from stealth mode and announced $13 million in A-round funding.
“Despite its promises, adoption of desktop virtualization has been limited, largely due to the constraints of today’s point solutions. The problem can’t be solved solely by targeting the client, the server or even the WAN,” said Issy Ben-Shaul, CTO, Wanova. “Our virtualization architecture offers a new approach that integrates all three components – IT managers get powerful centralized management and control, the network is utilized efficiently, and remote workers get the performance they expect."
Because of this unique architecture, Wanova has demonstrated the ability to significantly reduce IT costs and improve support service level agreements. In one field test, Wanova was able to re-image an entire desktop over the WAN in just seven minutes, and conduct a complete PC restore over the WAN with the end-user up and running in 10 minutes. Typical IT support processes might take hours or even days to diagnose and repair the same computer.
"We’ve been seeing a gradual shift towards worker mobility evidenced by the notebook sales beginning to surpass those of desktop PCs. At the same time that workers are becoming increasingly mobile and distributed, IT is being tasked with reducing costs and increasing control and compliance. Wanova’s new architecture is a holistic solution that addresses these challenges and can generate serious attention in distributed enterprises,” said Michael Rose, Research Analyst at IDC.
How Wanova's Distributed Desktop Virtualization Works
Wanova’s Distributed Desktop Virtualization provides a Centralized Virtual Desktop (CVD) in the data center. At the endpoint, Wanova’s DeskCache™ client executes a complete, local desktop instance, while Distributed Desktop Optimization (DDO) enables real-time, bi-directional transfers between the CVD and the DeskCache. Wanova also provides single image management, including mass provisioning and continuous enforcement of the base image on all computers, while enabling persistent personalization including user-installed applications.
Execution of desktop workloads is performed directly on the desktop or laptop using the local DeskCache, resulting in a superior end-user experience with native performance and full support for offline use. Additionally, Wanova does not require a client hypervisor, so IT benefits from a complete solution that does not add management complexity.
Wanova’s DDV architecture is unique in that it combines advanced network optimization, desktop streaming over the WAN and image layering technologies to provide an extremely fast and optimal transport of desktop workloads. It is the first desktop virtualization approach that effectively bridges the gap between centralized management and distributed execution. Technical details can be found at www.wanova.com/pages/wanova-products.html.
Wanova’s solution is currently in field testing with early customers. Wanova will also be introduced in the New Innovators Pavilion at the VMworld 2009 Conference, August 31 - September 3 at the Moscone Center in San Francisco.
The SCO Group Releases Virtualized Version of Popular OpenServer 5.0.7 UNIX Operating System
The SCO Group, Inc., a leading provider of UNIX software technology and mobility solutions, today announced that it has released OpenServer 5.0.7V, a virtualized version of its popular UNIX operating system that is optimized for the VMware environment. OpenServer 5.0.7V gives customers a familiar environment while increasing the power and efficiency of a virtualized infrastructure. With OpenServer's renowned stability and reliability, now available in a virtualized environment, customers can avoid costly migration and retooling costs in order to take advantage of newer hardware and applications.
"With OpenServer 507V, SCO is protecting our customer's investment in their OpenServer applications by extending their life cycle without the need to migrate," said Jeff Hunsaker, president and chief operating officer, SCO Operations. "This provides a superior Total Cost of Ownership to an OpenServer 5 application while at the same time taking advantage of the significant performance gains with new modern hardware. We expect, in the near future, to release virtualized versions for OpenServer 6 and UnixWare 7.1.4 as well."
OpenServer 5.0.7V is released as a Virtual Appliance image that can be easily imported onto VMware ESX 3.5, VMware ESXi 3.5 and VMware Workstation 6.5.2 for Windows® platforms. Importing the Virtual Appliance usually takes between 10 and 60 minutes, depending on configuration, and configuring the imported Virtual Appliance takes a further 5-10 minutes. Once installed, the system behaves just like a natively installed OpenServer 5.0.7 system with all of the latest maintenance applied. For convenience, many of the VMware tools have also been included to improve integration between SCO OpenServer 5.0.7V and the host VMware system.
"Using SCO OpenServer 5.0.7 as a base, SCO Engineering has built an optimized Virtual Appliance for VMware," said Andy Nagle, senior director of development, The SCO Group. "This Virtual Appliance uses a subset of existing and updated device drivers that provides optimal performance in a virtual environment."
For more information about OpenServer 5.0.7V, please visit: http://sco.com/products/unix/virtualization/
"With OpenServer 507V, SCO is protecting our customer's investment in their OpenServer applications by extending their life cycle without the need to migrate," said Jeff Hunsaker, president and chief operating officer, SCO Operations. "This provides a superior Total Cost of Ownership to an OpenServer 5 application while at the same time taking advantage of the significant performance gains with new modern hardware. We expect, in the near future, to release virtualized versions for OpenServer 6 and UnixWare 7.1.4 as well."
OpenServer 5.0.7V is released as a Virtual Appliance image that can be easily imported onto VMware ESX 3.5, VMware ESXi 3.5 and VMware Workstation 6.5.2 for Windows((R)) platforms. Importation of the Virtual Appliance usually takes between 10 and 60 minutes to complete, depending on configuration, and configuration of the imported Virtual Appliance takes a further 5-10 minutes. Once installed, the system behaves just like a natively-installed OpenServer 5.0.7 system with all of the latest maintenance installed. For convenience, many of the VMware tools have also been included to improve integration between SCO OpenServer 5.0.7V and the host VMware system.
"Using SCO OpenServer 5.0.7 as a base, SCO Engineering has built an optimized Virtual Appliance for VMware," said Andy Nagle, senior director of development, The SCO Group. "This Virtual Appliance uses a subset of existing and updated device drivers that provides optimal performance in a virtual environment."
For more information about OpenServer 5.0.7V, please visit:http://sco.com/products/unix/virtualization/
AFORE Unveils Long Distance Virtualization
AFORE Solutions, Inc. today unveiled the first purpose-built networking solution for extending virtualization between geographically distributed data centers. Built upon the ASE3300 platform, the company's new Virtual Fiber and Virtual Wire capabilities enable the migration of virtual machines and storage across IP and Ethernet wide area networks. This technology allows enterprises and cloud computing/disaster recovery service providers to establish extended virtual data centers, creating new levels of availability and paving the way for advanced hosting and managed service offerings.
"Enterprises struggle with the high cost and limited availability of dark fiber, yet increasingly need to interconnect data centers within the enterprise or between their data centers and cloud service providers," states Jonathan Reeves, AFORE's Chairman and Chief Strategy Officer. "Our Virtual Fiber and Virtual Wire technology provides a significant advancement for enterprises and cloud computing operators alike enabling data centers to be extended across great distances and bandwidth to be re-allocated on demand to meet changing application requirements."
Ensuring seamless virtual machine (VM) migration over a wide area network creates specific challenges. VM migration events require significant bandwidth and resources, with low latency and secure Layer 2 connectivity between hosts. Previous solutions limited wide area connectivity to dark fiber, which may be costly and impractical for a wide range of applications and business models. AFORE's Virtual Fiber technology enables lossless and secure communications over IP and Metro Ethernet wide area networks, while Virtual Wire provides transparent Layer 2 connectivity with end-to-end flow control and dynamic packet re-sizing, adapting data center packet sizes to wide area network capabilities as required by FC, FCoE or jumbo-frame-based applications. The solution also provides time-of-day re-allocation of bandwidth, enabling connectivity between sites to be increased or decreased as required.
AFORE will be demonstrating long distance virtualization at VMworld, booth 1438J, August 31 - September 3, 2009, at the Moscone Center in San Francisco.
Virtual Fiber and Virtual Wire technology are immediately available with AFORE's ASE3300 service delivery platform.
"Enterprises struggle with the high cost and limited availability of dark fiber, yet increasingly need to interconnect data centers within the enterprise or between their data centers and cloud service providers," states Jonathan Reeves, AFORE's Chairman and Chief Strategy Officer. "Our Virtual Fiber and Virtual Wire technology provides a significant advancement for enterprises and cloud computing operators alike enabling data centers to be extended across great distances and bandwidth to be re-allocated on demand to meet changing application requirements."
Ensuring seamless Virtual Machine (VM) migration over a wide area network creates specific challenges. VM migration events require significant bandwidth and resources, with low latency and secure Layer 2 connectivity between hosts. Previous solutions limited wide area connectivity to dark fiber, which may be costly and impractical for a wide range of applications and business models. AFORE's Virtual Fiber technology enables lossless and secure communications over IP and Metro Ethernet wide area networks, while Virtual Wire provides transparent Layer 2 connectivity with end-to-end flow control and dynamic packet re-sizing to adapt data center packet sizes to wide area packet network capabilities as required by FC, FCoE or Jumbo frame based applications. The solution also provides time of day re-allocation of bandwidth, enabling connectivity between sites to be increased or decreased as required.
AFORE will be demonstrating long distance virtualization at VMWorld, booth 1438J, August 31 - September 3, 2009, at the Moscone Center in San Francisco.
Virtual Fiber and Virtual Wire technology are immediately available with AFORE's ASE3300 service delivery platform.
Rackspace Private Cloud Leverages VMware For Enterprise Computing Offering
Rackspace Hosting has announced its new Private Cloud offering, which allows customers to run the centrally managed VMware virtualisation platform on private, dedicated hardware environments.
Rackspace recognises the demand from enterprises for a more flexible and scalable hosting solution. Although multi-tenant cloud solutions are very flexible and cost-effective, they are not always right for every segment. The Rackspace Private Cloud’s single-tenant architecture offers increased control and security, while still maintaining the scalability, flexibility and resource optimisation that make shared cloud offerings so compelling.
Rackspace Private Cloud is an evolution of its popular dedicated virtual server (DVS) offering within the managed hosting business unit. In the last year, revenue from virtualisation solutions has grown substantially, driven mainly by the increased flexibility, improved asset utilisation and lower capital and operating costs that VMware’s virtualisation provides.
NetEx Takes HyperIP Virtual with Broad Application Support for WAN Optimization on VMware Infrastructures
NetEx today announced that its HyperIP for VMware offers the broadest range of third-party application support, covering all of the leading providers of disaster recovery, data migration and replication software, such as Data Domain, Dell/EqualLogic, EMC, FalconStor, Hewlett-Packard/LeftHand, Hitachi Data Systems, IBM, Microsoft, Network Appliance and many others.
The move by NetEx to virtualize the HyperIP WAN optimization software is part of an industry trend with more companies opting to deploy applications as software-only implementations to take advantage of the cost, scalability and flexibility of the VMware infrastructure. Virtualizing applications for VMware eliminates the need for specialized appliances while allowing IT organizations to quickly re-allocate computing and storage resources as needed to accommodate business priorities.
HyperIP for VMware is the industry’s only software-based WAN optimizer that operates on a VMware ESX server to boost the performance of third-party storage replication applications. Virtual HyperIP mitigates TCP performance issues that are common when moving stored data over wide area network connections because of bandwidth restrictions, latency due to distance and/or router hop counts, packet loss and network errors. HyperIP increases the end-to-end performance of replication applications by 3 to 10 times and reduces VMotion and Storage VMotion transfer windows by utilizing 80 to 90 percent of the available bandwidth between data centers or branch offices, at rates up to OC12.
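The transfer-window claim is easy to sanity-check with back-of-the-envelope arithmetic. In the sketch below, the 85 percent figure comes from NetEx's stated 80 to 90 percent utilization; the 25 percent baseline for un-tuned TCP over a long-haul link is my own assumption for illustration, as is the 1 TB job size.

```python
# Rough transfer-window arithmetic for WAN-based replication.
def transfer_hours(data_gb: float, link_mbps: float, utilization: float) -> float:
    """Hours to move data_gb over a link at a given effective utilization."""
    effective_mbps = link_mbps * utilization
    return (data_gb * 8 * 1000) / effective_mbps / 3600

data_gb = 1000.0  # assumed 1 TB replication job
oc3_mbps = 155.0  # OC3 WAN link

baseline = transfer_hours(data_gb, oc3_mbps, 0.25)   # assumed un-tuned TCP
optimized = transfer_hours(data_gb, oc3_mbps, 0.85)  # per the 80-90% claim

print(f"baseline:  {baseline:.1f} h")   # ~57.3 h
print(f"optimized: {optimized:.1f} h")  # ~16.9 h, roughly 3.4x faster
```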
NetEx was early to recognize the impact of the virtual infrastructure, how it could benefit IT operations, and how it could speed up data migration and replication when HyperIP for VMware is combined with data movement applications from top-tier IT storage vendors. VMware has enhanced the ESX infrastructure by redesigning the hypervisor to support multiple cores, opening the way for applications to be offered as virtualized, pure software plays and eliminating the need for expensive appliances and expensive IP network upgrades.
The applications supported by HyperIP for VMware include: DataCore AIM; Data Domain Replicator Software; Avamar, SRDF Adaptive Copy, SRDF/DM, SRDF/A (DMX), Centera Replicator, Celerra Replicator, RecoverPoint CRR and DL3D from EMC; Dell/EqualLogic PS Series Replication; FalconStor Software’s IPStor, Disksafe and FileSafe; HP/LeftHand Networks SANiQ; TrueCopy for iFCP from HDS; IBM Tivoli Storage Manager and Global Mirror (FCIP); Microsoft NetBIOS and Data Protection Manager; SnapMirror and SnapVault from NetApp; NSI DoubleTake; DataGuard, DB Rsync and Streams from Oracle; SANRAD Global Data Replication; Softek Replicator; NetBackup, ReplicationExec and Volume Replicator from Symantec; Veeam Replication; and VMware VMotion. In addition, HyperIP fully supports WAN optimization for the industry-standard FTP and iSCSI protocols.
Wednesday, August 12, 2009
How to Maximize Performance and Utilization of Your Virtual Infrastructure
Most Fortune 1000 companies are currently between 15 and 30 percent virtualized. There are still a lot of obstacles to overcome to move more virtualization projects forward. The biggest virtualization challenge facing organizations is how to manage the virtual infrastructure. Here, Knowledge Center contributor Alex Bakman explains how IT staffs can dramatically improve performance and utilization efficiencies in their virtualization projects.
Organizations today are rapidly virtualizing their infrastructures. In doing so, they are experiencing a whole new set of systems management challenges. These challenges cannot be solved with traditional toolsets in an acceptable timeframe to match the velocity at which organizations are virtualizing. In a virtual server infrastructure where all resources are shared, optimal performance can only be achieved with proactive capacity management and proper allocation of shared resources.
The biggest challenge is finding either the considerable staff time or the automated technology to do this. Not allocating enough resources can cause bottlenecks in CPU, memory, storage and disk I/O, which can lead to performance problems and costly downtime events. However, over-allocating resources can drive up your cost per virtual machine, making an ROI harder to achieve and halting future projects.
To address this, organizations should consider a life cycle approach to performance assurance in order to proactively prevent performance issues—starting in preproduction and continually monitoring the production environments. By modeling, validating, monitoring, analyzing and charging, the Performance Assurance Lifecycle (PAL) addresses resource allocation and management. It significantly reduces performance problems, ensures optimal performance of the virtual infrastructure and helps organizations to continually meet service-level agreements (SLAs).
The following are the five components of the PAL. These components allow organizations to maximize the performance and utilization of their virtual infrastructures, while streamlining costs and delivering a faster ROI.
Component No. 1: Modeling
Modeling addresses everything from preproduction planning to post-production additions and changes to the virtual infrastructure. With capabilities to quickly model thousands of "what if" scenarios—from adding more virtual machines to changing configuration settings—IT staff can immediately see whether resource constraints will be exceeded and whether performance issues will occur. In this way, modeling provides proactive prevention.
Four common modeling scenarios, with a minimal capacity-check sketch after the list, are:
1. What is the effect on resource capacity and utilization of adding a new host or virtual machine, or of removing existing ones?
2. What will happen when a host is suspended for maintenance or a virtual machine is powered down?
3. Do sufficient resources exist to support a planned VMotion operation?
4. How will performance be affected if resource changes are made to hosts, clusters and/or resource pools?
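As promised above, here is a minimal sketch of scenario No. 2 in Python. It only sums CPU and memory headroom across the surviving hosts (ignoring per-host packing, I/O and hypervisor overhead, which real capacity tools model), and all host and VM figures are invented.

```python
# Toy "what if" check: can the remaining hosts absorb a suspended
# host's virtual machines while staying under an 80% utilization ceiling?
hosts = {
    "esx01": {"cpu_ghz": 24.0, "mem_gb": 96, "vms": [("vm-a", 4.0, 16), ("vm-b", 2.0, 8)]},
    "esx02": {"cpu_ghz": 24.0, "mem_gb": 96, "vms": [("vm-c", 6.0, 24)]},
    "esx03": {"cpu_ghz": 24.0, "mem_gb": 96, "vms": [("vm-d", 3.0, 12)]},
}

def can_suspend(host_name: str, ceiling: float = 0.8) -> bool:
    """True if the other hosts have enough aggregate CPU and memory headroom."""
    displaced_cpu = sum(cpu for _, cpu, _ in hosts[host_name]["vms"])
    displaced_mem = sum(mem for _, _, mem in hosts[host_name]["vms"])
    others = [h for name, h in hosts.items() if name != host_name]
    spare_cpu = sum(h["cpu_ghz"] * ceiling - sum(c for _, c, _ in h["vms"]) for h in others)
    spare_mem = sum(h["mem_gb"] * ceiling - sum(m for _, _, m in h["vms"]) for h in others)
    return displaced_cpu <= spare_cpu and displaced_mem <= spare_mem

print(can_suspend("esx02"))  # True: esx01 and esx03 can absorb vm-c
```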
Component No. 2: Validating
While modeling "what if" scenarios is an important first step to continually ensuring optimal performance, it is equally important to validate that changes will not have a negative impact on infrastructure performance. Resource Library:
Validation spans the modeling and monitoring stages of the PAL, because it is equally critical to validate performance-impacting changes in preproduction and to continually monitor and validate performance over time. If you cannot validate whether a certain change will impact infrastructure performance negatively or positively, there is significant risk in making that change.
Component No. 3: Monitoring
The ongoing monitoring of shared resource utilization and capacity is absolutely essential to knowing how the virtual environment will perform. When monitoring resource utilization, IT staff will know whether resources are being over- or underutilized. Not allocating enough resources (based on usage patterns and trends derived from 24/7 monitoring) will cause performance bottlenecks, leading to costly downtime and SLA violations. Over-allocating resources can drive up the cost per virtual machine, making an ROI much harder to achieve.
By continually monitoring shared resource utilization and capacity in virtual server environments, IT can significantly reduce the time and cost of identifying the capacity bottlenecks that cause performance problems, track the top resource consumers in the environment, receive alerts when capacity utilization trends exceed thresholds, and optimize performance to meet established SLAs.
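A trivial sketch of that kind of trend-based alerting, with an invented 80 percent threshold and a rolling window of invented utilization samples:

```python
# Toy utilization monitor: alert when the rolling average of samples
# (say, twelve 5-minute readings) stays above a threshold.
from collections import deque
from statistics import mean

class UtilizationMonitor:
    def __init__(self, threshold_pct: float = 80.0, window: int = 12):
        self.threshold = threshold_pct
        self.samples = deque(maxlen=window)

    def record(self, pct: float) -> None:
        self.samples.append(pct)

    def alert(self) -> bool:
        """Fire only on a full window whose average exceeds the threshold."""
        return len(self.samples) == self.samples.maxlen and mean(self.samples) > self.threshold

mon = UtilizationMonitor()
for sample in [70, 75, 78, 82, 85, 88, 84, 86, 90, 87, 89, 91]:
    mon.record(sample)
print("capacity alert:", mon.alert())  # True: sustained utilization above 80%
```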
Hyper9 VOS Helps Battle Virtual Machine Sprawl
Hyper9 is rolling out the second version of its flagship Virtualization Optimization Suite, which is designed to give businesses improved insight into their virtualized environments and better ways to manage their VMs. While many businesses have embraced virtualization to save money in such areas as hardware, space and power, the result has been a virtualization environment that is not always easy to manage. Hyper9 VOS offers a host of new features tied together by an intuitive user interface.
Hyper9 officials want to give businesses better insight into their virtual environments.
The company July 29 rolled out the second generation of its flagship Virtualization Optimization Suite—or VOS—which is designed to help businesses create virtual environments that are suitable to their business needs, according to Bill Kennedy, executive vice president of research and development for Hyper9.
Enterprises over the past few years have embraced virtualization with the hope of reducing hardware, space and power costs by moving workloads onto virtual machines, Kennedy said in an interview. However, those same businesses are now finding that costs generated by the “VM sprawl” are going up, causing what Kennedy calls “ROI erosion.”
“It’s become harder to manage [these virtual environments],” he said.
Hyper9’s VOS is designed to give businesses greater insight into those environments, enabling them to not only see what VMs are running what workloads, but also giving them the ability to more easily search, organize and analyze data from the virtual environments. That data is displayed through an intuitive user interface, Kennedy said.
A recent survey of customers by the vendor found that at least 20 percent of existing VMs are superfluous to a company’s operations, which is resulting in businesses spending more money than needed on their virtual environments. Through VOS, businesses can more easily find those underutilized or unneeded VMs.
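The arithmetic behind that claim is worth making explicit. The 20 percent figure is Hyper9's; the fleet size and fully loaded per-VM cost below are invented purely to show the shape of the savings:

```python
# Rough sprawl-reclamation savings, using Hyper9's 20% survey figure.
fleet_size = 500              # hypothetical VM count
superfluous_fraction = 0.20   # per the Hyper9 customer survey
annual_cost_per_vm = 750.0    # assumed storage, licensing and power per VM

reclaimable = int(fleet_size * superfluous_fraction)
print(f"reclaimable VMs: {reclaimable}")                            # 100
print(f"annual savings: ${reclaimable * annual_cost_per_vm:,.0f}")  # $75,000
```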
Hyper9 earlier this year rolled out the first version of its VOS offering, which was primarily aimed at virtualization administrators and offered some data collection capabilities, Kennedy said.
The latest version offers greater business insights and analytics, and is aimed at a wider array of people, including data center administrators as well as virtualization administrators.
A key new feature is Hyper9’s Workspaces, which lets users organize and share content, as well as gain better insight into the virtual machines and how they’re being used, Kennedy said.
Hyper9 also put in a feature called Active Links, which gives users one-click access to everything from data to reports to common tasks.
“You can find rogue VMs [that are not being used or are underutilized] through one click,” he said.
There also is automated monitoring and alerting, which gives users a heads-up on such issues as change tracking, rogue VMs and VM sprawl.
Hyper9’s VDMA feature analyzes historical performance and configuration data.
VM6 Software Releases Virtual Machine ex Server 2.0
A virtualization solution from VM6 Software comes with features such as virtual SAN rebuild functionality and improvements to the network components layer.
Virtualization company VM6 Software announced the release of Virtual Machine ex (VM6 VMex) Version 2.0 for remote office and branch locations. VMex leverages Microsoft Hyper-V to create an internal cloud to provision, consolidate, manage and protect all of an organization's remote-office/branch-office (ROBO) workloads. The company said the solution requires no specialized skill sets beyond those of a Microsoft Certified Systems Engineer.
New features in Virtual Machine ex 2.0 include monitoring and alert capabilities fully integrated into the management console, so administrators can use predefined templates or build their own to capture errors and then write to log files, send e-mails or run a script; advanced security settings that let administrators delegate read, write or limited access to the various objects in the VMex cloud; and improved performance for virtual shared-storage rebuilds.
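The pattern described here, a template that matches a condition and then fires a configured action such as writing to a log or running a script, is easy to picture in code. The following is a hypothetical illustration of that template-and-action pattern only; VM6 does not expose a Python API, and every name, path and threshold below is an assumption.

```python
import logging
import subprocess

logging.basicConfig(filename="vmex_alerts.log", level=logging.WARNING)

# Assumed template format: each rule names a metric, a threshold and an action.
ALERT_TEMPLATES = [
    {"name": "disk-almost-full", "metric": "disk_pct", "above": 90, "action": "log"},
    {"name": "host-unreachable", "metric": "heartbeat_missed", "above": 3, "action": "script"},
]

def handle_alert(template, value):
    """Fire the action configured in the template (e-mail omitted for brevity)."""
    if template["action"] == "log":
        logging.warning("%s fired (value=%s)", template["name"], value)
    elif template["action"] == "script":
        # Run an operator-supplied remediation script; the path is illustrative.
        subprocess.run(["/usr/local/bin/remediate.sh", template["name"]], check=False)

def evaluate(metrics):
    """Check current metric readings against every template."""
    for tpl in ALERT_TEMPLATES:
        value = metrics.get(tpl["metric"])
        if value is not None and value > tpl["above"]:
            handle_alert(tpl, value)

evaluate({"disk_pct": 94, "heartbeat_missed": 0})
```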
"Enterprise organizations that have realized the benefits of virtualization in the data center are struggling with ways to extend those same benefits to remote locations and branch offices as the costs are too high and the specialized skill sets required are unavailable or cost-prohibitive." said VM6 founder and CEO Claude Goudreault. "Enterprise leaders now seek solutions that make it easier to manage, provision, consolidate and protect the workloads across all of their locations. VM6 VMex addresses the challenges of virtualization adoption in remote office locations, providing an affordable and easy way to create a competitive advantage." Resource Library:
The VMex virtual SAN rebuild function automatically rebuilds a virtual SAN in less than five minutes without impacting performance, the company said, even if the RAID was unavailable or down for up to a week. The solution also boasts reduced setup time: an improved install wizard cuts installation of VMex on a two-node cluster to less than 15 minutes.
VM6 has also improved the network components layer. Removing the dependency on PGM and adding VMex's proprietary network drivers eliminated stress on the Windows kernel, improving performance and stability, the company claims. Rounding out the features are integrated quota management and thin provisioning, which let VMex administrators provision more storage than is physically available and set quota alerts to prevent overallocation of physical resources.
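Thin provisioning is simple to model: the sum of logical volume sizes may exceed physical capacity, so the system tracks actual consumption and raises an alert as it approaches the physical limit. Below is a minimal sketch of that bookkeeping, with all capacities, volume names and the 85 percent threshold as illustrative assumptions rather than anything from VM6's implementation.

```python
PHYSICAL_CAPACITY_GB = 1000
ALERT_RATIO = 0.85  # assumed alert threshold: 85% of physical capacity

# Hypothetical volumes: logical size promised vs. blocks actually written.
volumes = {
    "vol-finance": {"provisioned_gb": 500, "used_gb": 120},
    "vol-email":   {"provisioned_gb": 800, "used_gb": 300},
    "vol-builds":  {"provisioned_gb": 600, "used_gb": 450},
}

provisioned = sum(v["provisioned_gb"] for v in volumes.values())
used = sum(v["used_gb"] for v in volumes.values())

# Thin provisioning: 1,900 GB promised against 1,000 GB of physical disk.
print(f"Provisioned {provisioned} GB against {PHYSICAL_CAPACITY_GB} GB physical")

if used >= ALERT_RATIO * PHYSICAL_CAPACITY_GB:
    print(f"ALERT: {used} GB written; physical capacity nearly exhausted")
```

The quota alerts the article mentions exist precisely because the "promised" total can be crossed long before anyone notices the "written" total catching up to physical capacity.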
Christian Boivin, R&D director at JLR Real Estate Data Builders, said the company has been using VM6 VMex 1.0 since it became available and is pleased to see this latest version, specifically for its integrated monitoring and alerting.
"As a search engine for real estate and property information, it's critical that our IT infrastructure be robust and available at all times, while being flexible as we're essentially transforming the mission of our servers between day and night," he said. "When we looked at available solutions in the market, they were all at least five times more expensive and required a lot of independently developed solutions to work together, which further added to the complexity.”
Do Hyper-V's Improvements Make It a Stronger VMware Rival?
Hyper-V, part of the Windows Server 2008 R2 platform, provides some improvements that were absolutely necessary for Microsoft to even think of competing with VMware's latest offerings. Are they enough? eWEEK Labs' early look at the new Hyper-V shows that Microsoft still has a lot of ground to cover.
Microsoft released Windows Server 2008 R2 with a newly improved version of Hyper-V. Even so, VMware is still miles ahead in terms of the features and innovation that lay the foundation for sustainable virtualization for midsize and large enterprises.
In fact, I think VMware—with its just-released vSphere 4—has raised the bar so high that Microsoft's best hope is to be the low-cost leader. But while cheap, "You get what you pay for" products might work in a consumer category, they won't play too well in IT shops that depend on high-performance data operations to stay in business.
That said, here's what's new and compelling in Hyper-V.
The previous version of Hyper-V had Quick Migration to move virtual machines from one physical host to another. Now, Quick Migration is gone and Live Migration is here.
In the weeks ahead, I'll be conducting extensive Live Migration tests on the Labs' Hewlett-Packard and Sun Xeon 5500 ("Nehalem")-based systems. But, for now, let's just say that Quick Migration was so inferior to VMware's VMotion that Microsoft had to shore up this function in Hyper-V.
I suspect that Live Migration has some catching up to do with similar VMware features that have been in field use for several years. When it comes to failover, high availability and load balancing, there is no substitute for production experience. This is one area in which cheap and OK is trumped by market-priced and reliable.
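It may help to spell out why live migration is harder than its predecessor. Quick Migration essentially pauses the VM, saves its state and restores it on the target host, so downtime grows with memory size; live migration instead copies memory iteratively while the VM keeps running, pausing only for a small final remainder. The sketch below illustrates the general iterative pre-copy algorithm behind features of this kind; it is a conceptual model, not Microsoft's or VMware's code, and the page counts and rates are invented.

```python
import random

TOTAL_PAGES = 10_000          # assumed VM memory size, in pages
STOP_AND_COPY_LIMIT = 200     # pause the VM once the dirty set is this small
dirty_rate = 0.05             # assumed fraction of pages re-dirtied per round

dirty_pages = set(range(TOTAL_PAGES))  # round 1 copies all of memory
round_num = 0

while len(dirty_pages) > STOP_AND_COPY_LIMIT:
    round_num += 1
    copied = len(dirty_pages)
    # While this round's pages are in flight, the running VM dirties more.
    dirty_pages = {p for p in range(TOTAL_PAGES) if random.random() < dirty_rate}
    dirty_rate /= 2  # converges only if copying outpaces dirtying
    print(f"Round {round_num}: copied {copied} pages, {len(dirty_pages)} re-dirtied")

# Final stop-and-copy: pause briefly, send the remainder, resume on the target.
# Quick Migration is, in effect, this phase alone applied to all of memory.
print(f"Stop-and-copy: {len(dirty_pages)} pages; downtime scales with this set")
```

Getting that convergence right under real workloads is exactly the kind of thing that takes years of production hardening, which is why field experience matters here.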
Cluster Shared Volumes are also improved in this version of Hyper-V and play an important role in making VMs highly available. The fact that these clustering enhancements support Live Migration makes them important, but they are by no means innovative.
Included among the improvements is a best-practices tool to help ensure proper system configuration. I'm anxious to get started putting a clustered Hyper-V environment together here in the lab. I'll be making extensive use of this tool to see how helpful it is in putting my storage and computing resources into correct alignment.
Microsoft does have a leg up on VMware in at least one area.
Sometime in the next couple of months, Microsoft will release the next version of its System Center Virtual Machine Manager. Microsoft has years of experience in managing large numbers of Windows systems, as well as an almost equal number of years in working with third-party tool makers. Even though most of Microsoft's management experience is with Microsoft-only tools, this could be the edge it needs to win over the virtualization hearts and minds of IT managers, who will soon be measured on how well they manage their virtualized data centers (if they aren't already).
Look for my review of Hyper-V as part of eWEEK Labs' extensive coverage of the Windows Server 2008 R2 platform and Windows 7.