Tuesday, April 21, 2009
IT Infrastructure Slideshow: HP BladeSystem Matrix Combines Server, Storage, Networking
Virtualization Slideshow: Sun xVM VirtualBox 2.2 Is a Tempting Alternative to VMware Fare
Unisys Improves Virtualization Capabilities in Its Servers
Unisys is ramping up the virtualization capabilities of its enterprise x86 servers by upgrading to Intel’s Nehalem EP processors and VMware’s latest version of its virtualization platform.
Unisys announced April 21 that it is incorporating the Xeon 5500 series chips into its ES3000 rack systems and ES5000 blade servers, and will support the new VMware vSphere 4 platform in all of its lines of enterprise servers.
The vendor is looking to help businesses grow their virtualization environments as well as enhance their cloud computing management and deployment capabilities.
“It’s focused around doing virtualization right,” said Rod Sapp, marketing director for Unisys’ enterprise servers. “We can help clients with their end-to-end deployment of virtualized environments.”
Sapp said Unisys can now help businesses with any type of virtualization environment. Intel’s new Nehalem EP processors—which were announced March 30—not only help businesses improve performance and reduce costs, but also come with improved virtualization capabilities, which Unisys wants to take advantage of in its rack and blade servers.
“This is very much a scale-out virtualization [environment],” Sapp said.
Unisys joins a host of other system makers, including Dell, Hewlett-Packard, IBM and Rackable Systems, that unveiled new servers based on the processor.
The new VMware vSphere platform—which also was announced April 21—lifts the ceiling on many of the scalability limits on the hypervisor, making it more of a scale-up play, he said. The new platform enables Unisys to offer customers the capability to run significantly more virtual machines on a single physical server, according to Sapp.
The new systems also help bridge the cost gap between scale-out and scale-up deployments, he said. Previously, the cost per virtual machine in scale-out environments was five times that of scale-up environments; with the new generation of servers, that difference has dropped to three times. The savings in scale-up environments center on licensing, server management, and power and cooling.
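The arithmetic behind that per-VM comparison can be sketched with a few lines of Python. All dollar figures and VM densities below are illustrative assumptions, not Unisys pricing:

```python
# Illustrative cost-per-VM comparison for scale-out vs. scale-up
# deployments. All figures are hypothetical.

def cost_per_vm(server_cost, servers, vms_per_server):
    """Total hardware cost divided by total VMs hosted."""
    return (server_cost * servers) / (servers * vms_per_server)

# Scale-up: one large server running many VMs.
scale_up = cost_per_vm(server_cost=60_000, servers=1, vms_per_server=60)

# Scale-out: many small servers, each running a few VMs.
scale_out = cost_per_vm(server_cost=6_000, servers=30, vms_per_server=2)

print(f"scale-up:  ${scale_up:,.0f} per VM")   # $1,000 per VM
print(f"scale-out: ${scale_out:,.0f} per VM")  # $3,000 per VM
print(f"ratio: {scale_out / scale_up:.1f}x")   # 3.0x
```

With these assumed inputs, the scale-out deployment lands at roughly the three-times-higher cost per VM that Sapp describes for the new server generation.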
Sapp said the new capabilities give Unisys a strong framework to help enterprises create virtualization environments, from the hardware and software to services and best practices.
The new capabilities come at an opportune time for IT administrators, who—thanks in large part to the global recession—are being asked to improve service levels while reducing costs.
“There’s more pressure being put onto IT to start using virtualization more to cut costs,” Sapp said.
The new rack and blade servers will be available starting April 28, he said. A 5U tower server with the new technologies will be available around July 30.
VMware Launches vSphere 4, Its Operating System for the Cloud
VMware on April 21 launched what it claims to be the first operating system specifically engineered for cloud computing with vSphere 4, the first major upgrade to its frontline product since 2006.
vSphere 4, formerly called VMware Infrastructure, will be made available in the second quarter of 2009, the company said.
vSphere 4 is designed to facilitate delivery of IT infrastructure as a service to enterprises, so IT departments can build their own private cloud systems to provide business services internally for the company and for trusted partners, supply chain participants and other business associates.
In short, VMware wants to become the system of choice to run enterprise data centers, and further, to enable these complex systems to reach out and touch others in order to gain business advantages.
"Cloud computing has become known as the next big thing and is now sort of a buzzword, but we believe that with vSphere 4, we can make cloud computing a reality," Bogomil Balkansky, VMware's vice president of product marketing for servers, told eWEEK.
"It's the first iteration of VMware's virtualization as an enabler for cloud infrastructure. It scales higher, runs faster, offers more automated management technologies."
A lot of the recent talk about this new computing services model, Balkansky said, has been focused on public external clouds -- such as Amazon EC2 and S3, Google Apps, Salesforce.com and others.
"Those will all have a very interesting effect on the industry, but we believe where the action is going to be in cloud computing, in the next few years, is helping companies build and transform their internal infrastructure into internal clouds, or internal cloud providers," Balkansky said.
"A company data center can act with efficiency [using this new operating system] and with the reliability of an internal utility provider, if they want."
vSphere 4 also provides the foundation for enterprise IT departments to connect their own homemade private clouds behind a firewall with those of partners -- or established public cloud services, like those noted earlier.
Why is VMware calling this an operating system, rather than a cloud computing architecture? Operating systems, in the classical sense of the IT term, refer to products such as Microsoft Windows, Apple's Mac OS, Linux, Unix, and AIX.
"We're calling this an operating system because, at a high level, an operating system does two things: It manages the hardware looking downward, and it provides interfaces or services to applications, looking upward," Balkansky said.
"An operating system typically is the mediator between applications and the hardware. Our technology is the first software layer that installs on the bare metal. It provides two classes of services: A set of services to manage the hardware -- the servers, the storage, and the network -- and a set of application services to provide availability, security, and scalability to applications."
VMware designed vSphere 4 to be a non-disruptive force in the data center, Balkansky said. The company's virtualization software works with virtually all other data center systems; vSphere 4 is designed to slip into its own layer without disrupting workflows.
vSphere 4 will be available in the second quarter in six editions, starting at $995 for three physical servers for small offices.
Sunday, April 12, 2009
New Microsoft Windows Licensing Aids Desktop Virtualization, Report Says
Microsoft is making it easier for enterprises to embrace desktop virtualization, according to a report from Forrester Research.
In a report issued April 9, Forrester analyst Natalie Lambert said new Windows licensing from Microsoft—which in the past had been a deterrent for businesses interested in desktop virtualization—could help fuel a surge in adoption of the technology.
"With the latest licensing rules, Microsoft has now made possible popular [desktop virtualization] scenarios that IT ops pros have been clamoring for," Lambert wrote in the report, which was developed along with Forrester analysts Simon Yates, Christopher Voce and Margaret Ryan.
Microsoft updated its Windows licensing for desktop virtualization at the beginning of 2009, Lambert said. However, although the new licensing plan will help enterprises interested in desktop virtualization, the key continues to be Microsoft's Software Assurance program, she said.
For licensing local desktop virtualization on company-owned PCs, not much has changed since Forrester last looked at the issue in June 2008, Lambert said in her report. If a company subscribes to the Software Assurance program, it can run up to four virtual machines on top of the Windows host operating system—including Vista, XP, 2000 and earlier versions—on a single physical PC. If a company needs five or more virtual desktops, it will need to buy more copies of Windows. Without Software Assurance, however, a company can't run virtual desktops on top of its PCs.
For all other desktop virtualization scenarios, enterprises will need Microsoft's VECD (Vista Enterprise Centralized Desktop) license. This encompasses initiatives that include running a Windows client OS in a hosted desktop environment—on a server in a data center—or on a PC that isn't owned by the company, such as a contractor's machine or one owned by an employee.
With the VECD, costs vary depending on the device the operating system is running on, Lambert said. If the device is owned by the company and covered by Software Assurance, the cost is $23 per device per year. This includes any system connected to a hosted desktop, which covers users who want to connect to the central hosted desktop not only at work, but also from home. However, for thin-client devices connected to a hosted desktop, or for PCs that aren't covered by Software Assurance, the cost is $110 per device per year.
A change in this scenario is that, unlike in the past, the VECD license lets noncorporate PCs—for example, those used by outsourcers and contractors, or those owned by the employee—connect to the hosted desktop environment. That cost is still below the list price of $199.95 for Vista Home Basic and XP Home, and $319.95 for Vista Ultimate.
With the VECD, users can store an unlimited number of virtual desktop systems on physical disks in the data center, which will enable IT staff members to "create, play with and destroy VMs without worrying about complying with their license agreement," Lambert said. "In addition, these VMs can move between servers and storage as needed, allowing for a dynamic environment that caters to user performance needs at any given time." However, the license limits to four the number of virtual desktop machines that can connect to the hosted desktop at a single time.
Overall, the new licensing rules open up options that were hindered under the older licenses. Contractors can now use their own PCs in a company's office. In addition, employees can now bring their own Windows-based PCs to work, a scenario that is gaining in popularity, though it does present headaches for IT administrators over such issues as security. The new licensing also increases the mobility of a company by enabling employees with corporate PCs to bring their Windows desktop home on a removable media device.
Lambert said for enterprises looking to expand their use of desktop virtualization, the new licensing scheme from Microsoft removes a key hurdle. "As Windows licensing makes more and more scenarios possible, desktop virtualization is coming to the forefront of computing," Lambert wrote in the report. "While just one year ago, Microsoft handcuffed many organizations that attempted to legally license Windows Client in a virtualized world, they have made steady improvements to pave the way for new computing models—specifically, models that move away from a standard, physical corporate PC."
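The per-device VECD pricing the report describes lends itself to a quick cost model. This is a minimal sketch, assuming the two published rates ($23 per SA-covered device per year, $110 for thin clients or non-SA devices); the device counts are hypothetical:

```python
# Hypothetical VECD cost model based on the per-device rates in the
# Forrester report: $23/device/year for SA-covered corporate PCs,
# $110/device/year for thin clients or devices without Software Assurance.

def vecd_annual_cost(devices, covered_by_sa):
    """Annual VECD licensing cost for devices connecting to hosted desktops."""
    return devices * (23 if covered_by_sa else 110)

# Example: 500 SA-covered corporate PCs plus 200 thin clients.
total = vecd_annual_cost(500, True) + vecd_annual_cost(200, False)
print(f"${total:,} per year")  # $33,500 per year
```

Note how the thin-client population dominates the bill even at less than half the device count, which is why Software Assurance coverage remains the key variable in the report's analysis.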
Server virtualization technology broadens reach for agencies
New virtualization technology from HyTrust has the potential to enable government agencies to broaden their server virtualization deployment with increased security for VMware ESX hypervisors, according to company officials.
The HyTrust Appliance is a new automated technology that creates a virtual infrastructure serving as a central point of control, management and visibility for virtualized environments. It ensures that security and operational readiness are equal to those of physical environments by providing directory service integration, role-based access controls, secure management traffic, and security logging and auditing of administrative actions.
Agencies can use the HyTrust Appliance to build a manageable virtual infrastructure foundation from the ground up while demonstrating that adequate processes and controls are in place to comply with regulations or security standards such as HIPAA, SOX and PCI-DSS.
The HyTrust Appliance is priced according to the number of protected ESX hosts, with a license for a single two-CPU host costing $1,000. As a physical device, the appliance is priced at $7,500; it is also available as a software program for $3,000.
The five defining characteristics of cloud computing
There are myriad variations on the definition of the cloud — William Fellows and John Barr at the 451 Group define cloud computing as the intersection of grid, virtualization, SaaS, and utility computing models. James Staten of Forrester Research describes it as a pool of abstracted, highly scalable, and managed compute infrastructure capable of hosting end-customer applications and billed by consumption.
Let’s take it a step further and examine the five defining characteristics of cloud computing.
Characteristic 1: Dynamic computing infrastructure
Cloud computing requires a dynamic computing infrastructure. The foundation for the dynamic infrastructure is a standardized, scalable, and secure physical infrastructure. There should be levels of redundancy to ensure high levels of availability, but mostly it must be easy to extend as usage growth demands it, without requiring architecture rework. Next, it must be virtualized.
Today, virtualized environments leverage server virtualization (typically from VMware, Microsoft, or Xen) as the basis for running services. These services need to be easily provisioned and de-provisioned via software automation. These service workloads need to be moved from one physical server to another as capacity demands increase or decrease. Finally, this infrastructure should be highly utilized, whether provided by an external cloud provider or an internal IT department. The infrastructure must deliver business value over and above the investment.
A dynamic computing infrastructure is critical to effectively supporting the elastic nature of service provisioning and de-provisioning as requested by users while maintaining high levels of reliability and security. The consolidation provided by virtualization, coupled with provisioning automation, creates a high level of utilization and reuse, ultimately yielding a very effective use of capital equipment.
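The provisioning and de-provisioning cycle described above can be sketched in a few lines. This is an illustrative model only; the `Host` class, placement policy, and names are assumptions, not any vendor's API:

```python
# Minimal sketch of automated provision/de-provision bookkeeping in a
# dynamic infrastructure. Classes and placement policy are illustrative.

class Host:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # max VMs this host can run
        self.vms = []

    def free(self):
        return self.capacity - len(self.vms)

def provision(hosts, vm_name):
    """Place a VM on the host with the most free capacity."""
    host = max(hosts, key=lambda h: h.free())
    if host.free() == 0:
        raise RuntimeError("no capacity; extend the pool")
    host.vms.append(vm_name)
    return host.name

def deprovision(hosts, vm_name):
    """Tear a VM down, returning its capacity to the pool for reuse."""
    for host in hosts:
        if vm_name in host.vms:
            host.vms.remove(vm_name)
            return host.name
    raise KeyError(vm_name)

pool = [Host("esx1", 2), Host("esx2", 2)]
print(provision(pool, "web01"))    # esx1
print(deprovision(pool, "web01"))  # esx1
```

The spread-across-hosts placement keeps utilization even, and the explicit "no capacity" failure is the point at which a dynamic infrastructure would extend the pool rather than rework the architecture.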
Characteristic 2: IT service-centric approach
Cloud computing is IT (or business) service-centric. This is in stark contrast to more traditional system- or server- centric models. In most cases, users of the cloud generally want to run some business service or application for a specific, timely purpose; they don’t want to get bogged down in the system and network administration of the environment. They would prefer to quickly and easily access a dedicated instance of an application or service. By abstracting away the server-centric view of the infrastructure, system users can easily access powerful pre-defined computing environments designed specifically around their service.
An IT service-centric approach enables user adoption and business agility: the easier and faster a user can perform an administrative task, the more expediently the business moves, reducing costs or driving revenue.
Characteristic 3: Self-service based usage model
Interacting with the cloud requires some level of user self-service. Best of breed self-service provides users the ability to upload, build, deploy, schedule, manage, and report on their business services on demand. Self-service cloud offerings must provide easy-to-use, intuitive user interfaces that equip users to productively manage the service delivery lifecycle.
The benefit of self service from the users’ perspective is a level of empowerment and independence that yields significant business agility. One benefit often overlooked from the service provider’s or IT team’s perspective is that the more self service that can be delegated to users, the less administrative involvement is necessary. This saves time and money and allows administrative staff to focus on more strategic, high-valued responsibilities.
Characteristic 4: Minimally or self-managed platform
In order for an IT team or a service provider to efficiently provide a cloud for its constituents, they must leverage a technology platform that is self managed. Best-of-breed clouds enable self-management via software automation, leveraging the following capabilities:
- A provisioning engine for deploying services and tearing them down, recovering resources for high levels of reuse
- Mechanisms for scheduling and reserving resource capacity
- Capabilities for configuring, managing, and reporting to ensure resources can be allocated and reallocated to multiple groups of users
- Tools for controlling access to resources and policies for how resources can be used or operations can be performed
All of these capabilities enable business agility while simultaneously enacting critical and necessary administrative control. This balance of control and delegation maintains security and uptime, minimizes the level of IT administrative effort, and keeps operating expenses low, freeing up resources to focus on higher value projects.
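The access-control capability in the list above can be sketched as a role-to-permission policy table. The roles, actions, and policy below are hypothetical, chosen only to show the balance of delegation and control:

```python
# Illustrative role-based access check of the kind a self-managed cloud
# platform layers over its resources. Roles, actions, and the policy
# table are hypothetical.

POLICY = {
    "admin":     {"provision", "deprovision", "reserve", "report"},
    "developer": {"provision", "deprovision", "report"},
    "auditor":   {"report"},
}

def authorized(role, action):
    """Return True if the role's policy permits the action."""
    return action in POLICY.get(role, set())

print(authorized("developer", "provision"))   # True
print(authorized("auditor", "deprovision"))   # False
```

Delegating "provision" to developers is what removes administrative involvement from routine requests, while withholding "reserve" keeps capacity decisions under central control.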
Characteristic 5: Consumption-based billing
Finally, cloud computing is usage-driven. Consumers pay for only what resources they use and therefore are charged or billed on a consumption-based model. Cloud computing platforms must provide mechanisms to capture usage information that enables chargeback reporting and/or integration with billing systems.
The value here from a user’s perspective is the ability to pay only for the resources they use, ultimately helping to keep costs down. From a provider’s perspective, it allows them to track usage for chargeback and billing purposes.
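A chargeback mechanism of the kind described boils down to metering resource-hours per consumer and multiplying by per-unit rates. This is a minimal sketch; the metrics and rates are invented for illustration:

```python
# Sketch of consumption-based chargeback: meter usage per consumer,
# then bill at per-unit rates. Metrics and rates are hypothetical.

RATES = {
    "cpu_hours":       0.10,  # $ per CPU-hour
    "gb_ram_hours":    0.05,  # $ per GB-hour of RAM
    "gb_storage_days": 0.02,  # $ per GB-day of storage
}

def chargeback(usage):
    """Bill a consumer for metered usage, given {metric: quantity}."""
    return sum(RATES[metric] * qty for metric, qty in usage.items())

dept_usage = {"cpu_hours": 1_000, "gb_ram_hours": 2_000, "gb_storage_days": 500}
print(f"${chargeback(dept_usage):,.2f}")  # $210.00
```

The same usage records feed both internal chargeback reporting and integration with an external billing system, which is why capturing them is listed as a platform requirement.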
In summary, all of these defining characteristics are necessary to produce an enterprise private cloud capable of achieving compelling business value, including savings on capital equipment and operating costs, reduced support costs, and significantly increased business agility. All of these enable corporations to improve their profit margins and competitiveness in the markets they serve.
VMware offers to slash server costs in half
Customers participating in the promotional program will work with the VMware Professional Services organization to deploy VMware's virtualization platform on existing servers using established best practices. Customers will not have to pay for the design and implementation services until at least a 50 percent savings on server hardware have been achieved.
"In today's difficult economic climate, this new promotion shows our commitment to customers and our confidence that we can help them reduce cost with no risk," said Bogomil Balkansky, vice president, product marketing, server business unit, VMware. "Based on our experience in thousands of customer engagements and data collected over more than three years with our Capacity Planner tool, we are confident that many customers will be able to achieve more than 50 percent savings on server hardware costs."
"VMware has been consistent in delivering value to its customers, but this new program raises the bar," said Mark Bowker, analyst for Enterprise Strategy Group. "In the current economic climate, no IT project -- no matter how critical -- will be funded without significant management review and approval. This is a welcome scenario for prospective and existing VMware customers -- they get a proven virtualization platform with a substantial guaranteed and accelerated payback. VMware is helping its customers make frugal investments today that align with the overall IT spending patterns."
"This nicely ties results to IT spend which has been attractive in the past," said Rob Enderle, principal analyst, The Enderle Group. "I recall programs from both IBM and HP which attempted to share in the savings, the issues often come down to being able to reliably showcase the savings making it difficult to sustain the programs, however, it builds results into the effort and those results are golden both to IT who can then demonstrate progress and to the vendor who gets validated examples they can use to sell future projects."
In addition to the guaranteed savings promotion, VMware will also offer customers assistance with a server buyback referral program. This third-party service will enable customers to obtain appraised value for the server hardware that is no longer needed because of server consolidation.
Warren Shiau, senior associate, IT research, The Strategic Counsel, said VMware's promo is a very good "pay-upon-results" type of deal. But there's potentially more to this than just the cost savings.
"One of the biggest issues for many IT organizations, big ones included, is having the metrics and measurement capabilities in place to understand what ROI they are generating from IT investments," he said. "Tying a contract to 50 percent cost savings on server hardware implies the implementation of some form of measurement. It could be a relatively simple measurement, but if it's getting along the way to something like a server hardware lifecycle cost measurement then you can see how it would help get metrics and measurement capabilities into the client organization."
VMware is still the virtualization market leader, and it remains the leading operating-system-independent virtualization company, a position that will also be important to it long term, Enderle noted. Meanwhile, Microsoft is providing its virtual machine management software to its small business specialist partners for free.
"Their biggest problem is Microsoft who has clearly taken the leadership position in Windows Server shops, however most shops are now mixed and VMware should appear to IT managers in mixed shops to be a better solution because it can consistently be used regardless of whether they are running Linux, UNIX or Windows," he said. "VMware is also more trusted than Microsoft in this role at the moment but Microsoft has improved their trust scores in this area dramatically and they are putting a massive investment into improving their already competitive offering making this a real horse race. Having Paul Maritz, who knows Microsoft's play book, running VMware was a brilliant move."
Shiau added VMware has a lot of room for maneuvering to protect its market position.
"They have good margins; what the financial analysts are going to need to understand going forward is that as the virtualization market broadens out, VMware is going to have to gradually open up in certain market segments with promos and deals because that's how Microsoft/Citrix will be competing."
The VMware Guarantee Promotional Program is currently open to both existing and new customers in the United States that qualify and are seeking to virtualize between 200 and 750 existing physical servers.
For more information and applicable terms and conditions, visit: http://www.vmware.com/go/guarantee/. All VMware prospects or customers can leverage VMware's free TCO calculator at http://www.vmware.com/go/calculator.
Sun launches VirtualBox 2.2 and adds OVF support
Sun is adding support for the Open Virtualization Format standard to its VirtualBox software. Can it become the free or cheap solution that takes away market share from VMware Workstation?
As Sun Microsystems tries to move past the IBM acquisition talk, the company just announced the release of Sun VirtualBox 2.2, its free and open source virtualization software. The latest release incorporates support for the Open Virtualization Format (OVF), a standard created by the Distributed Management Task Force (DMTF). OVF enables virtual machines or virtual appliances to be easily imported and exported. Support for OVF also helps ensure the VirtualBox 2.2 software is interoperable with other technologies that follow the standard.
[ Find out more about how the DMTF is meeting virtualization management challenges | And learn more about the latest DMTF OVF 1.0 standard ]
Sun touted the growth of VirtualBox use, stating the software has had more than 11 million downloads worldwide since October 2007, along with 3.5 million registrations, and that it continues to be downloaded more than 25,000 times per day. Impressive numbers, but will Sun have a difficult time translating them into production enterprise environments? How many of these downloads represent end users only? According to Sun, the tool has been widely accepted in the developer world.
"VirtualBox has always been a fantastic tool for developers to create multiple virtual machines, network them together and deploy them using nearly any operating system," said Jim McHugh, vice president for datacenter software marketing at Sun. "Now, with the new import and export features of the VirtualBox 2.2 release, users can quickly and easily put their development environments into production -- on the desktop, the server or even in the cloud."
New features in this release include performance optimizations that make it the fastest version of the product to date, as well as improved 3-D graphics acceleration for Linux and Solaris applications using the OpenGL standard. To help with guest performance, Sun increased the maximum memory size available for guest operating systems to 16GB.
In addition, VirtualBox 2.2 now supports Apple's upcoming 64-bit Snow Leopard platform. Also new is a host-interface networking mode, which should make it easier to run server applications within a virtual machine.
Sun VirtualBox 2.2 will inevitably be compared with, and find itself going up against, products like VMware Workstation and Parallels Workstation. And while you can't beat the fact that it is free and open source, it could be missing some of the key features needed in today's enterprise environments -- specifically, VMware's Linked Clones. As storage demands continue to climb, features like Linked Clones and differencing disks off of base VMDKs will continue to prove themselves high-value offerings.
Again, for personal use, the software is free of charge. For larger deployments within a business, subscriptions are being made available starting at $30 per user per year, and that includes support from Sun's technical team.
VMware wheels and deals on server virtualization
The low-hanging fruit for server virtualization - customers who already knew they needed it on their x64 iron whether the economy was in good shape or bad - must be starting to dry up as the competition among virtualization-hypervisor providers heats up.
Why? Because VMware is starting to wheel and deal like IBM, Hewlett-Packard, and Sun Microsystems.
VMware wants to keep its dominant position in server virtualization, and to that end announced Monday a promotional program that guarantees that VMware's techies will cut x64 server costs in half or the professional services used to deploy VMware's Virtual Infrastructure 3 software stack will be free.
The Guarantee Promotional Program is clever in that VMware is compelling customers to sign up for a services engagement, deploy its virtualization software, and then if server costs don't fall by 50 per cent or more, they get the services for free.
VMware is not, you will notice, offering any discounts on the VI3 software stack. Which seems a bit odd given that the vSphere stack - what we would normally think of as the VI4 tools, including the ESX Server 4.0 hypervisor and its related management tools and add-ons - will be launched on April 21.
I think it's highly unlikely that VMware actually expects companies to deploy the VI3 tools when much better (and presumably more aggressively priced) software is only weeks away. It is likely that the server-cost guarantee will continue long past the vSphere launch - but the VMware site says that the deal only runs from April 8 through June 30. This is the kind of deal that a vendor can - and often does - extend even as a new generation of products comes out.
Especially when the economy is rotten.
VMware is not letting just anyone take advantage of this deal. First, you have to want to virtualize between 200 and 750 physical servers, and they have to be located in the United States. Then you have to commit to buy a bunch of services from VMware.
The first of these is the Operational Readiness Accelerator Service, which is an overall analysis of the business and IT environment. Then you have to buy a service called Jumpstart with Physical to Virtual Migrations, which does a few physical-to-virtual conversions at your site to give you some hands-on experience. Then you have to shell out some cash for a Virtualization Assessment, which takes a look at the environment and does an analysis of the expected cost savings from virtualization and consolidation of servers.
Add onto that the Plan and Design for VMware Infrastructure service, which maps out the new architecture for your virtualized servers, and the Configuration/Build service, which does the P2V conversions as laid out in the plan, and then a Wrap Up service to calculate the realized savings from the virtualization project.
VMware did not provide pricing information for these services at press time. But I suspect they're not cheap.
To participate in the deal, customers have to have or acquire servers and storage that are on the VMware VI3 compatibility list, and they have to have host machines with a minimum of four processor cores and 32GB of main memory. That's a fairly hefty piece of x64 iron.
You also have to get VI3 Enterprise 3.5 software licenses, and you cannot disable transparent memory page-sharing settings, which allows VMware to stack up more VMs on a machine than you might otherwise be able to get away with.
To calculate the savings, VMware insists that customers perform a before-and-after configuration in its online TCO Calculator, which we told you about last month and which you can play with here.
It's not clear if the TCO calculator accounts for the cost of any new iron, but it certainly should if it's required. Mileage will certainly vary from shop to shop, but with x86 and x64 server utilization typically very low, even customers needing to buy new iron to virtualize efficiently can show good return on investment if the term is set long enough and operational, power, and cooling costs are tossed in.
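A before-and-after comparison of the kind the TCO calculator asks for can be sketched with simple inputs. The figures below are illustrative assumptions (not VMware's model): hardware cost plus annual operating costs per server, over a multi-year term:

```python
# Hedged sketch of a before/after server TCO comparison: hardware cost
# plus annual power/cooling/management over a term. All inputs are
# illustrative, not VMware's calculator.

def tco(servers, hw_cost_each, annual_opex_each, years):
    """Hardware spend plus operating costs for a fleet over the term."""
    return servers * (hw_cost_each + annual_opex_each * years)

# Before: 200 lightly utilized physical servers.
before = tco(servers=200, hw_cost_each=4_000, annual_opex_each=1_200, years=3)

# After: consolidated onto 20 beefier hosts (e.g., 4 cores, 32GB each).
after = tco(servers=20, hw_cost_each=15_000, annual_opex_each=2_500, years=3)

savings = 1 - after / before
print(f"before ${before:,}, after ${after:,}, savings {savings:.0%}")
# before $1,520,000, after $450,000, savings 70%
```

Even with pricier replacement hosts in the mix, the operational side of the ledger is what pushes the assumed savings well past the 50 percent guarantee threshold over a three-year term, which is the article's point about term length and operating costs.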
The guarantee is open to new and existing customers in the United States. No word on when Europe will get a similar deal, but there is no reason why Europeans shouldn't demand equal treatment from VMware.
Sun's VirtualBox Hypervisor Silver-Lines Its Cloud
With OVF support, a VirtualBox virtual machine can be exported off a desktop and moved to a server to run under a different hypervisor. A VM converted to OVF-formatted files can be recognized by VMware's ESX Server, Citrix Systems' XenServer, or Microsoft's Hyper-V. OVF, however, isn't a neutral runtime format: each hypervisor, once it recognizes the OVF format, converts the imported files into one of its own virtual machines.
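An OVF package is essentially the VM's disks plus an XML descriptor that the receiving hypervisor reads before converting the VM to its own format. The sketch below parses a heavily simplified, made-up descriptor fragment (real OVF descriptors use XML namespaces and many more elements) just to show the kind of metadata involved:

```python
# Parse a simplified, illustrative OVF-style descriptor. Real OVF
# descriptors are namespaced and far richer; this fragment is invented
# to show the metadata a hypervisor reads on import.

import xml.etree.ElementTree as ET

DESCRIPTOR = """
<Envelope>
  <VirtualSystem id="web01">
    <Name>web01</Name>
    <OperatingSystem>OpenSolaris</OperatingSystem>
    <Memory unit="MB">4096</Memory>
  </VirtualSystem>
</Envelope>
"""

def read_vm_metadata(xml_text):
    """Pull the VM's identity, guest OS, and memory from the descriptor."""
    root = ET.fromstring(xml_text)
    vs = root.find("VirtualSystem")
    return {
        "id": vs.get("id"),
        "os": vs.findtext("OperatingSystem"),
        "memory_mb": int(vs.findtext("Memory")),
    }

print(read_vm_metadata(DESCRIPTOR))
# {'id': 'web01', 'os': 'OpenSolaris', 'memory_mb': 4096}
```

Because the descriptor, not the runtime image, is what's standardized, each hypervisor reads this metadata and then rebuilds the VM in its own native format, which is why the article stresses that OVF is an interchange format rather than a neutral runtime.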
VirtualBox is frequently used on PCs and workstations to create an environment in which to develop an application. VirtualBox will work with Windows, Linux, Apple's OS X, Solaris, or OpenSolaris, making it a flexible development environment. An application developed under VirtualBox can be packaged with an operating system in the OVF format, then shipped out to run in the data center on a server, or shipped elsewhere to another data center or external cloud.
Sun acquired the VirtualBox technology in 2007 and made it available for free download in October of that year. Since then it has counted off 11.5 million downloads. They continue to occur at the rate of 25,000 a day, said Andy Hall, senior xVM VirtualBox product manager, making it one of the most popular open source code choices on the Sun site. MySQL, the open source database, still enjoys a download rate more than double that of VirtualBox.
Sun has increased the capabilities of VirtualBox so that a VM being run by it can use up to 16 GB of memory, more than four times the former limit, allowing for more powerful applications, Hall said in an interview. VirtualBox has stronger support for displaying 3-D graphics with its support for OpenGL graphics acceleration. "You can run Google Earth inside a virtual machine and make use of the host system's graphics acceleration," said Hall. "A whole new class of applications can be virtualized that couldn't be before," he added.
VirtualBox virtual machines can be packaged as virtual appliances and shipped elsewhere for use through the JumpBox set of tools. JumpBox uses open source software to generate virtual appliances and has partnered with Sun to support VirtualBox, said Hall.
VirtualBox will support Apple's 64-bit Snow Leopard operating system when it becomes available, he said. By supporting more operating systems than other VM vendors, Sun will seek to make VirtualBox "the best hypervisor for the cloud," he added.
In addition to being available for free download, VirtualBox may be purchased through an enterprise subscription based on a $30-per-user annual fee.
HP automates management of VMware, virtualization systems
The company updated two software applications and introduced one new offering in its BSA suite, which is built on technology HP acquired with Opsware.
"We see a huge opportunity to cut back on labor bloat. When you have very expensive resources, the storage administrator is one of the most expensive people in the data center. Logging into storage arrays and checking for capacity is a huge waste of their time," says Michel Feaster, senior director of products for Business Service Automation Software at HP. "We want to take all these low-level tasks and apply automation to take advantage of the huge opportunity for productivity improvements."
HP added new capabilities to its Storage Essentials and Operations Orchestration products and launched BSA Essentials, a set of subscription services that will help customers better manage their infrastructure in a standardized way by providing access to security alerts and updates on regulatory policies and compliance auditing, the company says. BSA Essentials also includes a community portal that provides an online forum for HP BSA customers.
The company enabled Storage Essentials to discover VMware hosts and map them to their storage and storage-area network (SAN) dependencies. HP says this will enable IT managers to reclaim unused storage resources from virtual machines and reallocate the capacity. The software can also now automate storage provisioning to the VMware hypervisor and guest operating systems. Industry watchers say HP recognizes the need to manage IT services in virtual environments that span technology silos.
"One issue is expanding the reach of virtualization management to encompass and to coordinate with the management of technology silos - storage, networks, application and database," says Jasmine Noel, principal analyst and co-founder at Ptak, Noel & Associates. "For example, it is difficult for many enterprises to shift mission-critical virtual machines from one physical system to another during a hardware failure or upgrade because while their server stacks may be mobile, the network connections to SANs, databases and other legacy systems may not be mobile. That's why HP is trying to make their other management products more virtual machine-aware."
HP also upgraded its Operations Orchestration automated workflow software with additional support for VMware Virtual Infrastructure, Citrix XenServer and Microsoft Hyper-V. The updates will help integrate automated tasks across servers, networks, storage and other IT elements.
"If you speed up the mean time to change without also giving IT managers a way to speed up all their other administrative tasks then you are not going to see the full benefits that you are expecting," Noel says. "What HP is trying to do with Operations Orchestration is speed up the other admin tasks with workflow automation."
HP Storage Essentials and HP Operations Orchestration are available now.
HP Releases Automation Tools for Virtual Data Centers
Hewlett-Packard hopes to play a bigger role in managing the virtual data center with updates to its Business Service Automation software announced on Tuesday.
HP released updates to two products in the suite, Storage Essentials and Operations Orchestration, and introduced a new subscription service, BSA Essentials, that it said will help keep systems patched and in compliance with auditing standards.
Virtualization has allowed companies to reduce hardware costs and conserve floor space through server consolidation. But it has also created headaches for large organizations that are struggling to manage hundreds of virtual hosts and their related storage and networking resources, said Bob Meyer, head of HP's virtualization group, in a press briefing at HP's offices.
The update to Storage Essentials means the software can now discover VMware hosts in a network and map out their related storage and storage-area-network dependencies, allowing admins to keep track of who is using which resources. It will also track how much capacity assigned to the virtual hosts is actually being used, so that unused storage can be reallocated.
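The reclaim calculation itself is straightforward once discovery has mapped each host to its storage. A hypothetical sketch (this is illustrative only, not HP's actual tooling or data model; the host names and figures are invented): given allocated versus actually-used capacity per VMware host, the reclaimable space is the difference, summed across hosts.

```python
# Hypothetical sketch of the storage-reclaim calculation -- not HP's actual API.
# Allocated vs. actually-used storage (in GB) per discovered VMware host.
hosts = {
    "esx-host-01": {"allocated": 500, "used": 180},
    "esx-host-02": {"allocated": 750, "used": 600},
    "esx-host-03": {"allocated": 300, "used": 90},
}

# Reclaimable capacity is simply allocated minus used, per host.
reclaimable = {
    name: caps["allocated"] - caps["used"] for name, caps in hosts.items()
}
total = sum(reclaimable.values())
print(f"Reclaimable per host: {reclaimable}")
print(f"Total reclaimable: {total} GB")  # 320 + 150 + 210 = 680 GB
```

The hard part Storage Essentials addresses is not this arithmetic but the discovery and mapping that produce the numbers in the first place.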
The update is available now for VMware environments and HP is working on a version for Microsoft's Hyper-V. It plans to support Citrix XenServer in the future, though Hyper-V is its first priority after VMware, said Michel Feaster, senior product director for Business Service Automation.
Another challenge for IT departments is the time it takes to provision the storage and networking for virtual servers. A virtual server can be set up relatively quickly, but storage and networking admins are having to spend too much time provisioning other parts of the infrastructure, according to HP.
Its answer is an update to Operations Orchestration, a workflow tool for automating the provisioning of servers and storage. The tool now has templates to guide administrators through the server, network and storage configuration for virtual environments. This should make the process faster and ensure the work is done in a standard way, reducing errors, HP said.
The tool integrates with VMware Virtual Infrastructure, XenServer and Hyper-V, "so you can automate tasks using the management interfaces provided by those virtualization vendors," said Kalyan Ramanathan, HP director of product marketing.
Forrester analyst Glenn O'Donnell, who was at the HP briefing, agreed that as virtualization moves from test and development into production use, more automation is required. Otherwise capital savings will be lost through higher operational costs, he said.
"You shouldn't have high-priced network engineers Telnetting into a router doing grunt work; you have to automate it," he said.
Administrators will resist automation because it undermines their role, but it's a necessary change as businesses try to cut costs in today's economy, he said.
HP also introduced a new service called BSA Essentials. HP will monitor clients' systems to see that they comply with internal and external policies, like being up-to-date with security patches or meeting certain security or configuration requirements. The service is billed as a percentage of the software license fee, HP said.
It also launched the BSA Essentials Community, a Web site where BSA customers can share best practices and other tips.
The new products mean HP will be able to compete more directly with VMware, which also hopes to play a bigger role in data center management through its upcoming Virtual Data Center OS.
"VMware will be in 'coopetition' with HP and everybody else out there," O'Donnell said.
Virtualization Management Software Market to Grow, IDC Says
Though the global recession is pounding much of the IT industry, one area that is expected to see strong growth over the next three to four years is virtual server management software, according to research firm IDC.
In a report released April 9, IDC analysts said they expect revenues in this space—which they admitted is a newly defined part of the market—to grow from $871 million in 2008 to almost $2.3 billion in 2013.
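As a rough check on what that forecast implies (taking 2008 as the base year and five compounding years to 2013), the figures work out to roughly a 21 percent compound annual growth rate:

```python
# Implied compound annual growth rate (CAGR) of IDC's forecast:
# $871M in 2008 growing to ~$2.3B in 2013, i.e. five compounding years.
base_millions, target_millions, years = 871.0, 2300.0, 5

cagr = (target_millions / base_millions) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 21% per year
```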
Businesses are growing their use of virtualization technology in the data center, moving it out of the test-and-development arena and into production environments. That shift comes at the same time that the recession is forcing IT departments to find ways to maintain or grow their levels of service while having to cut their budgets.
These large-scale deployments of virtual machines will fuel the growth of virtual server management software, particularly on Windows, Unix and Linux platforms, IDC said. Initially, most of the attention will focus on products for change and configuration management, including discovery, configuration, provisioning, software distribution and change control, according to IDC analyst Mary Johnston Turner.
“While change and configuration management will rule in the short term, performance management and event automation management capabilities will eventually take hold,” Turner said in a statement. By 2013, the idea of standalone virtual server management will begin to disappear, blending with core systems management platforms to become part of the fabric of dynamic data centers, she said.
Automation software already is becoming a key focus for many of the top players in the data center. Hewlett-Packard officials, in releasing enhancements to the company's automation software suite April 8, said that along with the benefits of reduced costs and increased agility, virtualization technology also adds complexity to the data center mix, which calls for greater automation capabilities. Cisco Systems, which is looking to grow its Unified Computing System data center strategy, announced April 9 that it was buying Tidal Software for $105 million, not only for its application management software but also for products that let users automate various practices and tasks.
In its study, IDC also found that many businesses have yet to integrate virtual and physical resource management processes, or to align virtual server management practices with ITIL (IT Infrastructure Library) management standards. IDC also said that as the virtual server management space matures, it will open up opportunities for new competitors.
Wednesday, April 1, 2009
OCZ Officially Announces DIY Neutrino Netbook
As we first learned at CeBit, the Neutrino is part of OCZ's DIY product line, which allows the user to upgrade RAM and storage on his or her own without voiding the warranty.
That said, much of the Neutrino’s hardware is already set, with the CPU, chipset, display, input devices, Wi-Fi chip, battery and even webcam standardized. What potential owners would be able to install on their own is RAM up to 2 GB and storage solutions running all the way up to a 250 GB SSD. And naturally, with no storage included, users won’t be forced to pay for a license for a pre-installed operating system.
“There are many consumers that desire the blend of essential functionalities and an ultra compact form factor, and our new Neutrino Do-It-Yourself netbooks based on Intel Atom technology allows users to design and configure their very own solution tailored to their unique needs,” commented Alex Mei, CMO of the OCZ Technology Group. “The Neutrino DIY netbook puts the control back in the hands of consumers by allowing them to configure a feature rich netbook with their own memory, storage, and preferred OS into a reasonably priced go-anywhere computing solution.”
The option of fitting a 250 GB SSD into the Neutrino would instantly make it stand out, though at the end of the day, it’s still an Atom-based netbook at heart.
The OCZ Neutrino is shipping now with the skeletal model starting at $269.
Dell Mini 10 Finally Gets Fancy 720p Display
Just before the Mini 10 launched, we learned that the 720p display would come in a 1366 x 768 resolution -- a welcome bump up from the usual 1024 x 600 of nearly all other netbooks. When the Mini 10 launched, however, it was only available in the less desirable 1024 x 576 resolution; and to make matters worse, none of the TV tuner or GPS features were available.
While the full featured Mini 10 still isn’t here, Dell is now offering customers the upgrade to the much better 1366 x 768 display for just $35. A pleasant surprise is that all the display options for the Mini 10 are with matte displays, which is a departure from the trend of glossy-is-better. It’s particularly interesting considering that both the Mini 9 and Mini 12 are outfitted only with glossy screens.
Thanks to the Intel GN40 chipset’s HD decode capabilities, owners of the top-line display will be able to play back 720p video at its full resolution. Is this enough to entice you to consider the Mini 10 as a netbook option?
Windows 7 to Usher in $200 Netbooks
What we do know, however, is that Windows 7 Starter Edition will be the cheapest one, no doubt the option for OEMs looking to build the cheapest netbook running Windows. Microsoft thinks that entry-level netbooks could hit a new low price point -- something netbooks have been slowly moving away from as feature sets balloon.
“We have a couple of the OEMs continuing down a path to be very aggressive on price. It puts the pressure on everyone. We're anticipating opening price points to reach about $200 at least in the US market this holiday season,” said Mark Croft, the director of OEM Worldwide Marketing, according to a TechRadar story.
Interestingly, Croft added that Nvidia Ion machines could come in at just $50 more, making a $250 GeForce-equipped netbook sound very attractive.
Microsoft cautions, however, that pricing and specifications will likely vary greatly. “There isn't a standard, uniform view of the world. Each OEM has nuances on this depending on what they think their brand value is, each one has a slightly different take on what they're trying to do in terms of market share or margin,” Croft added. “Some of them are trying to make $10 on this device or $20, and some are just trying to sell a unit and break even.”
While Windows 7 Starter Edition could become the usual flavor for the el cheapo netbook, Microsoft is pushing for the Home Premium edition to be the standard.
“We are clearly going to market to customers that Home Premium is the default,” said Croft. “We've made our case to the OEMs; we've shared some analyst data with them about customer preferences.”
Microsoft has said before that it would like users of lesser versions of Windows 7 to upgrade. Artificial limits on Windows 7 Starter Edition, such as restricting the user to only three programs running at once, would quickly make a case for an upgrade. Encouraging OEMs to start with Home Premium would not only fulfill Microsoft’s business desire but also give the end user a better experience. Sadly, that might not happen with a $200 netbook.
Intel Launches Nehalem Xeon Chips
Intel has officially launched its latest Nehalem-based Xeon processors, the single-socket 3500 and dual-socket 5500 series for servers and workstations. The announcement may seem a bit stale in light of all the hubbub around Nehalem-based workstations such as the latest Mac Pros and Lenovo's D20 and S20, which offer Nvidia Quadro or ATI FirePro graphics and Nvidia's Tesla C1060 GPU co-processor platform.
CNet reports that fresh announcements are due today from the bigger server suppliers, among them IBM. "If you thoroughly maximize the capabilities of Nehalem, generation to generation you can get something like two times the performance capability," said Alex Yost, vice president of IBM BladeCenter.
According to Mercury News, HP yesterday launched 11 products incorporating Xeon 5500 chips, including blade servers, rack servers and tower servers.
When asked where these new releases would leave AMD's Shanghai server processor, Intel’s Pat Gelsinger told the Financial Times that Intel sees Nehalem having a huge impact on AMD's four-socket business.
"The performance gains we showed over the previous 5400 processor, all 30 of them are new two-socket records, and every one of those benchmarks bar one beats the four-socket Shanghai," explained Gelsinger. "We see Nehalem having a much bigger impact on their four-socket business than our own four-socket one," he concluded.
The launch of Intel’s Xeon processors comes in the middle of the company’s legal battle with Nvidia over its new line of processors. Intel doesn’t believe that Nvidia has the right to build chipsets for processors with integrated memory controllers. Intel filed a lawsuit back in February stating that the chipset license agreement the two companies signed four years ago does not extend to its future-generation CPUs with integrated memory controllers.
The new processors are now shipping for $188 to $1,600 for the Xeon 5500 and $284 to $999 for the Xeon 3500. Hit up the press release for the full scoop.
AMD's Radeon HD 4770 Specs Revealed
Next month will supposedly be a hot one for both Nvidia and AMD if all goes according to plan, as both companies will unleash highly anticipated graphics cards onto the market: AMD's Radeon HD 4890 and Nvidia's GeForce GTX 275. However, AMD has a few additional aces up its sleeve, including the forthcoming Radeon HD 4700 series based on the 40nm RV740 GPU.
Presentation slides taken from a recent AMD briefing have surfaced on IT168.com. One slide in particular compares the Radeon HD 4770 with Nvidia's GeForce 9800 GT, showing superiority in both features and performance. According to AMD, the Radeon HD 4770 will provide 9.7 GFLOPS per dollar and 12.0 GFLOPS per watt. By comparison, the 9800 GT provides only 5.1 GFLOPS per dollar and 4.8 GFLOPS per watt. Additionally, the 40nm Radeon HD 4770 utilizes GDDR5 memory (512 MB, 128-bit) and provides 960 GFLOPs of processing power; the 65nm/55nm 9800 GT uses GDDR3 (256-bit) memory and provides 504 GFLOPs of processing power.
Additionally, the Radeon HD 4770 will offer a core clock of 750 MHz, 640 stream processors, a memory clock of 800 MHz, and a memory bandwidth of 51.2 GB/s. The card is estimated to draw around 80W thanks to the 40nm manufacturing process, somewhat lower power consumption than the 55nm Radeon HD 4850 and 55nm 4830. However, the Radeon HD 4770 will come with 826 million transistors, 130 million fewer than the 4830 and the 4850 (both with 956 million).
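The headline numbers in AMD's slide can be cross-checked from the raw specs, assuming the usual conventions for this generation of GPU: each stream processor performs a multiply-add (2 FLOPs) per cycle, and GDDR5 transfers four data words per memory clock. The $99 price and 80W figure are the ones quoted above.

```python
# Cross-check the Radeon HD 4770 slide numbers from the card's specs.
sps       = 640   # stream processors
core_mhz  = 750   # core clock, MHz
mem_mhz   = 800   # GDDR5 memory clock, MHz
bus_bits  = 128   # memory bus width
price_usd = 99    # quoted retail price
tdp_watts = 80    # estimated board power

# Each stream processor does a multiply-add (2 FLOPs) per cycle.
gflops = sps * core_mhz * 2 / 1000  # -> 960.0 GFLOPS

# GDDR5 is quad-pumped: 4 transfers per clock, bus width in bytes.
bandwidth_gbs = mem_mhz * 4 * bus_bits / 8 / 1000  # -> 51.2 GB/s

print(gflops, "GFLOPS;", bandwidth_gbs, "GB/s")
print(round(gflops / price_usd, 1), "GFLOPS per dollar")  # ~9.7
print(round(gflops / tdp_watts, 1), "GFLOPS per watt")    # 12.0
```

Both derived ratios land exactly on the slide's claimed 9.7 GFLOPS per dollar and 12.0 GFLOPS per watt.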
Although the Radeon HD 4770 looks to be the last card to ship within this half of 2009, it is expected to retail for around $99 USD. Of the nine AMD cards set to ship in Q1 and Q2 2009, the Radeon HD 4830 is the only other card carrying a $99 USD price tag. According to the slide taken from AMD's presentation, the HD 4870 X2 (2 GB) will retail for $399, the HD 4870 (512 MB) for $169, and the HD 4890 (1 GB) for around $260.
With nine cards hitting retail shelves during this half of the year, it's definitely a great time to upgrade an existing graphics card. AMD offers a great selection, with cards addressing not only enthusiasts and performance-seekers but also consumers on a tight budget. Look for the affordable Radeon HD 4770 to hit retail shelves on May 4.
QOTD: How Would You Change AMD?
AMD, arguably the CPU company that's gone through the most significant changes in the last several years, has yet to scratch the surface of what it can really do.
More recently, AMD decided to establish a separate company to handle semiconductor manufacturing, and is now focusing on design and engineering as well as emerging technologies. Its purchase of ATI was definitely a big announcement for the industry several years ago. AMD hasn't always been this bold, however. In its early days, it mainly followed in Intel's shadow. This is in stark contrast to the AMD we all know today.
Granted, not all is rosy. AMD is in the middle of a heated disagreement with Intel over the use of Intel's x86 technologies. Financially, it's still up and down.
The question of the day is: How would you run or change AMD?
Would you have purchased ATI? Perhaps another company, or none? Would you attempt to even create a whole new CPU architecture? Would you also have formed Globalfoundries? Let us know.
Circuit City Used Consoles Had Porn, CC #'s
It's truly unfortunate that Circuit City ultimately closed its doors. Locally, the building sits deserted, with shadows darting within its darkened windows like ghosts of previous consumers and employees trapped in time. Grass is beginning to reach up to the sun through cracks in the pavement. At one time, the store thrived with business and showed no sign of its economic troubles, its bright sign a beacon of economic success and electronic wonder. It was a great place to pick up gadgets, gizmos, and much-needed hardware, and for some, a handy place to trade in used gaming consoles.
Consumers who actually did trade in used consoles such as the Nintendo Wii, PlayStation 3, and Xbox 360 before the corporation closed its doors may find themselves in a bit of a pickle. As it turns out, Circuit City liquidated everything--including gaming consoles. The catch is that Circuit City did not wipe the drives before selling off the used merchandise. What this means is that personal information stored on the hard drives (or other data storage devices) remained intact if the consumer did not remove it prior to selling the device.
That, of course, is bad news. Third-party buyers now have access to a plethora of personal data consisting of credit card details, photos, videos, downloaded retail and arcade games, and even home-made porn. One firm that bought a good chunk of the used console stock from Circuit City even claimed that most of them were actually broken, or "non-functioning" as stated. Once the firm began to repair all the damaged consoles, it discovered loads of sensitive, personal data.
"The facility discovered this while repairing the damaged consoles," reports Kotaku. "They'd fix them, turn them on, test their network connectivity, then suddenly start receiving friend requests, chat requests, game invites, etc. What's more, with the user details still recorded on the system, they could have easily purchased game content on an unsuspecting former owner's credit card." To back up the claim, the unnamed firm sent images showing stacks and shelves of Xbox 360 and PlayStation 3 consoles.
While the idea is somewhat humorous--especially visualizing someone's homemade porn planted on an Xbox 360 hard drive--it's a good eye-opener to the fact that even gaming machines can compromise personal security. Even game-related stores such as EBGames are notorious for not inspecting used hardware thoroughly. Case in point: it personally took three tries to get a working used PlayStation Portable: the first one locked up completely, and the second one had a bad thumbstick and a dead pixel. Thankfully, the third PSP still functions well today; however, unbeknownst to the local EBGames shop, it contained a memory card housing the prior owner's pictures and other personal information.
Way to go.
The message here is that consumers should wipe all personal information from any electronic device before selling it, whether it's an iPod Touch, PSP, Xbox 360, or a 500 GB external hard drive. Obviously, retail organizations will not thoroughly wipe personal data, and external repair shops hired by those corporations may well access that data if the devices go in for repair. With that in mind, consumers should be extremely cautious.
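For an ordinary file on a conventional hard drive, the minimum precaution before selling is overwriting the contents before deletion. A minimal sketch of the idea (illustrative only: the function name is invented, and on SSDs, flash media, and game consoles wear-leveling can leave old data behind, so those devices need their built-in secure-erase or factory-reset features instead):

```python
import os

def overwrite_and_delete(path, passes=1):
    """Overwrite a file with random bytes, then delete it.

    Illustrative sketch only: on SSDs and flash media, wear-leveling
    means overwrites may not hit the original blocks; use the device's
    secure-erase or factory-reset feature in those cases.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with noise
            f.flush()
            os.fsync(f.fileno())       # push the overwrite to disk
    os.remove(path)
```

For whole devices, dedicated wipe tools or the console's own format/reset menu are the right level to work at; file-level overwrites are only a partial measure.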
Nvidia Introduces New Quadros, Multi-OS SLI
Nvidia has refreshed nearly its entire Quadro line with a handful of new cards fit for those who use GPUs for work instead of play.
Pairing up with the new Intel Xeon chips based on the Nehalem architecture are five new Quadro products. Nvidia actually lists seven Quadros as being new, but the FX 5800 and FX 4800 have occupied the stratosphere since late last year with onboard memory and prices that rival entire gaming rigs.
New, and definitely more economical for the non-heavy industrial user are the FX 3800 ($900), FX 1800 ($600), FX 580 ($150), FX 380 ($100) and NVS 295 ($100).
Perhaps even more significant is Nvidia’s introduction of SLI Multi-OS, which enables use of multiple Quadro GPUs from a single graphics workstation in a virtualized environment.
"In today's economy, organizations are turning to virtualization to increase productivity and maximize cost savings," says Jeff Brown, general manager of professional solutions at Nvidia. "Now professionals working with visualization applications can benefit from virtualization."
SLI Multi-OS is available on the Quadro FX 4800, FX 5800 and the new FX 3800. According to Nvidia, SLI Multi-OS works in association with Parallels Workstation Extreme virtualization software and Intel's VT-d technology, assigning both the host and guest virtual machine its own dedicated GPU.
The new Quadro cards are available now in boards from PNY Technologies, Leadtek and Elsa, and in systems from Dell, Fujitsu-Siemens, HP and Lenovo.
Another RV790 in May, Possibly X2?
However, there has been some speculation that ATI is currently working on a dual-GPU version of the HD 4890. Although the X2 version is neither confirmed nor denied, the end result would provide consumers a blazing fast card with a huge payload: nearly 380W of power consumption. It's highly unlikely that ATI will produce such a product; however, consumers may see the X2 version produced by ATI partners instead, especially those accustomed to manufacturing overclocked boards. Keep in mind that the previous HD 4870 X2 featured a TDP of 286W, whereas the Nvidia GeForce GTX 295--mounted with two GPUs--had a TDP of 289W. An X2 version of the Radeon HD 4890--with a TDP of 190W for a single GPU--would clearly push its power needs beyond those two cards.
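The back-of-the-envelope math behind that skepticism is a naive doubling of the single-GPU TDP; real dual-GPU boards often ship with reduced clocks, so treat this as an upper bound rather than a prediction:

```python
# Naive upper-bound estimate for a hypothetical Radeon HD 4890 X2.
hd4890_tdp = 190              # single-GPU TDP, watts
x2_estimate = 2 * hd4890_tdp  # -> 380 W, before any clock reductions

# Existing dual-GPU boards for comparison (TDP, watts):
hd4870_x2_tdp = 286
gtx_295_tdp = 289

print(x2_estimate, "W estimated")
print("Exceeds both existing dual-GPU cards:",
      x2_estimate > hd4870_x2_tdp and x2_estimate > gtx_295_tdp)
```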
Still, something seems to be in the works. Apparently, the German tech website ATI Forum gained access to the release notes for the upcoming Catalyst 9.3 drivers due in May. The site discovered a device identification number (ASIC ID) associated with another RV790-based card. While the notes did not specify the application, it's speculated that the RV790 offering could be a Mobility Radeon HD 4890 for notebooks, or the dual-GPU offering as previously believed. Either version is entirely possible at this point, and despite the power consumption an X2 version would generate, it's easy to put two and two together given the recent talk and the ASIC ID found in the Catalyst 9.3 drivers.
Look for more info to appear within the next few weeks, especially once the Radeon HD 4890 actually hits the market early next month.
Apple and McDonald's Team Up, Unveil iMc
In one of today’s more surprising announcements, Apple and McDonald’s are today partnering in a cross-brand product effort.
The two companies said that they are looking to leverage the similarities of each other’s brands to expand their respective client bases. Both companies had been in talks since late 2003, when a deal to distribute iTunes with Happy Meals fell through, putting this deal more than half a decade in the making.
“We realized the overlap when we started selling the new 13- and 15-inch Unibody MacBooks last October. During our busy holiday season, customers would come in looking for the 17-inch version asking if we had the Big Mac,” explained Pullman Legwand, an Apple Genius at Apple’s flagship Apple Store.
The confusion eventually led customers to McDonald’s restaurants, where they would express their desire for Apple products, only to be sold apple slices with caramel dip.
Apple and McDonald’s marketing teams saw the problem and came together to create a brand new product that would effectively bridge the two companies’ clientele: the iMc.
The iMc is an all new food menu item that combines McDonald’s food-making-mastery and Apple’s simplistic and elegant design cues. Doing away with the traditional fixings in a burger that would distract from the overall experience, the iMc contains no ketchup, mustard, cheese, pickles, lettuce, tomato, or meat.
“We’re proud to carry the iMc as part of our permanent menu. The no-frills nature of the iMc represents the next evolution of fine dining,” said Joseph K. Ng, McManager at McDonald’s. “It’s just like the new iPod Shuffle.”
The news hit early in the day before Wall Street opened shop, but already stock analysts are seeing this as a “quick win” for the fast food restaurant giant.
“Before the advent of the boutique coffee shop with its fancy baristas, McDonald’s was the hang out of choice for the young and fashionable,” quipped analyst Lisa Tuu. “Through its partnership with Apple, McDonald’s will bring the yuppie MacBook-owning crowd back to enjoy a milkshake instead of their usual grande-quad-ristretto-nonfat-no whip-vanilla-latte-in-a-venti-cup.”
The iMc goes on sale today starting at $19.99 with add-ons (cutlery, napkins, pickles, mayo) available at an additional cost. No meat is available at this time, however reps say this is to help further the "slimmed-down" ideal of the iMc. Customers will also have the option to upgrade to the iMc Pro, a vegetarian friendly Quorn hamburger patty for a 50 percent price increase.