Thursday, January 29, 2009

Nvidia's Dual-GPU SFF Card Packs a Punch

Nvidia has released the industry’s only low-profile professional graphics solution designed for maximum display real estate in a Small Form Factor (SFF) package. The Quadro NVS 420 packs a punch.

The new Quadro NVS 420 complements the ever-increasing popularity of the small form factor PC. Although not mainstream yet, these machines could be in the near future, even in the workplace. A physically smaller computer takes up less desk space, and small form factor cases add a bit of elegance to a workspace on the job or at home.

With support for up to four 30-inch displays at a resolution of 2560x1600 pixels each, business professionals who require a lot of digital desktop real estate can maximize their productivity with display management tools such as Nvidia nView.
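
Four 30-inch panels at 2560x1600 each add up to a lot of desktop real estate; a quick bit of illustrative arithmetic (display count and resolution taken from the figures above):

```python
# Total desktop pixels across four 30-inch panels at 2560x1600 each.
displays = 4
width, height = 2560, 1600

pixels_per_display = width * height           # 4,096,000 pixels per panel
total_pixels = displays * pixels_per_display  # 16,384,000 pixels in all

print(f"{total_pixels:,} pixels (~{total_pixels / 1e6:.1f} megapixels)")
```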

Quoting Jeff Brown, general manager of Professional Solutions at Nvidia:

“The convenience of small form factor computers can now finally be matched with remarkable business graphics and digital display capabilities. The Quadro NVS 420 transforms these small form factor machines into business graphics powerhouses.”

Computer aided drafting (CAD) specialists or graphic design artists will surely enjoy the capabilities of the Quadro NVS 420 (if they are in need of a powerhouse low-profile card). This card comes equipped with a large frame buffer and high memory bandwidth, something that is definitely required by many digital signage installations.

The Nvidia Quadro NVS 420 should be available this coming February, carrying a price-tag of roughly US$499.00. More information on Nvidia Quadro technology can be obtained here.

AMD Launches 45-nm Opteron Line

Today AMD launched its 45-nm Quad-Core Opteron line of processors, featuring low power consumption with high-end results.

With the rising demand for low power consumption, AMD answers the call with its new line of Opteron (aka "Shanghai") 55-watt ACP processors, now available in five flavors through global OEMs and solution providers. HP currently uses AMD's processor in eight server systems, and Quad-Core AMD Opteron HE processor-based servers from Rackable Systems are also available, with systems from Dell, Sun, and other companies coming soon.

According to AMD, the Opteron HE (highly efficient) line offers speeds ranging from 2.1 to 2.4 GHz while saving money during idle time, consuming 20 percent less power than competing systems. AMD designed the Opteron HE processor for businesses needing heavy processing power during peak hours and energy savings during low-utilization hours. Pricing for the three chips ranges from $316 to $1,514.

“In the current economic environment, datacenter managers are under more pressure than ever to reduce costs without compromising the latest features or performance,” said Patrick Patla, general manager, Server and Workstation Business, AMD. “The new Quad-Core AMD Opteron HE series processor offers unrivaled performance-per-watt and cost-efficiencies for a wide range of configurations without a potential front-side bus bottleneck. In the second quarter AMD plans to take energy-efficiency to the next level in introducing even lower ACP processors for the unique demands of cloud computing environments.”

AMD also unveiled the two meatier Quad-Core Opteron SE (special edition) processors, both featuring speeds of 2.8 GHz and a 105-watt ACP thermal envelope. The Opteron 2386 SE is priced at $1,165 (two-socket) and the Opteron 8386 SE at $2,649 (four- to eight-socket). Head here to get a complete list of AMD pricing.

AMD said these new processors are also available in three HP systems, as well as in systems from other AMD technology partners. According to InformationWeek, AMD will ship faster models of the SE processor later this year. The company also said that both the HE and SE chips can plug into motherboards running AMD's older 65-nm Opteron processors, though customers will first need to upgrade the BIOS.

"With IT decision-makers looking to do more with less, the newest Quad-Core AMD Opteron processor can help drive data center efficiencies and reduce complexities with innovations that offer superior virtualization performance and increased performance-per-watt," the company said.

AMD launched the first nine 45-nm Quad-Core Opteron processors back in mid-November, and then released its line of Phenom II desktop processors at the beginning of this month. To learn more about the new Quad-Core AMD Opteron processor, head to the press kit here.

Intel Prepping 320 GB SSD for 3Q Launch

Details are scant, but Intel has leaked the news that eight new solid-state drives are on the way come summertime.

It’s not as big a story, capacity-wise, as Toshiba’s announcement that it is working on a 512 GB solid-state drive. Nevertheless, Intel has leaked the tidbit that it intends to release eight new solid-state drives in the third quarter (July to September). Capacities for these drives will cap out at 320 GB of total storage as a result of Intel moving to 32-nanometer manufacturing technology.

There’s no word yet on the actual release date or pricing of the drives. Nor has Intel offered any comment about the product launch, which the company has allegedly been chatting up with PC manufacturers. Just don’t expect the drives to run cheap. The company’s 160 GB drives sell in the $900 to $1,000 price range. Even assuming that the prices of all of Intel’s SSD offerings would shuffle around as a result of eight new product launches, expected costs for high-capacity SSDs remain high across the board.
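
To put the current price range in perspective, the quoted $900 to $1,000 for a 160 GB drive works out as follows (illustrative arithmetic only; street prices vary):

```python
# Rough price-per-gigabyte for Intel's 160 GB SSDs at the article's
# quoted $900-$1,000 street range.
capacity_gb = 160
price_low, price_high = 900, 1000

per_gb_low = price_low / capacity_gb    # $5.625 per GB
per_gb_high = price_high / capacity_gb  # $6.25 per GB

print(f"${per_gb_low:.2f} to ${per_gb_high:.2f} per GB")
```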

At the very least, eight new drive models should be able to bump down the costs of Intel’s current core lineup of solid-state drives. With demand expected to ramp up for business-class solid-state laptops later this year, this could put Intel in a favorable position to compete against the less expensive solid-state drives on the low end of the capacity scale.

MSI Teases WindBox Release... Again

When we first got a peek at the MSI WindBox in November, we loved it. When we saw it for ourselves at CES 2009, we still loved it.

Essentially, the WindBox is an ingenious but simple device that mounts inconspicuously on the back of an LCD monitor, transforming it into a space-saving, self-contained computer.

The hardware inside the little box is almost identical to that found inside the MSI Wind netbook, hardware that in turn differs little from what powers nearly the entire netbook market.

The WindBox does differ in thermal design, however: it has no active cooling, which means it’ll operate in near silence, save for the hum of the hard disk drive.

While we had suspected much of it ourselves already, MSI today released specifications for the WindBox: the Atom N270, Intel 945GSE + ICH7M, Intel GMA 950 graphics, LAN, b/g Wi-Fi, and 1 GB of RAM. Perhaps the one sore point is that it only features VGA output, meaning that monitors with DVI inputs won’t be used to their full potential.

Unfortunately, MSI’s press release today was still mostly a tease, offering a vague release date of 2009 Q1 without any price. We hope to get our hands on one soon.

Western Digital Launches 2 TB Hard Drive

Is your collection of “media” growing at an exponential rate? Western Digital could have what you’re looking for, as it launches the industry’s first 2 TB hard disk drive.

Western Digital’s first 2 TB hard drive (model WD20EADS) makes use of the company’s 500 GB/platter technology (with 400 Gb/in² areal density), using four platters to make up the massive capacity. The drive features a 32 MB cache.
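
The platter math is straightforward, though note that drive makers count decimal gigabytes, so an operating system will report a somewhat smaller binary figure (a quick illustrative sketch):

```python
# Four 500 GB platters make up the WD20EADS's 2 TB capacity.
platters = 4
gb_per_platter = 500

capacity_gb = platters * gb_per_platter  # 2000 GB = 2 TB (decimal)
capacity_bytes = capacity_gb * 1000**3   # manufacturers count decimal GB
capacity_tib = capacity_bytes / 1024**4  # what an OS reports, ~1.82 TiB

print(f"{capacity_gb} GB decimal, about {capacity_tib:.2f} TiB binary")
```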

Most computer users are still running modestly sized drives in comparison to WD’s new offering, and seem to be managing fine. Will there ever be a real need to hit 2 TB when even 1 TB seems like a luxury? Mark Geenen, president of Trend Focus, says more and more people are taking to the roominess.

"While some in the industry wondered if the end consumer would buy a 1 TB drive, already some 10 percent of 3.5-inch hard drive sales are at the 1 TB level or higher, serving demand from video applications and expanding consumer media libraries," said Geenen. "The 2 TB hard drives will continue to satisfy end user's insatiable desire to store more data on ever larger hard drives."

The new 3.5-inch drive will be a part of the Caviar Green family, which, as the name suggests, is part of WD’s low-energy line. The drive will make use of IntelliPower, which WD says “fine-tunes the balance of spin speed, transfer rate and caching algorithms designed to deliver both significant power savings and solid performance."

Overall, however, the new 2 TB drive’s specialty is storage, not speed. The WD20EADS should be filling channels and carries with it an MSRP of $299.

What would you do with all that space?

Apple MacBook Review: Part 2: The Mac OS X Operating System

The Mac OS X Operating System

You can’t talk about a Mac notebook or desktop without talking about Mac OS X. For the uninitiated, Apple has had a steady stream of operating system releases starting with 10.0 Cheetah, 10.1 Puma, 10.2 Jaguar, 10.3 Panther, 10.4 Tiger, and 10.5 Leopard. While 10.1 was a free upgrade for 10.0 users, each operating system release thereafter has required a purchase, typically $129 ($109 street) for a single user license, $199 for a 5-pack family license ($139 street), and $69 from an on-campus bookstore.

OS X 10.0 was released around the same time as Windows XP, and OS X 10.5 was released around the same time as Windows Vista. Unsurprisingly, the difference between Leopard and Cheetah is as significant as the difference between Windows Vista and Windows XP. The real difference is that OS X has had more incremental releases, which can really be thought of as “service packs plus bonuses.” Not only does each “point one” release address bugs, it also adds extra features and capabilities.

I would bore you if I went over all of the nitty-gritty of the underlying technology of OS X and debated the pros and cons of the XNU kernel, the inferiority of Leopard’s Address Space Layout Randomization compared to Vista’s, or the differences between Xcode and Visual Studio. I won’t stress little things like the fact that copy/paste is Command-C and Command-V as opposed to CTRL-C and CTRL-V, meaning it feels like ALT-C and ALT-V because you’re using your thumb instead of your little finger. It takes a few days to adapt.

Instead, I’ll just touch upon some of the key features that make a Mac, a Mac.

Responsiveness

When the iPhone first launched, one of its most impressive features was its speed and responsiveness compared to other smartphones with mobile Web access at the time. Working with Mac OS X provides the same overall level of fit and finish as the iPhone. Opening multiple windows, navigating between them, and launching applications is simply faster in Mac OS X than in Windows Vista or even Linux. This wasn’t always the case with Macs. Even when the GPU-accelerated interface was introduced in OS X 10.2, Windows XP was still the faster-performing operating system. As GPUs have continued to get faster, however, OS X’s fully GPU-accelerated interface has come to outpace the capabilities of Windows Vista.

Additional architectural improvements have been made as well. Mac OS X will automatically defragment files under 20 MB when they are accessed, keeping the filesystem better maintained. By default, a Mac will automatically delete old log files and temporary files to maintain free space. Finally, since Mac OS X adopts a Unix-like approach, storing preferences in many small per-application files rather than a single registry hive, adding new software introduces less fragmentation.
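
Those per-application preference files are property lists, which Python can read and write with its standard plistlib module. A minimal sketch of the idea (the domain name and keys below are hypothetical, purely for illustration):

```python
import plistlib
from pathlib import Path

# On a Mac, each application keeps its own preference file under
# ~/Library/Preferences (e.g. com.example.MyApp.plist). The domain
# name and keys here are made up for illustration.
prefs = {"WindowWidth": 1024, "ShowToolbar": True}

path = Path("com.example.MyApp.plist")  # would live in ~/Library/Preferences
path.write_bytes(plistlib.dumps(prefs))

# Reading it back touches only this one small file -- there is no
# shared registry hive that every application writes into.
loaded = plistlib.loads(path.read_bytes())
print(loaded["WindowWidth"])
```

Because each app owns its own small file, installing or removing software never bloats a central store the way registry entries can.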

Acer to Unveil Smartphone on Feb. 16

Acer has officially confirmed the date for the launch of the company’s smartphone. February 16, make a note!

It’s not easy to keep a secret in the tech industry, especially when it comes to consumer electronics. Everyone gets all excited and starts poking around looking for extra information; someone lets something slip to a blog, and the next thing you know, it’s all over the place.

Whether no one cared enough to poke and pry, or Acer is just really good at keeping secrets, has yet to be determined, but there’s pretty much zip on what we’re going to get from this Acer-brand smartphone. The company bought smartphone maker E-Ten last year, so an Acer phone was inevitable. Aside from the company spilling the beans that we’d see the first E-Ten/Acer baby in Q1 2009, there’s been nothing, so we’re eager to learn more.


This morning our mailboxes pinged with an invite to an Acer event at the Mobile World Congress in Barcelona. We’re excited to see what Acer has to offer but we’re not expecting anything groundbreaking. Another pony in the smartphone race. Woo. Or something.

Security Intelligence

Symantec has established some of the most comprehensive sources of Internet threat data in the world. The Symantec Global Intelligence Network encompasses worldwide security intelligence data gathered from a wide range of sources, including more than 40,000 sensors monitoring networks in more than 180 countries through Symantec products and services such as Symantec DeepSight™ Threat Management System and Symantec Managed Security Services, and from other third-party sources.


Symantec gathers malicious code reports from more than 120 million client, server, and gateway systems that have deployed its antivirus product, and also maintains one of the world’s most comprehensive vulnerability databases, currently consisting of more than 25,000 recorded vulnerabilities (spanning more than two decades) affecting more than 55,000 technologies from more than 8,000 vendors. Symantec also operates the BugTraq mailing list, one of the most popular forums for the disclosure and discussion of vulnerabilities on the Internet, which has approximately 50,000 direct subscribers who contribute, receive, and discuss vulnerability research on a daily basis.


As well, the Symantec Probe Network, a system of more than 2 million decoy accounts in more than 30 countries, attracts email from around the world to gauge global spam and phishing activity. Symantec also gathers phishing information through the Symantec Phish Report Network, an extensive antifraud community of enterprises and consumers whose members contribute and receive fraudulent Web site addresses for alerting and filtering across a broad range of solutions.


These resources give Symantec’s analysts unparalleled sources of data with which to identify, analyze, and provide informed commentary on emerging trends in attacks, malicious code activity, phishing, and spam.

Virtualization: Smart IT Investment in a Tough Economy

Given today's weakened economy, CIOs are naturally reconsidering their IT budgets, wondering if the spending priorities they set earlier in the year are still appropriate. But if the CIOs interviewed for the Goldman Sachs IT Spending Survey of July 2008 are representative, those priorities make even more sense now than they did before the recent downturn.

They identified their top three spending initiatives over the next 12 months (in order of priority) as server virtualization, server consolidation and cost cutting.

Considering the well-known advantages of data center virtualization, which arise in large part from its ability to support server consolidation and reduce operating expenses, there's certainly no reason to reprioritize that spending mix.

But CIOs who focus solely on those aspects of virtualization could still find they have the wrong investment mix after the economy emerges from its slump. That's because consolidation and cost-cutting are just the first step in the virtualization journey that leads eventually to cloud computing: a completely abstracted, highly flexible and agile IT infrastructure that can deliver any resource (servers, storage, applications) to any device, anytime, anywhere.

Streamlining IT Operations

That's still a ways off, but it is coming – and appropriate investments now can not only save money in the present, but also set up an organization to reap big benefits down the road with an infrastructure that's ready for the future.

"The biggest mistake CIOs are likely to make about virtualization is to think of it only in terms of getting more out of the physical infrastructure," says James Urquhart, Market Manager for Cloud Computing and Virtualization in the Data Center Solutions Group at Cisco.


The consolidation that virtualization makes possible, putting 25 to 40 servers or more on a single physical box, can help businesses make better use of physical resources.

Although that's definitely an advantage – without server virtualization, businesses must overprovision compute resources to handle peaks, operating below capacity the rest of the time – it's only a starting point.
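
A back-of-the-envelope estimate shows why that consolidation ratio matters. The 30 VMs-per-host figure below sits within the 25-to-40 range cited above, but the server counts are invented for illustration:

```python
# Hypothetical consolidation estimate using the article's 25-40
# VMs-per-host range; the fleet size is invented for illustration.
physical_servers = 120
vms_per_host = 30  # within the cited 25-40 range

hosts_needed = -(-physical_servers // vms_per_host)  # ceiling division
reduction = 1 - hosts_needed / physical_servers

print(f"{physical_servers} servers -> {hosts_needed} hosts "
      f"({reduction:.0%} fewer physical boxes)")
```

Even at the conservative end of the range, a fleet shrinks by more than an order of magnitude, which is where the capital and power savings come from.
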
Virtualization can dramatically simplify and streamline IT operations. "You can create a standard server in software which has been tested against the applications you typically use, so rolling out a new server is just a software installation," says Andreas Antonopoulos, an analyst at Nemertes Research. "This produces a staggering reduction in operating expenses by increasing the number of servers each administrator can handle. You free up headcount, make fewer mistakes, deal with fewer exceptions, and generally increase reliability and stability."

Network Designed for Virtualization

But Urquhart points out that realizing this gain in efficiency requires a network infrastructure optimized for virtualization.

The Cisco Nexus family of data center-class switching products aims to provide this capability. "We're getting strong feedback from our customers that the Nexus switches are not only a step towards virtualization, but a big part of their strategy to reduce device count in the data center, and along with it, both capital and operating expenses," Urquhart says.

And, in fact, says William Charnock, Vice President of Technology at the global hosting provider ThePlanet.com, that's where his company has started. "While our internal IT department is aggressively evaluating virtualization, it's not yet a big part of our business model.

"For us, the high port density of the Nexus switches gives us a highly efficient use of our bandwidth resources, flattens and simplifies our network, and cuts down our device count, all of which reduces our capital investment and cuts our operating costs."

More Services for Customers

But Charnock agrees that this is just the start of benefits ThePlanet.com expects to see from the Nexus line.

"Once we have that kind of network density and flexibility, virtualization will be much easier to accomplish when customers demand it. We'll make virtualization tools available to our traditional customers as an add-on, giving us a lower price point for our services, while we build a virtualization platform to support more complex needs."

Bill Williams, Senior IP Architect at Terremark, a leading global provider of managed IT infrastructure services, sees similar benefits from the Nexus line. "The Cisco Nexus 5000 and 7000 serve as the foundation architecture of Terremark's Data Center managed services portfolio, and will help us reduce costs while providing customers greater bandwidth and services."

Urquhart notes that a lot of the impetus for hosted virtualization – virtualized data center infrastructure offered by a third-party service provider – is coming from smaller companies rather than large enterprises.

Infrastructure on Demand

"There's what I call a 'barrier to exit' in the enterprise, which tends to have a huge investment in traditional data center models – not only in terms of capital, but in operational processes and even the business model. Smaller firms don't have that, and tend to be more open to hosted virtualization."

Charnock agrees. The bulk of his company's customers are small businesses – more than 20,000 small companies, many with no more than five employees.

"They come to us for infrastructure on demand, and we end up being almost a lending arm for them, offering a month-to-month model that lets them get in easily without a huge investment," he says. "Virtualization is just more of the same, in some sense."

But, he notes, there's still a barrier, in terms of trust. "Customers like the idea of the 'locked cage,' and they're still wary of sharing resources, despite the price advantage it might offer." As well, in some cases, regulatory compliance forbids the kind of resource sharing that comes with virtualization.

Low Cost, High Quality

Charnock expects those barriers to fall, and says that for ThePlanet.com, virtualization is a necessary technology, one that the company will adopt due to both customer demand and business issues.

"It's a perfect fit with our fundamental business model: very low-cost, high-quality services for customers that can't afford the kind of infrastructure we offer," he says.

And he points out that such services are precisely what victims of the down economy who've lost their jobs may need if they jump into entrepreneurial mode. "Month-to-month infrastructure rental lets entrepreneurs get started proving their big idea, and it's our foot in the door when they succeed. Virtualization will make it easier for us to work with 'the next YouTube' and grow with them – that's the real bottom line."

Dave Trowbridge is a freelance writer based in Boulder Creek, CA.

Cisco Tapping the Network to Help with Environmental Efforts

Long viewed as a societal burden for corporations, environmental concerns are now proving a surprising catalyst to a host of both obvious and unexpected business benefits.

Companies that have made substantial commitments to reducing their environmental impact are discovering new ways of cutting costs and improving operations. And for many companies, especially technology-focused ones such as Cisco Systems, environmental initiatives throughout the world are creating potentially huge and diverse markets for new products to help improve the energy efficiency of everything from buildings and data centers to automobiles and the electrical grid itself.

With this in mind, Cisco is now focusing on ways to use networking technologies to speed such efforts to help its customers improve both their businesses and the environment. In its first major product aimed at this goal, the company announced the development of Cisco EnergyWise, a technology that will help businesses put a stake through the heart of "vampire power," the energy drawn by many electrical devices even when they are not in use.

EnergyWise, a free software upgrade for Cisco's Catalyst line of network switches, will make it possible for businesses to monitor and control the energy consumption of many kinds of networking devices, including IP phones, video cameras, and wireless access points. By combining software-based policy management tools with EnergyWise, companies can automatically turn off or reduce power to their digital devices.

Cisco will also extend EnergyWise to curtail energy usage of other types of products like personal computers and printers. The company says it will eventually make EnergyWise capable of controlling energy uptake by other devices throughout a building, including elevators, heating systems, and lighting.

Chart: Corporations Have Lots of Help to Go Green

As the world's leading networking equipment maker, Cisco has hundreds of millions of products connected to business communications systems throughout the world. In general, more than one billion devices link to all types of corporate and home networks. That number is expected to increase to more than five billion by 2012, making such technologies as EnergyWise an important aid for energy conservation efforts, Cisco executives say.

While the energy used by data centers and communications networks will rapidly increase to keep pace with all the new devices, experts have calculated that new information technology and networking advances like EnergyWise could reduce worldwide pollution by five times the amount such technologies themselves generate.

Saving with Green

Neal Elliott, associate director of research at the American Council for an Energy-Efficient Economy, says new technologies like EnergyWise can help businesses tap into long-ignored opportunities for cost reductions. Waste by definition is expensive, he says. Certainly, volatile energy prices are now underscoring this approach.

"Energy is a big piece of the cost pie that many companies have overlooked," Elliott says. "For the past 50 years corporations have focused on worker productivity as the way to reduce costs. But now they are realizing there's a lot more potential in reducing their energy consumption. This is clearly no longer a tree-hugger issue. It's a key to business success."

Cisco would agree. Less than a year after making a commitment to substantially reduce the environmental impact of its operations, the company is finding just how beneficial "being green" can be for a business.

Last June Cisco - working in partnership with leading government agencies and environmental groups - launched a four-year effort to reduce its greenhouse gas emissions by 25 percent.

The company has already knocked 10 percent off its business travel by using virtual meeting networking tools such as WebEx online conferencing software and its high-end video meeting product, Cisco TelePresence (27 percent of Cisco's direct greenhouse gas emissions are from business travel).
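
The two percentages in that sentence combine into a rough overall figure. This is illustrative arithmetic only, and it assumes the 10 percent cut applies uniformly across all business travel:

```python
# Combining the article's two figures: travel is 27% of Cisco's direct
# emissions, and virtual meetings cut travel by 10%.
travel_share = 0.27      # share of direct greenhouse gas emissions
travel_reduction = 0.10  # cut achieved via WebEx and TelePresence

overall_cut = travel_share * travel_reduction  # fraction of direct emissions

print(f"~{overall_cut:.1%} of direct greenhouse gas emissions")
```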

By working with the Environmental Defense Fund, Cisco also identified a simple way to cut $24 million over four years from energy consumption in its labs.

Rick Hutley, vice president of the global innovations practice in Cisco's Internet Business Solutions Group, says benefits from green efforts can also extend well beyond cost cutting. "Green equals efficiency. And efficiency means good business," he says. "It should be an 'ah-ha' moment for executives that not only can they afford to go green, they can't afford not to."

By being greener, for example, a company can create products that use less material and therefore are less expensive to make, giving a company a competitive price advantage. In other cases, corporations that find alternative ways to conduct meetings or organize their workforce can boost productivity from regaining hours previously lost to business travel, Hutley says.

Corporate Environmental Leadership

But beyond good economic sense, the world is depending on major corporations to lead a new era of environmental stewardship, experts say. As the primary controllers of resource usage, collectively corporations are perhaps the most important players on the environmental stage.

"Governmental regulation is important, but the reality is we won't successfully address climate change if corporations don't become proactive," says Elizabeth Sturcken, a managing director at the Environmental Defense Fund.

But in his work with the world's largest businesses, Hutley says he has found that only "a handful have decided to make the kind of commitment that is necessary to gain the strategic benefits of environmental initiatives."

But Sturcken says the situation seems to be changing. "Our phone is ringing off the hook from companies that want to know how to get started."

Susan Wickwire, the director of voluntary corporate climate programs for the U.S. Environmental Protection Agency, says big businesses are taking much more interest in the EPA's various "green" programs. Last year, for example, the number of participants in its Climate Leaders initiative increased 50 percent. Climate Leaders helps major corporations properly analyze their climate change emissions and set aggressive reduction goals. It now has 250 participating companies.

"We saw a real spike in 2008," Wickwire says. "Once companies seriously look into this issue, they are realizing just how good green can be."

Charles Waltner is a freelance writer in Piedmont, Calif.

Sunday, January 18, 2009

MacWorld Aftermath: A Closer Look At iLife, iWork

While I disagree with how Apple went about announcing the end of its direct participation in Macworld Conference & Expo, I can't really be that harsh on the company for doing so. Apple has been pulling out of trade shows at a fairly steady pace over the years, so its departure from the Macworld Conference & Expo was more a question of when than if.




iLife '09 contains iPhoto, iMovie, GarageBand, iWeb, and iDVD.

An Apple Macworld keynote is an interesting thing because in the end, it's a PR presentation. Things are announced, and the presentation is designed to get you in the mood to buy those things. Over the years, the Mac community has made the keynote into a lot more, but viewed dispassionately, it's about two hours of PR.

Phil Schiller gave this year's keynote (hence "Philnote"), and while he's not as good as Steve Jobs, he did a good job. And while he wasn't announcing anything earth-shattering, the content was not bad at all. (To be somewhat blunt, Steve Jobs has given fewer than a handful of earth-shattering keynotes. The rest have been fairly average, but the Mac community has turned all of them into the pronouncements of a deity. So this keynote really was not any kind of nadir.)

Schiller skipped the usual financial slides (with the state of the stock market and economy, this was no surprise), and even though he was visibly nervous at the start (who wouldn't have been?), he eventually found his groove and showed off the three new products.

iLife '09

The first product was the latest version of Apple's "lifestyle" suite, iLife, now iLife '09. The suite contains 5+ applications: iPhoto, iMovie, GarageBand, iWeb, and iDVD, with iTunes being the "+", since it's occasionally bundled with iLife but exists on its own outside the suite. Every Mac has iTunes, but iLife is a separate install.

iPhoto '09

The major changes in iPhoto revolve around two words: Faces and Places. Faces is Apple-ese for the new facial recognition code in iPhoto that allows you to pick a face in a picture, tag it with a name, and have iPhoto find that person in every picture in your library and tag them in those pictures.

That's a pretty cool feature, especially for people like me, who initially tried tagging things all nice and detailed, and then gave up because it was annoying and tedious. Being able to find all the pictures I have of my son, or my wife, in one step is more than a little handy. Apple also tied Faces into iPhoto's "Smart Albums," so I can create a Smart Album that's tied to all the pictures of my wife, and any time I add pictures to my library that have her in the picture, they'll just show up in that Smart Album. You can also create Smart Albums with groups of people, so you could, say, create a Smart Album for your family, your kids, friends, what have you.

Nokia Adds Free Licenses For Qt Platform

In a move to boost its developer ecosystem, Nokia said Wednesday it would offer its Qt user interface and application framework under the Lesser General Public License.

Qt is a cross-platform framework based on C++ that can be used to build applications and frameworks for computers, set-top boxes, and mobile phones. It enables developers to write programs for multiple platforms with minimal adjustments for a specific platform. It's offered by Nokia-owned Qt Software, and cross-platform applications that use Qt include Google Earth, Last.fm, Opera, and Skype.

The toolkit previously had been available under the General Public License, as well as a commercial license. Nokia said the move to the LGPL was meant to lower the barrier to entry and boost the adoption rate.

"Nokia is making significant contributions to open source communities through ongoing work with Qt, its contribution of Symbian OS and S60 to the Symbian Foundation, and open development of the Maemo platform," said Kai Oistamo, Nokia's executive VP of devices, in a statement. "By moving to LGPL, opening Qt's source code repositories, and encouraging more contributions, Qt users will have more of a stake in the development of Qt, which will in turn encourage wider adoption."

On the Qt blog, the company said it would take multiple steps to make Qt more friendly with the open source software community. It will be opening up the Qt source code repository, employing more developers, reducing the overhead needed to make a submission, and launching a new Web infrastructure to support contributions.

The company said the LGPL license will be available with the release of Qt 4.5, which is scheduled for March. The 4.5 version also will include better support for WebKit and performance improvements. Previous versions of Qt would not be affected by this move.

Intel, Microsoft, HP Sued for Alleged Patent Infringement

Quick data recovery features in PCs and Windows are under legal attack. Data recovery firm Xpoint earlier this week sued IT giants including Intel, Dell, Hewlett-Packard and Microsoft for infringing on patents covering the quick restoration of data in the event of corrupted hardware or software.

Xpoint's patents involve quick recovery of data from secondary storage in case data on primary storage is corrupted or damaged. Quick data recovery technology is widely used in products from PC makers like HP, Dell and Lenovo for users to quickly restore the operating systems.

Xpoint is seeking unspecified monetary damages and injunctive relief from companies selling infringing products. The company owns two data recovery patents: 7,024,581, issued in April 2006, and 7,430,686, issued in September 2008 by the U.S. Patent and Trademark Office.

The lead inventor of the patents was Xpoint CEO Frank Wang, who worked for six years at IBM as a member of the core technology team that developed the first IBM PC, according to the complaint filed on Monday.

In the complaint, Xpoint said it failed to reach licensing agreements with Intel, Dell and HP, which allegedly used knowledge of Xpoint's patents to implement data recovery features in their products.

Intel allegedly infringed on Xpoint's patents through data recovery technology used in its chipsets and motherboards. Intel used technology from Farstone Technology and Acronis, which Xpoint also sued.

Microsoft was also accused by Xpoint of infringing on patents with the System Restore feature in Windows Vista Home and Vista Basic. Similarly, Xpoint said backup and recovery features in Windows Vista Enterprise, Vista Business and Vista Ultimate infringed on its patents. HP and Dell were also accused of infringing on patents in Backup & Recovery Manager and One Button Restore features respectively.

The other companies sued by Xpoint include Gateway, Acer and Toshiba.

Xpoint's lawyers declined further comment about the case. Intel couldn't be reached for comment.

Dell doesn't comment on pending litigation, said David Frink, a Dell spokesman.

Rolling Review: Microsoft Hyper-V

Windows-only shops looking to do a little virtualization on the cheap need look no further than Microsoft's Hyper-V and the freebie Hyper-V Server 2008 standalone host. However, our tests showed that customers with even mildly complex virtualization requirements should run Hyper-V on top of Enterprise or Datacenter editions of Windows Server 2008 and manage guest virtual machines by adding System Center Virtual Machine Manager--which brings on licensing costs.

As for non-Windows environments, Microsoft's claim that Hyper-V is capable of mixed operating system virtualization is technically accurate, but the latest version of Novell's SUSE Enterprise is the only flavor of Linux supported across the Hyper-V range.

This leaves Red Hat, Debian, and other Linux variants to run on other hosts, such as Xen, KVM, and VMware. Linux-heavy organizations that aren't using SUSE Enterprise should bypass Hyper-V in favor of VMware ESX, Citrix XenServer 5.0, or another alternative.

The elephant in the room is Hyper-V's lack of live migration support; VMware and Citrix allow a running virtual machine to shift from host to host with no production outage. But despite early promises to the contrary, Hyper-V doesn't allow live migration.

If these issues don't apply to you, Hyper-V has a couple of selling points beyond the price tag. Windows guest virtual machine performance was more than satisfactory on both our trimmed-down Hyper-V Server 2008 test setup and our "fat OS" installation of Hyper-V on Windows Server 2008 Enterprise.

Microsoft also offers a sensible license model that simplifies management for midsize and larger companies using Windows Server 2008 Datacenter. Datacenter removes Windows guest VM licensing compliance headaches by permitting one physical server (the VM host) and unlimited guest OS instances under the same umbrella license. Citrix and VMware, in contrast, can't offer blanket licensing for Microsoft guests. Windows Server 2008 Enterprise versions allow for a host server plus four VM licenses.

At the other end of the spectrum, a Server 2008 Standard Edition license includes the host plus one guest; additional guests must each get their own license codes. And although Hyper-V Server is free, organizations are responsible for individual licenses for all hosted Windows virtual machines.
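
The guest-licensing rules above can be sketched in a few lines. This is purely illustrative arithmetic using the per-edition figures quoted in this review; the edition names and the helper function are ours, not any Microsoft API:

```python
# Guest licenses included with each Windows Server 2008 host edition,
# per the figures in this review (illustrative sketch, not official tooling).
INCLUDED_GUESTS = {
    "Datacenter": float("inf"),   # unlimited guest OS instances
    "Enterprise": 4,              # host plus four VM licenses
    "Standard": 1,                # host plus one guest
    "Hyper-V Server": 0,          # free host; every guest licensed separately
}

def extra_guest_licenses(edition, guest_vms):
    """Windows guest licenses to buy beyond what the host edition includes."""
    included = INCLUDED_GUESTS[edition]
    if included == float("inf"):
        return 0
    return max(0, guest_vms - included)

for edition in INCLUDED_GUESTS:
    print(f"{edition}: {extra_guest_licenses(edition, 10)} extra licenses for 10 guests")
```

For a ten-guest host, Datacenter needs no additional licenses while the free Hyper-V Server needs ten, which is the trade-off the review describes.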

OUR TAKE
> Hyper-V is best for virtualization novices in small shops or Windows-only environments
> It's hard to argue with free, but overall, Hyper-V feels like a "me-too" product, released so Microsoft would have a modern virtualization offering on the market
> Rival products cost more, but you don't have to spend extra to get broad Linux support and live migration

We had no setup or installation issues adding Hyper-V services to our new or existing Windows Server 2008 hosts. Hyper-V Server 2008 ran well on our virtualization-aware chipsets from Intel and AMD, although each server in our test environment required a base installation of Windows Server 2008 and attendant updates prior to revving up Hyper-V.

Hyper-V proved to be a worthy host on our test setup, a four-host Windows 2008 cluster accessing a shared EqualLogic iSCSI SAN. We had to install Microsoft's System Center Virtual Machine Manager (SC-VMM) 2008 to match the management tool functionality in other host platforms in this Rolling Review. With Hyper-V essentially free, the $869 SC-VMM unlimited license or $505 five-host license are relative bargains for Microsoft customers.
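
Using the SC-VMM prices quoted above, it's easy to work out when the unlimited license beats stacking five-host licenses. A quick sketch (the assumption that five-host licenses can be combined per five hosts is ours):

```python
import math

# SC-VMM 2008 prices from this review: $505 per five-host license,
# $869 for unlimited hosts. Pack-stacking is an illustrative assumption.
FIVE_HOST_PRICE = 505
UNLIMITED_PRICE = 869

def cheapest(hosts):
    """Lower of stacked five-host licenses vs. the unlimited license."""
    stacked = math.ceil(hosts / 5) * FIVE_HOST_PRICE
    return min(stacked, UNLIMITED_PRICE)

for hosts in (4, 5, 6, 10):
    print(f"{hosts} hosts: ${cheapest(hosts)}")
```

For our four-host test cluster, the $505 five-host license suffices; at six or more hosts the $869 unlimited license becomes the cheaper option.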

Tapping SC-VMM's "intelligent placement," Hyper-V does a capable job of allocating new virtual machines to physical servers, comparable to XenCenter's virtual machine placement.

SC-VMM's physical-to-virtual conversions virtualized existing Windows servers without a hitch in our tests. Physical-to-virtual conversions of XP, Windows 2003, and newer Microsoft operating systems utilize Volume Shadow Copy Service. Like XenConvert or VMware Converter, SC-VMM physical-to-virtual migrations can grab a snapshot of a running production machine.

Autodesk to cut 750 jobs, sees 4Q loss (AP)

SAN RAFAEL, Calif. - Autodesk Inc., the software company that recently lost its leader to Yahoo, is cutting 750 jobs, or about 10 percent of its work force, to cut expenses, and expects to report a loss rather than a profit for the fourth quarter.

The announcement of job cuts and the financial forecast Thursday sent the company's shares down more than 10 percent in morning trading.

Autodesk, which makes architectural and engineering software, said it will also consolidate some facilities in addition to making the job cuts, and expects a savings of $130 million annually.

The company said it will likely report a loss of 5 cents to 12 cents per share on sales of $475 million to $500 million for the quarter ending this month. In November, Autodesk said it expected a profit of 13 cents to 19 cents per share with sales of $525 million to $550 million.

Excluding items and expenses related to the job cuts, Autodesk said it will report earnings of 18 cents to 24 cents per share. The restructuring charges should come to 15 cents or 16 cents per share, the company said.

Analysts, who typically exclude one-time charges, were expecting earnings of 31 cents per share, according to a survey by Thomson Reuters.

The announcement comes two days after the internet company Yahoo Inc. disclosed it had hired Autodesk executive chairman and former CEO Carol Bartz as its new chief executive. That ended Yahoo's two-month effort to replace Jerry Yang.

Bartz spent 17 years with Autodesk and saw sales grow by $300,000 while she was chief executive.

Autodesk shares fell $2.25, or 12.7 percent, to close at $15.48 in trading Thursday.

Run Old Windows Apps in Vista... Finally!

Microsoft released a tool that will enable Windows Vista users to run older, incompatible applications. But with Windows 7 just around the corner, is Microsoft a little too late?

This entry over on the Official MDOP Blog website reports that Microsoft released MED-V 1.0 (Microsoft Enterprise Desktop Virtualization), the first product stemming from the company's acquisition of desktop virtualization vendor Kidaro back in May 2008. The arrival of MED-V could be considered both a good thing and "a little too late," as the incompatibility problem has been a thorn in Microsoft's side since the company released Windows Vista back in November 2006, frustrating corporations and home users alike. Many gamers, home office users and business executives have opted to avoid Windows Vista altogether and wait for the newer operating system, Windows 7, scheduled to hit retail outlets later this year.

However, for now, MED-V seems to be the immediate solution for using legacy software, enabling Vista users to run the older, incompatible applications built for Windows XP and Windows 2000 within a virtual environment. But because the program is still in its beta stage, some applications might not work correctly. Still, as the saying goes, "something is better than nothing."

"For those of us on the MED-V product team, our primary goal was to deliver an enterprise virtualization solution for the compatibility challenges that IT teams have with some of their line-of-business applications, during the upgrade to new operating systems (like Windows Vista)," says Ran Oelgiesser, a MED-V senior product manager. "With MED-V 1.0, you can easily create, deliver and centrally manage virtual Windows XP or 2000 environments (based on Microsoft Virtual PC 2007), and help your users to run legacy applications on their Windows Vista desktops. No need to wait for the testing and migration of those incompatible applications to complete."

Oelgiesser also mentioned that the final version of MED-V would not be available until the second quarter of 2009, and will be included in the upcoming Microsoft Desktop Optimization Pack, which is geared towards streamlining PC management and offering greater IT control of the desktop. The pack is the culmination of Microsoft's virtualization acquisitions (its mission: to alleviate incompatibilities and enable user productivity anywhere), including companies such as AssetMetrix, DesktopStandard, and Winternals Software.

Windows 7 is based on Windows Vista, but that didn't stop Microsoft from releasing Beta 1 of Windows 7 over the weekend to anxious consumers wanting a taste of the new operating system. The download servers immediately crashed; however, the company managed to get the system back up and running early Saturday morning. Mixed reports have surfaced since then, some heralding Windows 7 as the next great operating system, others hating its new interface and its similarities to Windows Vista.

Hopefully, Windows 7 consumers will not need the use of MED-V as well.

Vaio P Designer Inspired by Mini Cooper

Designer of the Vaio P Lifestyle notebook says he was inspired by the Mini Cooper.

Notebooks, and PCs in general, always feel like they're built by a committee. In contrast, cars often have a lead visionary at the helm to steer the final product into something unique (especially for enthusiast vehicles). Of course, computers typically share many more of the same "platform" parts, such as chipsets and CPUs.

It’s rare, then, to hear the designer of a notebook speak about his inspirations and thinking for a product. Takuma Tomoaki shared his thought processes when designing the Vaio P, which he said shares the same characteristics as the Mini Cooper in that both are small but sophisticated.

In a sea of netbooks that are very similar, thanks to all running the same Intel chipset and Atom processor, Sony’s not-a-netbook Vaio P manages to be different from the rest.

First of all, the decision to go with a “trackpoint” nub rather than a touchscreen (or touchpad) was to save on power, thickness and costs. That also allowed for the “smallest usable keyboard” possible, which also has a carefully calculated pitch of 16.5mm (pitch is the spacing between the centers of adjacent keys).

The ultra-wide 1600 x 768 resolution was chosen to display the full detail of 720p HD video. Of course, the one flaw is that the Intel Atom at its current speeds is unable to render video smoothly at such high-definition resolutions. The display was to be larger, too, but had to be reduced to make room for the wireless antennas.
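
As a quick sanity check on the 720p claim, a full 1280 x 720 frame does fit within a 1600 x 768 panel without scaling. A minimal sketch (the panel dimensions and helper are our illustration):

```python
# Does a video frame fit on a panel without downscaling?
# Resolutions assumed: Vaio P-class 1600x768 panel, 1280x720 for 720p.
PANEL = (1600, 768)
HD_720P = (1280, 720)

def fits(frame, panel):
    """True if the frame fits pixel-for-pixel on the panel."""
    return frame[0] <= panel[0] and frame[1] <= panel[1]

print(fits(HD_720P, PANEL))   # 720p fits, with horizontal room to spare
```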

There’s no denying that the Vaio P is a well constructed notebook though. The top side is made of aluminum, the inside plastic, and carbon fiber makes up the base -- all for a mix of strength and lightness.

From our hands-on the Vaio P at CES, we were impressed by the form and design but it’s certainly not the most usable or value-conscious portable solution.

Sony P Series Ships at 1.3GHz in U.S., 1.6GHz in UK

Last week in Las Vegas we happened upon (okay, so it was repeatedly in our faces every time we turned around during our Sony appointment) Sony’s P Series Lifestyle PC. While there wasn't much in the way of specs, we did have a couple of scraps of info on hand about the guts of this little machine. Now we know a little more, and we’re not exactly pleased.

During CES, we had heard that the P Series would pack a 1.6 GHz processor, and we were pretty happy with that. We know some of you were crossing your fingers for something more powerful, but your prayers have gone unanswered. And it looks like that's the least of your worries if you're planning on purchasing one of these, depending on where you live.

Looking at the UK site, it seems British customers will be getting a 1.6 GHz Z530 Atom processor while the U.S. site just lists "Intel (1.3 GHz)" under the specs. Not cool, Sony. Why can’t we have the 1.6 GHz, too? Sadface.

As far as pricing is concerned, last week's $900 price tag is bang on the money. The P Series with a 60 GB HDD will ship for $899.99, with Vista Home Basic (don't shoot the messenger, and remember it's dual-boot to XMB). Models go up in price from there, with options for SSDs ranging from 64 GB to 128 GB and an upgrade to Vista Home Premium. All include the Atom 1.3 GHz processor. UK models range from £849 to £1368.99, all include the 1.6 GHz Z530 and are shipping mid-February.

Even the top-end Vaio P for the U.S. at $1499.99 lacks the faster CPU and Vista Business of the UK counterpart.

We’re not really sure what to make of this. Doesn’t seem like there’s a reason for the U.S. version to ship with a slower processor.

Monday, January 12, 2009

Nintendo Hint System Wants to Play Games for You

If Nintendo follows through on a newly discovered patent, then the gap between video games and cinema will truly be erased.

According to a patent filed by Nintendo guru Shigeru Miyamoto, the company would implement a new "hint system" in its games. This Hint System would allow gamers to watch a game being played by the console itself, either from start to finish or from specific points in the game. Also, while the console is playing through a game, the user has the option of taking over with the push of a button.

So what does the development community think of this possible trend? Mixed feelings to say the least. Kotaku had a chance to ask several developers what they thought about the Hint System. "I'm in Fallout 3 and have focused energy on sneak and unarmed combat," said Prince of Persia Producer Ben Mattes. "If I'm in a particular point in the game I can't pass, and I use this system, what 'recording' could the game know to use? It can't possibly have developer walkthroughs of all possible configurations of a character and strategies to pass through each in-game challenge."

On the other hand, Todd Howard, who is a Game Director for Bethesda (who actually makes Fallout 3), sees it from a different perspective. "Most people stop playing a certain game because they get frustrated or confused by what the game wants them to do," says Howard. "It becomes work and frustration, as opposed to ‘playtime.’ This idea clearly tries to alleviate that. It’s much like passing the controller to someone who knows the game really well, so you can move ahead or simply enjoy the story."

Either way, the new feature would be optional, and could offer a fresh perspective for even the most hardcore gamers. Titles like Metal Gear Solid 4, if used in conjunction with this Hint System, could be played like the game that was so popular this year, or watched like a movie, with its plethora of cinematics.

If the Hint System does come to life, I doubt it will be on the Wii. Look for it on Nintendo's next generation offering...whatever that ends up being.

AMD Phenom II X4: 45nm Benchmarked : The Phenom II And AMD's Dragon Platform

The Phenom II And AMD's Dragon Platform

It's high time AMD launched a counter-strike to the flurry of compelling Intel CPUs that have launched since Phenom first got off the ground. The switch to 45 nm manufacturing seemingly took a lot longer than the company originally planned, but alongside a new CPU with smaller transistor elements, this release introduces some brand-new technology.

The improvements start with the transistor count: Phenom II boasts roughly 758 million transistors, up from around 450 million. As with its predecessor, the original Phenom, Phenom II drops into nearly every Socket AM2 motherboard. This gives the Phenom II broad appeal to the upgrade crowd, many of whom have long sought improved performance for their AMD systems.


How do Phenom II configurations look to the enthusiast crowd? As it happens, the Phenom II starts right where the previous generation left off. The incoming flagship, the Phenom II X4 940, runs at 3.0 GHz and employs a naming convention that goes straight after the company's principal competitor. The fastest Phenom, the X4 9950 Black Edition, was set to operate at 2.6 GHz. Overclockers soon learned that this also represented something near the upper limit of the chip's range, and could only get more out of it with the introduction of Advanced Clock Calibration (ACC) on the SB750 southbridge, which helped extend scalability up another few hundred megahertz. The Phenom II represents an end to such limits: even at 3 GHz this chip still has lots of headroom, as we will show with the results from our Munich lab.

AMD fired its first 45 nm salvo a few weeks back with its server-oriented Opteron models, which enjoy a much larger market share than the company’s desktop processors. In the interim, 45 nm chip yields have increased enough to permit AMD to supply the desktop market as well. With its new, smaller core redesign (code-named "Deneb"), AMD not only pulled off a die shrink, but also achieved some drastic improvements in energy consumption and power-switching behavior.

Since the introduction of the first-generation Phenoms (alongside the Spider platform), graphics card performance has also experienced a sharp spike upwards. The platform AMD is replacing Spider with consists of the Phenom II and the latest Radeon HD 4800-series graphics cards. The mascot for the so-called “Dragon” platform is, naturally, an aggressive-looking, red-eyed silver dragon.

The original Phenom processors quickly ran into performance limitations because of high energy consumption. Simply by switching from 65 nm to 45 nm, energy consumption at the individual transistor level decreases sharply. To pump a first-generation 2.5 GHz Phenom up to 2.6 GHz, AMD also had to raise its maximum power consumption rating from 125 W (TDP) to 140 W. By itself, this was enough to disqualify that chip from use in many favorite AM2 motherboards. But with its 45 nm technology, AMD gives Phenom II a fresh start and bolsters the chip's attractiveness with improvements in several other important areas. Here’s the bottom line: in terms of speed, energy consumption, clock rates and overclockability, AMD has taken a huge step with the Phenom II.
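
The figures above make the efficiency story easy to quantify: clock rate per TDP watt fell when AMD pushed the 65 nm part from 2.5 GHz/125 W to 2.6 GHz/140 W. A quick sketch (the Phenom II X4 940's 125 W TDP is our assumption, not stated in this passage):

```python
# Clock-per-TDP-watt comparison using the wattages and clocks cited above.
# TDP is a power envelope, not measured consumption; this is rough arithmetic.
def mhz_per_watt(mhz, tdp_watts):
    return mhz / tdp_watts

chips = {
    "Phenom X4 9850 (65 nm)":   (2500, 125),
    "Phenom X4 9950 (65 nm)":   (2600, 140),
    "Phenom II X4 940 (45 nm)": (3000, 125),  # assumed 125 W TDP
}

for name, (mhz, tdp) in chips.items():
    print(f"{name}: {mhz_per_watt(mhz, tdp):.1f} MHz per TDP watt")
```

The 9950's bump to 140 W actually lowered its clock-per-watt ratio, which is why it was disqualified from many AM2 boards; the 45 nm part reverses the trend.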

Model                          | Clock Speed | L3 Cache | Code Name | Manufacturing Node
Phenom II X4 940 Black Edition | 3.00 GHz    | 6 MB     | Deneb     | 45 nm
Phenom II X4 920               | 2.80 GHz    | 6 MB     | Deneb     | 45 nm
Phenom X4 9950 Black Edition   | 2.60 GHz    | 2 MB     | Agena     | 65 nm
Phenom X4 9850 Black Edition   | 2.50 GHz    | 2 MB     | Agena     | 65 nm
Phenom X4 9850                 | 2.50 GHz    | 2 MB     | Agena     | 65 nm
Phenom X4 9750                 | 2.40 GHz    | 2 MB     | Agena     | 65 nm
Phenom X4 9650                 | 2.30 GHz    | 2 MB     | Agena     | 65 nm
Phenom X4 9550                 | 2.20 GHz    | 2 MB     | Agena     | 65 nm
Phenom X4 9350                 | 2.00 GHz    | 2 MB     | Agena     | 65 nm


To start, AMD is offering two 45 nm desktop CPUs: the Phenom II X4 920 at 2.8 GHz, and the Phenom II X4 940 at 3.0 GHz.

GeForce 3D Vision: Gaming Goes Stereo

When you read a review of AMD’s latest graphics card or Intel’s latest CPU, there’s a fair chance you want to know, first and foremost, how that product performs. At least, that’s why we run the most exhaustive battery of tests possible. You take those performance figures and factor in pricing, availability, and the competition to arrive at a decision: is this worth my money or not?

Other products are evaluated far more subjectively, though. Mice, keyboards, remotes—for many of those devices, you consider look, feel, interface, and setup. It’s much more difficult to lay down a decisive judgment on an item that might just feel better in your hand than it does in mine. Hence, the challenge in reviewing Nvidia’s new GeForce 3D Vision glasses, which incite very personal opinions, depending on who wears them.

Nvidia Goes Back To The Future

Nvidia's retail GeForce 3D Vision box

If you’ve been around long enough, then you probably remember Elsa’s 3D Revelator glasses circa 1999. The shades came bundled with Elsa’s Erazor cards, worked with DirectX games, and required a minimum 100 Hz refresh rate. Naturally, that meant you were using a CRT display. Once the world started shifting to LCDs running 60 Hz, the active technology Elsa used simply wouldn’t work—the refresh rate in each eye was too low for flicker-free game play.
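
The refresh math above is simple: active shutter glasses alternate frames between the eyes, so each eye sees half the display's refresh rate. Roughly 50 Hz per eye is a common flicker-free floor, though the exact threshold varies by person (an assumption on our part):

```python
# Per-eye refresh for active shutter glasses: the display alternates
# left/right frames, so each eye gets half the display's refresh rate.
PER_EYE_MINIMUM = 50  # Hz; rough flicker-free floor (assumed threshold)

def per_eye_hz(display_hz):
    return display_hz / 2

for display in (60, 100, 120):
    ok = per_eye_hz(display) >= PER_EYE_MINIMUM
    print(f"{display} Hz display -> {per_eye_hz(display):.0f} Hz per eye, flicker-free: {ok}")
```

This is why a 100 Hz CRT worked for Elsa's glasses while a 60 Hz LCD, delivering only 30 Hz per eye, could not.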

Besides, while the 3D glasses were cool in concept—and indeed looked impressive in a number of games—there were some downsides. First of all, the Nvidia TNT2 cards of the day took serious performance hits when they were used for stereo viewing. Back then, 800x600 was about all you could ask for. There were also a handful of driver settings that needed to be configured, depending on the game you were playing. Finally, eyestrain became a problem over time. We’re so used to playing 3D games on a 2D panel that adding depth takes some getting used to. Nevertheless, because the Revelators were part of a hardware bundle, it didn’t hurt to try them out.

Now, LCD panel technology has come far enough along that the idea of active stereoscopic glasses is once again viable, and Nvidia is out to show that a lot can happen in 10 years. Its GeForce 3D Vision glasses were first showcased at NVISION in the fall of 2008. And they're now ready for retail (the company can’t yet say which brick-and-mortar stores will carry them, but we’re going to go out on a limb and guess Best Buy will have them on offer).

Having played with Nvidia’s new shades for the past several days, it’s safe to say that they’ll fundamentally change the way you experience 3D. However, we’re not quite ready to call the technology bullet-proof. Onward for more about what you’ll need in order to run a set of 3D Visions and what you can expect to see with the setup purring.

Overclocking: Core i7 Vs. Phenom II : Introduction

Introduction

This match-up needs no introduction—but I’m going to throw one down anyway.

It’s no secret that Intel has dominated our performance tests over the past year. First, its Core 2 Duos at 45 nm gave enthusiasts a great platform for aggressive, yet relatively safe overclocking. The company’s Core 2 Quads cost quite a bit more, but they managed to deliver smoking speeds in the applications optimized for multi-threaded execution.

The recent Core i7 launch further cemented Intel’s position as the performance champion. Its Core i7 965 Extreme, clocked at 3.2 GHz, demonstrated gains straight across the board versus its outgoing flagship, the Core 2 Extreme QX9770. And the Core i7 920, Intel’s sub-$300 entry-level model running at 2.66 GHz, seems to have little trouble reaching up to 4 GHz on air cooling.

There was once a time when Intel didn’t handle its technology shifts as smoothly. As recently as the Pentium 4 Prescott core (OK, that was a while back), Intel struggled to maintain an advantage against AMD’s Athlon 64. But now, with the marketing of its "tick-tock" approach to rolling out lithography advancements and micro-architecture tweaks, things have certainly turned around. How is AMD expected to compete?

Core i7 920 and Phenom II X4 940 go head-to-head

Up until now, AMD has relied on the loosely-translated term "value" to keep in the game. On its own, the Phenom X4-series is a moderate performer. AMD knows this, and has priced the chip more competitively than Intel’s quad-core offerings to attract attention. However, the Phenom hasn’t had to exist alone in an ecosystem backed by third-party vendors. It’s instead complemented by AMD’s own chipsets, mainly the 790GX and 790FX. Of course, those platforms extend comprehensive CrossFire support for its own graphics cards, which have been capturing hearts since mid-2008.

Combined, AMD’s processors, chipsets, and GPUs have fared better than any one of those components would have alone. Thus, we’d consider the company’s efforts to emphasize its Spider platform—the cumulative result of all three puzzle pieces—a success.

AMD Needs Something New

In light of a new competitive challenge—Intel’s Core i7—AMD is revamping its Spider platform with a new processor and the addition of software able to tie all of the hardware together. As you no doubt already know from reading Bert’s story, this latest effort is called Dragon.

But we’re not here to rehash the details of Phenom II. Rather, in light of significant enhancements to the CPU architecture’s overclocking capabilities (and indeed, confirmation from AMD that all of the "magic" that went into its ACC [Advanced Clock Calibration] technology is now baked into Phenom II), we’re eager to compare the value of AMD’s fastest 45 nm chip to Intel’s entry-level Core i7 920—the one most enthusiasts would be likely to eye as an overclocking contender.

In the pages that follow, we’ll describe how each platform was overclocked, just how hot each system got, how much they cost, how well they perform at their top speeds, and, ultimately, which should be on the short list for your next upgrade.