Microsoft has announced its intention to offer virtualization management capabilities for virtual machines based on VMware and the open source Xen hypervisor as part of a broad update of its virtualization capabilities.
The move marks a departure in Microsoft's systems management strategy, as the company will offer management capabilities for rival offerings. It was announced as Microsoft released System Center Virtual Machine Manager 2007 to manufacturing.
Due to be generally available in October, SCVMM is designed to manage virtual machines based on Microsoft Virtual Server 2005 R2, the company's standalone virtualization platform.
The new product sees Microsoft attempting to win over existing customers of virtualization market leader VMware as it offers tools for the rapid conversion of existing physical servers and VMware virtual servers to Microsoft's VHD format.
The multi-vendor management capabilities will enable users to manage VMware and Xen virtual machines alongside those based on Microsoft's forthcoming Viridian hypervisor. They will be delivered in the next version of System Center Virtual Machine Manager, which is due to enter beta testing in the first quarter of 2008.
Also known as Windows Server virtualization, Viridian will be delivered within 180 days of Windows Server 2008, which is still due for launch at the end of February 2008 even though its release to manufacturing recently slipped from the last quarter of 2007 to the first quarter of 2008.
Microsoft also announced last week that it is introducing a new licensing scheme for its System Center family of management products, introducing System Center Management Suite Enterprise to manage both physical and virtual environments.
Priced at $860 plus two years of software assurance, the product suite includes Virtual Machine Manager 2007 as well as licenses for the 2007 versions of System Center Configuration Manager, Operations Manager, and Data Protection Manager.
For mid-market customers, the company also announced that it is introducing a version of System Center Virtual Machine Manager 2007 designed to manage up to five physical servers and any number of virtual machines. To be priced at $499, SCVMM 2007 Workgroup will be available in January 2008.
Source: http://www.cbronline.com/
Monday, September 10, 2007
Google recruits Capgemini to develop enterprise relationships
Paris, France-based Capgemini will provide consulting, integration, helpdesk, and support services around Google's Apps Premium Edition offering which was launched in February and covers the search engine giant's word processing, spreadsheet, email, messaging, and Start Page applications.
The partnership is not exclusive. Capgemini already manages more than 1 million desktops worldwide as part of its outsourcing contracts, and has a close relationship with Microsoft. Richard Payling, director of sales channels at Capgemini, told Computer Business Review that the company would deliver Google Apps as a complementary offering to its existing managed desktop services.
He said: "It doesn't have to be one or the other...Office productivity tools have moved away from a one-size-fits-all model because companies have realized that not all users need access to everything… We want to give customers freedom of choice rather than being prescriptive."
Customers will pay Capgemini an annual fee for a managed service, the exact price of which will be calculated based on what services the client requires, the level of helpdesk support, the duration of the contract, and the number of users being supported.
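The fee structure described above can be sketched as a simple calculation. Every rate and formula below is invented purely for illustration; neither company published pricing details:

```python
# Hypothetical sketch of the fee model described: the annual charge depends
# on the services chosen, the helpdesk tier, contract duration, and seats.
# All rates are made-up placeholders, not Capgemini's actual prices.

def annual_fee(users, services, helpdesk_tier, years):
    """Illustrative annual managed-service fee in dollars."""
    per_seat = {"basic": 20.0, "standard": 35.0, "premium": 60.0}[helpdesk_tier]
    service_charge = 10.0 * len(services) * users   # flat per-service, per-seat add-on
    discount = min(0.05 * (years - 1), 0.15)        # longer contracts cost less per year
    return (users * per_seat + service_charge) * (1 - discount)

fee = annual_fee(500, ["sso", "backup", "migration"], "standard", 3)
print(f"${fee:,.0f} per year")  # → $29,250 per year
```

The point of the sketch is only that the price is a function of several negotiated variables rather than a flat per-seat list price.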
The two companies said that while Google will continue to host the applications, Capgemini will help to provide the secure, managed services wrap that most IT departments require. Payling said: "We are helping to make Google Apps enterprise-class, by providing services such as back-up, single sign-on and security, and migrating data to the Google environment, while ensuring that it is compliant."
Capgemini said a lot of people within its own organization are using Google Apps, and it expects to announce its first client in the next few months. It added that the alliance with Google highlights its willingness to embrace the software-as-a-service model, which many believe will have a negative impact on the IT services community as Web-based software delivery removes the need for a lot of installation and integration work.
Andrew Gough, UK alliance manager at Capgemini, said: "We see SaaS as both a challenge and an opportunity. We have worked with SAP in the area of SaaS for some time, and clients will still need someone to help them tackle issues such as security and archiving."
Our View
We won't be able to judge the success of this alliance for some time as Google Apps Premium Edition has only been available for just over six months and the two companies are just beginning to take their joint proposition to market.
Capgemini is the first services partner that Google has recruited to help it establish Google Apps as a key player in the enterprise office applications market. And it is an essential move with more and more clients outsourcing their desktop estates as they become increasingly commoditized, and users look to take advantage of the centralized management and support functions that the likes of Capgemini can deliver.
Google argues that it doesn't lack credibility in the enterprise space as a software provider, but it will only benefit from having a major services organization such as Capgemini offer a robust support layer around its applications, at a time when compliance and security are top of most CIOs' lists of priorities. Google won't make major inroads into Microsoft's dominant position in the office space, but it may pick up business with organizations with a number of low-power users.
One of the big selling points of Google Apps against Microsoft, and also Sun's StarOffice, is that it is internet-based: users gain access through a web browser without having to install software locally. Google also talked up its collaboration tools, which enable users to share and publish data in real time. The service may be particularly attractive to organizations with many low-power users who occasionally need applications such as email but don't justify the cost of a full office suite.
So why Capgemini? It is not the biggest desktop management company in the world - IBM Global Services supports over 4 million desktops, while EDS manages more than 3 million. But Google tells us that it was impressed by Capgemini's commitment to the SaaS model and its understanding of the growing impact of consumer technology in the work environment.
The pricing model that Capgemini will use is a further step down the road towards the utility-style charging that the software-as-a-service movement is working towards where users pay only for what they use rather than a flat, multi-year license fee. It is not yet at the level where clients use and pay for the service in the same way that they do their electricity or water, but the two companies said they will look to "industrialize" the offering in coming years.
Virtualization
Companies like VMware, and more recently XenSource, got their start with standalone virtualization software that let customers run several operating systems simultaneously on a single computer. But Linux sellers and Microsoft, unwilling to cede their influential position selling the foundational software of a computer, are trying to make virtualization a feature of the operating system.
Now the virtualization companies are trying to make their software a feature of the server instead. XenSource and VMware both have added new versions of their products that can be embedded directly in servers, and both companies have lined up major server makers who will build it in.
"With virtualization, where you can run any operating system on top, it seems a lot more logical that it would be effectively a layer sitting on top of a server," said Illuminata analyst Gordon Haff. "Why wouldn't it be supplied with the server?"
XenSource announced XenExpress OEM Edition last week, and market leader VMware this week is announcing VMware ESX Server 3i at its VMworld conference. The products run from flash memory built into a server instead of being installed on the hard drive.
The embedded versions aren't just a fantasy. VMware has partnerships with IBM, Dell, Hewlett-Packard and Network Appliance. "We expect them to begin integrating ESX Server 3i into their servers later this year or early next," a VMware representative said.
Likewise, XenSource has a partnership with a tier-one server company that will use its software, but XenSource won't announce which company until 30 days from now, said Chief Technology Officer Simon Crosby.
A foot in the door
The move has strategic importance in these relatively early days of virtualization, elevating the profile of the virtualization specialists' products. Embedding their software in servers could give the specialists a foot in the door with customers who might be interested in higher-level products to manage the increasingly sophisticated computing infrastructure that can be built atop virtual machines.
Virtualization has been around for decades, but its inclusion in mainstream computers with x86 chips is bringing it out of the shadows. And the money is following. In August, VMware, an EMC subsidiary, had a roaring initial public offering, and Citrix Systems bought XenSource for $500 million.
But the foundational elements of virtualization--in some cases called a hypervisor--aren't in and of themselves likely to be a great moneymaker. Rather, it's the higher level.
"The hypervisor will come for free from multiple sources," said Forrester analyst Frank Gillett. "To me, it's not about what hypervisor you're using, it's about what ecosystem you're plugging into for management."
Management tools available today include VMware's Virtual Infrastructure, XenSource's XenEnterprise, Microsoft's System Center Virtual Machine Manager and Virtual Iron's Xen-based eponymous product. They are designed for tasks such as controlling what resources a particular virtual machine may use, backing up data or moving virtual machines from one machine to another in case of failed or overtaxed hardware.
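One of the tasks listed above, moving virtual machines off failed or overtaxed hardware, can be sketched as a simple placement loop. The function and host names below are hypothetical, not any vendor's real management API:

```python
# Hypothetical sketch of one management task described in the text:
# evacuating virtual machines from a failed host onto hosts with spare
# capacity. All names and numbers are illustrative only.

def evacuate(failed_host, hosts):
    """Move each VM off failed_host to the first host with enough free memory."""
    for vm in list(failed_host["vms"]):
        for target in hosts:
            if target is failed_host:
                continue
            used = sum(v["mem_gb"] for v in target["vms"])
            if used + vm["mem_gb"] <= target["capacity_gb"]:
                failed_host["vms"].remove(vm)
                target["vms"].append(vm)
                break
        else:
            raise RuntimeError(f"no capacity for {vm['name']}")

hosts = [
    {"name": "host1", "capacity_gb": 32, "vms": [{"name": "web1", "mem_gb": 8}]},
    {"name": "host2", "capacity_gb": 32, "vms": [{"name": "db1", "mem_gb": 16}]},
]
evacuate(hosts[0], hosts)
print([v["name"] for v in hosts[1]["vms"]])  # → ['db1', 'web1']
```

Real tools layer scheduling policies, live-migration transport, and monitoring on top of this kind of decision, which is exactly the higher-level value the management vendors are selling.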
Just getting a hypervisor onto a server doesn't guarantee success for a virtualization specialist. For one thing, server makers have their own management software to sell. For another, there's strong pressure to standardize virtual machine control interfaces so anybody's management software can work with anybody's hypervisor, Haff said.
Making the basic virtualization software a component of an operating system, available at no extra cost, exerts price pressure on VMware's core products. But it hasn't been easy for operating-system companies to build virtualization into their products.
Bumps on the road
Microsoft has yet to produce its first hypervisor, code-named Viridian and officially called Windows Server virtualization. It's due to debut within 180 days of the first-quarter launch of Windows Server 2008, but in May, Microsoft lopped off some major virtualization features to meet the deadline. (Microsoft does offer server and desktop computer virtualization technologies, but it lets virtual machines run as "guests" on a "host" version of Windows, not on a hypervisor.)
Meanwhile, leading Linux sellers Red Hat and Novell's Suse both built open-source Xen into their server products, but in neither case was the technology mature, Haff said.
"Even though Xen has been part of (Linux products) for a while, it's really just now getting ready for prime time," Haff said. "VMware is still very much the dominant player in virtualization."
Advanced Micro Devices' quad-core Opteron processor is finally ready
AMD CEO Hector Ruiz will formally unveil the quad-core Opteron chip, previously code-named Barcelona, during an event in San Francisco Monday evening. Over a year in the making, and six months later than expected, Barcelona will be AMD's first chip with four processing cores.
Intel has had quad-core chips for servers since last November. The company chose an easier-to-implement method of putting four processing cores together by simply packaging two dual-core chips together. AMD took a different approach, integrating all four cores onto a single chip, with the belief that having all four cores together was a better fit for its architecture.
Will that insistence on a specific design goal make a difference? In some ways, it already has.
AMD has been forced to severely discount server processor prices this year to compete against Intel's quad-core chips, causing hundreds of millions of dollars in losses. The company's sales force is in the middle of a reorganization following the departure of its top two sales executives. And because of Barcelona's delay, caused by technical glitches brought on by its challenging design, Ruiz will introduce Barcelona about 10 weeks before Intel launches its second-generation quad-core server processor. The initial reviews have yet to surface, but it seems AMD might be able to stop the bleeding with Barcelona. The four major server vendors in the world--Dell, Hewlett-Packard, IBM and Sun Microsystems--all plan to use Barcelona in their servers. And AMD thinks it can court new customers by emphasizing a different metric for measuring power consumption in data centers.
But AMD will not deliver--at least not yet--on promises made by Randy Allen, corporate vice president of AMD's server and workstation division, in January. "We expect across a wide variety of workloads for Barcelona to outperform Clovertown by 40 percent," Allen said. In May, Allen told reporters that Barcelona "will be the highest-performing x86 chip out there. It will blow away Clovertown."
There was no proof to those statements in the test results AMD distributed ahead of the Barcelona launch. In its briefing materials, the company touted only benchmark results that emphasized floating-point performance and memory bandwidth, which have always been strengths of the Opteron processor but do not cover the entire spectrum of the server market. And even among those benchmarks, Barcelona outperformed Intel's Xeon X5345 processor by more than 40 percent on only three criteria.
Barcelona will arrive in three different categories for high-performance, standard-issue, and energy-efficient server models. The high-performance models won't be available until the fourth quarter, but two standard and three energy-efficient processors are now available for two-socket servers, the dominant segment of the market. Two processors for four-socket servers in both the standard and energy-efficient categories also will be available.
In the standard category, AMD will launch processors at 2GHz and 1.9GHz, costing $389 and $319, respectively. The energy-efficient Opterons will launch at 1.9GHz, 1.8GHz and 1.7GHz.
That's slower than some had expected from Barcelona, and could have something to do with the company's earlier projections for Barcelona's performance against Intel. When "technical glitches" arise in processor production, they are often solved by running the chip at slower clock speeds until the problems can be ironed out.
AMD plans to launch 2.3GHz high-performance versions in the fourth quarter and will likely boost clock speed as momentum starts to grow behind the chip. The company demonstrated a 3GHz Barcelona chip at its analyst day in July. Clock speed is by no means the only measure of processor performance, but it is an important measure.
As a result, AMD will initially market its chips in part by using a new metric it developed for measuring the average power consumed by its processors. Power consumption has become a huge issue for companies looking to build large data centers. It's increasingly more expensive to provide electricity and cooling to data centers than it is to buy the servers themselves, forcing the chip and server industries to work on building more energy-efficient products.
But AMD customers who relied on the company's previous power metric of TDP (thermal design power) were putting too many resources into cooling and electrical supply, said Bruce Shaw, director of server and workstation marketing for AMD. That's because TDP was developed so server manufacturers would know how much power the chip consumes in worst-case, maximum-power situations that very rarely occur, and design their systems accordingly, he said.
So now AMD will advise customers of an Opteron processor's average CPU (central processing unit) power, or ACP. "ACP is meant to be the best real-world end-user estimate of what they are likely to see from the power consumption on the processor," Shaw said.
This will give customers a better sense of how they should plan for the power consumed by Opteron servers, Shaw said. AMD still plans to publish TDP ratings that are important to server designers, but will direct customers to the ACP figure, which has the added bonus of being significantly lower than TDP.
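The practical difference between provisioning to a worst-case rating and to an average rating is easy to see in a back-of-the-envelope calculation. The ACP figure below (55 W) is the one the article cites for the energy-efficient parts; the TDP value and rack size are hypothetical placeholders, since the article gives neither:

```python
# Sketch: sizing a rack's power budget under a worst-case metric (TDP)
# versus an average metric (ACP). The 95 W TDP and 80-chip rack are
# hypothetical; the 55 W ACP is the figure cited in the text.

def rack_budget_watts(chips, watts_per_chip, overhead=1.3):
    """Power to provision for `chips` processors, with a cooling/PSU margin."""
    return chips * watts_per_chip * overhead

chips = 80  # e.g. 40 two-socket servers per rack (illustrative)
acp_budget = rack_budget_watts(chips, 55)   # average consumption
tdp_budget = rack_budget_watts(chips, 95)   # hypothetical worst-case rating
print(f"ACP-based budget: {acp_budget:.0f} W, TDP-based budget: {tdp_budget:.0f} W")
print(f"extra provisioning if sized to TDP: {tdp_budget / acp_budget - 1:.0%}")
```

Under these assumed numbers, sizing to TDP reserves roughly 73 percent more power and cooling than the average-consumption figure suggests is needed, which is the over-provisioning Shaw describes.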
AMD says it won't use the ACP number to compare the power consumption of its processors against Intel's. AMD is publishing the methodology behind the ACP metric, but Shaw said the company won't rate Intel's processors by using the metric for comparison purposes. Still, it is touting average CPU power of 55 watts for the energy-efficient Barcelona models and 75 watts for the standard models, which could confuse some customers or investors used to TDP comparisons.
Power consumption marketing is the new battleground for Intel and AMD, and it can be a minefield when trying to make a purchase decision, given the different implementations each company uses. But AMD does appear to have some advantages over Intel in pure energy efficiency, according to independent tests by Neal Nelson and Associates and demonstrations performed by AMD during its most recent analyst day, which could help it gain traction in the growing blade-server market.
That's good, because marketing its chips on pure performance is no longer a possibility for AMD. After years of touting the superior performance of its dual-core Opteron chips against Intel's dual-core Xeon processors, AMD's clear advantage ended with the launch of Intel's Core microarchitecture processors in June last year. Although Opteron still does well against Xeon on certain workloads that demand excellent floating-point performance or memory bandwidth, it's no longer the undisputed winner that it once was.
And it doesn't appear that Barcelona leapfrogs Intel's current quad-core chips to the degree predicted by Allen in January. The quad-core Opteron outdoes Intel by 35 percent on the SPECfp_rate2006 benchmark, a test of floating-point performance administered by the Standard Performance Evaluation Corporation (SPEC) that's long been an Opteron strongpoint and is generally a metric eyed by those with high-performance computing needs, such as labs and research institutions.
But AMD didn't provide specific numbers for SPECint_rate, a measure of integer-processing speed that relates more directly than SPECfp_rate to business-computing tasks such as e-mail or database transactions. However, according to published scores and AMD's performance estimates, Barcelona appears to trail Intel's current Xeon chips by a significant margin. AMD said the two-socket edition of Barcelona would be 55 percent faster on SPECint_rate2006 than its current dual-core Opteron chips, which received a score of 56.8. That would put Barcelona at around 88, much slower than the published results for Intel's two-socket quad-core Xeon chips running at 3GHz, which received a score of 116.
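The estimate above can be reproduced directly from the article's own figures:

```python
# Reproducing the SPECint_rate2006 estimate in the text: AMD said the
# two-socket Barcelona would be 55% faster than its dual-core Opteron's
# published score of 56.8; Intel's published two-socket quad-core Xeon
# score at 3GHz is 116.
dual_core_opteron = 56.8
barcelona_est = dual_core_opteron * 1.55
xeon_3ghz = 116
print(f"estimated Barcelona score: {barcelona_est:.1f}")        # ≈ 88, as stated
print(f"deficit vs. Xeon: {1 - barcelona_est / xeon_3ghz:.0%}")  # ≈ 24%
```

So on this integer benchmark, the article's arithmetic puts Barcelona roughly a quarter behind Intel's published result.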
AMD uses an old benchmark, SPECompM2001 base, in another comparison. The other benchmarks touted by AMD will delight high-performance computing customers but aren't as relevant for the corporate market. The new Opteron does well on the Fluent and LS-DYNA benchmarks, which emphasize floating-point performance and memory bandwidth.
Those aren't the types of applications that matter most to business customers, and AMD didn't cite the application-specific benchmarks that crop up more frequently in corporate settings, such as tests of Java performance or of database-driven financial software from SAP.
One strong suit that AMD can point to is virtualization technology, which is becoming increasingly important to server customers. The performance of VMware's software will be 79 percent better on AMD's quad-core Opteron compared with the previous generation, according to AMD. The company built several hooks into Barcelona that were designed to improve virtualization performance.
But it appears that Barcelona is far from the smash hit that AMD once hoped it had with its "native" quad-core design. And Intel has new quad-core chips in the offing around mid-November, with a dramatic overhaul expected next year to mimic many of AMD's design features that made Opteron a winner in the past.
AMD's best hope is to get Barcelona's clock speeds up to higher levels as quickly as possible, which could unlock the advantages of putting all the cores on the same processor die. One disadvantage of Intel's implementation is that signals have to leave one dual-core chip to visit the other, and that takes time.
Barcelona is expected to be available in servers from two of AMD's server partners on Monday, and in a few weeks from the others. It likely won't add too much revenue to AMD's coffers until the fourth quarter, meaning the company could be in for another rough patch until Barcelona reaches a wider portion of the market.
Intel has had quad-core chips for servers since last November. The company chose an easier-to-implement method of putting four processing cores together by simply packaging two dual-core chips together. AMD took a different approach, integrating all four cores onto a single chip, with the belief that having all four cores together was a better fit for its architecture.
Will that insistence on a specific design goal make a difference? In some ways, it already has.
AMD has been forced to severely discount server processor prices this year to compete against Intel's quad-core chips, causing hundreds of millions of dollars in losses. The company's sales force is in the middle of a reorganization following the departure of its top two sales executives. And because of Barcelona's delay, caused by technical glitches brought on by its challenging design, Ruiz will introduce Barcelona about 10 weeks before Intel's launches its second-generation quad-core server processor. The initial reviews have yet to surface, but it seems AMD might be able to stop the bleeding with Barcelona. The four major server vendors in the world--Dell, Hewlett-Packard, IBM and Sun Microsystems--all plan to use Barcelona in their servers. And AMD thinks it can court new customers by emphasizing a different metric for measuring power consumption in data centers.
But AMD will not deliver--at least not yet--on promises made by Randy Allen, corporate vice president of AMD's server and workstation division, in January. "We expect across a wide variety of workloads for Barcelona to outperform Clovertown by 40 percent," Allen said. In May, Allen told reporters that Barcelona "will be the highest-performing x86 chip out there. It will blow away Clovertown."
There was no proof to those statements in the test results AMD distributed ahead of the Barcelona launch. In its briefing materials, the company touted only benchmark results that emphasized floating-point performance and memory bandwidth, which have always been strengths of the Opteron processor but do not cover the entire spectrum of the server market. And even among those benchmarks, Barcelona outperformed Intel's Xeon X5345 processor by more than 40 percent on only three criteria.
Barcelona will arrive in three categories: high-performance, standard, and energy-efficient server models. The high-performance models won't be available until the fourth quarter, but two standard and three energy-efficient processors are now available for two-socket servers, the dominant segment of the market. Two processors for four-socket servers in both the standard and energy-efficient categories will also be available.
In the standard category, AMD will launch processors at 2GHz and 1.9GHz, costing $389 and $319, respectively. The energy-efficient Opterons will launch at 1.9GHz, 1.8GHz and 1.7GHz.
That's slower than some had expected from Barcelona, and could have something to do with the company's earlier projections for Barcelona's performance against Intel. When "technical glitches" arise in processor production, they are often solved by running the chip at slower clock speeds until the problems can be ironed out.
AMD plans to launch 2.3GHz high-performance versions in the fourth quarter and will likely boost clock speed as momentum starts to grow behind the chip. The company demonstrated a 3GHz Barcelona chip at its analyst day in July. Clock speed is by no means the only measure of processor performance, but it is an important measure.
As a result, AMD will initially market its chips in part by using a new metric it developed for measuring the average power consumed by its processors. Power consumption has become a huge issue for companies looking to build large data centers. It's increasingly the case that providing electricity and cooling to a data center costs more than buying the servers themselves, forcing the chip and server industries to work on building more energy-efficient products.
But AMD customers who relied on the company's previous power metric of TDP (thermal design power) were putting too many resources into cooling and electrical supply, said Bruce Shaw, director of server and workstation marketing for AMD. That's because TDP was developed so server manufacturers would know how much power the chip consumes in worst-case maximum-power situations that very rarely occur, and design their systems accordingly, he said.
So now AMD will advise customers of an Opteron processor's average CPU (central processing unit) power, or ACP. "ACP is meant to be the best real-world end-user estimate of what they are likely to see from the power consumption on the processor," Shaw said.
This will give customers a better sense of how they should plan for the power consumed by Opteron servers, Shaw said. AMD still plans to publish TDP ratings that are important to server designers, but will direct customers to the ACP figure, which has the added bonus of being significantly lower than TDP.
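The practical difference between the two metrics comes down to how much power and cooling a customer provisions. A minimal sketch of that sizing exercise, using the 75-watt ACP figure AMD quoted for the standard models; the 95-watt TDP value here is an illustrative assumption for comparison, not a published spec:

```python
# Hypothetical sizing for a rack of 40 two-socket Opteron servers,
# provisioned once by worst-case TDP and once by AMD's ACP figure.
SOCKETS_PER_SERVER = 2
SERVERS = 40

tdp_watts = 95  # worst-case rating per processor (assumed value for illustration)
acp_watts = 75  # AMD's average-CPU-power figure for standard Barcelona models

tdp_budget = tdp_watts * SOCKETS_PER_SERVER * SERVERS
acp_budget = acp_watts * SOCKETS_PER_SERVER * SERVERS

print(f"TDP-based provisioning: {tdp_budget} W")  # 7600 W
print(f"ACP-based estimate:     {acp_budget} W")  # 6000 W
```

Under these assumed numbers, planning against ACP rather than TDP trims the processor power budget for the rack by about a fifth, which is the kind of over-provisioning Shaw says TDP-driven designs were baking in.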
AMD says it won't use the ACP number to compare the power consumption of its processors against Intel's. AMD is publishing the methodology behind the ACP metric, but Shaw said the company won't rate Intel's processors by using the metric for comparison purposes. Still, it is touting average CPU power of 55 watts for the energy-efficient Barcelona models and 75 watts for the standard models, which could confuse some customers or investors used to TDP comparisons.
Power consumption marketing is the new battleground for Intel and AMD, and it can be a minefield when trying to make a purchase decision, given the different implementations each company uses. But AMD does appear to have some advantages over Intel in pure energy efficiency, according to independent tests by Neal Nelson and Associates and demonstrations performed by AMD during its most recent analyst day, which could help it gain traction in the growing blade-server market.
That's good, because marketing its chips on pure performance is no longer a possibility for AMD. After years of touting the superior performance of its dual-core Opteron chips against Intel's dual-core Xeon processors, AMD's clear advantage ended with the launch of Intel's Core microarchitecture processors in June last year. Although Opteron still does well against Xeon on certain workloads that demand excellent floating-point performance or memory bandwidth, it's no longer the undisputed winner that it once was.
And it doesn't appear that Barcelona leapfrogs Intel's current quad-core chips to the degree predicted by Allen in January. The quad-core Opteron outdoes Intel by 35 percent on the SPECfp_rate2006 benchmark, a test of floating-point performance administered by the Standard Performance Evaluation Corporation (SPEC) that's long been an Opteron strongpoint and is generally a metric eyed by those with high-performance computing needs, such as labs and research institutions.
But AMD didn't provide specific numbers for SPECint_rate, a measure of integer-processing speed that relates more directly than SPECfp_rate to business-computing tasks such as e-mail or database transactions. However, according to published scores and AMD's performance estimates, Barcelona appears to trail Intel's current Xeon chips by a significant margin. AMD said the two-socket edition of Barcelona would be 55 percent faster on SPECint_rate2006 than its current dual-core Opteron chips, which received a score of 56.8. That would put Barcelona at around 88, much slower than the published results for Intel's two-socket quad-core Xeon chips running at 3GHz, which received a score of 116.
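The arithmetic behind that estimate can be checked directly from the scores cited above (the 55 percent uplift is AMD's own projection, not a measured result):

```python
# Two-socket SPECint_rate2006 scores cited in the article.
dual_core_opteron = 56.8  # current dual-core Opteron, published score
xeon_3ghz = 116.0         # Intel quad-core Xeon at 3GHz, published score

# AMD's projection: Barcelona is 55 percent faster than its dual-core Opteron.
barcelona_est = dual_core_opteron * 1.55
print(f"Estimated Barcelona score: {barcelona_est:.1f}")  # 88.0

# Gap versus Intel's published result.
shortfall = (xeon_3ghz - barcelona_est) / xeon_3ghz * 100
print(f"Barcelona trails Xeon by about {shortfall:.0f}%")  # about 24%
```

So even taking AMD's own 55 percent projection at face value, Barcelona's estimated integer throughput lands roughly a quarter below Intel's published two-socket result.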
AMD also uses an old benchmark, SPECompM2001 base, in another comparison. The other benchmarks touted by AMD will delight high-performance computing customers but aren't as relevant for the corporate market. The new Opteron does well on the Fluent and LSDYNA benchmarks that emphasize floating-point performance and memory bandwidth.
Those aren't the types of applications most business customers run, and AMD didn't cite any of the application-specific benchmarks that crop up more frequently in that market, such as tests of Java performance or database-driven financial software from SAP.
One strong suit that AMD can point to is virtualization technology, which is becoming increasingly important to server customers. The performance of VMware's software will be 79 percent better on AMD's quad-core Opteron compared with the previous generation, according to AMD. The company built several hooks into Barcelona that were designed to improve virtualization performance.
But it appears that Barcelona is far from the smash hit that AMD once hoped it had with its "native" quad-core design. And Intel has new quad-core chips in the offing around mid-November, with a dramatic overhaul expected next year to mimic many of AMD's design features that made Opteron a winner in the past.
AMD's best hope is to get Barcelona's clock speeds up to higher levels as quickly as possible, which could unlock the advantages of putting all the cores on the same processor die. One disadvantage of Intel's implementation is that signals have to leave one dual-core chip to visit the other, and that takes time.
Barcelona is expected to be available in servers from two of AMD's server partners on Monday, and in a few weeks from the others. It likely won't add too much revenue to AMD's coffers until the fourth quarter, meaning the company could be in for another rough patch until Barcelona reaches a wider portion of the market.