
Thursday, July 31, 2008

Fwd: GreenerComputing News :: July 30, 2008



---------- Forwarded message ----------
From: GreenerComputing Editor <computing@greenerworldmedia.com>
Date: Wed, Jul 30, 2008 at 6:37 PM
Subject: GreenerComputing News :: July 30, 2008
To: onemosi@gmail.com


July 30, 2008
In This Issue GreenBuzz
  » Featured News: IT Plugs into the Green Agenda
  » The Latest News on Environmentally Responsible Computing
  » GreenBiz Radio: Green IT Can Be Virtually Free

A Note from the Editor

By Preston Gralla

Green IT is all about saving money and polishing the corporate image, right? Well, yes and no, according to the most recent research from firms such as Gartner and other experts in the field. There's no doubt that greening the data center and the rest of an enterprise will certainly help the bottom line and burnish a company's image. But it turns out that it will do plenty more as well.

As Linda More reports in "IT Plugs into the Green Agenda," greening IT can also mean transforming the way that IT operates and introducing new innovations. Simon Mingay, research vice president at analyst firm Gartner, told her ... Read More


   The Latest News on Environmentally Responsible Computing
Nine Green IT Tips for Network Admins
By GreenerComputing Staff

From packaging to building materials to maximizing the efficiency of raised floors, a handful of green tips from Sun and Aten Technology are making their way around the web this week.... Read More


Despite Widespread Interest, Cost Prohibits Many from Greening IT

Mindware, Wyse Team to Spread Thin Computing to Middle East

Pepsi Picks Redemtech to Handle its E-Waste

UCSD Scores $2M to Study Energy Efficient IT


   Featured Article
IT Plugs into the Green Agenda
By Linda More

Feature: Surging power costs are starting to have an impact across operations. Being green may once have been considered a luxury, but times have changed, and there are clear financial benefits and corporate advantages to be gained. Here's how companies are getting started, and where your organization can benefit.... Read More


   GreenBiz Radio
Green IT Can Be Virtually Free
By Matthew Wheeland

As IT needs take up an ever-bigger part of companies' energy bills and purchasing budgets, the cost of keeping computers running is growing faster than their performance per dollar. Ken Brill of the Uptime Institute spoke with GreenBiz Radio about the surprisingly easy ways to drop IT costs while improving performance.... Listen


      FEATURED RESOURCES

The Global Green IT Attitude & Action Survey

This study from Info-Tech takes the pulse of which countries are leading -- and which are lagging -- in adopting green IT practices and policies.

State by State Guide to U.S. E-Waste Legislation

The National Conference of State Legislatures has compiled an extensive list of the ever-growing state and federal requirements around processing, recycling and disposing of electronic waste.








FEATURED EVENT
COPENMIND '08
Date: September 1, 2008 - 12:00am
Location: Copenhagen, Denmark

Copenmind is a global event, founded and hosted in Copenhagen, Denmark. The value of COPENMIND is the unique combination of academia and business; a cutting-edge platform for partnerships creating the world's first truly global marketplace for university-industry interaction in relation to technology transfer and research partnerships.






© Greener World Media, Inc. All rights reserved.



Tuesday, July 29, 2008

Cuil FAQs : How does Cuil improve search results?

24hoursnews

Cuil’s goals are to index the whole Web, to analyze deeply its pages and to organize results in a rich and helpful way that allows you to explore fully the subject of your search.

So we started from scratch—with a fresh approach, an entirely new architecture and breakthrough algorithms.

Our approach is to focus on the content of a page and then present a set of results that has both depth and breadth.

Our aim is to give you a wider range of more detailed results and the opportunity to explore more fully the different ideas behind your search. We think this approach is more useful to you than a simple list.

So Cuil searches the Web for pages with your keywords and then we analyze the rest of the text on those pages. This tells us that the same word has several different meanings in different contexts. Are you looking for jaguar the cat, the car or the operating system?

We sort out all those different contexts so that you don’t have to waste time rephrasing your query when you get the wrong result.

Different ideas are separated into tabs; we add images and roll-over definitions for each page and then make suggestions as to how you might refine your search. We use columns so you can see more results on one page.

We think that if you are interested in content rather than popularity, you’ll find our approach more useful.
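
To make the idea concrete, here is a minimal, hypothetical sketch -- not Cuil's actual code, and the category names and keyword lists are invented -- of how results for an ambiguous query like "jaguar" could be grouped into tabs by looking at which context words co-occur on each result page:

    # Hypothetical sketch: group search results into "tabs" by context words.
    # The categories and keyword lists below are invented for illustration only.
    CONTEXTS = {
        "jaguar the cat": {"cat", "wildlife", "predator", "habitat"},
        "jaguar the car": {"car", "engine", "dealer", "xk"},
        "jaguar the operating system": {"apple", "mac", "os", "software"},
    }

    def assign_tab(page_text):
        # Pick the context whose keywords appear most often on the page.
        words = set(page_text.lower().split())
        scores = {tab: len(words & keywords) for tab, keywords in CONTEXTS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "all results"

    results = [
        "The jaguar is a big cat found in rainforest habitat",
        "New Jaguar XK car engine specs from your local dealer",
        "Installing software on Mac OS X Jaguar from Apple",
    ]
    for page in results:
        print(assign_tab(page), "->", page)

A real engine would of course use far richer signals than keyword overlap, but the sketch shows why analyzing the rest of the page, rather than just the query term, lets results be sorted into separate tabs.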

I want to use Cuil as my default search engine. How do I change my browser homepage to Cuil?

If you use Firefox, grab the little globe icon beside Cuil’s address in your browser and just drop it onto the house image on your toolbar.

For Internet Explorer, click on the arrow beside the house and then "add or change home page". Select "use this webpage as your only home page" and click Yes.

If you use Safari, edit your “Preferences.”

If you use Firefox, you can add Cuil to your search toolbar by clicking "Add Cuil to Firefox" at the bottom of the search results page.

Cuil not a Google killer - yet


Within hours of being launched Monday, Cuil - a new search engine created by former top Google engineers - was already being touted in the blogosphere as the next Google killer. But unless Cuil (pronounced 'cool') can develop an ad platform to rival Google's, Cuil will have a difficult time challenging the search giant.
Cuil is not so cool
WANNABE GOOGLE KILLER Cuil's first day in the search engine playground turned out to be a disaster.

While the new search engine was high in the list of Google's Trends listings, for most of the day the site was off-line.

The on-again, oh-look-it's-off-again search engine kept turning up pages that were empty other than the words "cuil shuttered.png" and "cuilfail4.jpg".

Users also moaned that search results were inaccurate. A quick search on the name 'Nick Farrell' [who he? Ed] showed my stories, or books, next to pictures that had nothing to do with me. Then the other Nick Farrell, the one whose sex tape people are actually interested in seeing, had a couple of INQ pictures beside his name.

The word "penguins" or "failure" returned zero results, although we can't understand what inspired punters to type those words in.

CrunchGear notes that culi.com leads to an Italian porn outfit, but we couldn't possibly investigate that further. µ

Don't think so. An actual challenger to Google by ex-Googlers

24hoursnews

Ex-Googlers have launched Cuil, billed as an actual challenger to Google. I tried to visit cuil.com, but after a long wait all I got was:
We’ll be back soon...
Due to overwhelming interest, our Cuil servers are running a bit hot right now. The search engine is momentarily unavailable as we add more capacity.

Thanks for your patience.
world's "biggest search engine"
There's yet another new search engine on the block, but this time, it's being ballyhooed as an actual challenger to Google. Cuil.com (pronounced "cool") made its public launch today and already calls itself the "biggest search engine on the web." Run by a team of former Google employees, the startup takes a slightly different approach to search than most of the big names, but whether it will be able to unseat Google—or even Yahoo—remains to be seen.

Right off the bat, the Cuil team makes a number of direct comparisons to Google, but often without actually naming the Big G. For one, Cuil brags that it has already indexed 120 billion Web pages, roughly three times as many as Google and ten times as many as Microsoft. "Rather than rely on superficial popularity metrics" like Google does, Cuil's info page says that the site contextually analyzes each page and organizes search results based on content and relevance.

When you search for a particular keyword, Cuil attempts to group the results into relevant categories. For example, searching for "chinchilla" produces tabs for "all results," "chinchilla food," "chinchilla supplies," and "chinchilla fur" across the top. An "Explore by Category" box on the right-hand side of the results page offers a number of further associations (in this case, related to the Australian town of Chinchilla in Queensland).


Monday, July 28, 2008

Energy efficient plastics injection molding and robotic systems in the spotlight

24hoursnews

Arburg is to present all facets of its innovative plastics injection molding technology with ten exhibits at the 2008 Fakuma in Friedrichshafen from October 14 to 18.
The main focus of the Arburg trade fair stand at Fakuma 2008 will be energy-efficient plastics injection molding production and innovations in the field of robotic systems. Trade visitors will also be introduced to a representative cross section of the current range of Arburg products, applications and services at the 940 square metre Arburg stand No. 3101 in Hall A3.

“Energy Efficiency Allround” is an important Arburg corporate objective in 2008 and is consequently a major focus of the company’s presence at the Fakuma. Arburg adopts a holistic approach to optimising energy consumption, i.e. both in terms of the company itself and of the Arburg products and their fields of application. On the one hand, the goal is to produce the Allrounder machines using as little energy as possible. On the other, Arburg seeks to use its products and expertise in order to efficiently minimize energy consumption among its customers. In this context, the energy efficient Allrounders are identified by the “e²” energy efficiency label. In addition to the relevant design and equipment of the machines, Arburg also provides extensive individual consulting with regard to process optimization. Here, it is not only essential to minimize power and water consumption, or heat dissipation into the environment, but for example to optimize production capacity utilization and ensure high plastics injection molding quality with a minimum reject rate.

Energy efficient Allrounder A and S series
Four exhibition machines at the Fakuma bear the “e²” energy efficiency label: two representatives of the electric Allrounder A series and two hydraulic Allrounder S series machines with electromechanical dosage drive.

On the electric two-component Allrounder 470 A with a clamping force of 1,000 kN, a part in a hard/soft combination is produced from liquid silicone. The two injection units are arranged in an L-configuration, so that the pre-molded plastics part can be moved to the second cavity via the vertically-operating robotic system.


At the Fakuma, six high-quality, labelled yogurt pots will be produced in an overall cycle time of just four seconds. (Photo: Arburg)


Arburg will demonstrate the high performance of its electric machines with a high-speed IML application. Six high-quality, labelled yogurt pots will be produced in an overall cycle time of four seconds. In order to meet the requirements with regard to material preparation and speed, the Allrounder 570 A with a clamping force of 2,000 kN is equipped with integrated, adaptive hot runner control, a high-performance plasticising cylinder and a mold featuring a pneumatic needle shut-off system. The complete IML automation equipment – feed, separation and placement of the labels, as well as removal and cavity-specific stacking of the pots – will be provided by “Waldorf Technik”. With this complete system, Arburg impressively demonstrates how IML plastics products can be produced rapidly, reliably and in high quality, as well as in large volumes.
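
As a quick back-of-the-envelope check (our arithmetic, not a figure quoted by Arburg): six pots every four seconds works out to 6 / 4 = 1.5 pots per second, or 1.5 x 3,600 = 5,400 labelled pots per hour from a single machine -- which is what makes the "large volumes" claim concrete.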

A further fast-running, energy-efficient high-performance machine, the Allrounder 630 S, will be presented equipped with a 48-cavity bottle cap mold. In order to achieve a high material throughput in a very short cycle time with this packaging application, the fully accumulator-driven hydraulic machine also features a high-performance plasticising cylinder and an energy-saving electromechanical dosage drive.

Electrical dosage is also implemented on the Allrounder 920 S with a clamping force of 5,000 kN and the largest, size 4,600 injection unit. Moreover, the exhibition machine is equipped with a Multilift V robotic system and is integrated within a production cell. A patented folding crate from “Ifco Systems” will be produced in a single cycle, removed by a robotic system and subsequently assembled fully automatically. The sequence of this complex application is simply and reliably controlled via the “Selogica” user interface.

Allrounder S - from huge to tiny
As well as the largest S-series machine, the smallest will also be on show at the Fakuma: the Allrounder 170 S, with a clamping force of 150 kN and a size 30 clamping unit. The plastics exhibition machine will feature a 12-millimetre screw and will process a micro granulate. Following injection molding, the finished micro gear wheels are removed by a horizontally-operating Multilift H robotic system.

Flexible use of vertical Allrounder V
A further small machine represented will be the vertically-operating Allrounder 175 V with a clamping force of 125 kN, which is equipped with a servoelectric rotary table. The two mold halves of the rotary table allow the insertion and removal of parts during the plastics injection molding process. This machine can consequently easily be integrated into automated production lines. The fact that the vertical Allrounder V machines with their free-space system are not only predestined for the encapsulation of inserts is evidenced by the Allrounder 375 V with a clamping force of 500 kN. This vertical machine, introduced onto the market in the spring of 2008, will be presented in conjunction with the Exjection® process from “IB Steiner and Hybrid Composite Products GmbH”. This process can be used to produce long, thin-walled and structured plastics components with integrated end caps and functional geometries, also using viscous thermoplastics. Thanks to the horizontally-installed mold on the vertical Allrounder V machines, there are no limitations in terms of mold, stroke and, consequently, part lengths because the servo-controlled transfer movements are performed horizontally during the plastics injection process. Thanks to the central Selogica control system, the complete system can be conveniently programmed and controlled.

Complex production cell with six-axis robot
The Allrounder 570 S with a clamping force of 2,200 kN will also be presented for the first time at the fair. This is the new machine size of the hydraulic Allrounder S machine series, which was first introduced at the K 2007. Optimum processing of BMC moist polyester will be demonstrated on this exhibit through the production of an insulating rail for a domestic iron. For this purpose, the Allrounder 570 S will be equipped with a position-regulated screw, a newly developed screw compression device for optimal material feed and an integrated mold heating system. Thanks to feed pressure control specially developed for this application, the servo-motor driven feed screw of the screw/injester unit enables gentle and very consistent material preparation, even in the case of large shot weights. The finished molded parts are removed and subsequently processed fully automatically by a six-axis robot from “Kuka”.

Selogica user interface implemented in robot control system
The integrated production cell automation communicates with the Selogica control system via the robot interface and additional fieldbus expansion. In close co-operation with “FPT”, Kuka’s OEM partner, a Selogica user interface has been implemented in the robot control system. This allows machine installation technicians to perform sequence programming of the complex six-axis movements of the Kuka robot in their familiar plastics injection molding environment without requiring outside help. Through the expanded real-time fieldbus connection, even complex mold entry operations are very easy to implement.

Multilift V Select: Simple programming and extended application
Trade visitors to the Fakuma will be introduced to the simple and convenient programming of the servoelectric “Multilift V Select” robotic system with the new teach-in function. Here, by means of manually performed steps, the robotic system can learn the positions it has to move to in order to pick up and set down parts with the utmost precision.

Furthermore, with this exhibit, Arburg will present the entry level Multilift V Select model on an Allrounder Golden Edition for the first time. From autumn 2008, it will therefore also be possible to equip the Allrounder Golden Edition machines with an Arburg robotic system, extending the range of applications for the Multilift V Select.

In total, six Allrounders will operate with robotic systems at the Fakuma, a clear reflection of their increasing significance within complex plastics injection molding production. In this context, an important role is played by integration into the machine control system. This is achieved comprehensively via the Selogica user interface.

Fakuma injection molding production under control
Ensuring consistently high product quality, optimizing production capacity utilization, minimizing downtime and thereby ultimately enhancing the energy efficiency of production will become increasingly important for reasons of economy in the future. The manner in which this objective can be achieved through centralized production planning and control will be demonstrated by Arburg at the Fakuma with a production management stand and the Arburg host computer system, ALS, which will enable access to all ten Fakuma exhibits. The Arburg host processing engineering team will thus be able to demonstrate the benefits of ALS live and answer individual questions. ALS offers a comprehensive overview of the entire production operations as well as rapid, secure access to current data relating to running plastics injection molding production. ALS enables both effective planning as well as implementation. Thanks to its modular design using independent modules, the system can be flexibly adapted to customer requirements and is consequently also interesting for small injection molding companies. For example, machine and mold maintenance can be planned in a targeted manner, which is increasingly important in terms of preventive measures. In addition to preventive maintenance, these also include oil analysis and machine calibration. Both of these will be demonstrated and explained in practical terms by the Arburg Service Team at the exhibition stand.

Representative cross section
Ten exhibits, including electric and hydraulic Allrounders covering a clamping force range from 125 to 5,000 kN, vertical machines, various Multilift robotic systems, as well as a six-axis robot, complex production cells and a broad application spectrum comprising micro injection molding, multi-component technology in conjunction with LSR processing, thermoset injection molding (BMC moist polyester) and Exjection – with this comprehensive program, Arburg will again prove at the Fakuma 2008 that its products are suitable for use in all fields of plastics injection molding and that Arburg has the capability to provide the corresponding consulting expertise.

Int’l sales successes in large diameter plastics pipe extrusion

The pipe extrusion segment is registering a strong trend to large diameter plastics pipes for systems supplier KraussMaffei Berstorff.



These large diameter plastics pipes are used, for example, in water supply and district heating networks. KraussMaffei Berstorff supplies high-performance lines for production of high-quality, large diameter pipe. The company has recently sold a number of these systems.

A line for the production of large-diameter PE-HD pipe with outer diameters from 400 to 1200 mm recently went into operation at Emirates Preinsulated Pipes Industries (EPPI), Abu Dhabi/United Arab Emirates. The line will be used mainly for the production of PU-insulated pipe used to transport cooling liquid for air conditioning systems in tower blocks. The line features a KraussMaffei Berstorff KME 125-36 B/R single-screw extruder with KM-RKW 36 and KM-RKW 38 pipeheads. Both pipeheads can be moved sideways out of the line on tracks.


KraussMaffei Berstorff extrusion line for the production of large polyolefin pipes. (Photo: KraussMaffei)


Output up to 1700 kg an hour
P.E.S Productive and Industrial Co. Teheran/Iran also recently started operating a KraussMaffei Berstorff line to produce large-diameter plastics pipe. The line is headed by a KME 150-36 B/R extruder and a KM-RKW 39 pipehead. The line achieves output of 1700 kg/h producing PE-HD pipe with outer diameters up to 1600 mm.

KraussMaffei Berstorff supplies complete turnkey solutions to produce large diameter plastics pipe, which are tailored to the specific requirements of individual customers. RKW pipeheads are capable of producing pipe with diameters up to 2000 mm and wall thicknesses up to 100 mm. A 36 L/D single-screw extruder from KraussMaffei Berstorff is capable of output up to 1700 kg/h. The 36 L/D screw concept stands for high output and a gentle, high-performance plasticizing process. The longer processing unit delivers a thermally homogeneous melt with fillers and pigment perfectly mixed in.


Sunday, July 27, 2008

Will Sex with Robots Change Social Relations?


24hoursnews-mosi

The idea of "sex with a robot" leaves me puzzled. From the beginning of the human story, men and women have needed one another -- and sometimes, especially nowadays, many partners at once. Whatever the arrangement, men and women make love, sex, family and much of the natural beauty of the world.
Human character is growing tremendously complicated, so adjustment becomes tougher. Love comes naturally, sex comes naturally, love breaks down naturally, and that breakdown brings pain. To avoid all that, sex with a robot may be a great idea -- but love with a robot? Is that even possible?
One clear benefit of sex with a robot could be cost: a sex-doll robot is less expensive to keep than a human partner, and maintenance is easy too.
Phir bhi (Hindi for "even so") -- I don't know.

Let's look at the latest:
A German inventor claims to have created the world's most sophisticated robot sex doll.

The sex androids developed by aircraft mechanic Michael Harriman from Nuremberg have 'hearts' that beat harder during sex.

They also breathe harder and have internal heaters to raise the body temperature - but their feet stay cold "just like in real life", according to Harriman.

He said: "They are almost impossible to distinguish from the real thing, but I am still developing improvements and I will only be happy when what I have is better than the real thing."

The dolls sold under the Andy brand name are on offer for £4,000 each for the basic model, with extra charges for adaptations like extra large breasts.


Underneath the silicon skin, developed for use in medical surgery, is an electronic heart that beats faster during sex.

The model can also be made to move by remote control, wiggling her hips under the bedclothes and making other suggestive movements - all at the touch of a button.

Harriman said his design was an improvement on the popular 'real dolls' sold in the USA.

Sex machines that kick pleasure into overdrive

From the smooth, silent glide of the Monkey Rocker Tango to Le Chair's ability to put two people into a dozen compromising positions, the new products and prototypes unveiled at this week's Adult Novelty Expo straddle the line between toy and machine.

This new generation of equipment was recently unveiled by engineers at the Adult Novelty Expo and is designed to liven up a dull sex life.

The Large Hadron Collider: Our understanding of the Universe is about to change...


24hoursnews

The Large Hadron Collider (LHC) is a gigantic scientific instrument near Geneva, where it spans the border between Switzerland and France about 100 m underground. It is a particle accelerator used by physicists to study the smallest known particles – the fundamental building blocks of all things. It will revolutionise our understanding, from the minuscule world deep within atoms to the vastness of the Universe.

Two beams of subatomic particles called 'hadrons' – either protons or lead ions – will travel in opposite directions inside the circular accelerator, gaining energy with every lap. Physicists will use the LHC to recreate the conditions just after the Big Bang, by colliding the two beams head-on at very high energy. Teams of physicists from around the world will analyse the particles created in the collisions using special detectors in a number of experiments dedicated to the LHC.
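
A rough note on why head-on collisions matter (standard accelerator physics, not a figure taken from this article): for two identical beams of energy $E$ colliding head-on, the available centre-of-mass energy is $\sqrt{s} = 2E$, so the LHC's two 7 TeV proton beams give $\sqrt{s} = 14$ TeV. A single 7 TeV beam hitting a stationary proton would instead yield only about $\sqrt{s} \approx \sqrt{2 E m_p c^2} \approx 0.11$ TeV of usable energy, which is why the collider geometry is essential for reaching new physics.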

There are many theories as to what will result from these collisions, but what's for sure is that a brave new world of physics will emerge from the new accelerator, as knowledge in particle physics goes on to describe the workings of the Universe. For decades, the Standard Model of particle physics has served physicists well as a means of understanding the fundamental laws of Nature, but it does not tell the whole story. Only experimental data using the higher energies reached by the LHC can push knowledge forward, challenging those who seek confirmation of established knowledge, and those who dare to dream beyond the paradigm.

Saturday, July 26, 2008

iPhone 3G's ability to pinpoint your location via GPS

One of the most powerful functions on the new Apple (AAPL) iPhone 3G is its ability to pinpoint your location via GPS. Yet, at least in New York City, it's been one of the phone's most disappointing features.

In our experience, routinely -- especially indoors -- the iPhone's Google Maps app and other location-hungry apps can't get a precise GPS fix and fall back on the iPhone's original location tools -- cell-tower and wi-fi triangulation -- to narrow our location down to a neighborhood or a few-block radius. Real GPS, where we can follow our precise location as we walk, has only worked when we're outdoors and away from tall buildings. This, we're told, is a problem specific to New York City -- lots of tall, old buildings with lots of concrete and metal, and lousy sight lines to cell signals.

For most practical purposes, a less-precise location is fine -- Google (GOOG) will still be able to show us the closest Starbucks (SBUX) store or post office. But the iPhone's location services have also failed completely several times, searching for our location for more than a minute and never finding anything. Very frustrating.

The good news: Location-hungry app developers (and Apple) are working on it. One major iPhone developer tells us his company is working on an enhancement to their app that's more reliant on network-based location -- and less on GPS. And Apple's latest iPhone firmware/developer kit, version 2.1, reportedly includes a lot of GPS improvements, too.

Apple has finally posted an update regarding the extended MobileMe email outage that has affected approximately 1% of users. Their updated support document reports that they have restored web-access to affected MobileMe accounts. This will allow affected users to see emails that they have received since July 18th, the day the outage started.
As Apple details, users still do not have access to emails prior to that date and are unable to access their email from their desktop email clients. This functionality will presumably be restored over time. This temporary web-access is meant to allow users to access their recent mail that has been unavailable.

Apple warns that users should not make changes to their MobileMe password, email aliases, or storage allocation while this temporary solution is in place. Doing so could result in technical errors.

Apple also admits that while the majority of email messages will be fully restored, approximately 10% of messages received between 5:00 a.m. PDT on July 16th and 10:20 a.m. PDT on July 18th have been lost. Additional details can be found in their tech note.

The writer explains that he or she will be updating again this weekend to report on progress.

Nanotechnology: small stuff, but really big


24hoursnews

Nanotechnology, the new boost to science, is attracting the attention of the whole science market. Yet some recent commentary has found that nano isn't always nano.
You may never have heard of it, but chances are some of the products you use make use of nanotechnology. These products include particles so small, they might be able to pass through the wall of a cell.

Nanomaterials are measured in nanometres — or millionths of a millimetre. A human hair is 100,000 times thicker than a particle measuring one nanometre.
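
To make that ratio concrete (a simple unit conversion, not an additional figure from the article): $1\,\mathrm{nm} = 10^{-9}\,\mathrm{m}$, so something 100,000 times thicker is $10^{5} \times 10^{-9}\,\mathrm{m} = 10^{-4}\,\mathrm{m} = 0.1\,\mathrm{mm}$ -- roughly the width of a typical human hair.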

Proponents hail nanotechnologies as the next industrial revolution, with the potential to make cancer therapy more effective, consumer products more durable, solar cells more efficient and revolutionize the natural gas industry. Those developments could be years away.

Nanoscale chemical substances, or nanomaterials, behave differently from their full-sized counterparts. Nano-gold, for instance, shows dramatic changes in properties and colour with only slight changes in size. Titanium dioxide nanoparticles in sunscreen are transparent to visible light, but absorb UV light. The same chemical in its conventional form is thick, white and opaque, and is used in products such as house paint and adhesives.

Titanium dioxide accounts for 70 per cent of the worldwide production of pigments. It has also been classified by the International Agency for Research on Cancer (IARC) as a possible human carcinogen.

The question many are asking is whether the novel properties of nanomaterials give rise to new exposures and effects, and whether that means already-approved chemicals should be reassessed for their potential impact as nanomaterials on human health and the environment.

Friday, July 25, 2008

Private social network Facebook to go Web wide


Mark Zuckerberg, founder and CEO of Facebook, gestures while delivering the keynote address during the annual Facebook f8 developer conference in San Francisco, Wednesday, July 23, 2008. Facebook announced that 24 Web sites and applications have joined its efforts to make the Web more open and connected through Facebook Connect

The leader of a youth movement that swept the world this past year by encouraging Web users to share bits of their lives with selected friends spoke on Wednesday of spreading his service across the Web, even while apologizing for past excesses.

Mark Zuckerberg, 24, told an audience of 1,000 industry executives, software makers, media -- and his mother and father -- at Facebook's annual conference of how the company's features will run on affiliated sites outside its own.

"Facebook Connect" will transform the social network from a private site where activity occurs entirely within a "walled garden" to a Web-wide phenomenon where software makers, with user permission, can tap member data for use on their sites.

"Facebook Connect is our version of Facebook for the rest of the Web," Zuckerberg told the second annual F8 conference.

Facebook, begun in 2004 as a socializing site for students at Harvard University, has seen its growth zoom to 90 million members from 24 million a little over a year ago, overtaking rival MySpace to become the world's largest social network.

It has lured 400,000 developers to build programs for it since opening up its site in May 2007. Now Facebook is letting designers build software on affiliated sites, for mobile phones or as services that tap desktop applications like Microsoft's Outlook e-mail system. It said that in coming months it would let designers building software for Facebook simultaneously create versions for Apple Inc's iPhone.

"As time goes on, less of this movement is going to be about Facebook and the platform we have created and more about the applications other people have built," Zuckerberg said. "This year, we are going to push for parity between applications on and off Facebook."

More:
Facebook to share its logins with other sites, wants better apps

At its annual F8 developer conference Wednesday, Facebook unveiled a system like OpenID where users can login to other sites with their Facebook account. It also rolled out new tools to help developers create better applications.

Available in the fall, Connect will allow users of Facebook to take their identities with them across partner sites. Twenty-four websites and applications have already announced their support for the initiative, including Digg, Six Apart, and Citysearch.


Essentially, the offering looks much like OpenID, where a single login is used across all participating sites. For example, Digg users can choose to login with a Facebook identity, and when a user diggs a story, it can appear in their mini-feed.

Responding in advance to expected concerns about security and privacy, Facebook noted that users would be able to trust the sites that would be participating in the unified login process, as well as ensure their privacy preferences follow them on each of the partner sites.
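
As a purely illustrative sketch of the kind of check a partner site might run before trusting a forwarded identity -- the parameter names and the MD5-over-sorted-parameters scheme here are assumptions modeled loosely on Facebook's 2008-era platform signatures, not the documented Facebook Connect API -- the idea is that the site recomputes a signature with a shared application secret and compares it with the one sent along with the login:

    import hashlib

    APP_SECRET = "MY_APP_SECRET"  # hypothetical shared secret for the partner site

    def sign(params, secret):
        # Signature over alphabetically sorted key=value pairs plus the secret.
        payload = "".join(f"{k}={params[k]}" for k in sorted(params))
        return hashlib.md5((payload + secret).encode("utf-8")).hexdigest()

    def verify(params, secret):
        # True only if the 'sig' parameter matches a signature we compute ourselves.
        received = dict(params)
        sig = received.pop("sig", "")
        return sig == sign(received, secret)

    # Hypothetical login payload a partner site might receive
    payload = {"user": "12345", "session_key": "abc123", "expires": "0"}
    payload["sig"] = sign(payload, APP_SECRET)  # what the identity provider would attach
    print(verify(payload, APP_SECRET))          # True -> the forwarded identity checks out

The privacy point in the paragraph above is separate: even with a valid signature, the partner site only sees what the user's Facebook privacy preferences allow.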

Thursday, July 24, 2008

Intel Brings Out Multifunction Chips


Intel Revamps Its System On Chip Designs
Intel on Thursday is expected to introduce the first products of a new generation of system-on-a-chip designs that will target several markets, including consumer electronics, mobile Internet devices, and embedded systems.
The first batch of products is under the Intel EP80579 Integrated Processor family for industrial robotics and for security, storage and communication devices. The system-on-a-chip (SoC) products are based on the Pentium M processor. Intel plans to base future SoC hardware on its Atom processor, which was released this year for low power mobile Internet devices.
Intel Corp. Wednesday unveiled the first fruits of a new effort to make multifunction chips, a strategy that could accelerate a longtime goal to diversify beyond computers.

The company said products it is developing -- called SoCs, for systems on a chip -- can be used in an array of devices, including car entertainment and information systems, TV set-top boxes, and industrial robots, as well as security and communications hardware.

Intel has the dominant franchise in microprocessors, the calculating engines in computers. It has a much smaller business in selling versions of those chips for what the industry calls "embedded" applications, which include office equipment and store point-of-sale terminals.

To attack that market as well as others, Intel now plans to combine microprocessors on chips along with circuitry to handle other functions, such as networking, voice communications and video decoding. Besides its own technology, the company expects that special-purpose portions of future SoC products may be contributed by other companies.

The new products each contain four chips -- a CPU core, memory controller, input/output controller, and acceleration technology -- integrated into one system. Intel claims the SoCs are 45% smaller and use 34% less power than other Intel chips with similar capabilities.
Each of the new offerings comes with seven-year manufacturing support and is best suited for embedded and industrial computer systems, small and midsize business and home network-attached storage, enterprise security devices, Internet telephony, and wireless infrastructure.

Prices range from $40 to $95, depending on clock speed and whether the product includes Intel's acceleration technology for cryptographic and packet processing for enterprise-level voice over Internet protocol applications and/or for security appliances, such as virtual private network gateways and firewalls. Intel also provides software drivers and software modules for download, such as libraries for secure enterprise voice applications and tools for developing security appliances.

Hardware vendors expected to use Intel's new products include Nortel, Alcatel-Lucent, Advantech, Lanner, iBase, NexCom, Emerson, and others. The SoCs also support multiple operating systems, such as Wind River's real-time operating system and Red Hat Linux.

While the new products use an older Pentium M CPU, future SoCs will be built around Intel's Atom core. Atom is one of Intel's latest 45 nanometer-scale manufactured processors. The low-power chip available with one or two cores is expected to have a clock speed ranging from 800 MHz to 1.87 GHz and is aimed at ultra-mobile PCs, smartphones, mobile Internet devices, and other portable and low-power applications.

Intel said it has more than 15 SoC projects planned, including the company's first consumer electronics chip codenamed Canmore, which is scheduled for introduction later this year, and the second generation Sodaville, set to be introduced next year.

Scheduled to hit the market in 2009 or 2010 is the next generation of semiconductors and accompanying chipsets for mobile Internet devices. Codenamed Moorestown, the platform will include a 45-nm CPU codenamed Lincroft, which will have the core, graphics, and memory controller on a single die.

The new products and roadmap are not the first time Intel has built SoC technology. In 2006, Intel sold its XScale technology to Marvell Technology, a storage, communications and chip developer. XScale, based on an ARM architecture, was the linchpin for Intel's PXA9xx-series communications processor -- the same chip that powered Research In Motion's BlackBerry.

In jumping back into the market, Intel has abandoned the idea of developing a separate architecture and is leveraging the same architecture as the PC and server processors that account for most of its revenue.

Intel this time around also sees an emerging market that will someday encompass billions of next-generation Internet-connected devices, ranging from handheld computers in people's pockets to home health-monitoring devices sending patient data to doctors in a medical center.

As a result, "we can expect the complexity of those system-on-chips to be high," Gadi Singer, general manager of Intel's SoC Enabling Group, told a news conference. Intel, according to Singer, is in a strong position to meet the requirements of future SoCs because of its extensive research and development labs and high-volume manufacturing facilities.

Intel rival Advanced Micro Devices is also working on microprocessors for mobile Internet devices, but has not released a roadmap.

The virtual virtualization case study

Server virtualization has, without doubt, taken the IT industry by storm. It provides a cost-effective way to dramatically reduce downtime, increase flexibility, and use hardware much more efficiently.

However, small and medium businesses often find it hard to evaluate whether virtualization is an appropriate fit and, if it is, how to adopt it with a small IT staff and limited funding. It's easier for larger companies with more developed IT staffs to figure out, but it can still be a challenge.

Whether you're big or small, this six-part virtual case study explores the key considerations and deployment approaches you should examine when virtualizing your servers. Each part covers a key stage of the virtualization process, using the fictional Fergenschmeir Inc. to lay out the issues, potential mistakes, and realistic results any company should be aware of — but that you're not likely to see in a typical white paper or case study.

So follow along with Eric Brown, Fergenschmeir's new infrastructure manager, his boss Brad Richter, upper management, and Eric's IT team to find out what did and didn't work for Fergenschmeir as the company virtualized its server environment.

Stage 1: Determining the rationale

The idea of implementing production server virtualization came to Fergenschmeir from several directions. In May 2007, infrastructure manager Eric Brown had just hired Mike Beyer, an eager new summer intern. One of the first questions out of Mike’s mouth was, “So how much of your server infrastructure is virtual?” The answer, of course, was none. Although the software development team had been using a smattering of EMC VMware Workstation and Server to aid their development process, they hadn’t previously considered bringing it into production. But an innocent question from an intern made Eric give it more serious thought. So he did some research.

Eric started by talking to his team. He asked about the problems they’d had, and whether virtualization could be a solution. There were obvious wins to be had, such as the portability of virtual guest servers. Additionally, they would no longer be dependent on specific hardware, and they would be able to consolidate servers and reduce IT overhead.

The actual business motivation came a month later. The server running Fergenschmeir’s unsupported, yet business-critical, CRM application crashed hard. Nobody knew how to reinstall the application, so it took four days of downtime to get the application brought back up. Although the downtime was largely due to the original software developer being defunct, this fiasco was a serious black mark on the IT department as a whole and a terrible start for Eric’s budding career at Fergenschmeir.

The final push toward virtualization was a result of the fact that Fergenschmeir’s CEO, Bob Tersitan, liked to read IT industry magazines. The result of this pastime was often a tersely worded e-mail to Brad that might read something like, “Hey. I read about this Web portal stuff. Let’s do that. Next month? I’m on my boat -- call cell.” Usually Brad could drag his feet a bit or submit some outlandish budgetary numbers and Bob would move on to something else more productive. In this case, Bob had read a server virtualization case study he found on InfoWorld’s site and the missive was to implement it to solve the problems that Fergenschmeir had been experiencing. Bingo! Eric had already done the research and now had executive OK to go forward. The fiasco turned into an opportunity.

Stage 2: Doing a reality check

Fergenschmeir’s infrastructure manager, Eric Brown, had a good idea of how server virtualization might help improve disaster recovery and utilization issues, and he had marching orders to get it done.

But Eric was concerned that there was very little virtualization experience within his team. The intern, Mike Beyer, was by far the best resource Eric had, but Mike had never designed a new virtualization architecture from the ground up -- just peripherally administered one.

Eric also faced resistance from his staff. Eric’s server administrators, Ed Blum and Mary Edgerton, had used VMware Server and Microsoft Virtual Server before and weren’t impressed by their performance. Lead DBA Paul Marcos said he’d be unwilling to deploy a database server on a virtual platform because he had read that virtual disk I/O was terrible.

Eric and his CTO Brad Richter had already assured CEO Bob Tersitan that they’d have a proposal within a month, so, despite the obstacles, they went ahead. They started by reading everything they could find on how other companies had built their systems. Eric asked Mike to build a test platform using a trial version of VMware’s ESX platform, as that seemed to be a popular choice in the IT-oriented blogosphere.

Within a few days, Mike had an ESX server built with a few test VMs running on it. Right away, it was clear that virtualization platforms had different hardware requirements than normal servers did. The 4GB of RAM in the test server wasn’t enough to run more than three or four guest servers concurrently, and the network bandwidth afforded by the two onboard network interfaces might not be sufficient for more virtual servers.

But even with those limits, the test VMs they did deploy were stable and performed significantly better than Eric’s team had expected. Even the previously skeptical Paul was impressed with the disk throughput. He concluded that many of the workgroup applications might be good candidates for virtualization, even if he was still unsure about using VMs for their mission-critical database servers.

With this testing done, Brad and Eric were confident they could put a plan on Bob’s desk within a few weeks. Now they had to do the critical planning work.

Stage 3: Planning around capacity

After testing the server virtualization software to understand whether and where it met their performance requirements, Fergenschmeir’s IT leaders then had to do the detailed deployment planning. Infrastructure manager Eric Brown and CTO Brad Richter had two basic questions to answer in the planning: first, what server roles did they want to have; second, what could they virtualize?

Brad started the process by asking his teams to provide him with a list of every server-based application and the servers that they were installed on. From this, Eric developed a dependency tree that showed which servers and applications depended upon each other.

Assessing server roles
As the dependency tree was fleshed out, it became clear to Eric that they wouldn’t want to retain the same application-to-server assignments they had been using. Out of the 60 or so servers in the datacenter, four of them were directly responsible for the continued operation of about 20 applications. This was mostly due to a few SQL database servers that had been used as dumping grounds for the databases of many different applications, sometimes forcing an application to use a newer or older version of SQL than it supported.

Furthermore, there were risky dependencies in place. For example, five important applications were installed on the same server. Conversely, Eric and Brad discovered significant inefficiencies, such as five servers all being used redundantly for departmental file sharing.
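
A minimal sketch of the kind of inventory analysis described above (the server and application names are invented, not Fergenschmeir's actual data or tooling): invert the server-to-application list to flag servers that too many applications depend on, and roles that too many servers duplicate.

    from collections import defaultdict

    # Hypothetical inventory: server -> applications installed on it
    inventory = {
        "sql01": ["CRM", "Payroll", "Helpdesk", "Intranet", "Reporting"],
        "app01": ["CRM"],
        "file01": ["FileShare"],
        "file02": ["FileShare"],
        "file03": ["FileShare"],
    }

    # Invert the mapping: application -> servers it depends on
    deps = defaultdict(list)
    for server, apps in inventory.items():
        for app in apps:
            deps[app].append(server)

    # Servers that are single points of failure for many applications
    for server, apps in inventory.items():
        if len(apps) >= 5:
            print(f"risky concentration: {server} hosts {len(apps)} apps: {apps}")

    # Roles duplicated across many servers (candidates for consolidation)
    for app, servers in deps.items():
        if len(servers) >= 3:
            print(f"redundant role: {app} runs on {len(servers)} servers: {servers}")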

Eric decided that the virtualized deployment needed to avoid these flaws, so the new architecture had to eliminate unnecessary redundancy while also distributing mission-critical apps across physical servers to minimize the risks of any server failures. That meant a jump from 60 servers to 72 and a commensurate increase in server licenses.

Determining virtualization candidates
With the architecture now determined, Eric had to figure out what could be deployed through virtualization and what should stay physical. Figuring out the answer to this was more difficult than he initially expected.

One key question was the load for each server, a key determinant of how many physical virtualization hosts would be needed. It was obvious that it made no sense to virtualize an application load that was making full use of its hardware platform. The initial testing showed that the VMware hypervisor ate up about 10 percent of a host server’s raw performance, so the real capacity of any virtualized host was 90 percent of its dedicated, unvirtualized counterpart. Any application whose utilization was above 90 percent would likely see performance degradation, as well as have no potential for server consolidation.

But getting those utilization figures was not easy. Using Perfmon on a Windows box, or a tool like SAR on a Linux box, could easily show how busy a given server was within its own microcosm, but it wasn’t as easy to express how that microcosm related to another.

For example, Thanatos -- the server that ran the company’s medical reimbursement and benefit management software -- was a dual-socket, single-core Intel Pentium 4 running at 2.8GHz whose load averaged at 4 percent. Meanwhile, Hermes, the voicemail system, ran on a dual-socket, dual-core AMD Opteron 275 system running at 2.2GHz with an average load of 12 percent. Not only were these two completely different processor architectures, but Hermes had twice as many processor cores as Thanatos. Making things even more complicated, processor utilization wasn’t the only basic resource that had to be considered; memory, disk, and network utilization were clearly just as important when planning a virtualized infrastructure.

Eric quickly learned that this was why there were so many applications available for performing capacity evaluations. If he had only 10 or 20 servers to consider, it might be easier and less expensive to crack open Excel and analyze it himself. He could have virtualized the loads incrementally and seen what the real-world utilization was, but he knew the inherent budgetary uncertainty wouldn’t appeal to CEO Bob Tersitan and CFO Craig Windham.
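
Using the figures from the story -- Thanatos with two single-core 2.8GHz CPUs at 4 percent load, Hermes with two dual-core 2.2GHz CPUs at 12 percent, and the earlier assumption that roughly 90 percent of a host's raw capacity remains usable after hypervisor overhead -- here is the sort of crude, spreadsheet-style normalization Eric could have done in Excel. It is only a sketch: the host specification below is hypothetical, and normalizing by cores times clock speed ignores architecture, memory, disk, and network differences, which is precisely why the consultants insisted on a month of real measurements.

    # Back-of-the-envelope capacity model using the figures from the story.
    servers = [
        # name,      sockets, cores/socket, GHz, avg CPU utilization
        ("Thanatos", 2,       1,            2.8, 0.04),
        ("Hermes",   2,       2,            2.2, 0.12),
    ]

    # Demand, expressed crudely as "GHz actually consumed"
    used_ghz = sum(s * c * ghz * util for _, s, c, ghz, util in servers)

    # Hypothetical host: dual-socket, quad-core 2.5GHz blade, with ~90 percent
    # of raw capacity usable once the hypervisor's overhead is subtracted.
    host_ghz = 2 * 4 * 2.5 * 0.90

    print(f"demand: {used_ghz:.2f} GHz-equivalents")
    print(f"one host supplies: {host_ghz:.1f} GHz-equivalents")
    print(f"hosts needed for these two loads alone: {used_ghz / host_ghz:.3f}")

Scaled across all 72 planned server roles, plus headroom for growth and for the failure of a single host, this is the arithmetic that eventually pointed to eight or nine dual-socket, quad-core ESX hosts.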

So, after doing some research, Eric suggested to Brad that they bring in an outside consulting company to do the capacity planning. Eric asked a local VMware partner to perform the evaluation, only to be told that the process would take a month or two to complete. The consultants said it was impossible to provide a complete, accurate server utilization analysis without watching the servers for at least a month. Otherwise, the analysis would fail to reflect the load of processes that were not always active, such as week and month-end report runs.

That delay made good technical sense, but it did mean Eric and Brad couldn’t meet Bob’s deadline for the implementation proposal. Fortunately, Craig was pleased that an attempt to make the proposal as accurate as possible was being made and his support eventually made Bob comfortable with the delay.

The delay turned out to be good for Eric and Bob, as there were many other planning tasks that hadn’t even come close to completion yet, such as choosing the hardware and software on which they’d run the system. This analysis period would give them breathing room to work and try to figure out what they didn’t know.

When the initial capacity planning analysis did arrive some time later, it showed that most of Fergenschmeir’s applications servers were running at or below 10 percent equalized capacity, allowing for significant consolidation of the expected 72 server deployments. A sensible configuration would require eight or nine dual-socket, quad-core ESX hosts to comfortably host the existing applications, leave some room for growth, and support the failure of a single host with limited downtime.

Stage 4: Selecting the platforms

While the consultant was doing the server utilization analysis to determine which apps could run on virtual servers and which needed to stay on physical servers, the IT team at Fergenschmeir started to think about what hardware would be used as the hosts in the final implementation.

The virtualization engine
It was obvious that any hardware they chose had to be compatible with VMware ESX, the virtualization software they had tested, so infrastructure manager Eric Brown’s team started checking the VMware hardware compatibility list. But server administrator Mary Edgerton stopped the process with a simple question: “Are we even sure we want to use VMware?”

Nobody had given that question much thought in the analysis and planning done so far. VMware was well known, but there were other virtualization platforms out there. In hindsight, the only reason Eric’s team had been pursuing VMware was due to the experience that the intern, Mike Beyer, had with it. That deserved some review.

From Eric’s limited point of view, there were four main supported virtualization platforms that he could chose from. VMware Virtual Infrastructure (which includes VMware ESX Server), Virtual Iron, XenSource, and Microsoft’s Virtual Server.

Eric wasn’t inclined to go with Microsoft’s technology because, from his reading and from input from the other server administrator, Ed Blum, who had used Microsoft Virtual Server before, it wasn’t as mature nor did it perform as well as VMware. Concerns over XenSource’s maturity also gave Eric pause, and industry talk that XenSource was a potential acquisition target created uncertainty he wanted to avoid. (And indeed it was later acquired.)

Virtual Iron, on the other hand, was a different story. The two were much closer in terms of maturity, from what Eric could tell, and Virtual Iron was about a quarter the cost. This gave Eric some pause, so he talked over the pros and cons of each with CTO Brad Richter at some length.

In the end they decided to go with VMware as they had originally planned. The decision came down to the greater number of engineers with experience on the more widely deployed VMware platform and the belief that more third-party tools would be available for it. Another factor was that CEO Bob Tersitan and CFO Craig Windham had already heard the name VMware. Going with something different would require a lot of explanation and justification -- a career risk neither Eric nor Brad was willing to take.

The server selection
Once the platform question had been settled and Eric had received the initial capacity planning analysis -- which indicated the need for eight or nine dual-socket, quad-core ESX hosts -- the IT group turned its focus back to selecting the hardware platform for the revamped datacenter. Because Fergenschmeir already owned a lot of Dell and Hewlett-Packard hardware, the initial conversation centered on those two vendors. Pretty much everyone on Eric's team had horror stories about both, so they weren't entirely sure what to do. The general consensus was that HP's equipment was better in quality but Dell's cost less. Eric didn't really care at an intellectual level -- both worked with VMware's ESX Server, and his team knew both brands. But Ed and Mary, the two server administrators, loved HP's management software, so Eric felt more comfortable with that choice.

Before Eric’s team could get down to picking a server model, Bob made his presence known again by sending an e-mail to Brad that read, “Read about blades in InfoWorld. Goes well with green campaign we’re doing. Get those. On boat; call cell. -- Bob.” It turned out that Bob had made yet another excellent suggestion, given the manageability, power consumption, and air conditioning benefits of a blade server architecture.

Of course, this changed the hardware discussion significantly. Now, the type of storage chosen would matter a lot, given that blade architectures are generally more restrictive about what kinds of interconnects can be used, and in what combination, than standard servers.

For storage, Eric again had to reconsider the skills of his staff. Nobody in his team had worked with any SAN, much less Fibre Channel, before. So he wanted a SAN technology that was cheap, easy to configure, and still high-performance. After reviewing various products, cross-checking the ESX hardware compatibility list, and comparing prices, Eric decided to go with a pair of EqualLogic iSCSI arrays -- one SAS array and one SATA array for high- and medium-performance data, respectively.

This choice then dictated a blade architecture that could support a relatively large number of Gigabit Ethernet links per blade. That essentially eliminated Dell from the running, narrowing the choices to HP's c-Class architecture and Sun's 6048 chassis. HP got the nod, again due to Mary's preference for its management software. Each blade would be a dual-socket, quad-core server with 24GB of RAM and six Gigabit Ethernet ports. The IT team could increase the amount of RAM per blade later if the hosts became RAM-constrained, but this configuration seemed like a good starting place.
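
As a sanity check on that 24GB figure, here's a rough memory budget worked out from the article's own numbers -- 72 virtual servers and the nine hosts the team ultimately planned for. Treating a host failure as an even redistribution of guests, and ignoring the RAM consumed by ESX itself and its service console, are simplifying assumptions of ours, not details from the story.

import math

# Figures from the article; the even-redistribution model is an assumption.
app_servers = 72        # virtual servers expected after consolidation
esx_hosts = 9           # dual-socket, quad-core blades planned as ESX hosts
ram_per_host_gb = 24    # RAM ordered per blade

# Normal operation: guests spread across all nine hosts.
vms_per_host = math.ceil(app_servers / esx_hosts)            # 8 VMs per host
ram_per_vm_gb = ram_per_host_gb / vms_per_host               # 3.0 GB per VM

# Worst case: one host fails and the remaining eight absorb its guests.
vms_per_host_n1 = math.ceil(app_servers / (esx_hosts - 1))   # 9 VMs per host
ram_per_vm_n1_gb = ram_per_host_gb / vms_per_host_n1         # ~2.7 GB per VM

print(f"normal: {vms_per_host} VMs/host, {ram_per_vm_gb:.1f} GB RAM each")
print(f"one host down: {vms_per_host_n1} VMs/host, {ram_per_vm_n1_gb:.1f} GB RAM each")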

The network selection
The next issue to consider was what equipment Eric's team might need to add to the network. Fergenschmeir's network core consisted of a pair of older Cisco Catalyst 4503 switches, which drew together all of the fiber from the network closets but didn't quite provide enough copper density to serve all of the servers in the datacenter -- certainly not enough to dual-home all of the servers for redundancy. The previous year, someone had added an off-brand gigabit switch to take up the slack, and that obviously needed to go.

After reviewing some pricing and spec sheets, Eric decided to go with two stacks of Catalyst 3750E switches and push the still-serviceable 4503s out to the network edge. One pair of switches would reside in the telco room near the fiber terminations and perform core routing duties, while the other pair would sit down the hall and switch the server farm.

In an attempt to future-proof the design, Eric decided to get models that could support a pair of 10G links between the two stacks. These switches would ultimately cost almost as much as a single, highly redundant Catalyst 6500-series switch, but going that route would have meant either retaining the massive bundle of copper running from the telco room to the datacenter or extending the fiber drops through to the datacenter. Neither prospect was appealing.

The total platform cost
All told, the virtualization hardware and software budget was hanging right around $300,000. That included about $110,000 in server hardware, $40,000 in network hardware, $100,000 in storage hardware, and about $50,000 in VMware licensing.

This budget was based on the independent consultant's capacity planning report, which indicated that this server configuration would conservatively achieve a 10:1 consolidation ratio of virtual to physical servers -- meaning eight physical servers could handle the 72 application servers needed. Adding some failover and growth capacity brought Eric up to nine virtualization hosts and a management blade.

This approach meant that each virtualized server -- including a completely redundant storage and core network infrastructure but excluding labor and software licensing costs -- would cost about $4,200. Given that an average commodity server generally costs somewhere between $5,000 and $6,000, this seemed like a good deal. When Eric factored in that commodity servers don't offer any kind of non-application-specific high availability or load balancing, and are likely to sit more than 90 percent idle, it looked like an amazing deal.
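
Working those numbers backward makes the "about $4,200 per server" claim easy to check. Everything below comes straight from the figures quoted above; only the rounding is ours.

import math

# Figures quoted in the article.
app_servers = 72              # application servers to be virtualized
consolidation_ratio = 10      # consultant's conservative 10:1 estimate
budget = {                    # hardware and VMware licensing budget
    "servers": 110_000,
    "network": 40_000,
    "storage": 100_000,
    "vmware_licensing": 50_000,
}

base_hosts = math.ceil(app_servers / consolidation_ratio)   # 8 hosts carry the steady load
planned_hosts = base_hosts + 1                              # plus failover/growth capacity -> 9

total = sum(budget.values())                                # $300,000
cost_per_vm = total / app_servers                           # ~$4,167, i.e. "about $4,200"

print(planned_hosts, total, round(cost_per_vm))             # -> 9 300000 4167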

Before they knew it, Eric and Brad had gotten Bob’s budget approval and were faxing out purchase orders.

Stage 5: Deploying the virtualized servers

About a month after the purchase orders went out for the hardware and software selected for the server virtualization project, the Fergenschmeir IT department was up to its elbows in boxes. Literally.

This was because server administrator Mary Edgerton had ordered the chosen HP c-Class blades from a distributor instead of buying them directly from HP or a VAR and having them pre-assembled. This way, she could do the assembly (which she enjoyed) herself, and it would cost less.

As a result of this decision, more than 120 parcels showed up at Fergenschmeir's door. Just breaking down the boxes took Mary and intern Mike Beyer most of a day. Assembling the hardware wasn't particularly difficult; within the first week, they had assembled the blade chassis, installed it in the datacenter, and worked with an electrician to get new circuits wired in. Meanwhile, the other administrator, Ed Blum, had been working some late nights to swap out the core network switches.

Before long, they had VMware ESX Server installed on nine of the blades, and VirtualCenter Server installed on the blade they had set aside for management.

Unexpected build-out complexity emerges
It was at this point that things started to go sideways. Up until now, the experience Mike had gained working with VMware ESX at his college had been a great help. He knew how to install ESX Server, and he was well versed in the basics of how to manage it once it was up and running. However, he hadn't watched his college mentor configure the network stack and didn't know how ESX integrated with the SAN.

After a few fits and starts and several days of asking what they'd later realize were silly questions on the VMware online forums, Ed, Mary, and Mike did get things running, but they didn't really believe they had done it correctly. Network and disk performance weren't as good as they had expected, and every so often, they'd lose network connectivity to some VMs. The three had increasing fears that they were in over their heads.

Infrastructure manager Eric Brown realized he'd need to send his team out for extra training or get a second opinion if they were going to have any real confidence in their implementation. The next available VMware classes were a few weeks away, so Eric called in the consultant that had helped with capacity planning to assist with the build out.

Although this was a significant and unplanned expense, it turned out to be well worth it. The consultant teamed up with Mary to configure the first few blades and worked with Ed on how best to mesh the Cisco switches and VMware's fairly complex virtual networking stack. This mentoring and knowledge transfer process proved to be very valuable. Later, while Mary was sitting in her VMware class, she noted that the course curriculum wouldn't have come anywhere near preparing her to build a complete configuration on her own. Virtualization draws together so many different aspects of networking, server configuration, and storage configuration that it requires a well-seasoned jack-of-all-trades to implement successfully in a small environment.

Bumps along the migration path
Within roughly a month of starting the deployment, Eric's team had thoroughly kicked the tires, and they were ready to start migrating servers.

Larry had done a fair amount of experimenting with VMware Converter, a physical-to-virtual migration tool that ships with the Virtual Infrastructure suite. For the first few servers they moved over, he used Converter.

But it soon became clear that Converter's speed and ease of use came at a price. The migrations from the old physical servers to the new virtualized blades did eliminate some hardware-related problems that Fergenschmeir had been experiencing, but they also seemed to magnify the bugs that had crept in over years of application installations, upgrades, uninstalls, and generalized Windows rot. Some servers worked relatively well, while others performed worse than they had on the original hardware.

After a bit of digging and testing, it turned out that for Windows servers that hadn't been built recently, it was better to build the VMs from scratch, reinstall the applications, and migrate the data than to port over the existing server lock, stock, and barrel.

The result of this realization was that the migration would take much longer than planned. Sure, VMware's cloning and deployment tools allowed Ed, Mary, and Mike to deploy a clean server from a base template in four minutes, but that was the easy part. The hard part was digging through application documentation to determine how everything had been installed originally and how it should be installed now. The three spent far more time on the phone with their application vendors than they had spent figuring out how to install and configure VMware.
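
For readers curious what deploying a clean server from a base template looks like when scripted rather than clicked through in the VirtualCenter client, here is a minimal sketch using the open-source pyvmomi Python bindings for the vCenter/VirtualCenter API. It is an illustration, not what the team actually ran, and every host name, credential, template, and datastore name below is a hypothetical placeholder.

# Minimal sketch: clone a new VM from an existing template via pyvmomi.
# All names and credentials are placeholders, not details from the article.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next((obj for obj in view.view if obj.name == name), None)
    finally:
        view.DestroyView()

si = SmartConnect(host="vcenter.example.local", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    template = find_by_name(content, vim.VirtualMachine, "w2k3-base-template")
    cluster = find_by_name(content, vim.ClusterComputeResource, "prod-blades")
    datastore = find_by_name(content, vim.Datastore, "eql-sas-01")

    spec = vim.vm.CloneSpec(
        location=vim.vm.RelocateSpec(pool=cluster.resourcePool, datastore=datastore),
        powerOn=True,
        template=False,
    )
    # Clone() starts an asynchronous CloneVM_Task on the management server.
    task = template.Clone(folder=template.parent, name="app-server-01", spec=spec)
finally:
    Disconnect(si)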

Another painful result of their naivete emerged: Although they had checked their hardware against VMware's compatibility list during the project planning, no one had thought to ask the application vendors if they supported a virtualized architecture. In some cases, the vendors simply did not.

These application vendors hadn't denied Fergenschmeir support when their applications had been left running on operating systems that hadn't been patched for years, and they hadn't cared when the underlying hardware was on its last legs. But they feared and distrusted their applications running on a virtualized server.

In some cases, it was simply an issue of the software company not wanting to take responsibility for the configuration of the underlying infrastructure. The IT team understood this concern and accepted the vendors' caution that if any hardware-induced performance problems emerged, they were on their own -- or at least had to reproduce the issue on an unvirtualized server.

In other cases, the vendors were simply ignorant about virtualization. Some support contacts would assume the team was talking about VMware Workstation or VMware Server rather than a hypervisor-on-hardware product such as VMware ESX, so Eric's staff learned to identify the less knowledgeable support reps and ask for another technician when this happened.

But one company outright refused to provide installation support on a virtual machine. The solution to this turned out to be hanging up and calling the company back. This time they didn't breathe the word "virtual," and the tech happily helped them through the installation and configuration.

These application vendors' hesitance, ignorance, and outright refusal to support virtualization didn't make anyone in Fergenschmeir's IT department feel very comfortable, but the team hadn't yet seen a problem it could really attribute to the virtualized hardware. Privately, Eric and CTO Brad Richter discussed the fact that they had unwittingly taken on a fairly large liability, but there wasn't much they could do about it now.

Stage 6: Learning from the experience

About five months after the Fergenschmeir IT team had started unpacking boxes, they were done with the server virtualization deployment.

In the end, it took about a month and a half to get the VMware environment stable, train the team, and test the environment enough to feel comfortable with it. It took another three months to manually migrate every application while still maintaining a passable level of support to their user base.

Toward the end of the project, infrastructure manager Eric Brown started leaning on outsourced providers for manpower to speed things up, but his team did most of the core work.

In the months following the migration, Eric was pleasantly surprised by how stable the server network had become. Certainly, there had been several curious VMware-specific bugs, mostly with regard to management, but nothing to the degree that they had been dealing with before they rationalized the architecture and migrated to the virtual environment.

The painful act of rebuilding the infrastructure from the ground up also gave Eric's team an excellent refresher on how the application infrastructure functioned. Eric made sure they capitalized on this by insisting that every step of the rebuild be documented. That way, if another fundamental technology or architecture change was ever needed, they'd be ready.
