
Tuesday, August 28, 2007

LHC Computing


presented by Md Moshiur Rahman


When the LHC is fully operational, it will produce roughly 1 billion proton-proton collision events per second in the detectors (40 million bunch crossings per second). This data will be heavily filtered so that only about 100 events of interest per second will be recorded permanently. Each event represents a few Megabytes of data, so the total data rate from the experiments will be of order 1 Gigabyte per second.


Including raw data, processed data and simulated data, the LHC will produce each year about 15 Petabytes (15 million Gigabytes) of data, the equivalent of about 20 million CDs! Copies of the data from one or more experiments will be stored at a dozen major computing centres, the so-called Tier-1 centres, and the analysis will be carried out by a Grid of over 100 computer centres in universities and research labs around the world, the Tier-2 centres. This computing Grid will allow thousands of scientists to access and analyse the LHC data, a task which will require a total computing power equivalent to ~ 100,000 of today's standard PC processors.
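
As a quick back-of-the-envelope check of these figures (a rough sketch only; the 10 MB event size and 700 MB CD capacity are assumed round values, since the text says only "a few Megabytes" per event):

```python
# Rough sanity check of the quoted data rates, using assumed round values.
events_per_second = 100       # events recorded after filtering
mb_per_event = 10             # "a few Megabytes" per event (assumption)
print(events_per_second * mb_per_event / 1000, "GB/s")   # ~1.0 GB/s

petabytes_per_year = 15
cd_capacity_mb = 700          # standard CD-ROM capacity (assumption)
cds = petabytes_per_year * 1e9 / cd_capacity_mb          # 1 PB = 1e9 MB
print(round(cds / 1e6), "million CDs per year")          # ~21 million
```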


Several projects are involved in providing the necessary computing Grid infrastructure and associated middleware for LHC computing. The LHC Computing Grid project (LCG) currently operates the world's largest scientific Grid, with over 130 sites in 31 countries contributing resources, including more than 10,000 CPUs and several Petabytes of storage.


Other Grid projects contributing essential resources and know-how for LHC computing include:


National and Regional Grid infrastructure projects:
EGEE
GÉANT
Grid2003
GridPP
INFN Grid
LCG France
NorduGrid
PPDG
WestGrid


Grid middleware development projects:
CROSSGRID
Globus
GriPhyN
iVDGL
Virtual Data Toolkit


Industrial collaboration:
CERN openlab


For more information about Grid technology, visit the GridCafé.





LHC experiments


Due to switch on in 2007, the LHC will provide collisions at the highest energies ever observed in laboratory conditions, and physicists are eager to see what they will reveal. Four huge detectors - ALICE, ATLAS, CMS and LHCb - will observe the collisions so that the physicists can explore new territory in matter, energy, space and time. There is a fifth experiment called TOTEM (Total Cross Section, Elastic Scattering and Diffraction Dissociation at the LHC).


ALICE
Web site at CERN
Members of the collaboration

Contact information
Chairperson: Lodovico Riccati
Phone secretariat: +41 22 767 2771
alice.secretariat@cern.ch

ATLAS

Web site at CERN
Members of the collaboration

Contact information
Spokesperson: Peter Jenni
Phone: +41 22 767 3046
Peter.Jenni@cern.ch

CMS
Web site at CERN
Members of the collaboration

Contact information
Chairperson: L. Foa
cms.outreach@cern.ch

LHCb
Web site at CERN
Members of the collaboration

Contact information
Spokesperson: Tatsuya Nakada
Phone secretariat: +41 22 767 9278
lhcb.secretariat@cern.ch

TOTEM
Web site at CERN
Members of the collaboration

Contact information
Spokesman: Karsten Eggert
Karsten.Eggert@cern.ch





LHC (Large Hadron Collider) facts


WHAT IS THE LHC?

Where is it?
The LHC is being installed in a tunnel 27 km in circumference, buried 50-175 m below ground. Located between the Jura mountain range in France and Lake Geneva in Switzerland, the tunnel was built in the 1980s for the previous big accelerator, the Large Electron Positron collider (LEP). The tunnel slopes at a gradient of 1.4% towards Lake Geneva.


What will it do?
The LHC will produce head-on collisions between two beams of particles, either protons or lead ions. The beams will be created in CERN's existing chain of accelerators and then injected into the LHC. These beams will travel through a vacuum comparable to outer space. Superconducting magnets operating at extremely low temperatures will guide them around the ring. Each beam will consist of nearly 3000 bunches of particles and each bunch will contain as many as 100 billion particles. The particles are so tiny that the chance of any two colliding is very small. When the particle beams cross, there will be only about 20 collisions among 200 billion particles. However, the particle beams will cross about 40 million times per second, so the LHC will generate about 800 million collisions per second.
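
The collision-rate arithmetic in the paragraph above can be verified directly (a trivial check using the quoted numbers):

```python
# ~20 collisions per crossing, 40 million crossings per second
crossings_per_second = 40_000_000
collisions_per_crossing = 20
print(crossings_per_second * collisions_per_crossing)  # 800,000,000 collisions/s
```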


What is it for?
Due to switch on in 2007, the LHC will provide collisions at the highest energies ever observed in laboratory conditions and physicists are eager to see what they will reveal. Four huge detectors - ALICE, ATLAS, CMS and LHCb - will observe the collisions so that the physicists can explore new territory in matter, energy, space and time. A fifth experiment, TOTEM, installed with CMS, will study collisions where the protons experience only very small deflections.


How powerful?
The LHC is a machine for concentrating energy into a very small space. Particle energies in the LHC are measured in tera electronvolts (TeV). 1 TeV is roughly the energy of a flying mosquito, but a proton is about a trillion times smaller than a mosquito. Each proton flying round the LHC will have an energy of 7 TeV, so when two protons collide the collision energy will be 14 TeV. Lead ions have many protons, so they can be accelerated to even greater energy: the lead ion beams will have a collision energy of 1150 TeV. At full power, each beam will be about as energetic as a car travelling at 2100 kph. The energy stored in the magnetic fields will be even greater, equivalent to a car at 10 700 kph.


At near light-speed, a proton in a beam will make 11 245 turns per second. A beam might circulate for 10 hours, travelling more than 10 billion kilometres - far enough to get to the planet Neptune and back again.
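
These comparisons can be checked with the numbers quoted above (a rough sketch; the 2-tonne car mass is an assumption made to complete the comparison):

```python
# Verifying the stored-energy and distance comparisons with quoted figures.
EV_TO_J = 1.602e-19                        # joules per electronvolt

protons_per_beam = 3000 * 100e9            # ~3000 bunches x ~100 billion protons
beam_energy_mj = protons_per_beam * 7e12 * EV_TO_J / 1e6
print(f"energy per beam ~ {beam_energy_mj:.0f} MJ")       # ~336 MJ

car_ke_mj = 0.5 * 2000 * (2100 / 3.6) ** 2 / 1e6          # 2-tonne car at 2100 kph
print(f"car at 2100 kph ~ {car_ke_mj:.0f} MJ")            # ~340 MJ: comparable

distance_km = 11_245 * 27 * 10 * 3600      # turns/s x ring length (km) x 10 hours
print(f"10-hour fill ~ {distance_km/1e9:.1f} billion km") # ~10.9 billion km
```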


How will it work?
To control beams at such high energies the LHC will use some 7000 superconducting magnets. These electromagnets are built from superconducting materials: at low temperatures they can conduct electricity without resistance, and so create much stronger magnetic fields than ordinary electromagnets. The LHC's niobium-titanium magnets will operate at a temperature of only 1.9 K (-271°C). The strength of a magnetic field is measured in units called tesla. The LHC will operate at about 8 tesla, whereas ordinary "warm" magnets can achieve a maximum field of about 2 tesla. If the LHC used ordinary "warm" magnets instead of superconductors, the ring would have to be at least 120 km in circumference to achieve the same collision energy.
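
The ring-size claim follows from basic beam physics: the bending radius of a particle of fixed momentum is r = p/(qB), so it scales as 1/B. A quick check (a sketch; it ignores that only part of a real ring is filled with bending magnets, which is why the text says "at least" 120 km):

```python
# Bending radius scales as 1/B for fixed beam momentum (r = p / (qB)).
lhc_km = 27                 # circumference with ~8 T superconducting dipoles
b_superconducting = 8.0     # tesla
b_warm = 2.0                # tesla, typical "warm" magnet limit

print(lhc_km * b_superconducting / b_warm, "km of bending needed")  # 108 km
# With straight sections and non-dipole elements added, the full ring
# would exceed the ~120 km quoted above.
```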

LHC: the guide
A collection of facts and figures about the Large Hadron Collider (LHC) in the form of questions and answers.





LHC


presented by Md Moshiur Rahman


The LHC is the next step in a voyage of discovery which began a century ago. Back then, scientists had just discovered all kinds of mysterious rays: X-rays, cathode rays, alpha and beta rays. Where did they come from? Were they all made of the same thing, and if so, what?


These questions have now been answered, giving us a much greater understanding of the Universe. Along the way, the answers have changed our daily lives, giving us televisions, transistors, medical imaging devices and computers.


On the threshold of the 21st century, we face new questions which the LHC is designed to address. Who can tell what new developments the answers may bring?









Building 904, where the short straight sections are being assembled, is often called "Lego Land" by the workers because of the wide variety of these sets of magnets and cryostats.








The mirrors of the RICH2 detector, one of the two Ring Imaging Cherenkov detectors of the LHCb experiment, are meticulously assembled in a clean room.




Acer Inc. plans to acquire Gateway Inc


Acer Inc. plans to acquire Gateway Inc. in a deal worth $710 million that Acer says will make it the world's third-largest PC vendor.


Under terms of the agreement announced Monday, Acer will purchase all of Gateway's outstanding shares for $1.90 per share. The deal has already been approved by the boards of directors at both companies and should be completed by the end of this year, subject to government approval, Acer said in a statement. Gateway's shares ended at $1.21 Friday on the New York Stock Exchange.


"This is the biggest acquisition in Acer's 30 year history," said J.T. Wang, Acer's chairman, speaking at a news conference in Taipei.


"After this acquisition, we are solidly number three in the global PC market," Wang said.


Acer's acquisition deal with Gateway also derails rival Lenovo Group Ltd.'s plans to acquire Packard Bell BV.


Alongside the acquisition deal with Acer, Gateway unveiled plans to exercise its right of first refusal to acquire shares in Packard Bell's parent company, PB Holding Co. SARL, from John Hui. Hui is the founder of eMachines Inc., which Gateway acquired in 2004, and the largest shareholder in Packard Bell.


Gateway did not disclose how much it has offered for Hui's stake in PB Holding.


Acer's efforts to overtake Lenovo will get a big boost from Gateway, which was the world's eighth largest PC vendor during 2006. Together Acer and Gateway shipped 18.6 million PCs during 2006, compared to 16.6 million PCs shipped by Lenovo.


The Gateway acquisition will have the greatest impact in the U.S., where Acer has been growing fast but remains in sixth place among PC vendors.


"This is definitely a good play for them from the U.S. consumer perspective," said Bryan Ma, director of personal systems research at IDC Asia-Pacific. However, the big question is how Acer plans to integrate Gateway with its own operations, and how smoothly the integration process will go, he said.


Acer's share of the U.S. PC market grew 164 percent during the second quarter of 2007, compared to the same period last year. Acer shipped 888,000 PCs to U.S. customers, giving the company a 5.2 percent share of the market.


By comparison, Gateway was the fourth-largest PC vendor during the second quarter, shipping 965,000 PCs and taking a 5.6 percent share of the U.S. PC market. The Gateway acquisition vaults Acer into the number three spot in the U.S. PC market, behind only HP and Dell.


"Acer is an outstanding strategic partner for Gateway," said Ed Coleman, CEO of Gateway, in a video feed at the Taipei news conference to announce the deal.


Gateway reported net income of $1.9 million for the second quarter, compared to a loss of $7.7 million one year earlier. The company said gains in its retail division during the period were offset by declining revenue in its professional and direct divisions.


However, talks are currently underway to sell off the professional division to a third party, Gateway said Monday. The company did not offer details of those discussions.


Gateway (telecommunications)


In telecommunications, the term gateway has the following meanings:


In a communications network, a network node equipped for interfacing with another network that uses different protocols.
A gateway may contain devices such as protocol translators, impedance matching devices, rate converters, fault isolators, or signal translators as necessary to provide system interoperability. It also requires the establishment of mutually acceptable administrative procedures between the two networks.
A protocol translation/mapping gateway interconnects networks with different network protocol technologies by performing the required protocol conversions.
Loosely, a computer configured to perform the tasks of a gateway. For a specific case, see default gateway.
Routers exemplify special cases of gateways.


Gateways, also called protocol converters, can operate at any layer of the OSI model. The job of a gateway is much more complex than that of a router or switch. Typically, a gateway must convert one protocol stack into another.



Examples


A gateway may connect an AppleTalk network to nodes on a DECnet network.
A very popular example is connecting a Local Area Network or Wireless LAN to the Internet or another Wide Area Network. In this case the gateway connects an IPX/SPX network (the LAN) to a TCP/IP network (the Internet).
MainWay is the Bull brand for a gateway which connects DSA to TCP/IP.



Connecting IP Networks


Gateways that connect two IP-based networks have two IP addresses, one on each network. A gateway address like 192.168.1.1 is a private address; it is the address to which traffic from the LAN is sent. The other IP address is the Wide Area Network (WAN) address, to which traffic coming from the WAN is sent. When the WAN is the Internet, that address is usually assigned by an ISP.


When people talk about the gateway IP address, they commonly mean the gateway's LAN address.
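
A small sketch may help make the "default gateway" idea concrete. This is not a real network stack, just the routing decision a LAN host makes; the addresses are illustrative private and documentation examples:

```python
import ipaddress

LOCAL_NETWORK = ipaddress.ip_network("192.168.1.0/24")
DEFAULT_GATEWAY = ipaddress.ip_address("192.168.1.1")  # the gateway's LAN address

def next_hop(destination: str) -> str:
    """Send directly if the destination is on the local network,
    otherwise hand the packet to the default gateway."""
    if ipaddress.ip_address(destination) in LOCAL_NETWORK:
        return f"deliver directly to {destination}"
    return f"forward to default gateway {DEFAULT_GATEWAY}"

print(next_hop("192.168.1.42"))    # on the LAN: delivered directly
print(next_hop("198.51.100.10"))   # outside: sent via the gateway
```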


If private addressing is used, then the addresses of computers connected to the LAN are hidden behind the WAN gateway. That is, remote computers located "out there" on the WAN can only communicate with LAN stations via the gateway's WAN IP address. To regulate traffic between the WAN and the LAN, the gateway commonly performs Network Address Translation (NAT), presenting all of the LAN traffic to the WAN as coming from the gateway's WAN IP address and handling the sorting and distribution of return WAN traffic to the local network.
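
The following toy sketch illustrates the NAT bookkeeping described above (illustrative only; real NAT rewrites packets in the kernel, and the addresses and ports here are invented):

```python
WAN_ADDRESS = "203.0.113.7"   # the gateway's public WAN address (example)

nat_table = {}                # WAN port -> (LAN IP, LAN port)
next_wan_port = 40000

def outbound(lan_ip: str, lan_port: int):
    """Rewrite an outgoing flow so it appears to come from the WAN address."""
    global next_wan_port
    wan_port = next_wan_port
    next_wan_port += 1
    nat_table[wan_port] = (lan_ip, lan_port)
    return WAN_ADDRESS, wan_port

def inbound(wan_port: int):
    """Map a reply arriving from the WAN back to the originating LAN host."""
    return nat_table.get(wan_port)   # None if no matching flow exists

src = outbound("192.168.1.42", 51515)
print("packet leaves as", src)                 # ('203.0.113.7', 40000)
print("reply delivered to", inbound(src[1]))   # ('192.168.1.42', 51515)
```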





Light on out-of-body experiences


Researchers have found a way to induce out-of-body experiences using virtual-reality goggles, helping to explain a phenomenon reported by about one in 10 people.


The illusion of watching oneself from several feet away while awake is often reported by people undergoing strokes or epileptic seizures or using drugs.


In the studies, published in Thursday's issue of the journal Science, two teams of researchers managed to induce the effect in healthy people by scrambling their senses of vision and touch with the aid of the goggles.


"We...describe an illusion during which healthy participants experienced a virtual body as if it were their own, and localized their 'selves' outside their body borders at a different position in space," wrote Olaf Blanke, a researcher at the Ecole Polytechnique Federale de Lausanne in Switzerland.


One team, led by Henrik Ehrsson at University College London, had volunteers sit in a chair in the middle of a room wearing virtual-reality goggles showing the view from a video camera placed behind them.


A researcher moved a rod up to the camera at the same time as the person's chest was touched, and then the rod disappeared from view.


This created the illusion that the person was sitting a few steps back, where the camera stood.


In Blanke's experiment, subjects wearing virtual-reality goggles watched an image of a mannequin representing their own body placed directly in front of them while a researcher scratched their back.


Afterwards, the volunteers were blindfolded and guided backwards. When they were asked to return to their original positions, they went toward the place where they had seen their virtual body--the mannequin.


The researchers said mixing up the senses of sight and touch was key to the experiments.


"We tried to take two modalities--sight and touch--and systematically dissociate the information with those two senses, using virtual information to do this," Blanke said in a telephone interview. "It is a mismatch between the two senses."


This type of experiment could help to shed light on philosophical questions surrounding the sense of self, and could also lead to more practical applications in video games or remote surgery, the researchers said.


This could involve providing tactile information to a surgeon who is using video to control robot arms in a remote operating theater, said Ehrsson, now at Sweden's Karolinska Institute.


"In the best case it would be the whole self transported to the operating theater," he said. "This experiment will help to improve things like that."


Virtual reality


Virtual reality (VR) is a technology which allows a user to interact with a computer-simulated environment, be it a real or imagined one. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Some advanced haptic systems now include tactile information, generally known as force feedback, in medical and gaming applications. Users can interact with a virtual environment or a virtual artifact (VA) either through the use of standard input devices such as a keyboard and mouse, or through multimodal devices such as a wired glove, the Polhemus boom arm, or an omnidirectional treadmill. The simulated environment can be similar to the real world, for example, simulations for pilot or combat training, or it can differ significantly from reality, as in VR games. In practice, it is currently very difficult to create a high-fidelity virtual reality experience, due largely to technical limitations on processing power, image resolution and communication bandwidth. However, those limitations are expected to eventually be overcome as processor, imaging and data communication technologies become more powerful and cost-effective over time.


Terminology
The origin of the term virtual reality is uncertain. It appears in The Judas Mandala, a 1982 science fiction novel by Damien Broderick, where the context of use is somewhat different from that defined above. The VR developer Jaron Lanier claims that he coined the term.[1] A related term coined by Myron Krueger, "artificial reality", has been in use since the 1970s. The concept of virtual reality was popularized in mass media by movies such as Brainstorm and The Lawnmower Man (and others mentioned below), and the VR research boom of the 1990s was motivated in part by the non-fiction book Virtual Reality by Howard Rheingold. The book served to demystify the subject, making it more accessible to less technical researchers and enthusiasts, with an impact similar to what his book The Virtual Community had on virtual community research lines closely related to VR.



VR timeline
Morton Heilig wrote in the 1950s of an "Experience Theatre" that could encompass all the senses in an effective manner, thus drawing the viewer into the onscreen activity. He built a prototype of his vision, dubbed the Sensorama, in 1962, along with five short films to be displayed in it while engaging multiple senses (sight, sound, smell, and touch). Predating digital computing, the Sensorama was a mechanical device, which reportedly still functions today. In 1968, Ivan Sutherland, with the help of his student Bob Sproull, created what is widely considered to be the first virtual reality and augmented reality (AR) head-mounted display (HMD) system. It was primitive both in terms of user interface and realism: the HMD worn by the user was so heavy it had to be suspended from the ceiling, and the graphics comprising the virtual environment were simple wireframe rooms. The formidable appearance of the device inspired its name, The Sword of Damocles. Also notable among the earlier hypermedia and virtual reality systems was the Aspen Movie Map, created at MIT in 1977. The program was a crude virtual simulation of Aspen, Colorado, in which users could wander the streets in one of three modes: summer, winter, and polygons. The first two were based on photographs - the researchers actually photographed every possible movement through the city's street grid in both seasons - and the third was a basic 3-D model of the city. In the late 1980s the term "virtual reality" was popularized by Jaron Lanier, one of the modern pioneers of the field. Lanier had founded the company VPL Research (from "Virtual Programming Languages") in 1985, which developed and built some of the seminal "goggles n' gloves" systems of that decade.


Future
It is unclear exactly where the future of virtual reality is heading. In the short run, the graphics displayed in the HMD will soon reach a point of near realism. The audio capabilities will move into a new realm of three-dimensional sound, meaning the addition of sound channels both above and below the individual. The virtual reality application of this future technology will most likely come in the form of over-ear headphones.


Within existing technological limits, sight and sound are the two senses which best lend themselves to high-quality simulation. There are, however, attempts currently being made to simulate smell. The current research is linked to a project aimed at treating Post Traumatic Stress Disorder (PTSD) in veterans by exposing them to combat simulations, complete with smells. Although virtual reality is often seen in the context of entertainment by popular culture, this illustrates the point that its future is very much tied to therapeutic, training, and engineering demands. Given that fact, a full sensory immersion beyond basic tactile feedback, sight, sound, and smell is unlikely to be a goal in the industry. It is worth mentioning that while smells can be simulated very realistically, doing so requires costly research and development to produce each odor, and the machine itself is expensive and specialized, using capsules tailor-made for it. Thus far, basic and very strong smells such as burning rubber, cordite, and gasoline fumes have been made. Something complex such as a food product or a specific flower would be prohibitively expensive (see the perfume industry as an example).


To engage the remaining sense, taste, the brain must be manipulated directly. This would move virtual reality into the realm of simulated reality, like the "head-plugs" used in The Matrix. Although no form of this has been seriously developed at this point, Sony has taken a first step. On April 7, 2005, Sony went public with the information that it had filed for and received a patent for the idea of non-invasively beaming different frequencies and patterns of ultrasonic waves directly into the brain to recreate all five senses.[2] There has been research suggesting that this is possible. Sony has not conducted any tests as of yet and says that it is still only an idea.


It has long been feared that Virtual Reality will be the last invention of man, as once simulations become cheaper and more widespread, no one will ever want to leave their "perfect" fantasies. Satirists, however, have nodded towards humans' aversion to catheters and starvation.



Impact
There has been increasing interest in the potential social impact of new technologies, such as virtual reality (as may be seen in utopian literature, within the social sciences, and in popular culture). Mychilo S. Cline, in his book, Power, Madness, and Immortality: The Future of Virtual Reality, argues that virtual reality will lead to a number of important changes in human life and activity. He argues that:


Virtual reality will be integrated into daily life and activity and will be used in various human ways.
Techniques will be developed to influence human behavior, interpersonal communication, and cognition (i.e., virtual genetics).[3]
As we spend more and more time in virtual space, there will be a gradual "migration to virtual space," resulting in important changes in economics, worldview, and culture.
The design of virtual environments may be used to extend basic human rights into virtual space, to promote human freedom and well-being, and to promote social stability as we move from one stage in socio-political development to the next.


Heritage and Archaeology
The use of VR in heritage and archaeology has enormous potential in museum and visitor centre applications, but its use has been tempered by the difficulty of presenting a "quick to learn" real-time experience to numerous people at any given time. Many historic reconstructions tend to be presented in a pre-rendered format on a shared video display, thus allowing more than one person to view a computer-generated world, but limiting the interaction that full-scale VR can provide. The first use of a VR presentation in a heritage application was in 1994, when a museum visitor interpretation system provided an interactive "walk-through" of a 3D reconstruction of Dudley Castle in England as it was in 1550. This consisted of a computer-controlled laserdisc-based system designed by British engineer Colin Johnson. One of the first users of virtual reality was Her Majesty Queen Elizabeth II, who officially opened the visitor centre in June 1994. The system featured in a conference held by the British Museum in November 1994 and in the subsequent technical paper 'Imaging the Past' - Electronic Imaging and Computer Graphics in Museums and Archaeology (ISBN 0861591143).


Mass media
Mass media has been a great advocate of, and perhaps a great hindrance to, VR's development over the years. During the research "boom" of the late 1980s into the 1990s, the news media's prognostication on the potential of VR - and the overexposure that came from publishing the predictions of anyone who had one (whether or not that person had a true perspective on the technology and its limits) - built up expectations of the technology so high as to be impossible to achieve with the technology of the time, or indeed with any technology to date. Entertainment media reinforced these concepts with futuristic imagery many generations beyond contemporary capabilities.



Fiction books


Many science fiction books and movies have imagined characters being "trapped in virtual reality". One of the first modern works to use this idea was Daniel F. Galouye's novel Simulacron-3, which was made into a German teleplay titled Welt am Draht ("World on a Wire") in 1973 and into a movie titled The Thirteenth Floor in 1999. Other science fiction books have promoted the idea of virtual reality as a partial, but not total, substitution for the misery of reality (in the sense that a pauper in the real world can be a prince in VR), or have touted it as a method for creating breathtaking virtual worlds in which one may escape from Earth's now-toxic atmosphere. In some such stories, the characters are unaware of their situation because their minds exist within a shared, idealized virtual world known as Dream Earth, where they grow up, live, and die, never knowing the world they live in is but a dream. In the early 1960s, Stanisław Lem wrote the short story "Dziwne skrzynie profesora Corcorana" ("The Strange Chests of Professor Corcoran"), in which he presented a scientist who devised a completely artificial virtual reality; among the beings trapped inside his created virtual world is another scientist, who in turn devised such machines, creating a further level of virtual world.


The Piers Anthony novel Killobyte follows the story of a paralysed cop trapped in a virtual reality game by a hacker, whom he must stop in order to save a fellow trapped player, a diabetic slowly succumbing to insulin shock. The novel toys with virtual reality's potential positive therapeutic uses, such as allowing the paralysed to experience the illusion of movement while stimulating unused muscles, as well as its dangers.


An early short science fiction story - "The Veldt" - about an all too real "virtual reality" was included in the 1951 book The Illustrated Man, by Ray Bradbury and may be the first fictional work to fully describe the concept.


Other popular fictional works that use the concept of virtual reality include William Gibson's Neuromancer which defined the concept of cyberspace, Neal Stephenson's Snow Crash, in which he made extensive reference to the term "avatar" to describe one's representation in a virtual world, and Rudy Rucker's The Hacker and the Ants, in which programmer Jerzy Rugby uses VR for robot design and testing.



Television
Perhaps the earliest example of virtual reality on television is the Doctor Who serial "The Deadly Assassin". This story, first broadcast in 1976, introduced a dream-like computer-generated reality known as the Matrix (no relation to the film - see below). The first major television series to showcase virtual reality was Star Trek: The Next Generation, which featured the holodeck, a virtual reality facility on starships that enabled its users to recreate and experience anything they wanted. One difference from current virtual reality technology, however, was that replicators, force fields, holograms, and transporters were used to actually recreate and place objects in the holodeck, rather than relying solely on the illusion of physical objects, as is done today.


In Japan and Hong Kong, the first anime series to use the idea of virtual reality was Video Warrior Laserion (1984).


The anime series Serial Experiments Lain included a virtual reality world known as "The Wired" that eventually co-existed with the real world.


Channel 4's Gamesmaster (1992 - 1998) also used a VR headset in its "tips and cheats" segment.


BBC 2's Cyberzone (1993) was the first true "virtual reality" game show. It was presented by Craig Charles.


FOX's VR.5 (1995) starring Lori Singer and David McCallum, used what appeared to be mistakes in technology as part of the show's on-going mystery.


In 2002, series 4 of the hit New Zealand teen sci-fi TV series The Tribe featured the arrival of a new tribe to the city, The Technos, who tried to gain power by introducing virtual reality to the city. The tribes would battle each other in the virtual world in a "game" designed by the leader of The Technos, Ram. However, the effects of VR on the people turned nasty when they started to fight in the real world as well, after too much use made them unable to tell the difference between what was real and what was virtual.


In 2005, Brazil's Globo TV featured a show in which VR helmets were used by the attending audience in a space simulation called Conquista de Titã, broadcast to more than 20 million viewers weekly.


In the anime Yu-Gi-Oh!, the main character's rival, Seto Kaiba, creates a virtual reality world for playing the game of Duel Monsters, in which a player who loses all of their life points is deleted.


The popular .hack multimedia franchise is based on a virtual reality MMORPG ironically dubbed "The World".



Motion pictures
Steven Lisberger's film TRON was the first mainstream Hollywood picture to explore the idea, which was popularized more recently by the Wachowski brothers in 1999's The Matrix. The Matrix was significant in that it presented virtual reality and reality as often overlapping, and sometimes indistinguishable. Total Recall and David Cronenberg's film eXistenZ dealt with the danger of confusion between reality and virtual reality in computer games. Cyberspace became something that most movies completely misunderstood, as seen in The Lawnmower Man. The British comedy Red Dwarf also used, in several episodes, the idea that life (or at least the life seen on the show) is a virtual reality game. This idea was also used in Spy Kids 3-D: Game Over. Another movie with a bizarre theme is Brainscan, where the point of the game is to be a virtual killer. A more artistic and philosophical perspective on the subject can be seen in Avalon. There is also a 1995 film called Virtuosity, with Denzel Washington and Russell Crowe, that dealt with a virtual serial killer, created to train law enforcement personnel, who escapes his virtual reality into the real world.


Music videos
The lengthy video for hard rock band Aerosmith's 1993 single "Amazing" depicted virtual reality, going so far as to show two young people participating in virtual reality simultaneously from their separate personal computers, each not knowing the other was also participating; in the simulation, the two engage in a steamy makeout session, sky-dive, and embark on a motorcycle journey together.


Games
In 1991, the company Virtuality (originally W Industries, later renamed) licensed the Amiga 3000 for use in its VR machines and released a VR gaming system called the 1000CS. This was a stand-up immersive HMD platform with a tracked 3D joystick. The system featured several VR games, including Dactyl Nightmare (shoot-em-up), Legend Quest (adventure and fantasy), Hero (VR puzzle), and Grid Busters (shoot-em-up). The Virtual Reality I Glasses Personal Display System is a visor-and-headphones headset that is compatible with any video input, including 3D broadcasting, and usable with most game systems (Nintendo, PlayStation, etc.). The Virtual Reality World 3D Color Ninja game comes with a headset visor and ankle and wrist straps that sense the player's punches and kicks. The Virtual Reality Wireless TV Tennis Game comes with a toy tennis racket that senses the player's swing, while Wireless TV Virtual Reality Boxing includes boxing gloves that the player wears and jabs with. Nintendo's Virtual Boy was sold for only one year, 1995. Bob Ladrach brought Virtual Knight into the major theme park arcades in 1994. Aura Interactor Virtual Reality Game Wear is a chest and back harness through which the player can feel punches, explosions, kicks, uppercuts, slam-dunks, crashes, and bodyblows. It works with Sega Genesis and Super Nintendo.


In the Mage: The Ascension role-playing game, the mage tradition of the Virtual Adepts is presented as the real creators of VR. The Adepts' ultimate objective is to move into virtual reality, scrapping their physical bodies in favour of improved virtual ones. Also, the .hack series centers on a virtual reality video game; it shows the potentially dangerous side of virtual reality, demonstrating possible adverse effects on human health, including viruses and a comatose state into which some players fall. Metal Gear Solid relies heavily on VR, either as part of the plot or simply to guide players through training sessions. In Kingdom Hearts II, the character Roxas lives in a virtual Twilight Town until he merges with Sora. In System Shock, the player has implants that allow him to enter a kind of cyberspace. Its sequel, System Shock 2, also features some minor levels of VR.


Due to the increasing popularity of massively multiplayer games and simulated reality fiction, some members of the MMOG community have jokingly compared real life to MMORPG mechanics.


In 2006, a German company released VR 3D Dragonflight.

Fine Art
David Em was the first fine artist to create navigable virtual worlds, in the 1970s. His early work was done on mainframes at III, JPL, and Caltech. Jeffrey Shaw explored the potential of VR in fine arts with early works like Legible City (1989), Virtual Museum (1991), and Golden Calf (1994). Canadian artist Char Davies created the immersive VR art pieces Osmose (1995) and Ephémère (1998). Maurice Benayoun's work introduced metaphorical, philosophical or political content, combining VR, networks, generation and intelligent agents, in works like Is God Flat (1994), The Tunnel under the Atlantic (1995), and World Skin (1997). Other pioneering artists working in VR have included Rita Addison, Rebecca Allen, Perry Hoberman, Jacki Morie, and Brenda Laurel.

Marketing
A side effect of the chic image that has been cultivated for virtual reality in the media is that advertising and merchandise have been associated with VR over the years to take advantage of the buzz. This is often seen in product tie-ins with cross-media properties, especially gaming licenses, with varying degrees of success. The NES Power Glove by Mattel from the 1980s was an early example, as were the U-Force and, later, the Sega Activator. Marketing ties between VR and video games are to be expected, given that much of the progress in 3D computer graphics and virtual environment development (traditional hallmarks of VR) has been driven by the gaming industry over the last decade. TV commercials featuring VR have also been made for other products, such as Nike's "Virtual Andre" in 1997, featuring a teenager playing tennis using a goggle and gloves system against a computer-generated Andre Agassi.


Health care education
While its use is still not widespread, virtual reality is finding its way into the training of health care professionals. Use ranges from anatomy instruction to surgery simulation. Annual conferences are held to examine the latest research in utilizing virtual reality in the medical fields.



Therapeutic


The primary use of VR in a therapeutic role is its application to various forms of exposure therapy, ranging from phobia treatments to newer approaches to treating PTSD. A very basic VR simulation with simple sight and sound models has been shown to be invaluable in phobia treatment (notable examples are various zoophobias and acrophobia) as a step between basic exposure therapy, such as the use of simulacra, and true exposure. A much more recent application is being piloted by the U.S. Navy, which is using a much more complex simulation to immerse veterans (specifically of Iraq) suffering from PTSD in simulations of urban combat settings. While this sounds counterintuitive, talk therapy has limited benefits for people with PTSD, which is now thought by many to result from changes either to the limbic system in particular or to the body's stress response in general. Much as in phobia treatment, exposure to the subject of the trauma or fear seems to lead to desensitization and a significant reduction in symptoms.


Real estate
The real estate sector has used the term "virtual reality" for websites that offer panoramic images presented in a viewer such as QuickTime Player, in which the viewer can rotate the image to see all 360 degrees of it.


A Phoenix-based research and design company has launched what it calls a VuPOD. Using this device, a prospective buyer can request images of any property for sale and view them in the VuPOD. The images immerse the viewer in a full 360-degree scene, complete with floor plans of the property, so the viewer can tell where each room sits in relation to the rest of the property.


Challenges
Virtual reality has been heavily criticized for being an inefficient method for navigating non-geographical information. At present, the idea of ubiquitous computing is very popular in user interface design, and this may be seen as a reaction against VR and its problems. In reality, these two kinds of interfaces have totally different goals and are complementary. The goal of ubiquitous computing is to bring the computer into the user's world, rather than force the user to go inside the computer. The current trend in VR is actually to merge the two user interfaces to create a fully immersive and integrated experience. See simulated reality for a discussion of what might have to be considered if a flawless virtual reality technology were possible. Another obstacle is the headaches caused by eye strain from VR headsets. RSI can also result from repeated use of input gloves.





Vietnam lacks the capability to make sedans


presented by Md Moshiur Rahman


Vietnam should relinquish the dream of manufacturing its own cars as it does not have enough capability to do that, say experts.



Low capability


According to the Ministry of Industry and Trade, investors are now rushing to set up car assembly workshops with surprisingly low capital, even lower than the price of a car. Two of the 35 domestic automobile manufacturers have capital of less than VND10bil ($625,000).


Phan Dang Tuat, General Director of the Institute for Industry Policy and Strategy (IPS) under the Ministry of Industry and Trade, made the comparison that a wedding gown shop in Vietnam must have capital of VND20bil ($1.25mil), while some investors intend to assemble cars with just several billion dong.


Investors rush to set up car assembly workshops, while no one thinks of producing car parts to supply the assemblers, even though Vietnam is still very weak in supporting industries.


Mr Tuat cited an example: Toyota has 1,400-1,600 car part suppliers, while in Vietnam there are only 60 car part suppliers, which serve 50 car assemblers.


Just focus on fortes


Vietnam has been considered a car market full of potential, which is believed to be a foundation for developing an automobile industry. Why not make cars domestically, when several billions of dollars are spent every year to import cars?


However, Mr Tuat said that Vietnamese enterprises should know their fortes and shortcomings to decide where to invest. He cited Truong Hai and Xuan Kien as two examples. The two manufacturers' sales have been very successful, ranking second and third among Vietnam Automobile Manufacturers' Association (VAMA) members, just after Toyota (Toyota sold 1,774 units in July, Truong Hai 956 and Xuan Kien 608). Mr Tuat said that the manufacturers took the right path by focusing on buses and vans, which the market needs and which fit their capability.


Mr Tuat said that Vietnam should draw lessons from Europe's experience. The continent, which is considered to have a powerful automobile industry, has only five nations that are successful in car manufacturing, while the other countries still face a lot of difficulties.


Making sedans requires highly advanced technology. The driver's seat of a Mercedes-Benz S500 alone contains 26 small motors, and a single automobile manufacturer needs several thousand car part suppliers. "It is clear that making sedans should not be seen as the job of Vietnamese enterprises."



More About Vietnam car industry


Creating an auto industry


There is a paradox in Vietnam's strategy for creating an auto industry. Industrial architects want Vietnam to find a place on the world's auto manufacturing map, first to meet domestic demand and second to jump into the regional and world markets. These ambitious plans overlook the fact that Vietnam's auto industry is around 30 years behind the other players in the region. That's not to say that Vietnam will never achieve those goals, but it suggests there's no point in trying to go too fast.

The industrial planners want the 11 auto joint ventures operating here to look beyond simply assembling cars from imported parts and to increase the ratio of locally made parts. But in a country which cannot produce a 100 per cent locally made motorbike, the request to increase the car part localisation rate in the next few months sounds like mission impossible.

Global conditions govern such long-term development. Due to its participation in AFTA, under the terms of CEPT Vietnam must remove all non-tariff trade barriers by January 2009 and bring import duties on products with a 40 per cent or higher Asian content into the 0-5 per cent range. Due to its imminent admission to the WTO, it will also be unable to rely on local content requirements to develop the local car industry. Hence any protection policy must be implemented quickly to allow the local industry time to grow.

In the mid-90s, when car companies were fighting each other for entry to Vietnam's market, the government acted to protect locally produced cars by implementing various measures, such as banning second-hand car imports, in a bid to generate a CKD-dominated domestic market. From 25,000 units in 1996, the market grew to 30,000 in 1997, 36,000 in 1998, 41,000 in 1999, and 45,000 in 2000. However, those are production figures. Actual sales were 20,000 units in 1996 and 1997, 26,000 in 1998, and 21,000 in 1999. They reached their peak of 43,600 units in 2003, before news of the tax rise due to take effect on the first day of 2004 was released.

The automobile industry depends enormously on economies of scale, something very difficult to achieve in such a limited market as Vietnam's. The optimal size for a single plant is said to be over 200,000 units per year. If Vietnam goes ahead and tries to establish a car industry and promote supporting industries, it would seem to be doomed to failure, since the domestic market is too small to support such a move and likely to remain that way for some time.

A recent report released by the Ministry of Industry reveals that the eleven foreign-invested auto joint ventures currently in operation have an assembly capacity of 148,200 cars per year. To date, however, they have reached just 30 per cent of the designed capacity. Up to the end of 2003, they had assembled a total of 117,137 cars, with total turnover of nearly VND3 billion ($198,000). The total investment capital under the licences granted to the eleven joint ventures should be $574.7m, but in reality they have invested around $450m, a little over 80 per cent.

Mr Do Huu Hao, vice Minister of Industry, said that fundamentally Vietnam has been in the process of formulating an automobile industry with the participation of well-known car manufacturers. Through the operation of the joint ventures, the Vietnamese partners have gained up-to-date technologies and techniques for assembling and manufacturing high-grade cars, attracting a large number of workers and creating the preconditions for the development of supporting industries.

However, the lack of spare parts manufacturers in Vietnam means the joint ventures have to rely on importing almost all parts for assembling cars, resulting in higher prices. New projects for assembling and manufacturing four-seat cars won't be granted licences. However, projects specialising in assembling and manufacturing specialised vehicles such as trucks, vans and buses, and those specialising in manufacturing spare parts, will be facilitated and encouraged.

In the judgement of the Ministry of Industry, most of the foreign joint ventures are focusing on assembly rather than promoting technology transfer or human resources development in auto industry technology. Authorities feel this should change. While a number of domestic companies are now working on automobile parts production, most of them have very small capacity, and the parts being produced domestically are of low added value and are few in number. The average localisation rate of the foreign joint ventures is only 2 to 10 per cent, except for Toyota, whose rate is up to 13 per cent. The localisation rate for light trucks and buses (below 24 seats) is around 10 to 20 per cent, while buses with more than 24 seats have a localisation rate of over 30 per cent.







Interactive voice response


presented by Md Moshiur Rahman


Talking to machines: Interactive voice response gets better
Improved technology makes it less likely that you'll get caught in 'touch-tone hell'


Recorded greetings inviting a response via the caller's touch-tone telephone keypad are generated by interactive voice response (IVR) systems, which for two decades have been the principal communications interface between the public and corporate America, supporting self-service applications -- or at least reducing the workload on live call agents.


But these days, IVR systems are changing, leaving less and less likelihood of callers being trapped in "touch-tone hell." More corporations are switching to speech recognition so that callers are greeted by a voice that invites them to simply state their business. Reacting to the words they recognize, these systems route the calls accordingly.


Such an open-ended greeting is called a natural language system, explained Lynda Smith, division manager at Nuance Communications Inc. in Burlington, Mass., which makes the "speech engine" used in many IVRs. (Simpler, menu-structured speech interfaces are called "directed dialog" systems.)


Smith divides speech-based IVRs into four tiers. The lowest tier prompts the user to "press or say 1," and might have a "grammar" (the repertoire of words and phrases it can respond to) of 250 words. Tier 2 would be similar but with a grammar of up to 2,500 utterances. Tier 3 would add a natural language system, and Tier 4 would be capable of handling an open-ended grammar, such as would be needed for a directory look-up application. Prices range from $100,000 to $1 million, she added.
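
To make the "grammar" idea concrete, here is a minimal sketch of directed-dialog routing in plain Python (not a real speech engine; the phrases and queue names are invented, and a production system would use a proper grammar format):

```python
# Toy directed-dialog grammar: recognized phrases mapped to call routes.
GRAMMAR = {
    "pay my bill": "billing_queue",
    "billing": "billing_queue",
    "account balance": "account_queue",
    "balance": "account_queue",
    "operator": "live_agent",
    "agent": "live_agent",
}

def route(utterance: str) -> str:
    """Route a recognized utterance; out-of-grammar input triggers a re-prompt."""
    text = utterance.lower()
    for phrase, destination in GRAMMAR.items():
        if phrase in text:
            return destination
    return "reprompt"  # ask the caller to rephrase, or zero out to an agent

print(route("I want to pay my bill"))  # billing_queue
print(route("uh, the thing broke"))    # reprompt
```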


Speech recognition accuracy not an issue


Speech recognition accuracy is, oddly enough, not an issue, since the system can prompt for clarification if it's confused, explained Bob Meisel, telecommunications analyst and head of TMA Associates in Tarzana, Calif. The real issue with IVRs is containment -- how often the callers are able to complete their errands within the IVR application, without aborting the procedure by pressing 0 (or whatever it takes) to get to a live agent.


Smith said that containment rates are better for basic applications such as bank balance inquiries, but overall they can range from 40% to 90% -- assuming that a grammar has been assembled covering every way a caller might ask for the options in question. Meanwhile, the percentage of callers in touch-tone IVRs who immediately try to get to an operator by pressing zero varies from 10% to 40%, and the rate of misrouting ranges from 15% to 35%, she added. Smith claimed that the use of speech cuts the zero-out rate by up to 30% and reduces misdirected calls by up to 50%.


"Most banks were early adopters and had rates of better than 80% using touch-tone," added Mike Moors, director of sales at Genesys Telecommunications Laboratories Inc., an IVR vendor in Daly City, Calif. "A few have moved to speech and have seen a slight increase, but the rate is still in the 80s," he explained. "Health care firms usually see rates in the 15% to 20% range, since people are calling for more complicated reasons, but the system can still gather information about the caller, and 80% to 90% of the callers succeed at giving it within the IVR."


Others were less optimistic. "Normally, the success rate is 25%, rising to 45% or 50% if you put effort into it," said Bern Elliot, an analyst at Gartner Inc. in Stamford, Conn. "If you work at it and motivate people, you might get to 70%, but that is the exception rather than the rule," he added.


Lots of effort required


And considerable effort is required, indicated Skip True, an IVR and Web strategy manager at Chrysler Financial Co. in Farmington Hills, Mich.


"It isn't easy," said True of his migration within the plast year from a touch-tone system (which had an opening menu of seven options) to a speech system. "The biggest advantage of speech is that fewer callers are misdirected," he said. "But a constant effort is needed to edit and tune [the grammars] as opposed to the old touch-tone system, which functioned for years without anyone paying much attention to it.


"We captured 25,000 utterances and used them as a baseline to begin our grammars," said True. Basically, Chrysler Financial recorded calls to a simulated natural-language IVR, which actually used a live agent to route the calls, he recalled. The speech application required nine months to finish, and tuning continues as they encounter callers describing things in unanticipated ways, such as "payment coupon" for "billing payment."


In the end, the containment rate rose 2%. True declined to give the baseline figure, but said customer satisfaction went up on all metrics. Time spent by callers with live agents has actually increased, because it's easier for a caller to get to an agent, he explained; routine calls, meanwhile, are typically contained within the IVR.


Meanwhile, True's emphasis on customer satisfaction reflects another trend in the IVR field, as corporations seek to do more than just save money on call agents. "Speech has moved the proposition from cost control to customer satisfaction," said Ken Goldberg, vice president at Dallas-based Intervoice Inc., which credits itself with inventing the IVR 24 years ago.


But the cost equation remains compelling: a fully automated transaction costs about 30 cents to handle, as opposed to $6 with a live agent, estimated Elizabeth Herrell, an analyst at Forrester Research Inc. in Cambridge, Mass.


"In large call centers, every 1% improvement in self-service can produce millions in savings," added Goldberg.


VoiceXML standard helps out


The other major trend in the field is one that callers won't hear: the advent of industry standards, especially VoiceXML, usually described as serving the same function for speech IVR that HTML serves for the Web.


"Previously, each IVR system had its own proprietary environment, and making changes was difficult and required a dedicated programmer," explained Herrell. "With VoiceXML 2.0 [finalized in 2004], you can make changes on the fly, and there is a wide corps of developers and libraries of reusable components. I've seen some big speech applications completed in 30 days that would have taken six to nine months in the old days, although it still usually takes a couple of months."


While some corporations may still be buying proprietary systems to expand legacy IVRs, new systems are typically sold with VoiceXML, noted Moors. "The game is over -- it's not a choice any more," he said.


Travelocity.com LP's call center in the Dallas suburb of Southlake, Texas, had been using a speech IVR for years, but it decided to migrate to a new one two years ago mostly to take advantage of VoiceXML, explained Ashok Narayanan, manager of software development at Travelocity.


Since then, Narayanan said, "writing an IVR application is like writing a Web application." Travelocity has since been able to mimic several of its Web applications on its IVR, including one that lets callers get copies of their itineraries sent to their mobile phones as text messages.


Narayanan said an application can take two months to create and can involve 10,000 recorded utterances, including city names, airline names, months and numbers.


"We have seen a lot of improvement in calls that terminate in the IVR for a good reason [instead of just being abandoned] or get routed to the correct agent," Narayanan said. "As for the monetary savings, we would not be doing this if the numbers were not to our liking."


Another widely cited trend in IVRs is the increased use of computer telephony integration (CTI), so that data entered by the caller will appear on the agent's screen when the caller finally gets to that agent.


"The leading users recognize that if you want people to do self-service applications, then you have to reward them, and that means getting them to the right agent and having the agent see what they are working on," Elliot said. "If the caller has to start all over after transferring to an agent, what's the point? Next time, they'll just try to get straight to the operator."


Call centers lacking CTI will typically tell the callers that after getting to a live agent, they must repeat their information for "security reasons" or because "the computer is down" -- a practice derided by multiple sources.


Here comes Microsoft


Those call centers may be getting some help from Microsoft Corp., which is seeking to open speech-enabled IVR application development to a much wider market with the introduction of its Office Communications Server 2007, whose Speech Server component offers IVR functionality, noted Albert Kooiman, a senior business development manager at Microsoft.


"Most of our customers are professionals, and the effort required to set up an IVR will be no different from setting up an Exchange server," he said. The list price for the standard edition is $649, he said, adding that the application will also require a server, a router and various software tools. One server should be able to simultaneously handle as many as 200 touch-tone calls, but the number will fall to about 30 with complex speech applications such as directory assistance, he added.


But however they're building them, IVR application developers must also come to terms with a simmering public backlash against automated customer service, Meisel noted. He pointed to the www.gethuman.com Web site, which details how to bypass the IVR menu and go directly to a live agent at the customer service numbers of hundreds of corporations. (It also grades the service that the callers receive -- and most of the IVRs get an F.)




Technorati :

Closing the Imaging Gap Between Optical and Electron Microscopy


A new tabletop SEM combines the high magnification of electron microscopy with the ease of use of optical microscopy in a single benchtop instrument.

A radical new breed of microscope fills the critical gap between optical and electron microscopy. Optical microscopes are easy to use but generally limited to useful magnifications of 1,000x or less. Electron microscopes routinely operate at magnifications of 100,000x but can also be orders of magnitude more difficult to operate. The new microscopes, known commonly as tabletop or benchtop scanning electron microscopes (SEM), provide useful magnifications up to 20,000x and are as easy to use as the typical laboratory-grade optical microscopes.

The new instruments could not have come at a better time since the performance gap they fill corresponds to the ability to resolve features with sizes between 5 nm and 100 nm, a range that is critical in the booming field of nanotechnology. Microelectronics, microelectromechanical systems (MEMS), composite materials, and pharmaceuticals are but a few of the most obvious technologies with a pressing need for fast, easy access to structural and morphological characterization in this size range.

[Image: A typical SEM]

[Image: The tabletop Phenom. All images: FEI Co.]



The most striking development in the new tabletop SEM is its ease of use. Although small, inexpensive SEMs have been introduced more than once over the half-century history of the technology, their widespread acceptance has been hindered by their operational difficulty. One of the new tabletop SEMs, the Phenom, from FEI Co., Hillsboro, Ore., is as easy to use as an optical microscope and accepts virtually any sample that will fit into the sample holder. Achieving this level of operational simplicity required much more than simply scaling down existing SEM technology; it required a redesign of many of the microscope's core components.

Back to basics
Optical microscopes use transparent lenses to focus light from the specimen into a real image, either directly onto the retina of the eye or into a camera or digital imaging system for capture and storage. SEMs create a virtual image by scanning a finely focused beam of electrons over the sample surface and mapping the intensity of various signals emitted at each point into an image array that is captured and displayed electronically.
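
The raster-scan idea is simple enough to illustrate in a few lines of Python; the "specimen" and "detector" below are simulated stand-ins, since the point is only the scanning-and-mapping logic.

import numpy as np

# Simulated illustration of SEM image formation: visit each (x, y)
# beam position in a raster scan and record one detector reading.
rng = np.random.default_rng(0)
height, width = 64, 64
specimen = rng.random((height, width))  # stand-in for the sample surface

image = np.zeros((height, width))
for y in range(height):                 # slow scan axis
    for x in range(width):              # fast scan axis
        signal = specimen[y, x] + rng.normal(0.0, 0.05)  # signal + noise
        image[y, x] = signal            # map intensity into the image array

print(image.shape)  # the captured "micrograph" as an array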

Operating an optical microscope requires little more than placing the sample on the stage and focusing the image, and is usually accomplished in a matter of seconds. Conventional SEMs, which require a high vacuum in the sample chamber, typically need several minutes to pump down the chamber, in addition to any preparation required to make the sample compatible with the vacuum (cleaning, drying, coating, etc.). The time required to get an SEM image can easily stretch to many minutes or even hours. The Phenom cuts away the time, difficulty, and expense of the conventional SEM: the operator simply places the sample in the sample holder on the microscope, and the automatically focused image is displayed less than 30 seconds later, with the resolution and depth of field typical of full-size SEMs.

[Image: Tabletop SEMs can be used in forensic analysis, showing traces of materials found on clothing, such as this diatom.]

[Image: A quick look with a tabletop SEM at bulk particles can show their morphology. This sample has primarily spheroidal morphology.]



Improving performance
To achieve this level of performance, engineers focused on a few basic requirements: size (including facility requirements), image quality, sample requirements, and ease of operation.

Size - Successful imaging at high magnification in a conventional SEM requires a quiet environment. The presence of general lab equipment and a lively, vocal workforce in the same room as the SEM causes vibrations that distort the image. It is not unusual for a customized SEM room to cost as much as the SEM itself.

The SEM's sensitivity to vibration is a function of the resonant frequency of the column and sample holder, determined primarily by the column's length and diameter. The Phenom's miniaturized column is approximately 10 times smaller than a conventional SEM column and is energized by permanent magnets rather than the commonly used electromagnets. The sample holder is also smaller and rigidly mounted to the SEM column. The result is a dramatic increase in the resonant frequency and a system that is virtually unaffected by the noise and vibration of a typical lab environment.
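
A crude way to see why the shorter column helps: treating the column as a uniform cantilever beam (a back-of-the-envelope model, not FEI's actual analysis), Euler-Bernoulli theory puts the fundamental bending frequency at

    f_1 = \frac{(1.875)^2}{2\pi L^2} \sqrt{\frac{EI}{\rho A}} \;\propto\; \frac{d}{L^2}

since a circular cross-section of diameter d has I/A = d^2/16. On this rough model, a column ten times shorter (with its proportions scaled to match) has a resonant frequency roughly ten times higher, lifting it above the low-frequency building and foot-traffic vibrations that dominate an ordinary lab.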

Image quality - The key determinants of image quality in an SEM are resolution and signal-to-noise ratio. In simple terms, these factors are themselves determined by the choice of electron source and the accelerating voltage of the electrons. SEMs use one of three types of electron sources. Field emission sources produce very high resolution but require expensive high-vacuum systems inconsistent with the cost and sample requirements of a tabletop SEM. At the other extreme, tungsten sources have the lowest vacuum requirements but need to operate at relatively high accelerating voltages to provide acceptable signal-to-noise characteristics. Unfortunately, increasing the accelerating voltage decreases the image resolution as a result of electron penetration into the sample. The third choice, LaB6, has properties and requirements that lie between those of field emission and tungsten sources. As a result, this type of source can be run at 5 kV, providing the optimal combination of resolution and signal-to-noise in a tabletop SEM.
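
The penetration effect can be made quantitative with the widely used Kanaya-Okayama range formula, which estimates how far electrons of energy E (in keV) travel into a material of atomic weight A, atomic number Z, and density ρ (in g/cm³):

    R \approx \frac{0.0276\, A\, E^{1.67}}{Z^{0.89}\, \rho}\ \mu\mathrm{m}

For silicon, this works out to roughly 0.5 μm at 5 kV but nearly 5 μm at 20 kV, which is why a lower accelerating voltage keeps the detected signal coming from near the surface features being imaged.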

Sample requirements - The vacuum requirements in an SEM's sample chamber affect many aspects of its operation. Higher sample-chamber vacuum requirements increase pump-down and sample exchange time and impose tighter constraints on sample type and condition. The Phenom's sliding vacuum seal reduces chamber volume and sample exchange time. Reduced vacuum levels in the chamber also contribute to faster sample exchange but, more importantly, relax the constraints on sample type. FEI pioneered the development of low-vacuum (ESEM) technologies that allow imaging of just about any sample that will fit in the holder, with little or no need for coating, cleaning, drying or other preparation.

Ease of use - Perhaps the most important characteristic of a lab instrument is its ease of use. A traditional electron microscope has an often bewildering user interface with an extremely large set of choices and parameters to optimize. A large majority of the variables of a traditional SEM are not needed in a workhorse tool. By eliminating variables, automating adjustments, and designing creative software, the Phenom has reduced operating the microscope to driving the stage and changing the magnification. With this kind of software interface, training a novice to use the system takes a matter of minutes - the instrument becomes productive almost immediately.

[Image: This image of a fruit fly shows the underside of the head. One of the compound eyes, the mandible, and two palps are visible.]

Industrial applications
SEMs are used in many fields of industry, such as pharmaceuticals, composite materials, and life science. These industries, as well as many others, will benefit from the new tabletop SEMs in characterizing structures and morphologies in the nanometer range easily and quickly in the lab. Some industrial applications include:

Particle characterization: Size, distribution, and morphology of particles and powders are critical parameters for industries such as pharmaceuticals, composites, cosmetics, and catalysts.

Often these measurements are derived from bulk analyses such as laser scattering; however, interpreting the data requires some knowledge of the particles themselves -- are they spherical, rod-shaped, a mix of large and very small particles, and so on. Tabletop microscopes with magnification ranges up to 20,000x are ideal for this requirement. A "quick look" at these materials can greatly improve the efficiency of analysis and characterization (see the sketch after this list). Measuring the dimensions and uniformity of coatings is also of critical importance in materials research.

Quality assurance in MEMS: MEMS are miniaturized components commonly used in high-volume, low-cost applications, such as the automotive and electronics industries. One of the most common applications is the use of a MEMS accelerometer inside automotive air bags. The reliability of this class of device is paramount, so inspection and quality assurance are frequent steps in the manufacturing process. Images of 3-D objects at magnifications in the 100x to 5,000x range are required, a task ideally suited to a tabletop SEM.

Crime scene investigation: Forensic scientists have long used microscopic images as an aid to their investigations, and the trend in this field is to look at finer and finer levels of detail. Traces of foreign material found on clothing can often be used to help establish where the clothing has been. For example, the presence of a specific species of pollen can potentially indicate a particular area where the clothing must have been. Similarly, the presence of specific species of algae attached to clothing can indicate conclusively that the clothing has been in water and potentially which body of water.
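
As a sketch of the "quick look" particle analysis described above, the hypothetical Python snippet below sizes particles in a thresholded SEM image using scipy. The threshold rule and the pixel calibration are assumed values, and the input image is simulated so the example runs on its own.

import numpy as np
from scipy import ndimage

# Hypothetical sketch: estimate particle sizes from an SEM image.
# `image` would normally come from the microscope; here it is simulated.
rng = np.random.default_rng(1)
image = ndimage.gaussian_filter(rng.random((256, 256)), sigma=4)

binary = image > image.mean()        # assumed intensity threshold
labels, n = ndimage.label(binary)    # connected components = particles
areas = ndimage.sum_labels(binary, labels, index=range(1, n + 1))

nm_per_pixel = 20.0                  # assumed pixel calibration
# Equivalent circular diameter per particle, in nanometers:
diameters = 2 * np.sqrt(areas / np.pi) * nm_per_pixel
print(f"{n} particles, mean diameter {diameters.mean():.0f} nm")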

Building the future technology workforce
It is a common complaint that fewer and fewer students are enrolling in science and technology degree programs. Of the many reasons for this decline, one seems to be perennial: it is difficult for students to become excited about science when the basic concepts are so hard to visualize and internalize. The opportunity to have an easy-to-learn, easy-to-use tabletop electron microscope inside the classroom could help mitigate this problem.

This new class of tabletop microscope, which goes far beyond the resolution capabilities of a light microscope without the cost and complexity of a typical scanning electron microscope, will find applications in many industries and educational environments. It is not unreasonable to speculate that this capability could have the same kind of effect on the efficiency of developing new methods and materials as the PC had on the productivity of the office.



Technorati :
