Search This Blog

Friday, October 19, 2007

NASA clears shuttle for launch despite wing-panel concerns

From left, space shuttle program manager Wayne Hale, space shuttle launch director Mike Leinbach, and NASA Engineering and Safety Center director Ralph Roe discuss the shuttle Discovery's flight readiness Tuesday.

NASA late Tuesday cleared the shuttle Discovery for launch next week on a pivotal space station assembly mission even though some safety experts urged a delay so that three slightly degraded heat-shielding panels on the wings could be replaced.

Discovery is scheduled to lift off from NASA's Kennedy Space Center in Florida on Tuesday.

During a two-week mission, the shuttle's seven astronauts plan to deliver and install a gateway component at the space station in preparation for the future installation of European and Japanese science modules.

The decision to proceed with Discovery's launch followed a daylong review at Kennedy in which top space agency officials sifted through information on months of planning for the mission.

"We looked at everything and we're ready to go fly," said NASA's Bill Gerstenmaier, the agency's spaceflight chief, who chaired the review. "We understand the risks in front of us."

A safety concern raised by the NASA Engineering and Safety Center focused on a slight degradation of the coating on three of the 44 carbon composite panels that shield the wings' leading edges from a 3,200 degree Fahrenheit heat buildup that accompanies the shuttle's descent to Earth.

Replacing the panels would have forced NASA to haul Discovery from the launch pad to a protective hangar at Kennedy. The task would have delayed the launch by at least two months.

'We will take action'

The safety center was established after the 2003 shuttle Columbia tragedy as an independent advisory group to those who manage the agency's human and robotic missions. Columbia broke apart over Texas because of what was later determined to be damage to the leading edge of a wing, caused by foam debris during launch.

Last week, Wayne Hale, NASA's shuttle program manager, recommended the agency go ahead with plans for Discovery's launch based on assessments of the panels that showed no worsening in the shield coating after the ship's three most recent missions.

He reaffirmed his position after NASA experts debated it for hours.

"At the end of the day, we decided we were in an acceptable risk posture to go fly," Hale told reporters. "We will continue to work hard on it. As we go from flight to flight, if the risk grows to an unacceptable level we will take action."

Cause of damage unknown

The assessments were based on postflight diagnostics using thermography, a technique that examines the panels using heat to reveal internal and external damage. Discovery's damage was confined to narrow regions of the silicon carbide and a glass coating on two panels on the right wing and one on the left.

Since Columbia, NASA has added astronaut inspections of the wing panels on the day after liftoff and the day before descent to Earth. The shuttle crew now carries some repair tools and materials, and if severe damage were detected, the astronauts could move aboard the space station to await a rescue mission.

Shuttle engineers attributed the erosion of Discovery's protective material to oxidation, a slow degradation in the presence of oxygen, and possibly to the salt in the air on Florida's Atlantic coast.

However, the safety center offered another view: that some other, little-understood process was responsible for the degradation, making it difficult to reliably predict when a panel might fail, said center director Ralph Roe.


Physicists Build Unparticle Models Guided by Big Bang and Supernovae

Unparticles and their interactions with neutrinos and other standard model particles might cause distortions in the cosmic microwave background. (WMAP image of the CMB temperature anisotropy.)

Cosmology and astrophysics may help guide physicists in building a model of “unparticles,” a newly proposed sector of physics. Recently, Hooman Davoudiasl of Brookhaven National Laboratory has investigated some basic requirements that unparticles must fulfill to ensure that our standard picture of the universe remains intact.
Davoudiasl is one of a quickly growing number of physicists who have become intrigued by Harvard University professor Howard Georgi’s suggestion earlier this year that there might exist a new type of substance that cannot be described in terms of particles because its components are scale-invariant. This characteristic means that unparticles don’t change appearance when viewed at different scales—which is very different from the objects we’re familiar with. However, unparticles might be observed interacting with standard model particles through suppressed operators.

“Georgi's proposal has motivated many physicists to inquire whether we could have missed something so exotic, in our current or future experiments, if we did not consider its signatures,” Davoudiasl said.

Now, Davoudiasl has tried to place constraints on what unparticles might be so that physicists might have an idea of where to look for them in new experiments, notably in the Large Hadron Collider.

“My work points to stringent cosmological and astrophysical constraints that could shape how we view the viable parameter space of unparticle models,” he explained.

One of the strongest constraints, Davoudiasl says, might be imposed by Big Bang nucleosynthesis (BBN), the model that describes the creation of light elements as the universe cooled during its first few minutes. By the time the first particles coupled with each other to form light atomic nuclei, the unparticles must already have decoupled from the particles so as not to interfere with BBN (a model that works very well).
Davoudiasl explains that this decoupling of unparticles and particles would likely have occurred during an earlier, hotter epoch of the universe, when temperatures reached at least 1 GeV. After the transition to BBN, latent heat would heat up only the particles, leaving the unparticles much cooler. As in previously studied extra-dimensional models, although unparticles might cool the hot particle plasma to a degree, most of the cooling must still be caused by expansion.
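For a sense of the temperature scale involved, a minimal sketch (assuming only the textbook relation T = E/k_B and the standard value of the Boltzmann constant; the 1 GeV figure is the one quoted above) converts the decoupling energy into kelvin:

```python
# Rough scale check: what temperature does 1 GeV correspond to?
# Uses T = E / k_B with the Boltzmann constant in eV per kelvin.
K_B_EV_PER_K = 8.617e-5   # Boltzmann constant, eV/K
E_DECOUPLE_EV = 1e9       # 1 GeV expressed in eV (figure from the article)

temperature_k = E_DECOUPLE_EV / K_B_EV_PER_K
print(f"1 GeV corresponds to roughly {temperature_k:.2e} K")  # ~1.16e13 K
```

That is about ten trillion kelvin, far hotter than the roughly billion-kelvin temperatures at which BBN itself takes place.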

Although unparticles would have to be decoupled during BBN, they might recouple later, when they might interact with neutrinos. If so, Davoudiasl predicts that the resulting fluid might lead to nonstandard shifts in the locations of the acoustic peaks of the cosmic microwave background.

Another possibility is that unparticles may couple to dark matter such as WIMPs (Weakly Interacting Massive Particles). Unstable dark matter produced in the early universe could have decayed into unparticles, WIMPs, and/or other types of dark matter. Davoudiasl explains that about 5-10% of the original WIMP population could have decayed into unparticles and still fit within today’s measured cosmological parameters. If so, unparticles would have about the same energy density as baryonic matter.

If unparticles make up any part of the cosmic energy density, they may be able to decay back into standard model particles, Davoudiasl explains. He assumes that such a cold, scale-invariant gas could return to the visible sector by leaking energy into massless photons. If this leakage occurs on very short time scales compared with the Hubble time, it could again distort the cosmic microwave background.

Together with BBN cosmology, astrophysical processes such as supernovae could constrain unparticle models, as they have helped constrain other types of light physics (such as axions). Davoudiasl hopes that these constraints may serve as guides for building unparticle models and ultimately observing them--if possible--in experiments.

“So far, unparticle physics does not seem to address any urgent problems, and as such its discovery is not generally anticipated,” he explained. “However, considering novel possibilities is worthwhile when you are searching for the unknown. To quote Louis Pasteur, ‘Chance favors the prepared mind.’”

Citation: Davoudiasl, Hooman. “Constraining Unparticle Physics with Cosmology and Astrophysics.” Physical Review Letters 99, 141301 (2007).

Is Mars alive, or is it only sleeping?

This is a shaded relief image derived from Mars Orbiter Laser Altimeter data, which flew onboard the Mars Global Surveyor. The image shows Olympus Mons and the three Tharsis Montes volcanoes: Arsia Mons, Pavonis Mons, and Ascraeus Mons from southwest to northeast. Credit: NASA

The surface of Mars is completely hostile to life as we know it. Martian deserts are blasted by radiation from the sun and space. The air is so thin, cold, and dry that if liquid water were present on the surface, it would freeze and boil at the same time. But there is evidence, like vast, dried-up riverbeds, that Mars was once a warm and wet world that could have supported life. Are the best times over, at least for life, on Mars?
New research raises the possibility that Mars could awaken from within -- three large Martian volcanoes may only be dormant, not extinct. Volcanic eruptions release lots of greenhouse gases, like carbon dioxide, into the atmosphere. If the eruptions are not complete, and future eruptions are large enough, they could warm the Martian climate from its present extremely cold and dry state.

NASA-funded researchers traced the flow of molten rock (magma) beneath the three large Martian volcanoes by comparing their surface features to those found on Hawaiian volcanoes.

"On Earth, the Hawaiian islands were built from volcanoes that erupted as the Earth's crust slid over a hot spot -- a plume of rising magma," said Dr. Jacob Bleacher of Arizona State University and NASA's Goddard Space Flight Center in Greenbelt, Md. "Our research raises the possibility that the opposite happens on Mars - a plume might move beneath stationary crust." The observations could also indicate that the three Martian volcanoes might not be extinct. Bleacher is lead author of a paper on these results that appeared in the Journal of Geophysical Research, Planets, September 19.

The three volcanoes are in the Tharsis region of Mars. They are huge compared to terrestrial volcanoes, with each about 300 kilometers (186 miles) across. They form a chain heading northeast called the Tharsis Montes, from Arsia Mons just south of the Martian equator, to Pavonis Mons at the equator, to Ascraeus Mons slightly more than ten degrees north of the equator.

No volcanic activity has been observed at the Tharsis Montes, but the scarcity of large impact craters in the region indicates that they erupted relatively recently in Martian history. Features in lava flows around the Tharsis Montes reveal that later eruptions from large cracks, or rift zones, on the sides of these volcanoes might have started at Arsia Mons and moved northeast up the chain, according to the new research.

The researchers first studied lava flow features that are related to the eruptive history of Hawaiian volcanoes. On Hawaii (the Big Island), the youngest volcanoes are on the southeastern end, directly over the hot spot. As the Pacific crustal plate slowly moves to the northwest, the volcanoes are carried away from the hotspot. Over time, the movement has created a chain of islands made from extinct volcanoes.

Volcanoes over the hot spot have the hottest lava. Its high temperature allows it to flow freely. A steady supply of magma from the hot spot means the eruptions last longer. Lengthy eruptions form lava tubes as the surface of the lava flow cools and crusts over, while lava continues to flow beneath. After the eruption, the tube empties and the surface collapses, revealing the hidden tube.

As the volcano is carried away from the hot spot, magma has to travel farther to reach it, and the magma cools. Cooler magma makes the lava flow more slowly compared to lava at the younger volcanoes, like the way molasses flows more slowly than water. The supply of magma is not as steady, and the eruptions are shorter. Brief eruptions of slowly flowing lava form channels instead of tubes. Flows with channels partially or completely cover the earlier flows with tubes.

As the volcano moves even further from the hot spot, only isolated pockets of rising magma remain. As the magma cools, it releases trapped gas. This creates short, explosive eruptions of cinders (gas bubbles out of the lava, forming sponge-like cinder stones). Earlier flows become covered with piles of cinders, called cinder cones, which form around these eruptions.

"We thought we could take what we learned about lava flow features on Hawaiian volcanoes and apply it to Martian volcanoes to reveal their history," said Bleacher. "The problem was that until recently, there were no photos with sufficient detail over large surface areas to reveal these features on Martian volcanoes. We finally have pictures with enough detail from the latest missions to Mars, including NASA's Mars Odyssey and Mars Global Surveyor, and the European Space Agency's Mars Express missions."

Using images and data from these missions, the team discovered that the main flanks of the Tharsis Montes volcanoes were all alike, with lava channels covering the few visible lava tubes. However, each volcano experienced a later eruption that behaved differently. Lava issued from cracks (rifts) on the sides of the volcanoes, forming large lava aprons, called rift aprons by the team.

The new observations show that the rift apron on the northernmost volcano, Ascraeus Mons, has the most tubes, many of which are not buried by lava channels. Since tube flows are the first to form over a hot spot, this indicates that Ascraeus was likely active more recently. The flow on the southernmost volcano, Arsia Mons, has the fewest tubes, indicating that its rift aprons are older. Also, the team saw more channel flows partially burying tube flows at Arsia. These trends across the volcanic chain indicate that the rift aprons might have shared a common source like the Hawaiian volcanoes, and that apron eruptions started at Arsia, then moved northward, burying the earlier tube flows at Arsia with channel flows.

Since there is no evidence for widespread crustal plate movement on Mars, one explanation is that the magma plume could have moved beneath the Tharsis Montes volcanoes, according to the team. This is opposite to the situation at Hawaii, where volcanoes move over a plume that is either stationary or moving much more slowly. Another scenario that could explain the features is a stationary plume that spreads out as it nears the surface, like smoke hitting a ceiling. The plume could have remained under Arsia and spread northward toward Ascraeus. "Our evidence doesn't favor either scenario, but one way to explain the trends we see is for a plume to move under the stationary Martian crust," said Bleacher.

The team also did not see any cinder cone features on any of the Tharsis Montes rift apron flows. Since cinder cone eruptions are the final stage of hot spot volcanoes, the rift apron eruptions might only be dormant, not extinct, according to the team. If the eruptions are not complete, and future eruptions are large enough, they could contribute significant amounts of water and carbon dioxide to the Martian atmosphere.


Researchers measure carbon nanotube interaction

An artist's representation of an amine functional group attached to an AFM tip approaching a carbon nanotube surface in toluene solution. The translucent blue shape on the nanotube represents the polarization charge that forms on the nanotube as a result of the interaction with the approaching molecule. Chemical force microscopy measures the tiny forces generated by this single functional group interaction. (Illustration by Scott Dougherty, LLNL)

Carbon nanotubes have been employed for a variety of uses, including composite materials, biosensors, nano-electronic circuits and membranes.

While they have proven useful for these purposes, no one really knows much about what's going on at the molecular level. For example, how do nanotubes and chemical functional groups interact with each other on the atomic scale? Answering this question could lead to improvements in future nano devices.

In a quest to find the answer, researchers for the first time have been able to measure a specific interaction for a single functional group with carbon nanotubes using chemical force microscopy - a nanoscale technique that measures interaction forces using tiny spring-like sensors. Functional groups are the smallest specific group of atoms within a molecule that determine the characteristic chemical reactions of that molecule.

A recent report by a team of Lawrence Livermore National Laboratory researchers and colleagues found that the interaction strength does not follow conventional trends of increasing polarity or repelling water. Instead, it depends on the intricate electronic interactions between the nanotube and the functional group.
"This work pushes chemical force microscopy into new territory," said Aleksandr Noy, lead author of the paper, which appears in the Oct. 14 online issue of the journal Nature Nanotechnology.

Understanding the interactions between carbon nanotubes (CNTs) and individual chemical functional groups is necessary for the engineering of future generations of sensors and nano devices that will rely on single-molecule coupling between components. Carbon nanotubes are extremely small, which makes it particularly difficult to measure the adhesion force of an individual molecule at the carbon nanotube surface. In the past, researchers had to rely on modeling, indirect measurements and large microscale tests.

But the Livermore team went a step further and smaller to get a more exact measurement. The scientists were able to achieve a true single function group interaction by reducing the probe-nanotube contact area to about 1.3 nanometers (one million nanometers equals one millimeter).

Adhesion force graphs showed that the interaction forces vary significantly from one functionality to the next. To understand these measurements, the researchers collaborated with a team of computational chemists who performed ab initio simulations of the interactions of functional groups with the sidewall of a zig-zag carbon nanotube. Calculations showed a strong dependence of the interaction strength on the electronic structure of the interacting molecule/CNT system. To the researchers' delight, the calculated interaction forces provided an exact match to the experimental results.

"This is the first time we were able to make a direct comparison between an experimental measurement of an interaction and an ab initio calculation for a real-world materials system," Noy said. "In the past, there has always been a gap between what we could measure in an experiment and what the computational methods could do. It is exciting to be able to bridge that gap."

This research opens up a new capability for nanoscale materials science. The ability to measure interactions on a single functional group level could eliminate much of the guess work that goes into the design of new nanocomposite materials, nanosensors, or molecular assemblies, which in turn could help in building better and stronger materials, and more sensitive devices and sensors in the future.

Thin films of silicon nanoparticles roll into flexible nanotubes

By depositing nanoparticles onto a charged surface, researchers at the University of Illinois at Urbana-Champaign have crafted nanotubes from silicon that are flexible and nearly as soft as rubber.

"Resembling miniature scrolls, the nanotubes could prove useful as catalysts, guided laser cavities and nanorobots," said Sahraoui Chaieb, a professor of mechanical and industrial engineering at Illinois and a researcher at the Beckman Institute for Advanced Science and Technology.

To create their flexible nanotubes, Chaieb and his colleagues - physics professor Munir Nayfeh and graduate research assistant Adam Smith - start with a colloidal suspension of silicon nanoparticles (each particle is about 1 nanometer in diameter) in alcohol. By applying an electric field, the researchers drive the nanoparticles to the surface of a positively charged substrate, where they form a thin film.

Upon drying, the film spontaneously detaches from the substrate and rolls into a nanotube. Nanotubes with diameters ranging from 0.2 to 5 microns and up to 100 microns long have been achieved.

Using an atomic force microscope, the researchers found that the Young's modulus (a measure of a material's elasticity) of the film was about 5,000 times smaller than that of bulk silicon, but just 30 times larger than that of rubber.
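The two reported ratios are mutually consistent, which a quick sketch makes visible. The reference moduli below are assumed typical textbook values (roughly 150 GPa for bulk silicon and 1 MPa for soft rubber), not figures taken from the paper itself:

```python
# Consistency check on the reported elasticity ratios.
# Assumed reference values (not from the study):
E_BULK_SILICON_PA = 150e9   # ~150 GPa, a typical bulk silicon modulus
E_RUBBER_PA = 1e6           # ~1 MPa, a typical soft rubber modulus

# "about 5,000 times smaller than that of bulk silicon"
film_modulus_pa = E_BULK_SILICON_PA / 5000

# How does that implied film modulus compare to rubber?
ratio_to_rubber = film_modulus_pa / E_RUBBER_PA

print(f"Implied film modulus: {film_modulus_pa / 1e6:.0f} MPa")  # 30 MPa
print(f"Ratio to rubber: {ratio_to_rubber:.0f}x")                # 30x
```

With these assumed inputs, dividing bulk silicon's modulus by 5,000 lands at about 30 MPa, right at the "30 times larger than rubber" figure quoted above.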

"We suspect that the nanotubes consist of silicon nanoparticles held together by oxygen atoms to form a three-dimensional network," Chaieb said. "The nanotubes are soft and flexible because of the presence of the oxygen atoms. This simple bottom-up approach will give other researchers ideas how to build inexpensive active structures for lab-on-chip applications."

"Because the silicon nanoparticles - which are made using a basic electrochemical procedure - have properties such as photoluminescence, photostability and stimulated emission, the resulting nanotubes might serve as nanodiodes and flexible lasers that could be controlled with an electric field," Nayfeh said.

The results will be reported in an upcoming issue of the journal Applied Physics Letters. The work was funded by the National Science Foundation and the state of Illinois.


Brain Images Make Cognitive Research More Believable

The brain is among the most complex objects we know: how it works, how it thinks, and what it accepts or rejects are all vital research questions. (Credit: iStockphoto/Aaron Kondziela)

People are more likely to believe findings from a neuroscience study when the report is paired with a colored image of a brain as opposed to other representational images of data such as bar graphs, according to a new Colorado State University study.

Persuasive influence on public perception.

Scientists and journalists have recently suggested that brain images have a persuasive influence on the public perception of research on cognition. This idea was tested directly in a series of experiments reported by David McCabe, an assistant professor in the Department of Psychology at Colorado State, and his colleague Alan Castel, an assistant professor at University of California-Los Angeles. The forthcoming paper, to be published in the journal Cognition, was recently published online.

"We found the use of brain images to represent the level of brain activity associated with cognitive processes clearly influenced ratings of scientific merit," McCabe said. "This sort of visual evidence of physical systems at work is typical in areas of science like chemistry and physics, but has not traditionally been associated with research on cognition.

"We think this is the reason people find brain images compelling. The images provide a physical basis for thinking."

Brain images compelling

In a series of three experiments, undergraduate students were either asked to read brief articles that made fictitious and unsubstantiated claims such as "watching television increases math skills," or they read a real article describing research showing that brain imaging can be used as a lie detector.

When the research participants were asked to rate their agreement with the conclusions reached in the article, ratings were higher when a brain image had accompanied the article, compared to when it did not include a brain image or included a bar graph representing the data.

This effect occurred regardless of whether the article described a fictitious, implausible finding or realistic research.

Conclusions often oversimplified and misrepresented

"Cognitive neuroscience studies which appear in mainstream media are often oversimplified and conclusions can be misrepresented," McCabe said. "We hope that our findings get people thinking more before making sensational claims based on brain imaging data, such as when they claim there is a 'God spot' in the brain."

Article: "Seeing is believing: The effect of brain images on judgments and scientific reasoning."


MIT finds new role for well-known protein

Research could lead to treatments for Alzheimer's, Parkinson's,

Fluorescent micrograph (scale bar: 10 micrometers) shows yeast cells (red) with septin (green), which enables the budding of daughter cells. MIT researchers have found septin also helps neurons sprout the branch-like protrusions used to communicate with other neurons. Image / Philippsen Lab, Biozentrum B

In a finding that may lead to potential new treatments for diseases such as Alzheimer's and Parkinson's, researchers at the Picower Institute for Learning and Memory at MIT report an unexpected role in the brain for a well-known protein.

A study by Morgan H. Sheng, Menicon Professor of Neuroscience and a Howard Hughes Medical Institute investigator, and colleagues appearing in the Oct. 23 issue of Current Biology shows that the same protein that enables a yeast cell to bud into two daughter cells also helps neurons sprout the branch-like protrusions used to communicate with other neurons.

The work revolves around septins--proteins known since the 1970s to play an essential function in the process through which the cytoplasm of a single yeast cell divides. "In yeast, septin is localized exactly at the neck between the yeast mother cell and the bud or emerging daughter cell," Sheng said. "Amazingly, we found septin protein localized at the base of the neck of neuronal dendritic spines and at the branchpoint of dendritic branches."

Nine of the 14 septins found in mammals occur in the brain. One of them, Sept7, is the most abundant, but its role was unclear. Septins form long filaments and act as scaffolds, recruiting other proteins into their assigned roles as builders of the cell infrastructure.

While neurons don't divide, they do form protrusions that eventually elongate into dendritic branches. Dendrites, from the Greek word for "tree," conduct electrical stimulation from other neurons to the cell body of the neuron from which the dendrites project.

Electrical stimulation is transmitted via synapses, which are located at various points along the dendritic branches. Dendrites play a critical role in receiving these synaptic inputs. "Because dendritic spines are important for synaptic function and memory formation, understanding of septins may help to prevent the loss of spines and synapses that accompanies many neurodegenerative diseases," said co-author Tomoko Tada, a postdoctoral associate in the Picower Institute. "Septin could be a potential target protein to treat these diseases."

Moreover, in the cultured hippocampal neurons the researchers used in the study, septin was essential for normal branching and spine formation. An abundance of septin made dendrites grow and proliferate while a dearth of septin made them small and malformed.

"Boosting septin expression and function would enhance the stability of spines and synapses, and therefore be good for cognitive functions such as learning and memory," Sheng said. His laboratory is now exploring ways to prevent septin degradation and loss.

In addition to Sheng and Tada, authors are MIT affiliates Alyson Simonetta and Matthew Batterton; Makoto Kinoshita of Kyoto University Graduate School of Medicine; and Picower postdoctoral associate Dieter Edbauer.

This work is supported by the National Institutes of Health and the RIKEN-MIT Neuroscience Research Center.


Toxic Releases Down From North American Industry Leaders


Pollution and its reduction are perennial concerns, and reducing toxic releases is central to the CEC's goal. The latest Taking Stock report from the Commission for Environmental Cooperation (CEC) reveals that a continued decline in releases of toxic chemicals to the environment--15 percent for the United States and Canada from 1998 to 2004--is being driven by a group of industrial facilities that are the largest generators of emissions.
The CEC report, however, also reveals that the leading role of the largest waste-producing facilities stands in stark contrast to a substantial increase in chemical releases and transfers by a much larger group of industrial facilities that report lower volumes of emissions.

Released October 18, the annual report compares industrial pollution from a matched set of facilities in Canada and the United States--three million tonnes of chemicals released or transferred in the two countries in 2004. Over one-third of that amount was released at the location of reporting facilities, including over 700,000 tonnes released to the air, with another third transferred to recycling. For the first time, the CEC report also provides data from Mexico. Across the three countries, metals and their compounds--lead, chromium, nickel and mercury--were reported by the highest proportion of facilities.

"The evidence is clear that industry and government action to limit chemical releases is showing steady progress," said Adrián Vázquez-Gálvez, CEC's executive director. "It is equally clear that a large number of small and medium-size industrial facilities need to do a better job in reducing their waste and emissions if we are going to see even greater progress in North America. We trust the progress shown by industry leaders and the fact that pollution prevention is a proven strategy will encourage everyone to tackle pollution issues at the source."

The CEC's analysis demonstrates that facilities from Canada and the United States that reported pollution prevention activities--product and process redesign, spill and leak detection, and substituting raw materials--showed reductions from 2002--2004. Facilities not engaged in these activities did not show similar progress.

A new chapter provides a detailed look at industrial recycling, finding that over one-third of US and Canadian releases and transfers reported in 2004--more than 1 million tonnes--were recycled. Recycling has increased in recent years due to increases in production and in scrap metal prices. Most of the materials were metals, including copper, zinc, lead and their compounds.

The trilateral analysis is based on matched data from some 9 industrial sectors, 56 chemicals, and 10,000 facilities, comparing releases and transfers for similar facilities in Canada, Mexico and the United States. The report identifies a different pattern of releases and transfers in each of the three countries.

Comparisons of the three countries' industrial emissions will continue to improve as the CEC works with governments, industry and NGOs to expand the number of chemicals and facilities that are comparable.

Taking Stock compiles data from Canada's National Pollutant Release Inventory, the United States' Toxics Release Inventory, and, starting with its first year of mandatory reporting in 2004, Mexico's pollutant release and transfer register, the Registro de Emisiones y Transferencia de Contaminantes.


Nobel in economics awarded for "mechanism design theory"

"WHAT on earth is mechanism design?" was the typical reaction to this year's Nobel prize in economics, announced on October 15th. In this era of "Freakonomics", in which everyone is discovering their inner economist, economics has become unexpectedly sexy. So what possessed the Nobel committee to honour a subject that sounds so thoroughly dismal? Why didn't they follow the lead of the peace-prize judges, who know not to let technicalities about being true to the meaning of the award get in the way of good headlines?

In fact, despite its dreary name, mechanism design is a hugely important area of economics, and underpins much of what dismal scientists do today. It goes to the heart of one of the biggest challenges in economics: how to arrange our economic interactions so that, when everyone behaves in a self-interested manner, the result is something we all like. The word "mechanism" refers to the institutions and the rules of the game that govern our economic activities, which can range from a Ministry of Planning in a command economy to the internal organisation of a company to trading in a market.

The real world rarely behaves the way economic models do, so mechanism design is used to design markets and auctions that better reflect the actions of the participants. Mechanism design is also used to study how companies behave and to consider how governments can best provide public goods like defense or infrastructure. In general, mechanism design applies to interactions in which the people or companies participating in the mechanism may have reasons to behave in a non-truthful or less than optimal way, and it attempts to create rules and incentives that discourage this unwanted behavior.

The winners of the 2007 Nobel Memorial Prize in Economics, announced yesterday, are Leonid Hurwicz, Eric Maskin, and Roger Myerson. The three men received the prize for their work on "mechanism design theory," a field of economics that focuses on creating incentives and rules for an economic interaction such that the desired outcome or some desirable properties are achieved.

Hurwicz began working on mechanism design over 50 years ago by applying mathematical analysis to companies and to economic systems like capitalism and socialism. His major theoretical contribution is "incentive compatibility," the property that participants in a mechanism will want to vote or play honestly. It's an important result, since we tend to want mechanisms like voting systems to encourage truthful voting, rather than encouraging people to disguise their true opinions.

Although "mechanism design theory" may not sound like something you or I would need to interact with very much, it pops up in quite a few places. Take the upcoming 700MHz spectrum auctions, for example. For this auction, the government has some set of goals, including perhaps getting some payment and fairly allocating the spectrum. The companies also have goals, which may be to gobble up as much of the spectrum as possible. By applying some mechanism design theory to the situation, economists can then design an auction mechanism that best meets the goals of all the parties. This type of game theoretical analysis of auctions has been done by Roger Myerson, whose work has influenced these types of spectrum auctions.
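The auction idea above can be made concrete with a small sketch. The classic example of an incentive-compatible mechanism, from the auction theory Myerson worked on, is the sealed-bid second-price (Vickrey) auction: the highest bidder wins but pays the second-highest bid, which makes bidding your true value a dominant strategy. The bidder names and numbers below are illustrative, not drawn from the actual spectrum auctions.

```python
# Sketch of a sealed-bid second-price (Vickrey) auction, a textbook
# incentive-compatible mechanism. Illustrative values only.

def second_price_auction(bids):
    """Return (winner, price): winner pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # second-highest bid sets the price paid
    return winner, price

def utility(true_value, bids, bidder):
    """Payoff to `bidder`: value minus price if they win, else zero."""
    winner, price = second_price_auction(bids)
    return true_value - price if winner == bidder else 0

# A bidder whose true value is 100 never does better than bidding it,
# whatever the rivals bid -- this is incentive compatibility.
rivals = {"B": 80, "C": 60}
truthful = utility(100, {"A": 100, **rivals}, "A")  # wins, pays 80 -> 20
shaded = utility(100, {"A": 70, **rivals}, "A")     # loses -> 0
overbid = utility(100, {"A": 130, **rivals}, "A")   # wins, still pays 80 -> 20
print(truthful, shaded, overbid)
```

Shading the bid only risks losing a profitable sale, and overbidding never lowers the price paid, so truthful bidding is always at least as good as any alternative.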

Software patents are another area where mechanism design comes into play. One of the Nobel laureates, Eric Maskin, has done some work on patent valuation. In particular, Maskin is critical of the software patent system, which he believes is harmful to innovation when new inventions are closely related to old ones. His (very) basic argument is that in many technology fields, competition is actually better for firms in the long run. Patents generally lead to less innovation in a particular field, and also lead to less competition since companies can't work on the same types of products. Thus, in the end, patents are bad for software and technology companies, because of how they limit competition.

If you're at all interested in mechanism design theory, I would highly recommend checking out the scientific background for the prize, which provides a nice overview of the key results from the work of Hurwicz, Maskin, and Myerson. It can be a bit daunting to delve into, particularly since it's not a field of economics that gets talked about at your average cocktail party, but it's worth a look given the sheer number of social and governmental situations that rely on mechanism design to operate more efficiently.


Microsoft Partners Back Unified Communications

Microsoft always augments the release of its software with an avalanche of partner support, and it was no different with Office Communications Server 2007 and Office Communicator 2007. The two combine to form the heart of Microsoft's unified communications (UC) platform, which is made up of many interconnected parts.

Microsoft's partners Tuesday lined up behind the launch of the company's unified communications platform, with Nortel releasing six products to cement its year-long development relationship with the software giant, and vendors such as FaceTime adapting their management wares for Office Communications Server 2007.

More than 50 partners released software, hardware and services to support the new platform. These partners include systems integrators, telephony providers, ISVs and phone/device vendors.

Nortel leads the way

Nortel led the charge by releasing a portfolio of products from infrastructure servers to phone handsets.

The announcements are a logical extension of the Microsoft/Nortel Innovative Communications Alliance (ICA) established in 2006, which incorporates Microsoft's unified communications software and Nortel's Communications Server 1000 IP-PBX.

"We want to lead this transition to unified communications," says Ruchi Prasad, Nortel's vice president and general manager for ICA.

Tuesday, Nortel released Converged Office, which integrates Nortel IP-PBX telephony features with OCS and supports the Office Communicator client as the single end-user interface for all UC functions; Multimedia Conferencing 5.0, which integrates with OCS to support audio conferencing with an option to switch to video conferencing; Secure Router 4134, which now includes a SIP gateway to help users extend UC into branch offices; integration of OCS with Nortel's applications switches and application accelerator hardware; and the IP Phone 8500 Series, which is optimized for use with OCS and Communicator.

Instant messaging management and hygiene vendor FaceTime announced that it has updated its suite of management, security and compliance software to support OCS. The management lets companies control access to IM networks per user, bar communication between specific user groups, and block such actions as file transfers. The software also lets users control IM spam known as SPIM, protect against worms and viruses, and meet compliance needs with logging, auditing and controls to prevent tampering with messages.

And telephony provider NEC announced it will release a USB handset that works with OCS, a middleware server called Univerge OW5000 to connect its PBX to OCS, and the MGW Gateway for Office Communicator to provide PSTN/ISDN interconnection functions to OCS and VoIP connectivity to existing PBX/KTS deployments.

NEC also plans to optimize its voice, video and data technologies to integrate with other Microsoft collaboration tools, including Exchange Server 2007 and SharePoint 2007.

Others releasing products to support OCS included Aculab, Covergence, Dell, Dialogic, EMC, Ericsson, Foundry Networks, Mitel, Palm, PolyCom, Quest Software, Samsung, Tandberg and Unisys.

Microsoft has existing partnerships with Alcatel-Lucent, Avaya, Cisco Systems and Siemens Communications.


Spaceship race


Company eliminated in spaceship race

NASA on Thursday dropped Rocketplane Kistler from its competition to build a ship that can reach the space station, leaving one closely held company in the running and opening a spot for at least one more.

NASA will request proposals next week for the $174.7 million contract that the Oklahoma City-based company lost out on. The money is part of $485 million that NASA awarded to Rocketplane and SpaceX of El Segundo, Calif., last year.

"RPK failed to meet its financial milestone," which included raising $500 million in private funding, said Alan Lindenmoyer, manager of NASA's Commercial Orbital Transportation Services. "We've come to the conclusion that it's in NASA's best interest to reopen the competition."

Rocketplane spokesman George French said, "We thought COTS was a great program, we appreciated the opportunity, and we hope to be involved again."

COTS is meant to plant the seeds for a commercial industry and perhaps even yield a stand-in for the space shuttle after it is retired in 2010, delivering cargo and crew to the international space station.

Defense company Lockheed Martin Corp. is building the shuttle's successor, the Orion, which should take flight in 2015.

During the five-year gap, NASA must rely on Russia to ferry U.S. crew and cargo to the space station.


NASA has terminated its agreement with Rocketplane Kistler, one of the winners of a $500 million spaceship competition, and is reopening its competition for the $174.7 million that the company lost out on. The winner of that renewed competition would have to demonstrate the ability to deliver cargo to the international space station, just as Rocketplane Kistler was required to do.

The termination came a month after NASA put Rocketplane Kistler on notice that it was in danger of losing out on further money because it hadn't met the required financial and technical milestones. The company was supposed to raise $500 million in private investment by May, but the company didn't hit that goal - and as a result stopped development work on its K-1 launch vehicle.

The official notice following up on that warning was delivered to the Oklahoma-based company this afternoon, said Alan Lindenmoyer, manager of the Commercial Crew and Cargo Program Office at NASA's Johnson Space Center.

Rocketplane's chairman, George French, said in a statement that he was "deeply disappointed in NASA's decision today."

"I am very proud of the technical progress that our superb team has made over the last year," he said. "Through August of this year, we received glowing reviews from NASA on the technical progress we are making on the K-1 vehicle, including cargo modules being developed specially for NASA. On the financial side, I believe we received more commitments from private investors to finance the K-1 program than any purely commercial space venture to date."

George French III, Rocketplane's business development associate as well as the chairman's son, told me that NASA's effort to promote space privatization was "a good program."

"We're very thankful for the opportunity to be involved, and we do hope to be involved sometime in the future," he said. "NASA needs an option to get privatization into space."

Rocketplane said it was still reviewing the details of NASA's decision.

After Rocketplane won the funding from NASA in August 2006, the company reorganized itself into two units: Rocketplane Kistler, which is focusing on the K-1; and Rocketplane Global, which is developing a completely different suborbital space plane. Today's action does not directly affect Rocketplane Global's operations, and in fact that unit is due to announce a new space plane design next week at the X Prize Cup in New Mexico.

NASA already had paid $32.1 million to Rocketplane Kistler for meeting earlier milestones. The company won't have to give that money back, but it will lose out on the remaining $174.7 million that was set aside for future milestones.

Lindenmoyer told reporters that Rocketplane Kistler could continue to work on the K-1 for NASA on an unfunded basis. It could also try to regain NASA funding in the renewed competition, along with other would-be contractors, he said.

"We would welcome a new proposal from them to be evaluated against other proposals from industry," Lindenmoyer said.

NASA needs to have a replacement system for resupplying the space station by 2010, when the space shuttle fleet is due to be retired. The $500 million effort - known as the Commercial Orbital Transportation Services program, or COTS - was designed to encourage low-cost options for that resupply. NASA has other options as well, including other countries' space transports (such as the Russian Soyuz or Europe's ATV) and eventually the agency's own Orion crew transportation vehicle.

The Orion is being developed under a multibillion-dollar contract with Lockheed Martin, and is due to enter service in the 2014 time frame.

NASA's agreements with Rocketplane Kistler and SpaceX, the other company to receive money through the COTS program, called for resupply capability to be demonstrated by 2010. But Lindenmoyer said the winners of second-round funding would not necessarily have to hit that 2010 goal.

"We have our need in that time frame," he told reporters. "It's not a requirement."

Several companies are waiting in the wings to vie for the leftover $174.7 million, including Transformational Space and PlanetSpace, as well as SpaceDev, Spacehab and Constellation Systems International. Lindenmoyer said other companies would be welcome to compete as well. That could include Rocketplane Kistler as well as SpaceX, which has hit all its milestones so far, he said.

Lindenmoyer said the full requirements for the second round would be issued Monday, and proposals would be due 30 days from the date of the announcement. NASA would select the winner or winners "as quickly as we can," he said.

"We hope by the first part of next year, 2008, no later than the first quarter of 2008 we should be in a position to complete the evaluation," Lindenmoyer said.

Lindenmoyer said NASA's parting of the ways with Rocketplane Kistler should not be seen as a failure for the COTS philosophy of encouraging private enterprise in space. The program is working "exactly the way it was designed," he said.

"This is not a traditional NASA program, so therefore, recognizing that level of risk that we are undertaking, and the potential payoff, it is not a surprise. ... We had a quantifiable risk, and it was mitigated by the fact that we were able to make a decision early on in the program," he said.

Lindenmoyer said the prospect of lowering the cost of spaceflight - first for cargo deliveries and eventually for crews - was worth the financial risk.

"This is incredibly important to NASA and the nation, so absolutely we will stick with it," he said.



Time To Plot A Comeback


It happened. Microsoft is now the underdog to Google in the game of technology world domination.

The Redmond, Wash.-based software giant faces growing competition in its core software business, which dominated the industry for two decades, and it hasn't had a bottom-line-galvanizing success in any other area recently. It was late to online advertising, letting Google all but run away with that sector. It hasn't had a big Web 2.0 hit yet. Thank god for Halo 3!

But Steve Ballmer isn't worried. The bombastic chief executive of Microsoft was brimming with confidence and good cheer about the company's future at the Web 2.0 conference on Thursday in San Francisco. He told a packed room that Microsoft is working hard on several fronts so it can become a "three- or four-trick pony," holding onto the top spot in business software, but also becoming a force in search, advertising and entertainment.

Just give Microsoft a little more time, Ballmer said.

"I'm happy with everything, but everything needs some improvement, sort of like your kids," said Ballmer, who strode onto the conference stage toting a venti-sized Starbucks iced tea. "There will be more operating systems releases. In the enterprise business we're going gangbusters, but there's so much of the enterprise market we haven't tapped."

Asked if he thought Google's word processing and spreadsheet applications were good, Ballmer was blunt: "No, I don't."

Microsoft has been criticized for getting into online search and advertising late and with services that weren't quite ready for prime time. But Ballmer said he's not upset and knows it will be a long haul to go head-to-head with Google. To Microsoft's advertising unit, he'd offer these encouraging words: "You're 3 years old and you're playing basketball with 12-year-olds. You're going to dunk on the other guys some day!"

On the acquisition front, Ballmer did not address questions about whether Microsoft will take a stake in Facebook. He did say, however, that he's pleased with his company's recently announced advertising partnership with the social networking site.

And what about Microsoft's rumored offer on the table for Yahoo!? Industry observers think the deal makes sense since Yahoo! would give Microsoft instant heft in search technologies. "That may or may not make sense to us or to Yahoo!" Ballmer said. "We believe in our independent path. We like what we're doing. If you talk to Jerry at Yahoo! he'd say they like what they're doing."

Ballmer indicated that buying Yahoo! would be a stretch. He said Microsoft would prefer to acquire 20 companies a year for up to $1 billion, rather than a single business that would cost far more.

Still, Ballmer knows Microsoft needs another hit, and soon. For the past year, the company's stock has waffled in the high $20s. It closed on Thursday at $31.16. The stock hasn't been north of $32 since 2003. And there's nothing like a new blockbuster product or service to lift share price. "If we produce an advertising business that generates another $5 to $10 billion profit, Wall Street will reward that," Ballmer predicted. "We just have to do that."

Yeah. High time to start working on that comeback strategy.


Lust trumps love when it comes to having sex

Love or lust?

Study finds there aren't many gender differences in reasons for intimacy.

After exhaustively compiling a list of the 237 reasons why people have sex, researchers found that young men and women get intimate for mostly the same motivations.

It's more about lust in the body than a love connection in the heart.


Europe edges closer to mobile phones on planes

The likelihood of mobile-phone usage being allowed on flights within Europe increased on Thursday after telecommunications regulator Ofcom issued a consultation on the matter.

The issue has been brewing for many years but has been hampered until now by concerns over safety and the commercial viability of business models. The regulator's new proposals are the result of negotiations within the European Union, and will therefore cover all European airspace--although what will happen with flights leaving that airspace remains to be seen.

Ofcom is proposing to allow a mobile base station to be installed on a plane. Calls would be routed by satellite and would be treated as if the user were roaming. The revenue would come from a deal between the airline and an onboard operator. Two operators are currently able to offer such a service: OnAir and Aeromobile.

According to an Ofcom representative, the drive toward the new proposals has come from OnAir (a joint venture with Airbus and the airline industry body SITA) and Aeromobile (a joint venture between the Norwegian telco Telenor and the transport communications company ARINC), rather than the airlines themselves. However, many airlines--including Ryanair, BMI and Air France--have previously expressed interest.

As is currently the case, all mobile telephony equipment would need to be switched off during landing and takeoff. It would then be allowed to be switched on above a minimum altitude of 9,842 feet (3,000 meters). The first phase of the service's introduction would enable GSM voice and GPRS data, but it may extend to 3G and beyond in the future.

Ofcom's representative conceded that, despite pan-European agreement and similar moves being undertaken in some other countries, including Australia, the service may hit problems when flying over countries without similar regulations.

"Potentially, once the system is up and running, when you fly into other airspace outside the EU you would have to comply with the individual countries' jurisdiction and their regulations," Ofcom's spokesperson said. "If they haven't got the system in place it might be turned off."

The U.S. Federal Aviation Authority ruled earlier this month that it would not allow mobile calls on planes for the foreseeable future. And the Daily Telegraph has launched a campaign against in-flight mobile use.


All the Energy We Could Ever Need? Space-Based Solar Power Looking Better

Published by the Pentagon's National Security Space Office, the report says the US should demonstrate the technology by building a pilot "space-based solar power" station, big enough to continuously beam up to 10 megawatts of power to the ground, in the next decade.

The good news? Beaming all the solar energy we could ever need down to Earth from space appears more feasible than ever before. The bad news? It's going to take a lot of money and political will to get there.

While the idea of sending giant solar panels into orbit around the Earth is nothing new - the idea has been kicked around with varying degrees of seriousness since the '60s and '70s - changing times have made the concept a lot more feasible today, according to a study released Oct. 10 by the National Security Space Office (NSSO). Fossil fuels are a lot more expensive, and getting harder to access, than they were in past decades. And technology advances are making possible today projects that were all but inconceivable in years past.

"The magnitude of the looming energy and environmental problems is significant enough to warrant consideration of all options, to include revisiting a concept called Space-Based Solar Power (SBSP) first invented in the United States almost 40 years ago," the report's executive summary states.

Oil prices have jumped from $15 a barrel to $80 a barrel in less than a decade. Beyond the emergence of global concerns over climate change, American and allied energy security is now under threat both from actors that seek to destabilize or control global energy markets and from increased competition for energy from emerging global economies.

By collecting solar energy before it passes through the Earth's atmosphere, losing much of its power, a space-based solar power system could provide the planet with all the energy it needs and then some, the NSSO report said. The output of a single one-kilometer-wide band of solar panels at geosynchronous orbit would equal the energy in all the world's remaining recoverable oil: an estimated 1.28 trillion barrels.
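A rough back-of-envelope check makes the scale of that claim plausible. The orbit radius, solar constant, and energy-per-barrel figures below are standard approximations I am assuming, not numbers taken from the NSSO report, and the calculation ignores conversion and transmission losses.

```python
# Back-of-envelope: how long would a 1 km-wide band of panels around
# geosynchronous orbit take to intercept as much energy as all remaining
# recoverable oil? Constants are assumed approximations.
import math

SOLAR_CONSTANT = 1366.0      # W/m^2, sunlight intensity above the atmosphere
GEO_RADIUS_M = 42_164_000.0  # geosynchronous orbit radius in meters
BAND_WIDTH_M = 1_000.0       # the report's one-kilometer-wide band
J_PER_BARREL = 6.1e9         # approx. energy content of a barrel of crude
BARRELS = 1.28e12            # remaining recoverable oil, per the report

band_area = 2 * math.pi * GEO_RADIUS_M * BAND_WIDTH_M  # m^2 of panel
power_w = SOLAR_CONSTANT * band_area                   # watts intercepted
oil_energy_j = J_PER_BARREL * BARRELS                  # joules in all oil

days_to_match = oil_energy_j / power_w / 86_400
print(f"{days_to_match:.0f} days")  # on the order of 250 days
```

Even with these crude inputs, the band matches the energy of the entire oil reserve in under a year, which is the sense in which the report's comparison holds up.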

Because it didn't have the time or funds to study the feasibility of space-based solar power the traditional way, the NSSO's Advanced Concepts Office (known as "Dreamworks") developed its report through a unique strategy: an open-source, Internet-based forum inviting worldwide experts in the field to collaborate online. More than 170 contributors joined the discussion, with the mission to answer one question:

Can the United States and partners enable the development and deployment of a space-based solar power system within the first half of the 21st Century such that if constructed could provide affordable, clean, safe, reliable, sustainable, and expandable energy for its consumers?

Their answer, delivered in the form of the Oct. 10 report: it's possible, but a lot remains to be done.

The study group ended up making four major recommendations. First, it said, the U.S. government should move to resolve the remaining unknowns regarding space-based solar power and act effectively to allow for the technology's development. Second, the government should also reduce as much as possible the technical risks faced by businesses working on the technology. Third, the government should set up the environment - policy, regulatory and legal - needed to develop space-based solar power. And, fourth, the U.S. should commit to becoming an early demonstrator, adopter and customer of space-based solar power and set up incentives for the technology's development.

"Considering the development timescales that are involved, and the exponential growth of population and resource pressures within that same strategic period, it is imperative that this work for 'drilling up' vs. drilling down for energy security begins immediately," the NSSO report stated.

If it could be done, space-based solar power would have incredible potential, the NSSO said: It could solve our energy problems, deliver "energy on demand" for troops in the field, provide a fast and sustainable source of energy during humanitarian disasters, and reduce the risk of future conflict over dwindling or risky energy supplies.

Considering that, over the past 30 years, both NASA and the Department of Energy have invested a meager $80 million in space-based solar power research (compared to $21 billion over the last half-century for nuclear fusion, which still remains out of reach as a feasible power source), maybe it's time to direct our research energies, and dollars, upward.


Sony Forks Over Chip Production To Toshiba

Sony and Toshiba are collaborating on chip development:

Sony announced on Thursday a plan to sell its loss-making multimedia microprocessor operations to Toshiba for an undisclosed amount, hoping that its trusted joint venture partner can do a better job of supplying the brains for the PlayStation 3.

Following a month of leaks to various media outlets by unnamed sources, the world's second-largest consumer electronics company announced late on Thursday that it will unload production lines for its Cell processor to Toshiba in a joint venture arrangement, for a reported amount of about 100 billion yen ($856.38 million).

The sale, drawn up in preliminary form as a nonbinding memorandum of understanding, is the latest swing of the ax by Sony Chief Executive Howard Stringer, who has cut the workforce and closed factories to boost profitability. In February he promised to cut back on development costs for the expensive, loss-making chips and to consider outsourcing production to outside partners.

Toshiba appears to be the best available buyer: Along with IBM, Toshiba helped Sony develop the Cell, which bundles multimedia game features onto a single chip using 65-nanometer technology. Cell is produced in a plant in Nagasaki, in southwestern Japan; costly investment would be needed to prepare it to produce chips using next-generation 45-nanometer technology.

In addition to the sale of the Cell line, Toshiba is also taking over the manufacturing equipment for a line of image-processing chips also used in the PlayStation 3. Both sides were mum on how much the sale was worth but Nikkei Business Daily reported before the announcement that the sale price was about 30 billion yen ($256.92 million).

In a joint announcement, the two companies said Sony would transfer to Toshiba its advanced 300-millimeter wafer line fabrication facilities installed in a plant operated by its subsidiary, Sony Semiconductor Kyushu Corp., by the end of March 2008. The facilities house the Nagasaki Technology Center, responsible for developing the Cell line.

While ownership of the assets would go to Toshiba, Sony and its gaming unit, Sony Computer Entertainment, would jointly participate in the production process as a 40% minority shareholder in a new joint venture to be set up in April with Toshiba, which would hold the remaining 60%.

Sony shares swooned after initial press rumors of the sale a month ago, but on Thursday afternoon, they were up 30 yen, or 0.55%, at 5,430 yen ($46.57).

The sale allows Sony to pass on the heavy cost of microprocessor development to Toshiba, Japan's largest microchip maker. It could possibly lower Sony's procurement costs for Cell chips if Toshiba can reap production efficiencies from commercializing the chips in a broader range of applications. Sony will also be able to invest the proceeds of the sale to bolster its world-leading position in image-processing chips for its digital cameras and cell phones.

For Toshiba, buying the Cell line would give it a huge upgrade in the system chip business, where it is lagging far behind Intel and Samsung, anchored by Sony as a reliable buyer.

Both companies said they would jointly advance the Cell chips to the next stage of technology: Toshiba was reported by Nikkei Business Daily as intending to roll out a 45-nanometer version of the Cell in two years and to employ the cutting-edge chips in personal computers and flat-panel televisions.

The market has been concerned about Sony's growth prospects over the longer term beyond its recent obsession with asset sales and cost cutting. The sale of the Cell line follows last week's partial sale of Sony Financial, which brought the unit public in a 320 billion yen ($2.74 billion) share sale. (See: "Sony Cashes In On Financial Unit")

While it is planning to sink more money into research and development to reclaim technical leadership, exciting new growth areas seem far off, and it will now have to rely on Toshiba, a competitor on the consumer electronics front, for future generations of microprocessors for its games consoles.


DNA pioneer James Watson says he is 'mortified' by race comments

LONDON: The DNA pioneer James Watson today apologised "unreservedly" for his apparent claim that black people are less intelligent than whites.

"I am mortified about what has happened," he told a group of scientists and journalists at the launch of his new book, Avoid Boring People, at the Royal Society in London.

The American scientist at the center of a media storm over comments suggesting that black people were not as intelligent as whites said Thursday he never meant to imply that the African continent was genetically inferior, adding that he was mortified over the attention his words had drawn.

James Watson, who won the Nobel Prize for co-discovering the molecular structure of DNA, has been sharply criticized in Britain for reportedly saying tests showed Africans did not have the same level of intelligence as whites.

In its profile of Watson, The Sunday Times Magazine quoted him as saying he was "inherently gloomy about the prospect of Africa" because "all our social policies are based on the fact that their intelligence is the same as ours - whereas all the testing says not really."

Watson's interview in the magazine received wide play, touching off a furious reaction in Britain. The Independent newspaper put Watson on its front page Wednesday, and on Thursday the Daily Mail devoted a column to criticism of his "incendiary claim."

Watson, who arrived in Britain on Thursday to promote his new book, "Avoid Boring People: Lessons From a Life in Science," appeared at a reception Thursday night at the Royal Society, Britain's leading scientific academy. The Associated Press was refused entry to the event, described by his publicist as a private gathering with friends. But in a written statement given to the AP, Watson said he was "mortified by what had happened."

"I cannot understand how I could have said what I am quoted as having said," he said. "To all those who have drawn the inference from my words that Africa, as a continent, is somehow genetically inferior, I can only apologize unreservedly. That is not what I meant. More importantly from my point of view, there is no scientific basis for such a belief."

Kate Farquhar-Thomson, his publicist, refused to say whether Watson believed The Sunday Times had quoted him accurately. "You have the statement. That's it, I'm afraid," she said.

Watson, 79, is a molecular biologist who serves as chancellor of the Cold Spring Harbor Laboratory in New York, a world leader in research into cancer and neurological diseases. The laboratory issued a statement saying its board of trustees vehemently disagreed with his remarks and that they were "bewildered and saddened if he indeed made such comments."

The author of several books, Watson has been well-known in Britain since his days at Cambridge University in the 1950s and 1960s on the trail of DNA's molecular structure. Watson, Francis Crick and Maurice Wilkins won the 1962 Nobel Prize for their work on the subject.

In the magazine interview, Watson was quoted as saying he opposes discrimination and believes that "there are many people of color who are very talented." But he also was quoted as saying that while he hopes that everyone is equal, "people who have to deal with black employees find this not true." Watson's statement did not directly address those remarks.

The interview caused outrage in Britain.

David Lammy, the government's skills minister, said Thursday that Watson's remarks were "deeply offensive" and would "succeed only in providing oxygen" for the British National Party, a small, far-right political party that has been accused of being racist.

"It is a shame that a man with a record of scientific distinction should see his work overshadowed by his own irrational prejudices," Lammy said. "It is no surprise to me that the scientific community has condemned this outburst, and I think people will recognize these comments for what they are."

Watson has caused controversy in the past, reportedly saying that a woman should have the right to abort her unborn child if tests could determine that it would be homosexual.

He also suggested a link between skin color and sex drive, proposing a theory that black people have higher libidos.

Jan Schnupp, a lecturer in neurophysiology at Oxford University, said Watson's remarks "make it very clear that he is an expert on genetics, not on intelligence."

Schnupp said undernourished and undereducated people often perform worse on intelligence tests than the well off.

"Race has nothing to do with it, and there is no fundamental obstacle to black people becoming exceptionally bright," Schnupp said.
