
Tuesday, March 18, 2008

Create a Robot with Artificial Intelligence


Brainy Robots Start Stepping Into Daily Life
Robot cars drive themselves across the desert, electronic eyes perform lifeguard duty in swimming pools and virtual enemies with humanlike behavior battle video game players.

These are some fruits of the research field known as artificial intelligence, where reality is finally catching up to the science-fiction hype. A half-century after the term was coined, both scientists and engineers say they are making rapid progress in simulating the human brain, and their work is finding its way into a new wave of real-world products.

The advances can also be seen in the emergence of bold new projects intended to create more ambitious machines that can improve safety and security, entertain and inform, or just handle everyday tasks. At Stanford University, for instance, computer scientists are developing a robot that can use a hammer and a screwdriver to assemble an Ikea bookcase (a project beyond the reach of many humans) as well as tidy up after a party, load a dishwasher or take out the trash.

One pioneer in the field is building an electronic butler that could hold a conversation with its master — à la HAL in the movie “2001: A Space Odyssey” — or order more pet food.

Though most of the truly futuristic projects are probably years from the commercial market, scientists say that after a lull, artificial intelligence has rapidly grown far more sophisticated. Today some scientists are beginning to use the term cognitive computing, to distinguish their research from an earlier generation of artificial intelligence work. What sets the new researchers apart is a wealth of new biological data on how the human brain functions.

“There’s definitely been a palpable upswing in methods, competence and boldness,” said Eric Horvitz, a Microsoft researcher who is president-elect of the American Association for Artificial Intelligence. “At conferences you are hearing the phrase ‘human-level A.I.,’ and people are saying that without blushing.”

Cognitive computing is still more of a research discipline than an industry that can be measured in revenue or profits. It is pursued in various pockets of academia and the business world. And despite some of the more startling achievements, improvements in the field are measured largely in increments: voice recognition systems with decreasing failure rates, or computerized cameras that can recognize more faces and objects than before.

Still, there have been rapid innovations in many areas: voice control systems are now standard features in midpriced automobiles, and advanced artificial reasoning techniques are now routinely used in inexpensive video games to make the characters’ actions more lifelike.

A French company, Poseidon Technologies, sells underwater vision systems for swimming pools that function as lifeguard assistants, issuing alerts when people are drowning, and the system has saved lives in Europe.

Last October, a robot car designed by a team of Stanford engineers covered 132 miles of desert road without human intervention to capture a $2 million prize offered by the Defense Advanced Research Projects Agency, part of the Pentagon. The feat was particularly striking because 18 months earlier, during the first such competition, the best vehicle got no farther than seven miles, becoming stuck after driving off a mountain road.

Now the Pentagon agency has upped the ante: Next year the robots will be back on the road, this time in a simulated traffic setting. It is being called the “urban challenge.”

At Microsoft, researchers are working on the idea of “predestination.” They envision a software program that guesses where you are traveling based on previous trips, and then offers information that might be useful based on where the software thinks you are going.
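
Microsoft has not said how "predestination" works under the hood, so the short Python sketch below is purely illustrative (the class name, the time-of-day bucketing and the frequency-counting approach are all assumptions, not Microsoft's method). It guesses a destination by remembering which one was most common for a given starting point and rough time of day.

from collections import Counter, defaultdict

class TripPredictor:
    """Toy destination guesser: for each (origin, time-of-day bucket),
    remember which destination occurred most often in past trips."""

    def __init__(self):
        # (origin, hour_bucket) -> Counter of destinations
        self.history = defaultdict(Counter)

    def record_trip(self, origin, hour, destination):
        # Bucket departure hours coarsely: 0-5, 6-11, 12-17, 18-23.
        self.history[(origin, hour // 6)][destination] += 1

    def predict(self, origin, hour):
        counts = self.history.get((origin, hour // 6))
        if not counts:
            return None  # no data for this context yet
        return counts.most_common(1)[0][0]

predictor = TripPredictor()
predictor.record_trip("home", 8, "office")
predictor.record_trip("home", 9, "office")
predictor.record_trip("home", 19, "gym")
print(predictor.predict("home", 8))   # -> "office"
print(predictor.predict("home", 20))  # -> "gym"

A production system would presumably reason probabilistically over road networks and update its guess mid-trip; the sketch only shows the shape of the idea: past trips in, likely destination out.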

Tellme Networks, a company in Mountain View, Calif., that provides voice recognition services for both customer service and telephone directory applications, is a good indicator of the progress that is being made in relatively constrained situations, like looking up a phone number or transferring a call.

Tellme supplies the system that automates directory information for toll-free business listings. When the service was first introduced in 2001, it could correctly answer fewer than 37 percent of phone calls without a human operator’s help. As the system has been constantly refined, the figure has now risen to 74 percent.

More striking advances are likely to come from new biological models of the brain. Researchers at the École Polytechnique Fédérale de Lausanne in Lausanne, Switzerland, are building large-scale computer models to study how the brain works; they have used an I.B.M. parallel supercomputer to create the most detailed three-dimensional model to date of a column of 10,000 neurons in the neocortex.
“The goal of my lab in the past 10 to 12 years has been to go inside these little columns and try to figure out how they are built with exquisite detail,” said Henry Markram, a research scientist who is head of the Blue Brain project. “You can really now zoom in on single cells and watch the electrical activity emerging.”
Blue Brain researchers say they believe the simulation will provide fundamental insights that can be applied by scientists who are trying to simulate brain functions.

Another well-known researcher is Robert Hecht-Nielsen, who is seeking to build an electronic butler called Chancellor that would be able to listen, speak and provide in-home concierge services. He contends that with adequate resources, he could create such a machine within five years.

Although some people are skeptical that Mr. Hecht-Nielsen can achieve what he describes, he does have one successful artificial intelligence business under his belt. In 1986, he founded HNC Software, which sold systems to detect credit card fraud using neural network technology designed to mimic biological circuits in the brain. HNC was sold in 2002 to the Fair Isaac Corporation, where Mr. Hecht-Nielsen is a vice president and leads a small research group.

Last year he began speaking publicly about his theory of “confabulation,” a hypothesis about the way the brain makes decisions. At a recent I.B.M. symposium, Mr. Hecht-Nielsen showed off a model of confabulation, demonstrating how his software program could read two sentences from The Detroit Free Press and create a third sentence that both made sense and was a natural extension of the previous text.

For example, the program read: “He started his goodbyes with a morning audience with Queen Elizabeth II at Buckingham Palace, sharing coffee, tea, cookies and his desire for a golf rematch with her son, Prince Andrew. The visit came after Clinton made the rounds through Ireland and Northern Ireland to offer support for the flagging peace process there.”

The program then generated a sentence that read: “The two leaders also discussed bilateral cooperation in various fields.”

Artificial intelligence had its origins in 1950, when the mathematician Alan Turing proposed a test to determine whether or not a machine could think or be conscious. The test involved having a person face two teleprinter machines, only one of which had a human behind it. If the human judge could not tell which terminal was controlled by the human, the machine could be said to be intelligent.

In the late 1950’s a field of study emerged that tried to build systems that replicated human abilities like speech, hearing, manual tasks and reasoning.

During the 1960’s and 1970’s, the original artificial intelligence researchers began designing computer software programs they called “expert systems,” which were essentially databases accompanied by a set of logical rules. They were handicapped both by underpowered computers and by the absence of the wealth of data that today’s researchers have amassed about the actual structure and function of the biological brain.
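
To make that description concrete, here is a miniature sketch of the idea (a generic toy with invented rules, not a reconstruction of any historical system): a database of facts plus if-then rules, applied repeatedly until nothing new can be concluded, a strategy known as forward chaining.

# A toy forward-chaining rule engine in the spirit of 1970s expert
# systems; the facts and rules are hypothetical, for illustration only.
facts = {"has_fever", "has_rash"}

rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_doctor_visit"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        # Fire a rule when all of its conditions are already known facts.
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now also contains the two derived conclusions

The handicap described above is visible even in this toy: the system knows only what its hand-written rules encode, with no model of how a biological brain reaches the same conclusions.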

Those shortcomings led to the failure of a first generation of artificial intelligence companies in the 1980’s, which became known as the A.I. Winter. Recently, however, researchers have begun to speak of an A.I. Spring emerging as scientists develop theories on the workings of the human mind. They are being aided by the exponential increase in processing power, which has created computers with millions of times the power of those available to researchers in the 1960’s — at consumer prices.

“There is a new synthesis of four fields, including mathematics, neuroscience, computer science and psychology,” said Dharmendra S. Modha, an I.B.M. computer scientist. “The implication of this is amazing. What you are seeing is that cognitive computing is at a cusp where it’s knocking on the door of potentially mainstream applications.”

At Stanford, researchers are hoping to make fundamental progress in mobile robotics, building machines that can carry out tasks around the home, like the current generation of robotic floor vacuums, only more advanced. The field has recently been dominated by Japan and South Korea, but the Stanford researchers have sketched out a three-year plan to bring the United States to parity.

At the moment, the Stanford team is working on the first steps necessary to make the robot they are building function well in an American household. The team is focusing on systems that will consistently recognize standard doorknobs and is building robot hands to open doors.

“It’s time to build an A.I. robot,” said Andrew Ng, a Stanford computer scientist and a leader of the project, called Stanford Artificial Intelligence Robot, or Stair. “The dream is to put a robot in every home.”

Greenhouse Gases: The Developed World's Role


Global warming and the greenhouse effect are now vital issues for the entire globe.
Most references to the role of China and India in global mitigation of emissions of greenhouse gases (GHGs) are generally simplistic. The typical argument put forward highlights the fact that these countries would continue increasing their emissions substantially, and, therefore, any efforts at reduction in the developed world would be more than neutralized by increases in the former. The reality is in fact much more complex. It is important to remember that the problem of human-induced climate change has been caused by the cumulative emissions of GHGs, with concentration levels growing from 280 parts per million of CO2 in pre-industrial times to around 380 parts per million currently. This increase is largely the result of a substantial increase in the use of fossil fuels in the industrialized world. For this reason, the UN Framework Convention on Climate Change (UNFCCC) included the principle of "common but differentiated responsibility", requiring the developed countries to take the first steps in mitigating emissions of GHGs. However, the record of the developed world has been less than satisfactory in this regard.

The Fourth Assessment Report of the IPCC recorded that global GHG emissions due to human activities increased 70% between 1970 and 2004. Developing countries have argued that they have a long way to go in eradicating poverty and providing the benefits of development to their people, for which they would have to burn fossil fuels on an increasing scale. Hence, if stabilization of the atmosphere is to take place for limiting the increase in global average temperatures, the developed countries would have to mitigate emissions of GHGs not only to compensate for the increase in emissions in the developing countries but also to bring about a net reduction globally. One of the stabilization scenarios assessed by the IPCC, which would limit the temperature increase to 2.0–2.4 °C, would require that GHG emissions peak by 2015 and decline thereafter. This presents a formidable challenge for the world as a whole.

The reality today is that the world knows and pursues only one distinct pattern of economic development. This is based on the industrialized world's dream of large homes with air conditioning, increasing use of electrical appliances using energy from fossil fuels, growing ownership of private cars and increasing consumption of goods and services in general, with an expanding carbon footprint on the earth's ecosystems. There are some exceptions to this universal dream, such as the kingdom of Bhutan which emphasizes gross national happiness as a measure of human progress or Iceland which has moved from total dependence on fossil fuels to the use of renewable energy resources. But these two countries together account for a population of about a million.

If the emerging economies were to continue on the same path of development as the industrialized world, there would be negative consequences not only for the globe but for these countries themselves. Already, the increasing consumption of energy is leading to concerns about their energy security. China, which was once self-sufficient in oil production, is now scouring every corner of the globe to gain access to hydrocarbon resources. The current pattern of development would not only place an unsustainable demand on the earth's natural resources but would cause an unmanageable level of pollution locally, with serious consequences for human health.

It is, therefore, essential for the emerging economies to question the sustainability of a pattern which emulates the model of growth followed in the industrialized world. A totally new path, much lower in intensity of natural resource use, including energy, is essential. But is this really possible? Can we accept a single globe and a globalized economy but two distinct patterns of development and standards of living? This would be clearly unacceptable to the deprived populations of the world and may become disruptive of peace and stability.

The award of the Nobel Peace Prize in 2007 to the Intergovernmental Panel on Climate Change (IPCC) and Mr. Al Gore signals the dangers of conflict if the earth's climate is not stabilized. Hence, the answer clearly lies in both parts of the globe moving towards a sustainable path of development, which would require some societies, which could be termed maldeveloped, to adopt the "sustainable retreat" described by James Lovelock. There is a need, therefore, for enlightened minds from north and south to get together to define the sustainability imperative and persuade political leaders of the benefits of growth with lower natural resource intensity in all countries — developed and developing. And for reasons of history, economic capability and institutional strength, the developed world must begin such a movement. Or can China and India define this new paradigm and lead?

Microsoft Jabs At Apple With Adobe Mobile Deal


Microsoft Windows Mobile users will now be able to use Adobe's Flash Lite and Reader software on their Windows Mobile devices. The announcement comes just two weeks after Microsoft said that its upcoming Silverlight browser plug-in would be available on millions of Nokia smartphones. According to reports, Microsoft is also working on a version of Silverlight for the iPhone.

While this may enable Windows Mobile phones to access Flash, PDFs and such, it still depends on your connection to the cell network, which arguably isn't up to supporting such features at this time. Plus, you have to be using a Windows Mobile phone... no thanks. I'll take my iPhone and all its sexiness over anything Windows/Microsoft. This will probably run as well as Vista.
In a bid to beef up the functionality of Windows Mobile devices and compete more effectively with Apple's iPhone, Microsoft on Monday announced it has licensed Adobe's Flash Lite and Reader LE offerings and will make both available to its worldwide OEMs.
Under the agreement, OEMs that license Windows Mobile will be able to offer a Flash Lite 3.x browser plug-in for Internet Explorer Mobile on Windows Mobile, as well as Adobe Reader LE, the mobile version of its PDF viewer.

The deal could be interpreted as Microsoft's acknowledgement that Silverlight, often referred to as a 'Flash Killer,' isn't yet ready for mobile devices. But Scott Stanfield, CEO of Vertigo Software, Richmond, Calif., sees the Microsoft-Adobe mobile tie-up as a simple business decision, and not an indication that Microsoft doesn't have confidence in Silverlight for mobile devices.

"This is an emerging market, and OEMs want to be able to provide the features that consumers want," said Stanfield. "Although Flash is entrenched on the desktop, the mobile space is still up for grabs."

At Apple's annual shareholders meeting earlier this month, CEO Steve Jobs criticized Adobe's Flash Lite and said it "is not capable of being used with the Web," comments that reportedly disappointed a significant number of Apple developers who've been waiting in earnest for Flash support on the iPhone.

Sean Christmann, experience architect with Adobe partner EffectiveUI, a Denver-based rich Internet application design and development firm, sees Microsoft's support for Adobe technologies as a shot across Apple's bow.

"This could be a pretty big deal, because now Windows has a leg up on the iPhone in terms of being able to access rich media on the Web, whether it's in the form of video or rich Internet applications on the mobile device," said Christmann. "For example, the mapping technology that currently leverages Flash provides more rich interface features than a standard Ajax application."

Silverlight 1.0 for Windows Mobile, which is due in Q2, will likely stand alongside Flash in the future as another way for users to access rich media content on mobile devices, and that diversity of options will likely present challenges for the iPhone, according to Stanfield.

"As more Websites support mobile versions of Silverlight and Flash, Apple could eventually find itself in a difficult spot with the iPhone," Stanfield said.

As they can with Flash, developers can use Silverlight to create video, animation, vector graphics and rich user interfaces, and many Microsoft partners who've been working with Silverlight say its features are on par with those of Flash.

Silverlight also includes 720p high-definition video, digital rights management, and the ability to design deep, interactive user interfaces, all features that Microsoft sees as opportunities to take market share from Adobe.

Intel Talks Up Six Cores and a New Vector for Graphics

Intel released a few incremental details about its future graphics chip on Monday, but left a lot of unanswered questions about the company's push into uncharted waters.

Larrabee, a "many-core" graphics processor scheduled for 2009 or 2010, will come with a brand-new set of vector-processing instructions as part of its design, said Pat Gelsinger, senior vice president and co-general manager of Intel's digital enterprise group. Vector-processing instructions are used to improve the performance of graphics and video applications; you may have heard of previous vector-processing implementations such as SSE4.
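
As a loose illustration of what vector processing buys (NumPy stands in here for hardware SIMD units; Larrabee's actual instruction set had not been disclosed): instead of transforming values one at a time in a loop, a single vector operation is applied to a whole block of data at once, the pattern that makes pixel- and sample-heavy workloads fast.

import numpy as np

# Scalar style: one multiply-add per loop iteration.
def scale_and_offset_scalar(pixels, gain, bias):
    out = []
    for p in pixels:
        out.append(p * gain + bias)
    return out

# Vector style: one bulk multiply-add over the whole array, the same
# data-parallel pattern that SSE-class instructions execute in hardware.
def scale_and_offset_vector(pixels, gain, bias):
    return pixels * gain + bias

pixels = np.arange(8, dtype=np.float32)
print(scale_and_offset_scalar(list(pixels), 2.0, 1.0))
print(scale_and_offset_vector(pixels, 2.0, 1.0))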

These new instructions, combined with Larrabee's compatibility with the x86 instruction set, will make life easier for software developers, according to Gelsinger. In addition to regular graphics tasks currently dominated by Nvidia and Advanced Micro Devices, Intel wants Larrabee to be able to take on a wider variety of tasks.

This is an emerging area of PC chip development--designing PC chips that use the best parts of graphics chips to improve performance. It's referred to by several names, with perhaps the most common label "GPGPU," or general-purpose graphics processing unit.

High-performance graphics chips are generally designed to do one thing, and do it fast. They aren't designed to handle the wide variety of workloads that PC chips tackle every day. As it becomes possible to add more and more cores to an individual chip, however, Intel, AMD, and Nvidia are investigating ways to build developer-friendly versions of graphics chips that can take on wider varieties of workloads.

The trouble is that "developer-friendly" line. Some of the current approaches for GPGPUs involve learning specialized programming techniques that are applicable just to that chip, and many of those are still very, very new compared with the 30-plus years of experience that people have had developing for the x86 instruction set.

"Attempts to create new programmable architectures are painful heavy-lifting over time, and for the most part they fail," said Gelsinger. And he should know: Intel's last attempt to create a new programmable architecture with the Itanium processor's EPIC instruction set hasn't come close to what Intel had once hoped to accomplish. Itanium hasn't been an abject failure, since people are buying the chips and development continues, but it's quite clear that Itanium is not, and will not be, the future of computing.

So this is Intel's pitch: it wants to get in on the graphics/multimedia game, since PC workloads are expected to head more and more in that direction. But it wants Larrabee to be like the release of a new Core 2 Duo processor: you'll have to learn how to use the new vector instructions to unlock the new performance, in the same manner you'd have to learn the new SSE4 instructions introduced last year with the Penryn chips, but you won't have to otherwise reinvent the wheel. Larrabee will also support familiar APIs (application programming interfaces) like DirectX and OpenGL, Gelsinger confirmed.

In the same briefing, Gelsinger also discussed Dunnington, Intel's upcoming six-core processor, gave an overview of the Nehalem microarchitecture that will replace Core as part of Intel's "tick-tock" strategy, and offered some tantalizingly vague details about Larrabee, the discrete graphics chip the company plans to release in either 2009 or 2010.
Dunnington will ship in the second half, said Pat Gelsinger, senior vice president and general manager of Intel's Digital Enterprise Group. Gelsinger, addressing media in San Francisco in a briefing on major topics on the slate for April's Intel Developer Forum in Shanghai, China, said the six-core CPU will be socket-compatible with the Santa Clara, Calif.-based chip maker's Caneland server/work station platform.

Featuring 16 MB of L3 cache and virtual machine migration technology called FlexMigration, Dunnington and Caneland will be "the industry's virtualization platform of choice for multi-processor servers," he said.

Intel's new microarchitecture, Nehalem, will go into production in the fourth quarter, Gelsinger said. The first devices will be fabricated with Intel's 45-nanometer process technology, though a 32nm version of Nehalem codenamed Westmere is planned for 2009.

Nehalem features an integrated memory controller, bringing Intel's microarchitecture more in line with rival Advanced Micro Devices' "native" CPU design. AMD is in a race of its own to move to the 45nm process, which Intel achieved last year. A source close to Sunnyvale, Calif.-based AMD said OEMs and system builders expect 45nm sample devices from the chipmaker in the mid-Q3 timeframe.

The benefits of Nehalem, Gelsinger said, included increased parallelism, faster "unaligned" cache accesses, branch prediction enhancements and simultaneous multi-threading. Dual-core, quad-core and even octal-core Nehalem devices will roll out of the fab before the end of the year, he said.

Larrabee, Intel's latest attempt at discrete graphics, would theoretically challenge Nvidia and AMD's ATI division in the two top graphics chip makers' own superheated sandbox. According to Gelsinger, Larrabee represents Intel's "bold view of the transformation of visual computing."

But Gelsinger offered few details about Larrabee other than that it would arrive in 2009 or 2010, and that software developers inform him they are "really excited" about the multi-cored GPU's programmability. Or to be more accurate, really excited about what Intel says is the programmability of Larrabee -- no one has one of the devices yet, Gelsinger admitted.

A spokesperson for Santa Clara, Calif.-based Nvidia expressed doubts about a product that's at least a year away from shipment, if not considerably more.

"If you look at every other time they've launched a discrete graphics part, it's a failure. The reason why is that our rate of innovation is so much faster on the GPU side than theirs is," said the spokesperson.

"They're targeting this to come out in two years. That means they're not targeting anything moving," he said, noting that both Nvidia and AMD would be in the midst of their own innovation cycles in the time period before Larrabee arrives.

Installation of Maintenance Robot at the Space Station Completed


Endeavour mission specialists Richard Linnehan and Michael Foreman (top center) installed a mechanical arm to the Canadian-built robot Dextre yesterday outside the International Space Station during the second of five planned spacewalks.

24hoursnews: Two astronauts Tuesday completed the third spacewalk of the latest shuttle mission to the International Space Station, putting the finishing touches on a Canadian-built, double-armed robot. Rick Linnehan and Robert Behnken worked for nearly seven hours to equip the robot with tools and spare parts before Dextre, as the robot is called, begins to help astronauts with their spacewalks and takes over some maintenance and service work on the expanding space station.

On an earlier spacewalk, Linnehan and Mike Foreman attached Dextre's two arms, each 3.35 metres long, to the station.

Dextre, which cost more than 200 million dollars, is the final component of the station's mobile servicing system.

Before the shuttle returns to Earth March 26, the Endeavour astronauts are to perform two more spacewalks outside the station on Thursday and Saturday.

Over the past year and a half, shuttles have transported huge elements to the space station construction site, including large solar collectors and truss structures.

The goal is to finish construction by 2010 with double the space for orbiting astronauts and an expanded capacity for experiments so the US space agency could retire the 26-year-old shuttles.

After the shuttle programme retires, Russia's Soyuz craft would continue to lift astronauts to the station, but because of their small size, they are unable to carry the huge construction pieces and experimental modules now being transported by shuttles.


More description from AP

Two spacewalking astronauts attached 11-foot arms to the international space station's huge new robot yesterday, preparing the giant machine for its handyman job on the orbital outpost.

The Canadian-built robot, named Dextre, will be 12 feet high and weigh 3,400 pounds when it's fully assembled. It is designed to assist spacewalking astronauts and possibly take over some of the tougher chores, such as lugging around big replacement parts.

Hours after the spacewalk, the shuttle crew tested Dextre's electronics, joints, and brakes, finding one small problem. One of the wrist joint brakes in Dextre's left arm slipped more than engineers wanted, but officials said that would not affect its operations.

The already challenging spacewalk turned grueling as Richard Linnehan and fellow astronaut Michael Foreman struggled to release one of the robot's arms from the transport bed where it had been latched down for launch.

Two of the bolts wouldn't budge, even when the astronauts banged on them and yanked as hard as they could. They had to use a pry bar to get it out.

The other arm came out much more smoothly and quickly, paving the way for Linnehan to pull up Dextre's body 60 degrees, like Frankenstein rising from his bed. That was the ideal position for plugging in Dextre's gangly arms to its shoulders.

"The whole team did a spectacular job today," Mission Control radioed the crew after the spacewalk. "You guys ought to be proud of yourselves."

Zebulon Scoville, the lead spacewalk officer for Endeavour's mission, said the ground team was ecstatic when Linnehan and Foreman got the last bolt out.

The seven-hour overnight spacewalk - which lasted into the early hours of yesterday - came close to being drastically altered or even delayed. For nearly two days, a cable design flaw prevented NASA from getting power to Dextre, lying in pieces on its transport bed.

But Dextre got the power it needed to wake up and keep its joints and electronics from freezing when the astronauts gripped it with the space station's mechanical arm on Friday night.

After the spacewalk, the crew hooked Dextre back up to the mechanical arm to keep the robot warm. That also allowed NASA to perform tests to ensure all of Dextre's electronics are working properly.

Dextre - short for dexterous and pronounced like Dexter - has seven joints per arm and can pivot at the waist. Its hands, or grippers, have built-in socket wrenches, cameras, and lights. Only one arm is designed to move at a time to keep the robot stable and avoid a two-arm collision.

Space station astronauts will be able to control Dextre, as will flight controllers on the ground. The robot will be attached at times to the end of the space station arm.

The crew will finish building Dextre during a third spacewalk, set for tonight.

Mission Control jokingly told the astronauts on Saturday that in order to guard against a mutiny, some new flight rules were being instituted, which recalled two of the "three laws of robotics" devised by writer Isaac Asimov.

"Dextre may not injure a human being or, through inaction, allow a human being to come to harm," Mission Control wrote in an e-mail. "Dextre must obey orders given to it by human beings, except where such orders would conflict with the First Law."

Green Buildings May Be Cheapest Way to Slow Global Warming


By incorporating elements such as a solar water heater in a house built in the 1940s, the Now House Project in Toronto is aiming to eliminate energy costs and reduce greenhouse gas emissions.

By building green--and retrofitting existing buildings--the countries of North America could cut greenhouse gas emissions by more than 25 percent

North American homes, offices and other buildings contribute an estimated 2.2 billion tons of carbon dioxide to the atmosphere every year—more than one third of the continent's greenhouse gas pollution output. Simply constructing more energy-efficient buildings—and upgrading the insulation and windows in the existing ones—could save a whopping 1.7 billion tons annually, says a new report from the Montreal-based Commission for Environmental Cooperation (CEC), an international organization established by Canada, Mexico and the U.S. under the North American Free Trade Agreement to address continent-wide environmental issues.
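
Those figures can be sanity-checked with back-of-the-envelope arithmetic; the snippet below uses only the numbers quoted above.

# Buildings emit ~2.2 billion tons/year, "more than one third" of the total.
buildings_emissions = 2.2e9                 # tons of CO2 per year
total_emissions = buildings_emissions * 3   # implied continental total, ~6.6e9 tons

potential_savings = 1.7e9                   # tons/year the report says could be saved
print(f"Savings as share of total: {potential_savings / total_emissions:.0%}")
# -> about 26%, consistent with the "more than 25 percent" claim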

"This is the cheapest, quickest, most significant way to make a dent in greenhouse gas emissions," says Jonathan Westeinde, chief executive of green developer Windmill Development Group in Ottawa, Ontario, and chair of the CEC report (who admits that green building regulations would be good for his business). But "buildings are not on the radar of any governments … despite being an industry that represents 35 percent of greenhouse gas emissions."

The report echoes the findings last year of the U.N. Intergovernmental Panel on Climate Change (IPCC), which identified building improvements as one way to reduce global warming pollution with "net economic benefit."

"Residential is a slam-dunk, it's just a matter of applying the technology we have," says IPCC author Mark Levine, a senior staff scientist at Lawrence Berkeley National Laboratory in California who studies these issues. "It's the biggest sector. It's the biggest savings."

Yet, "green buildings"—defined by the report as "environmentally preferable practices and materials in the design, location, construction, operation and disposal of buildings"—represents only 2 percent of the commercial edifices in the U.S. and 0.3 percent of new homes.

"In Europe, they are much ahead of us on building," Westeinde says. "As North Americans we pride ourselves on smaller government and bigger activity in the marketplace. We're not seeing the market react fast enough."

A big part of the problem, he says, is that many builders are loath to invest extra money for more efficient energy and water systems that only translate into cost savings for the eventual owners. Westeinde's company gets around this dilemma by working out long-term financing arrangements with owners, who agree to share a portion of their future cost savings with the developer.

He notes, too, that the price gap between energy-efficient and conventional construction might eventually disappear as green buildings become more common. "If everyone is using a certain type of window, that drives cost down," Westeinde says. "Green construction is only 4 percent of the market, which means the other 96 percent are creating a volume discount for themselves. But if green becomes 50 percent of the industry, that cost differential goes away."

The report calls for the Canadian, Mexican and U.S. governments to set specific targets for green buildings as well as to adopt continental standards, such as siting buildings in a way that maximizes passive solar heating and cooling.

"There is not that great a difference between green building in Oaxaca and Ohio," says Evan Lloyd, CEC director of programs. "It is the best systems and technology that can be applied to reduce energy consumption as well as paying attention to resource inputs."

More:
Combating Climate Change: Building Better, Wasting Less
New York City has nearly one million buildings—many of them woefully energy inefficient. Insulation is spotty at best, single-paned windows leak heat in winter and cool air in summer, and the untold millions of electric appliances they hold suck energy from the grid—many even when turned off. As a result, buildings contribute 79 percent of the Big Apple's 60 million metric tons of greenhouse gas (GHG) emissions, according to the Mayor's Long-Term Planning and Sustainability Office; the remaining 21 percent stems from cars, trucks and mass transit. In fact, New York City emits roughly the same amount of GHGs as the entire country of Ireland and contributes 1 percent of total U.S. emissions. Worldwide, buildings—both commercial and residential—contribute roughly one third of all GHG emissions despite covering only 0.2 percent of land worldwide. And experts say that reining in pollution from them will be key in the fight to contain climate change.

The good news? "By 2030, about 30 percent of the projected GHG emissions in the building sector can be avoided with net economic benefit," scientists write in the Intergovernmental Panel on Climate Change (IPCC) report on ways to stave off the effects of global warming.

Five international economic institutions around the world—ABN AMRO, Citi, Deutsche Bank, JPMorgan Chase and UBS—as well as four multinational energy services companies—Honeywell, Johnson Controls, Siemens and Trane—cut a $5 billion deal to work together to retrofit existing buildings in 16 of some of the world's biggest cities, including New York, London, Johannesburg, Karachi, Mexico City, Mumbai and Tokyo.

The effort calls for the banks to loan cash to the cities and building owners to make needed changes, and for the power companies to provide the retrofits and guarantees of energy savings. The cities and building owners will repay the loans from monies saved by reduced energy costs. It is hoped that the plan will stimulate the sluggish U.S. and world market for such retrofits: The U.S., which has had an energy efficiency retrofit program for 25 years, is the world leader in this area, yet fewer than 1 percent of the country's buildings have been redone.

"We will all benefit from this, whether small or big, rich or poor," says Berhanu Deresa, mayor of Addis Ababa in Ethiopia, whose city is not yet part of the initiative. For example, he notes, the many international buildings that hold United Nations and African Union offices in this capital city could be retrofitted with solar panels to generate electricity. "We have 13 months of sunshine," Deresa adds. "Even if [the U.N.] only equipped their own buildings, that would be a great help."

But retrofitting alone is not enough, says Mark Levine, a senior staff scientist at Lawrence Berkeley National Laboratory in California, a federal institution that studies a wide range of engineering and scientific issues. He says that builders constructing new commercial and residential buildings in the developing world must incorporate more energy-efficient—and more costly—materials and designs to control greenhouse emissions. "China built an awful lot of buildings very quickly," Levine says. "Until fairly recently [its] buildings were not very good from an energy point of view."

New and existing homes hold the greatest and cheapest hope for gains. According to the Clinton Climate Initiative (an effort to curb climate change launched by former President Bill Clinton), installing thick insulation and double-paned windows as well as using energy-efficient electrical appliances can cut emissions by as much as 50 percent. Simply using efficient ovens, dishwashers, refrigerators, washers, dryers and other appliances as well as compact fluorescent light bulbs instead of incandescent ones could negate the need for the energy output equivalent of 110 coal-fired 600-megawatt power plants that might otherwise be needed in the U.S. by 2020, according to a report by the consultancy think tank McKinsey Global Institute.
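
A quick worked check of the scale of that claim, again using only the figures quoted above:

plants_avoided = 110        # coal-fired plants the report says could be negated
plant_capacity_mw = 600     # megawatts per plant
total_mw = plants_avoided * plant_capacity_mw
print(f"Avoided generating capacity: {total_mw:,} MW (~{total_mw // 1000} GW)")
# -> 66,000 MW, about 66 GW of coal-fired capacity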

Adding features like solar generators or solar water heaters can pay for themselves over a longer term as well. "I think residential is a slam dunk," Levine says. "It's just a matter of applying technology that we already have."

Another area where cities can play a major role in warding off warming is waste management, enhancing efficiency by taking relatively cheap and simple steps such as capturing the latent power in landfills or recycling. "Things like waste recycling or waste minimization actually avoid greenhouse gas generation," says waste management expert Jean Bogner, a lead author of the IPCC report. For instance, making a beverage can from recycled aluminum uses only 5 percent of the energy it would take to create one from scratch. "If you recycle in the case of aluminum, you are talking about a 95 percent savings of energy," says IPCC author Lenny Bernstein, an environmental consultant.

The bottom line: buildings contribute a tremendous amount of GHG emissions whereas garbage produces relatively little (roughly 5 percent, according to the IPCC), but both offer immediate cost-effective remedies for climate change. A key to limiting global warming will be making sure that cities from New York to Addis Ababa implement those changes. "Now is the time to start doing real work," says Ken Livingstone, mayor of London. "Retrofitting our buildings and bringing down our carbon emissions."

A machine that churns out three-dimensional artificial tumours could help improve anti-cancer drug testing


A growth solution flows past traps, adding new cells to each growing tumour
A machine that churns out three-dimensional artificial tumours could help improve anti-cancer drug testing, researchers say. The "tumour factory" offers a better alternative to the flat cultured cells currently used to test new anticancer drugs.
"Cells grown in a monolayer are very useful in many studies," says Maria Teresa Santini of the Istituto Superiore di Sanità in Rome, Italy, who was not involved with the work . "But they cannot represent a three-dimensional tumour."
In a real cancer, different parts of a tumour are fed different amounts of oxygen. Cells growing in a flat monolayer, by contrast, all receive the same amount of oxygen and are exposed to equal quantities of nutrients. "Testing anticancer drugs on these models may be very inaccurate," Santini says.
Small clumps of cells known as 3D tumour spheroids provide a better model. But, until now, spheroids have had to be made one at a time in a process that produces different sizes each time.
Luke Lee's group at the University of California in Berkeley, US, has developed a technique to quickly generate spheroids of a standard size at low cost. The breast cancer drug Taxol has already been shown to be half as effective on spheroids as it is on 2D cell cultures.
Breaking the mould

At the heart of the Berkeley team's device is an array of U-shaped traps, each 35 micrometers across and 50 micrometers deep, made from polydimethylsiloxane (PDMS), a silicon-based organic polymer.
The array is held inside a chamber through which breast cancer cells suspended in growth solution flow. Cells that drift into the microscopic traps cannot flow out again, although the growth solution can escape through a small gap underneath each trap that is too small for a cell to pass through.
Over the course of a few hours, empty traps become filled with cells and, over about 7 hours, they attach to one another and form tumour spheroids containing 9-11 cells inside each trap. Solution constantly supplies the outer layer of the spheroids with fresh nutrients and oxygen, and removes waste excretions. The PDMS polymer also allows oxygen to reach the cells.
"The continuous flow in our device plays an important role in spheroid formation since it helps maintain the cells in a compact group," says Lee. "The cells have more chance to contact each other and adhere."
Mass production

Santini is impressed by Lee's study. "This represents an important development," she told New Scientist. "It's been difficult forming spheroids of the same characteristics before – having same size spheroids makes the tumour response to a particular concentration [of drug] more statistically relevant, since it can be repeated without error due to cell number."
But Wolfgang Mueller-Klieser of Johannes Gutenberg University in Mainz, Germany, is not convinced that growing uniform spheroids is necessary because collections of mismatched ones can quickly be sorted by size. "But, one great advantage here is that they can produce spheroids relatively rapidly – the standard methods take 10 days to produce spheroids 0.5 mm across."
Lee's spheroids are ten times smaller than that, points out Helene Bobichon of the University of Reims, France. "Making spheroids with too low a number of cells could be inefficient for getting results similar to an in vivo response," she says.
Lee says small spheroids represent tumours at an earlier stage of development, and he has plans to make larger ones using new prototypes of the device. "We focused on small spheroids for the proof of concept," he says.

WordPerfect Case: U.S. Supreme Court Rejects Microsoft Antitrust Appeal

The U.S. Supreme Court on Monday denied a Microsoft appeal to an antitrust case that dates back to Novell's desktop PC software business in the mid-1990s.

The move leaves standing a lower court ruling that says Novell can sue Microsoft under federal antitrust laws. Novell argued that Microsoft used its monopoly power to sink Novell's Quattro Pro spreadsheet and WordPerfect word processor.

Microsoft and Novell are partners now, but the companies used to be fierce competitors in the office software space. We know how that war turned out: Word and Excel gradually squeezed WordPerfect and Quattro Pro out of the market, and Novell eventually got out of the office software business. Today, the Supreme Court rejected Microsoft's attempts to halt an antitrust lawsuit filed by Novell, ruling that the case can proceed to trial. (Chief Justice John Roberts, a Microsoft shareholder, recused himself from the case.)
Novell sued Microsoft in 2004, accusing the software giant of anticompetitive practices in the office software market. According to Novell, Microsoft withheld technical information necessary to get WordPerfect and Quattro Pro up and running under Windows 95.

A key piece of evidence comes from a 1994 e-mail from outgoing Microsoft chairman Bill Gates in which he ordered that some details on Windows' inner workings not be provided to his company's competitors. "I have decided that we should not publish these extensions," wrote Gates. "We should wait until we have a way to do a high level of integration that will be harder for likes of Notes, WordPerfect to achieve, and which will give Office a real advantage... We can't compete with Lotus and WordPerfect/Novell without this."

Novell also accuses Microsoft of improperly using the operating system monopoly that got it in trouble with the Department of Justice. In order to raise the profile of Word and Excel, the software giant reportedly used its OS clout with manufacturers to keep WordPerfect and Quattro Pro from being bundled with new machines.
Microsoft argued unsuccessfully that the statute of limitations over its conduct had expired, and that Novell should be barred from bringing an antitrust lawsuit because it did not compete in the operating system market. The software giant has consistently blamed Novell for WordPerfect's woes, saying that the company's mismanagement was the real issue. "WordPerfect deliberately chose not to develop a version for early versions of Windows in the hope that depriving Windows of a key application would limit the success of Windows," argued Microsoft in 2004.

At the beginning of the 1990s, WordPerfect had almost half of the word processor market, while Word had about 30 percent between the DOS and Windows versions (the next-most-popular app was IBM's DisplayWrite). By June 1994, when Novell purchased WordPerfect, the application's market share had dipped below 20 percent while Word for Windows soared past the 50 percent mark. Novell unloaded WordPerfect to Corel in 1996 for $186 million, a mere fraction of the $855 million it had paid less than two years earlier.

Microsoft expressed its disappointment in the Supreme Court's decision in a statement given to Ars. "We realize the Supreme Court reviews a small percentage of cases each year, but we filed our petition because it offered an opportunity to address the question of who may assert antitrust claims," a company spokesperson said. "We look forward to addressing this and other substantive matters in the case before the trial court. We believe the facts will show that Novell's claims, which are 12 to 14 years old, are without merit."

Microsoft and Novell's 2006 interoperability and patent cross-licensing agreement allowed Novell to continue pursuing the WordPerfect case, and the Supreme Court's ruling clears the way for an eventual trial. Microsoft may choose to settle this case out of court, as US antitrust laws allow for damages to be trebled.

Planeta Celular: An Information Portal for the Mobile World

Planeta Celular covers cellular telephony, the Internet, technology, mobility, video clips, ringtones, telecommunications, networks, and SMS messages, along with tips for the iPhone (8 GB) and for Nokia N95, Motorola, Samsung, and LG handsets. It is your source for ringtones, games, screensavers, and accessories for your Motorola, Nokia, Samsung, LG, or Sony Ericsson phone.

AirPort Express Updated to Support 802.11n, Promising Speeds Five Times Those of 802.11g


Apple's tiny wireless base station has been updated with technology to support higher-speed connectivity in response to customer demand.
The iPod gets a lot of credit for Apple's success, and rightly so, but another Apple device deserves more attention than it has received: the humble Airport Express. If you use iTunes, have WiFi, and have a stereo somewhere in your house -- a combination that probably describes a decent percentage of the population -- you should consider the Airport Express, updated today to support faster 802.11n wireless networks.
Apple enthusiast sites including AppleInsider began hinting at a possible release of a new version of Airport Express over the weekend. The overall design of the device has not changed much, nor has the price -- it remains at $99.

802.11n promises speeds five times that of 802.11g, with about twice the range of today's wireless networking technologies.
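
In raw numbers, the "five times" claim works out as follows (802.11g's nominal 54 Mbps peak is a standard figure; the multiplier comes from the claim above, and real-world throughput is always lower than the nominal rate):

rate_g_mbps = 54               # 802.11g nominal peak data rate
rate_n_mbps = 5 * rate_g_mbps  # "five times" -> ~270 Mbps nominal
print(f"802.11g: {rate_g_mbps} Mbps, 802.11n (claimed): ~{rate_n_mbps} Mbps")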

At this price, it would be competitive with offerings from other network hardware providers. Most companies have put a similar $99 price point on their Wireless-N router products, although there are a few options available for less than that.

All Mac computers with Intel Core 2 Duo and Intel Xeon processors except the Mac mini and the 1.83 GHz iMac support the 802.11n technology in AirPort Express, Apple said. Windows computers have begun to ship with the technology built in as well.

A $1.99 download is required to enable Wireless-N on some computers, unless the Extreme model is purchased, in which case the enabler comes free with the device. Both Windows and Mac users can connect to the Express, and can also share a printer through the included USB port.

In addition, speakers can be connected to the device to allow users to stream audio from iTunes wirelessly.

The Airport Express is the third 802.11n product from Apple, following the $179 Airport Extreme router for up to 50 concurrent wireless users, and the 500 GB and 1 TB Time Capsule wireless storage systems, priced at $299 and $499 respectively.
