
Tuesday, April 1, 2008

Robotic mind

Robotics may soon make our lives easier and hassle-free: robot scientists are uncovering how the human mind works and applying those insights to robotic minds.
Schoolchildren struggle to learn geometry, yet they can still catch a ball without first calculating its parabola. Why should robots be any different? A team of European researchers has developed an artificial cognitive system that learns from experience and observation rather than relying on predefined rules and models.
Led by Linköping University in Sweden, the researchers in the COSPAL project adopted an innovative approach to making robots recognise, identify and interact with objects, particularly in random, unforeseen situations.

Traditional robotics relies on having the robots carry out complex calculations, such as measuring the geometry of an object and its expected trajectory if moved. But COSPAL has turned this around, making the robots perform tasks based on their own experiences and observations of humans. This trial and error approach could lead to more autonomous robots and even improve our understanding of the human brain.
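To make the contrast concrete, here is a minimal, hypothetical sketch in Python (not the COSPAL system itself) of what action-first, trial-and-error learning looks like: the agent is never given any geometry, it only observes whether an attempted insertion succeeds, and it gradually learns which peg goes where.

    import random

    PEGS = ["cube", "cylinder", "star"]
    HOLES = ["square", "round", "star-shaped"]

    # Hidden ground truth the agent never sees directly; it only observes
    # whether an attempted insertion succeeds.
    FITS = {("cube", "square"), ("cylinder", "round"), ("star", "star-shaped")}

    # Statistics learned purely from experience.
    successes = {(p, h): 0 for p in PEGS for h in HOLES}
    attempts = {(p, h): 0 for p in PEGS for h in HOLES}

    def choose_hole(peg, epsilon=0.2):
        """Mostly exploit what has worked before, sometimes explore at random."""
        if random.random() < epsilon or all(attempts[(peg, h)] == 0 for h in HOLES):
            return random.choice(HOLES)
        return max(HOLES, key=lambda h: successes[(peg, h)] / max(attempts[(peg, h)], 1))

    for trial in range(300):
        peg = random.choice(PEGS)
        hole = choose_hole(peg)
        attempts[(peg, hole)] += 1
        if (peg, hole) in FITS:      # the only feedback: did the action work?
            successes[(peg, hole)] += 1

    for peg in PEGS:
        best = max(HOLES, key=lambda h: successes[(peg, h)])
        print(f"{peg} -> {best}")

After a few hundred random pokes, the success counts alone are enough to pair each peg with its hole, which is the spirit of letting action precede perception.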

“Gösta Granlund, head of the Computer Vision Laboratory at Linköping University, came up with the concept that action precedes perception in learning. That may sound counterintuitive, but it is exactly how humans learn,” explains Michael Felsberg, coordinator of the EU-funded COSPAL.

Children, he notes, are “always testing and trying everything” and by performing random actions – poking this object or touching that one – they come to understand cause and effect and can apply that knowledge in the future. By experimenting, they quickly find out, for example, that a ball rolls and that a hole cannot be grasped. Children also learn from observing adults and copying their actions, gaining greater understanding of the world around them.

Learning like, and from, humans


Applied in the context of an artificial cognitive system (ACS), the approach helps to create robots that learn much as humans do and can learn from humans, allowing them to continue to perform tasks even when their environment changes or when objects they are not pre-programmed to recognise are placed in front of them.

“Most artificial intelligence-based ACS architectures are quite successful in recognising objects based on geometric calculations of visual inputs. Some people argue that humans also perform such calculations to identify something, but I don’t think so. I think humans are just very good at recognising the geometry of objects from experience,” Felsberg says.

The COSPAL team’s ACS would seem to bear that theory out. A robot with no pre-programmed geometric knowledge was able to recognise objects simply from experience, even when its surroundings and the position of the camera through which it obtained its visual information changed.

Getting the right peg in the right hole

A shape-sorting puzzle of the sort used to teach small children was used to test the system. Through trial and error and observation, the robot was able to place cubes in square holes and round pegs in round holes with an accuracy of 2mm and 2 degrees. “It showed that, without knowing geometry, it can solve geometric problems,” Felsberg notes.

“In fact, I observed my 11-month-old son solving the same puzzle and the learning process you could see unfolding with both him and the robot was remarkably similar.”

Another test of the robot’s ability to learn from observation involved the use of a robotic arm that copied the movement of a human arm. With as few as 20 to 60 observations, the robotic arm was able to trace the movement of the human arm through a constrained space, avoiding obstacles on the way. In subsequent trials with the same robot, the learning period was greatly reduced, suggesting that the ACS was indeed drawing on memories of past observations.

In addition, by applying concepts akin to fuzzy logic, the team came up with a new means of making the robot identify corresponding signals and symbols such as colours. Instead of specifying three numbers to represent a red, green and blue component, as used in most digital image processing applications, the team made the system learn colours from pairs of images and corresponding sets of reference colour names, such as red, dark red, blue and dark blue in a representation known as channel coding. Similar to how colours are identified by the human brain with sets of neurons firing selectively to differentiate green from black, for example, channel coding offers a biologically inspired way of representing information.
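As a rough illustration of channel coding (a sketch under assumed parameters, not the project's implementation), the snippet below encodes a scalar colour value, here a made-up hue-like number between 0 and 1, as the soft activations of a handful of overlapping cos-squared channels rather than as a single exact RGB triple. Nearby colours such as red and dark red then activate overlapping sets of channels, loosely the way populations of neurons respond.

    import numpy as np

    def channel_encode(value, centers, width):
        """Encode a scalar as soft, overlapping channel activations
        (cos^2 basis functions, a common choice in channel representations)."""
        d = np.abs(value - centers)
        return np.where(d < width, np.cos(np.pi * d / (2 * width)) ** 2, 0.0)

    # Hypothetical channels spanning a hue-like axis from 0 to 1.
    centers = np.linspace(0.0, 1.0, 6)
    width = 0.25

    for hue, name in [(0.02, "red"), (0.08, "dark red"), (0.62, "blue"), (0.70, "dark blue")]:
        print(name, np.round(channel_encode(hue, centers, width), 2))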

“As humans, we can use reason to deduce what an object is by a process of elimination, i.e. we know that if something has such and such a property it must be this item, not that one. Though this type of machine reasoning has been used before, we have developed an advanced version for object recognition that uses symbolic and visual information to great effect,” Felsberg says.
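A toy version of that elimination idea, purely hypothetical and far simpler than the project's reasoning engine, might look like this: each observed property removes the candidates it contradicts until one object remains.

    # Hypothetical knowledge base: object -> properties it must have.
    CANDIDATES = {
        "red cube": {"colour": "red", "shape": "square"},
        "blue cube": {"colour": "blue", "shape": "square"},
        "red ball": {"colour": "red", "shape": "round"},
    }

    def identify(observations):
        """Eliminate every candidate that contradicts an observed property."""
        remaining = dict(CANDIDATES)
        for prop, value in observations.items():
            remaining = {name: props for name, props in remaining.items()
                         if props.get(prop) == value}
        return list(remaining)

    print(identify({"colour": "red", "shape": "round"}))   # ['red ball']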

AMD unveils ATI FireGL V7700 for professionals


AMD unveiled the first commercially available 3D workstation graphics card with DisplayPort support, the ATI FireGL V7700. The new card provides superior rendering speed, 3D performance and color fidelity for Computer Aided Design (CAD), Digital Content Creation (DCC) and Medical Imaging professionals.

The card features a 10-bit display engine that can produce more than one billion colors at any given time. The ATI FireGL V7700 features 512MB of memory and a Dual Link DVI output. The combination of DisplayPort output and a Dual Link enabled DVI output generates a multi-monitor desktop over 5,000 pixels wide from a single accelerator. With native multi-card support, users can see more and do more using four displays driven by two ATI FireGL products in the same workstation.
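The "over 5,000 pixels wide" figure is easy to check, assuming each output drives a 2560 x 1600 panel (the usual maximum for both dual-link DVI and a single DisplayPort connection of that generation); the exact monitors are not specified by AMD here.

    dual_link_dvi_width = 2560   # assumed 2560 x 1600 display on the DVI output
    displayport_width = 2560     # assumed 2560 x 1600 display on the DisplayPort output

    print(dual_link_dvi_width + displayport_width)  # 5120, i.e. "over 5,000 pixels wide"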

ATI FireGL accelerators are engineered to offer an AutoDetect feature, which frees the end user from the burden of configuring the graphics driver to obtain optimal performance in supported applications. These applications are detected automatically when started by the user, and the graphics driver settings are configured accordingly. The problem of optimally configuring the driver when running multiple applications simultaneously is also eliminated.

The automatic detection feature recognizes applications when they are launched and reconfigures the graphics settings as users switch between them. Because sensible default settings are applied automatically, this powerful new feature also simplifies enterprise deployment, as there is no need to manually configure systems across the enterprise for software applications supported by AutoDetect.
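The behaviour described amounts to a per-application profile lookup. The fragment below is a hypothetical sketch of that idea, not AMD's driver code; the application names and settings are invented purely for illustration.

    # Hypothetical mapping from application executable to driver settings.
    PROFILES = {
        "3dsmax.exe": {"vsync": False, "aa_samples": 4, "buffer_flipping": True},
        "catia.exe":  {"vsync": True,  "aa_samples": 8, "buffer_flipping": False},
    }
    DEFAULT = {"vsync": True, "aa_samples": 2, "buffer_flipping": True}

    def settings_for(executable_name):
        """Return the tuned profile if the application is recognised, else defaults."""
        return PROFILES.get(executable_name.lower(), DEFAULT)

    print(settings_for("3dsMax.exe"))
    print(settings_for("unknownapp.exe"))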
Additional features of the ATI FireGL V7700 workstation graphics accelerator include:

512MB of memory enabling effortless real-time interaction with large datasets and complex models and scenes.

Dual Link DVI Output wherein the combination of DisplayPort output and Dual Link DVI Output generates a multi-monitor desktop over 5,000 pixels wide from a single accelerator. With native multi-card support, users can do more using four displays driven by two ATI FireGL products in the same workstation.

Application Certification wherein ATI FireGL workstation graphics accelerators are thoroughly tested and certified with major CAD and DCC applications.

AutoDetect wherein the ATI FireGL V7700 (based on a new generation GPU with 320 unified shader units) maximizes throughput by automatically directing graphics horsepower where it's needed most.

According to the company, the ATI FireGL V7700 3D workstation graphics accelerator will ship in April 2008 at an MSRP of US$1,099.

Numonyx: A new type of memory hits the market


Who doesn't want technology that's easier to use? If Numonyx's memory chip plans pan out, tech companies could save big and PC users could get the performance that's been promised for years.
Numonyx, the memory joint venture between STMicroelectronics and Intel, is already shipping samples of phase change memory (PCM) chips to customers and will start shipping PCM chips commercially later this year, CEO Brian Harrison said at a press conference Monday.

"We expect to bring it to market this year and generate some revenue," Harrison said. "It is one to two years before it becomes widely commercially available."

Hearing a CEO talk about existing samples and near-term commercial shipments is a big deal for PCM. The technology has been stuck in the proverbial "a few years away" phase for a long time.

"It could be cheaper than flash within a couple of years," analyst Richard Doherty in said in 2001, predicting the technology might hit the market in 2003.

"We are making good progress," Stefan Lai, one of Intel's flash memory scientists, said in 2002.

Gordon Moore, co-founder of Intel and the man for whom Moore's Law was named, had an article in the September 28, 1970 issue of Electronics predicting that Ovonics Unified Memory, another name for the same type of memory, could hit the market by the end of that decade. (The same issue of Electronics also included this article: "The Big Gamble in Home Video Recorders.")

The delays have largely stemmed from two sources. First, it's not an easy technology to master. In phase change memory chips, a microscopic bit of material gets heated up to between 150 and 600 degrees Celsius. The material is the same stuff found in rewritable CDs. The heat melts the bit, which when cooled solidifies into one of two states, an ordered crystalline one or a disordered amorphous one, depending on how fast the cooling takes place. The two states exhibit different levels of resistance to electrical current, and those levels of resistance are then read as ones or zeros by a computer. Data is born.
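A crude way to picture the read-out, using assumed numbers rather than real device parameters: the slowly cooled crystalline state has low resistance, the rapidly cooled amorphous state has high resistance, and a simple threshold on the measured resistance recovers the stored bit.

    # Illustrative, assumed resistance values; real devices differ.
    CRYSTALLINE_OHMS = 5_000      # slow cooling -> ordered, low-resistance state
    AMORPHOUS_OHMS = 500_000      # fast cooling -> disordered, high-resistance state
    THRESHOLD_OHMS = 50_000

    def write_bit(bit):
        """'Writing' selects the cooling profile, which fixes the final state."""
        return CRYSTALLINE_OHMS if bit == 1 else AMORPHOUS_OHMS

    def read_bit(resistance_ohms):
        """Reading compares the cell's resistance against a threshold."""
        return 1 if resistance_ohms < THRESHOLD_OHMS else 0

    for bit in (0, 1):
        cell = write_bit(bit)
        assert read_bit(cell) == bit
    print("round-trip ok")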

Both Intel and ST made a significant amount of progress in controlling the material in the past few years, Harrison said.

Size matters
Second, the makers of flash memory have continued to improve their technology. Back in 2001, some believed that flash would hit a wall at the 65-nanometer level of chip design. Then that got moved to 45 nanometers. Today, manufacturers mass-produce flash at 65 nanometers and have samples at 45 nanometers. Numonyx has samples of traditional NOR flash at 32 nanometers. Why switch when the existing technology continues to work?

Again, in the past few years, Intel and ST have made progress and figured out a way to produce PCM chips on the manufacturing lines developed for standard chips. That has eroded the barriers to bringing PCM out.

Although Philips, IBM, and others have made progress in PCM, only Samsung is close to coming out with chips commercially, Harrison said.

Why will the world want PCM? Performance, says Numonyx CTO Ed Doller. PCM chips can survive tens of millions of read-write cycles, he said, far more than flash. Reading data from PCM chips takes 70 to 100 nanoseconds, about as fast as NOR flash. Data can be written to the chips at a rate of 1 megabyte a second, on a par with NAND flash. There is also no erase cycle, making it similar to DRAM.

In other words, you have the best attributes of three different types of memory. Plus, PCM will potentially use far less power.

The cost premium is also coming down fast. By next year, Numonyx hopes to make PCM chips, using 45-nanometer processes, that can hold two bits of data per cell. If that's possible, those chips would compete in price with single-bit-per-cell NAND flash, the memory that's being put into solid-state drives today, said Doller.
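Holding two bits per cell means the read circuitry must distinguish four resistance levels instead of two. A minimal decoding sketch, with entirely made-up resistance bands, looks like this:

    # Hypothetical resistance bands (ohms) for a 2-bit multi-level cell.
    BANDS = [
        (0,        10_000,   0b11),
        (10_000,   60_000,   0b10),
        (60_000,   200_000,  0b01),
        (200_000,  10**9,    0b00),
    ]

    def decode(resistance_ohms):
        """Map a measured resistance to the 2-bit value of its band."""
        for low, high, bits in BANDS:
            if low <= resistance_ohms < high:
                return bits
        raise ValueError("resistance outside expected range")

    print(bin(decode(35_000)))   # 0b10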

But the most important thing is that scientists believe they will be able to increase the density of these chips comparatively easily. In the future, standard flash chips will need additional circuitry for error correction and other functions. Not so with PCM. The smaller the bits get, the less heat that will be required to flip them, Doller added.

"The most important thing is that it is scalable," Doller said.

Google is introducing new code to allow Google Docs to be used offline.


Google gears up for offline word processing
Users will be offered offline access to docs.google.com over the coming weeks, allowing them to work on documents when web access is unavailable.

On reconnecting to the internet, any changes will be automatically synchronised, the company said in a statement.

Offline editing is part of the Google Gears initiative, introduced in May last year to allow application developers to build offline features into their own programs.

Google Gears already works within the Google Reader news feed reader and third party applications such as RememberTheMilk.com.

Some industry watchers see the announcement as clearing a major hurdle ahead of a direct assault on Microsoft Office, but the inability to create new documents may persuade serious users to adopt a wait and see approach before switching sides.

"This is still early days. Google is working to make more web applications and functions work where connections are unavailable," said a spokesperson for the company.

Google claimed that the move is intended to give users "a taste of the future" and that next steps include the ability to edit spreadsheets and presentations.

This latest enhancement to Google's free suite of tools comes just as OpenOffice.org released the latest version of its free open source office suite.


Google Docs Now ‘Geared’ With Offline Functionality
Starting Monday, Google announced, offline access to Google Docs will be possible through Google Gears, an open-source browser extension, still in beta, that provides offline functionality for web applications. With Google Docs and Google Gears, users can work with their documents much as they would in a desktop application, without inconveniences such as the lack of an Internet connection.

In May last year, Google launched Google Gears as “an important step in the evolution of web applications,” making data availability problems when there’s no Internet connection an issue of the past. Eric Schmidt, Google CEO, said last year that Google Gears is “tackling a key limitation of the browser” and improves user experience in the “cloud.”

This year, Google wants to give users the chance to take the cloud with them anywhere, with an all-time accessibility: “With Google Docs offline (powered by Google Gears), I can take my little piece of the cloud with me wherever I go,” said Philip Tucker, Software Engineer for Google Docs, in the company’s blog.

All the user’s documents will be synced once the Gears extension is installed, and even if they don’t have an Internet connection, they will still be able to open and edit their documents. “When I lose my connection, I sacrifice some features, but I can still access my documents. Everything I need is saved locally,” said Tucker in the blog.

Google Gears currently offers support for Internet Explorer and Firefox on Windows Vista and Windows XP, Firefox on Mac OS X, and Firefox on Linux. Everything will work through the web browser when offline, and once the Internet connection is restored, the documents will sync with the server.
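The pattern being described, edit locally while offline and reconcile with the server once the connection returns, can be sketched in a few lines. The Python below is a hypothetical illustration of that queue-and-sync idea only; Google Gears itself is a JavaScript browser extension with its own API.

    import time

    class OfflineDocStore:
        """Toy local store: edits are saved locally and flushed to the server
        the next time the application notices it is online."""

        def __init__(self, server):
            self.server = server
            self.local = {}           # doc_id -> latest local text
            self.pending = []         # edits not yet confirmed by the server

        def edit(self, doc_id, text):
            self.local[doc_id] = text
            self.pending.append((doc_id, text, time.time()))

        def sync(self, online):
            if not online:
                return 0
            sent = 0
            while self.pending:
                doc_id, text, _ = self.pending.pop(0)
                self.server.save(doc_id, text)
                sent += 1
            return sent

    class FakeServer:
        def __init__(self):
            self.docs = {}
        def save(self, doc_id, text):
            self.docs[doc_id] = text

    server = FakeServer()
    store = OfflineDocStore(server)
    store.edit("trip-notes", "Draft written on the plane")   # offline edit
    print(store.sync(online=False))   # 0: nothing sent while offline
    print(store.sync(online=True))    # 1: edit flushed once reconnected
    print(server.docs)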

“It’s all pretty seamless,” said Tucker, as the user no longer needs to remember to save docs before leaving for a trip or to save changes when the internet connection is restored. “With the extra peace of mind, I can more fully rely on this tool for my important documents,” he added in the blog. For the time being, offline access is only available in English.
