Monday, March 31, 2008

New robotic intelligence


A new robot is able to learn by itself and can solve increasingly complex tasks with no additional programming.
Designers of artificial cognitive systems have tended to adopt one of two approaches to building robots that can think for themselves: classical rule-based artificial intelligence or artificial neural networks. Both have advantages and disadvantages, and combining the two offers the best of both worlds, say a team of European researchers who have developed a new breed of cognitive, learning robot that goes beyond the state of the art.

The researchers’ work brings together the two distinct but mutually supportive technologies that have been used to develop artificial cognitive systems (ACS) for different purposes. The classical approach to artificial intelligence (AI) relies on a rule-based system in which the designer largely supplies the knowledge and scene representations, making the robot follow a decision-making process – much like climbing through the branches of a tree – toward a predefined response.
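To make the "decision tree" metaphor concrete, here is a minimal, hypothetical sketch (not taken from the COSPAL project) of the rule-based style: every response is pre-programmed by the designer, and anything falling outside the hand-written rules leaves the system with no useful answer.

```python
# A hand-written decision tree: the designer supplies all the knowledge.
# The scene keys and responses below are illustrative assumptions only.

def classify_object(scene):
    """Climb a fixed tree of rules toward a predefined response."""
    if scene["shape"] == "square":
        if scene["size"] < 5:
            return "place in small square hole"
        return "place in large square hole"
    if scene["shape"] == "circle":
        return "place in round hole"
    return "no rule applies"  # anything unforeseen leaves the system stuck


print(classify_object({"shape": "square", "size": 3}))
```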

Biologically inspired artificial neural networks (ANNs), on the other hand, rely on processing continuous signals and a non-linear optimisation process to reach a response which, due to the lack of preset rules, requires developers to carefully balance the system constraints and its freedom to act autonomously.
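As a rough illustration of the bottom-up approach, and not code from the project itself, the sketch below trains a tiny feed-forward network by gradient descent on continuous inputs. There are no preset rules; the behaviour emerges from the learned weights, which is why the designer's job shifts to balancing constraints rather than writing responses by hand.

```python
# A tiny artificial neural network trained by non-linear optimisation
# (plain gradient descent) on the XOR problem, chosen only as a simple
# non-linear task. All parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # continuous inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # desired responses

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)           # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)           # output layer


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


for _ in range(5000):                                   # optimisation loop
    h = np.tanh(X @ W1 + b1)                            # hidden activations
    out = sigmoid(h @ W2 + b2)                          # network response
    d_out = (out - y) * out * (1.0 - out)               # squared-error gradient
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # responses approach the targets without explicit rules
```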

“Developing systems in classical AI is essentially a top-down approach, whereas in ANN it is a bottom-up approach,” explains Michael Felsberg, a researcher at the Computer Vision Laboratory of Linköping University in Sweden. “The problem is that, used individually, these systems have major shortcomings when it comes to developing advanced ACS architectures. ANN is too trivial to solve complex tasks, while classical AI cannot solve them if it has not been pre-programmed to do so.”

Beyond the state of the art

Working in the EU-funded COSPAL project, Felsberg’s team found that using the two technologies together solves many of those issues. In what the researchers believe to be the most advanced example of such a system developed anywhere in the world, they used ANN to handle the low-level functions based on the visual input their robots received and then employed classical AI on top of that in a supervisory function.

“In this way, we found it was possible for the robots to explore the world around them through direct interaction, create ways to act in it and then control their actions accordingly. This combines the advantages of classical AI, which is superior when it comes to functions akin to human rationality, and the advantages of ANN, which is superior at performing tasks for which humans would use their subconscious, things like basic motor skills and low-level cognitive tasks,” notes Felsberg.
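The division of labour described above might look roughly like the following sketch: a learned, sub-symbolic perception module hands labels and confidence values to a rule-based supervisor, which decides what to do with them. The class names, percept fields and threshold are illustrative assumptions, not the actual COSPAL architecture.

```python
# A hedged sketch of "ANN below, classical AI on top": the neural part
# interprets raw input, the symbolic part supervises the resulting actions.

class PerceptionNet:
    """Stands in for the neural network handling low-level visual input."""

    def perceive(self, image):
        # A real system would run a trained network on the image; here we
        # return a dummy percept with a confidence score.
        return {"label": "square_peg", "confidence": 0.92}


class Supervisor:
    """Classical, rule-based layer supervising the learned components."""

    def decide(self, percept):
        if percept["confidence"] < 0.5:
            return "explore"          # low confidence: gather more information
        if percept["label"].endswith("_peg"):
            return "insert_into_matching_hole"
        return "ignore"


percept = PerceptionNet().perceive(image=None)
print(Supervisor().decide(percept))   # -> insert_into_matching_hole
```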

The most important difference between the COSPAL approach and what had been the state of the art is that the researchers’ ACS is scalable. It is able to learn by itself and can solve increasingly complex tasks with no additional programming.

“There is a direct mapping from the visual percepts to performing the action,” Felsberg confirms. “With previous systems, if something in the environment changed that the low-level system was not programmed to recognise, it would give random responses but the supervising AI process would not realise anything was wrong. With our approach, the system realises something is different and, if its actions do not result in success, it tries something else,” the project coordinator explains.

“Like training a child or a puppy”

This trial-and-error learning approach was tested by making the COSPAL robot complete a shape-sorting puzzle, but without telling it what it had to do. As it tried to fit pegs into holes it gradually learnt what would fit where, allowing it to complete the puzzle more quickly and accurately each time.

“After visual bootstrapping, the only human input was from an operator who had two buttons, one to tell the robot it was successful and another to tell it that it had made a mistake. It is much like training a child or a puppy,” Felsberg says.
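In spirit, that training loop resembles simple reward-driven trial and error. The sketch below is a heavily simplified, hypothetical version rather than the project's own method: the “robot” tries peg/hole pairings, a stand-in for the operator's two buttons returns success or failure, and pairings that succeed become more likely to be chosen again.

```python
# Trial-and-error learning from binary success/failure feedback.
# Pegs, holes and the feedback function are illustrative assumptions.
import random

random.seed(0)
pegs = ["square", "round", "triangle"]
holes = ["square", "round", "triangle"]
value = {(p, h): 0.0 for p in pegs for h in holes}   # learned preferences
counts = {k: 0 for k in value}


def operator_feedback(peg, hole):
    """Stand-in for the human pressing the success or failure button."""
    return 1.0 if peg == hole else 0.0


for trial in range(1000):
    peg = random.choice(pegs)                        # a peg to place
    if random.random() < 0.1:                        # occasionally explore
        hole = random.choice(holes)
    else:                                            # otherwise repeat what worked
        hole = max(holes, key=lambda h: value[(peg, h)])
    reward = operator_feedback(peg, hole)
    counts[(peg, hole)] += 1
    value[(peg, hole)] += (reward - value[(peg, hole)]) / counts[(peg, hole)]

# After training, each peg maps to the hole that earned the most success.
print({p: max(holes, key=lambda h: value[(p, h)]) for p in pegs})
```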

Though a learning, cognitive robot of the kind developed in COSPAL constitutes an important leap forward toward the development of more autonomous robots, Felsberg says it will be some time before robots gain anything close to human cognition and intelligence, if they ever do.

“In human terms, our robot is probably like a two- or three-year-old child, and it will take a long time for the technology to progress into the equivalent of adulthood. I don’t think we will see it in our lifetimes,” he says.

Nonetheless, robots like those developed in COSPAL will undoubtedly start to play a greater role in our lives. The project partners are in the process of launching a follow-up project called DIPLECS to test their ACS architecture in a car. It will be used to make the vehicle cognitive and aware of its surroundings, creating an artificial co-pilot to increase safety no matter the weather, road or traffic conditions.

“In the real world you need a system that is capable of adapting to unforeseen circumstances, and that is the greatest accomplishment of our ACS,” Felsberg notes.
