Presently, most robots are programmed to perform specific tasks, and they perform them with absolute precision. But when precise movements are replaced with thoughtful ones, the robot will need to learn how to perform a task before carrying it out. It will no longer be the precise machine it once was, but one that is prone to errors, just as we humans are.
Len Calderone for RoboticsTomorrow
It’s a beautiful sunny day. You’re sitting on a park bench absorbing the sun’s rays. Sitting next to you is a robot in human form. The shiny skin gives it away. You look over and the robot looks back. Your first thoughts are: Does this robot recognize me as a human? Is it thinking? Can it think?
For a robot to operate in a human environment, it needs mobility as well as manipulation abilities; but since the human environment is constantly changing, a robot must also have cognitive skills, not just fixed, repetitive moves. A robot must be able to understand its spatial surroundings and make decisions based on existing conditions.
Such ability cannot be preprogrammed. Therefore, the robot must have the capacity to learn from and adapt to the humans around it if it is to be of any help. If a robot is to serve humans, it must improve its capabilities in a constant process of acquiring new knowledge and skills.
Most robots today have the capacity to sense, move, and perform some action. Future robots will need to display social behavior and communicate with us humans about the activity going on at the time. The Defense Advanced Research Projects Agency (DARPA) was established in 1958 to prevent strategic surprise from U.S. adversaries by maintaining the technological superiority of the U.S. military. The Pentagon is funding a team of researchers to develop a robotic brain that would allow robots to function independently of a computer. Unlike traditional artificial intelligence systems that rely on conventional computer programming, this one sees and thinks like a human brain.
Imagine this robot coming to get you! Is the Terminator that far in the future?
The above robot traverses a simulated hallway containing a very tall step and a thin walkway, balancing and leaping across narrow terrain while making its own autonomous decisions and keeping upright. It uses its strong arms to balance itself as it climbs the step, then leaps down, stretching its legs to continue its journey along the thin edges of a gutted hallway floor.
What sets this robot’s brain apart from other mechanical brains is that it has nano-scale interconnected wires that form billions of connections, much like a human brain, and this brain is capable of remembering information. Each connection is a synthetic synapse, which is what allows a neuron to pass an electrical or chemical signal to another cell. Conventional computers move information from memory to a processor, but this brain processes information in a totally new way, using microscopic wires to imitate the electrical and chemical pulses sent from cell to cell within the human brain.
Professor Bram Van Heuveln, who organized the Cognitive Robotics Lab, said, "Suppose we wanted to build a robot to catch fly balls in an outfield. There are two approaches: one uses a lot of calculations -- Newton's law, mechanics, trigonometry, calculus -- to get the robot to be in the right spot at the right time, but that's not the way humans do it. We just keep moving toward the ball. It's a very simple solution that doesn't involve a lot of computation, but it gets the job done."
"As a cognitive scientist, I want this to be built on elements that are cognitively plausible and that are recyclable -- parts of cognition that I can apply to other solutions as well," said Van Heuveln. "To me, that's a heck of a lot more interesting than the computational solution."
Early investigations in the lab show how a more cognitive approach, employing limited resources, can easily outpace more powerful computers using a brute-force method.
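As a concrete rendering of the heuristic Van Heuveln describes, here is a minimal sketch in which the fielder never computes a landing point; at every timestep it simply steps toward the ball's current position on the ground. The speeds, launch numbers, and timestep are illustrative assumptions, not details from the lab's work.

```python
import math

# Minimal sketch of the "just keep moving toward the ball" heuristic.
# The fielder never solves for the landing point; every tick it steps
# toward the ball's current ground position.  All numbers here are
# illustrative assumptions.

DT = 0.05             # simulation timestep, seconds
G = 9.81              # gravity, m/s^2
FIELDER_SPEED = 6.0   # fielder run speed, m/s

def chase_fly_ball(ball_xy, ball_v, ball_z, ball_vz, fielder_xy):
    bx, by = ball_xy
    vx, vy = ball_v
    fx, fy = fielder_xy
    while ball_z > 0.0:                        # until the ball lands
        # projectile motion of the ball
        bx, by = bx + vx * DT, by + vy * DT
        ball_vz -= G * DT
        ball_z += ball_vz * DT
        # heuristic: step toward the ball's current ground position
        dx, dy = bx - fx, by - fy
        dist = math.hypot(dx, dy)
        if dist > 1e-9:
            step = min(FIELDER_SPEED * DT, dist)
            fx += step * dx / dist
            fy += step * dy / dist
    return math.hypot(bx - fx, by - fy)        # miss distance at touchdown

miss = chase_fly_ball(ball_xy=(0.0, 0.0), ball_v=(5.0, 0.0),
                      ball_z=1.5, ball_vz=14.0, fielder_xy=(14.0, 6.0))
print(f"fielder is {miss:.2f} m from the ball when it lands")
```

With these illustrative numbers the fielder ends up essentially on top of the ball without ever predicting where it will come down.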
Robotic applications are being studied in a cognitive context, because our future robots will live in human-inhabited environments where interaction and communication require cognitive skills. Research focuses on endowing robots with cognitive capabilities that are key elements of autonomous systems, such as perception processing, attention allocation, anticipation, planning, and reasoning.
The biggest challenge for a robot lies in making sense of the world around it. We are good at fabricating and actuating robots, but in order for them to use their abilities to the fullest, they need to reason about their surroundings; a robot servant, for example, needs to be able to tell jam from ketchup. Because they are able to embrace uncertainty and the rules of probability, robots can make sense of their surroundings like never before.
To function in unstructured surroundings, a robot needs to use sensing to understand the world relative to itself. Sensing is the key to successful robots, and probability is the key to successful sensing. A robot cannot be sure about the true state of objects and surroundings, so it needs to quantify how reliable each sensor reading is. This is accomplished using the mathematics of probability. These probabilities represent how likely each possible value is, thereby quantifying uncertainty.
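As a concrete illustration of how probability quantifies sensor uncertainty, here is a minimal sketch of a Bayesian update: the robot starts with a prior belief that a doorway ahead is open, then revises it after each noisy reading. The 0.9 and 0.2 sensor-accuracy figures are illustrative assumptions, not numbers from the article.

```python
# Minimal sketch of using probability to quantify sensor uncertainty.
# The robot keeps a belief that a doorway ahead is open and updates it
# with Bayes' rule after each noisy reading.

P_HIT_GIVEN_OPEN = 0.9     # sensor says "open" when the door really is open
P_HIT_GIVEN_CLOSED = 0.2   # sensor says "open" when the door is actually closed

def update_belief(p_open, reading_says_open):
    """One Bayesian update of the belief that the door is open."""
    if reading_says_open:
        likelihood_open, likelihood_closed = P_HIT_GIVEN_OPEN, P_HIT_GIVEN_CLOSED
    else:
        likelihood_open, likelihood_closed = 1 - P_HIT_GIVEN_OPEN, 1 - P_HIT_GIVEN_CLOSED
    numerator = likelihood_open * p_open
    return numerator / (numerator + likelihood_closed * (1 - p_open))

belief = 0.5                                  # no idea to start with
for reading in [True, True, False, True]:     # a noisy sequence of readings
    belief = update_belief(belief, reading)
    print(f"belief that the door is open: {belief:.3f}")
```

Each reading nudges the belief up or down rather than forcing a hard yes/no decision, which is exactly how the robot keeps track of what it does not know for certain.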
To move around, a robot has to know where it is, and it also needs to identify what it encounters. In other words, the robot needs to know where it can go and where it can’t. This means that the robot must recognize the ground and obstacles. To recognize the environment, the robot uses algorithms. A kind of laser radar (LiDAR) is used to distinguish passable areas from impassable ones. The robot can then determine the correct path to travel.
The LiDAR pulse bounces off whatever it encounters
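As a rough sketch of how LiDAR returns might be split into passable and impassable ground, the robot can compare the height change between neighboring returns against what it can physically climb over. The (distance, height) data format and the 0.15 m clearance threshold below are illustrative assumptions; real point clouds are three-dimensional and far denser.

```python
# Rough sketch of labeling LiDAR returns as passable or impassable ground.
# Each return is reduced here to (distance_along_heading, height).
# The 0.15 m clearance is an assumed limit on what the robot can
# step or roll over.

MAX_CLEARANCE = 0.15   # metres of step the robot can negotiate

def label_returns(returns):
    """Return a list of (distance, 'passable' | 'impassable') labels."""
    labels = []
    previous_height = 0.0
    for distance, height in sorted(returns):
        step = abs(height - previous_height)
        labels.append((distance, "passable" if step <= MAX_CLEARANCE else "impassable"))
        previous_height = height
    return labels

# A strip of ground with a curb roughly 0.3 m tall at 2.0 m out
scan = [(0.5, 0.02), (1.0, 0.03), (1.5, 0.05), (2.0, 0.35), (2.5, 0.36)]
for distance, label in label_returns(scan):
    print(f"{distance:.1f} m ahead: {label}")
```

A path planner can then treat the impassable returns as obstacles and search only through the passable cells.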
In making identifications, the robot views an object and compares its features against those the object would be expected to have, in order to determine what the entity is. For example, if the robot senses an object that has the general shape of a human, it will then look for certain features, such as a head, arms, and legs, to make a positive identification.
The basic idea is that the algorithm can search through training examples and count the number of times various features appear in association with the object of interest. This gives the robot the probability that a given feature will be found for each part of the object.
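One simple way to render that counting idea in code is a naive Bayes-style classifier: count how often each feature co-occurs with each object class in labeled examples, then score a newly observed feature set by multiplying the resulting probabilities. The tiny feature vocabulary and example counts below are illustrative assumptions.

```python
from collections import defaultdict

# Sketch of the counting idea: estimate, from labeled training examples,
# how often each feature co-occurs with each object class, then score a
# new set of detected features naive-Bayes style.

def train(examples):
    """examples: list of (object_label, set_of_features)."""
    class_counts = defaultdict(int)
    feature_counts = defaultdict(lambda: defaultdict(int))
    for label, features in examples:
        class_counts[label] += 1
        for f in features:
            feature_counts[label][f] += 1
    return class_counts, feature_counts

def score(label, features, class_counts, feature_counts):
    """P(features | label) * P(label), with add-one smoothing."""
    total = sum(class_counts.values())
    p = class_counts[label] / total
    for f in features:
        p *= (feature_counts[label][f] + 1) / (class_counts[label] + 2)
    return p

examples = [
    ("human", {"head", "arms", "legs"}),
    ("human", {"head", "legs"}),
    ("chair", {"legs", "seat", "back"}),
    ("chair", {"legs", "seat"}),
]
class_counts, feature_counts = train(examples)
seen = {"head", "arms", "legs"}          # features detected on a new object
for label in class_counts:
    print(label, score(label, seen, class_counts, feature_counts))
```

Running this, the "human" class scores far higher than "chair" for an object showing a head, arms, and legs, which is the probabilistic identification the paragraph describes.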
Now that the robot knows where it is and what is around it, it can either react to its surroundings or plan its next move. If reactive, the robot analyzes the situation, and when a certain event occurs, it performs a predefined reaction.
Planning algorithms are more complicated, as they look ahead and choose the best procedure from multiple options. Using these algorithms, the robot can predict the outcome of each of its possible actions, evaluate each one using a metric, and then select the best action to take.
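The contrast can be sketched in a few lines. In the snippet below, a reactive rule maps the current obstacle distance straight to a reaction, while a simple one-step planner predicts each candidate action's outcome, scores it with a cost metric, and picks the lowest-cost action. The state, the candidate actions, and the cost metric are all assumptions made for the example.

```python
# Illustrative sketch contrasting a reactive rule with one-step planning.
# The state (distance to the nearest obstacle, metres), the actions, and
# the cost metric are assumptions made for the example.

def reactive(distance_to_obstacle):
    """Reactive: a fixed condition -> reaction mapping."""
    return "stop" if distance_to_obstacle < 0.5 else "go_forward"

def predicted_distance(distance, action):
    """Crude model of what each action does to the obstacle distance."""
    effect = {"go_forward": -1.0, "turn_left": 0.3, "turn_right": 0.3, "stop": 0.0}
    return distance + effect[action]

def plan(distance_to_obstacle):
    """Planner: predict each action's outcome, score it, pick the best.
    Metric: prefer to keep moving, but never predict a collision."""
    def cost(action):
        predicted = predicted_distance(distance_to_obstacle, action)
        if predicted <= 0.0:
            return float("inf")            # predicted collision
        return (0.0 if action == "go_forward" else 1.0) + 1.0 / predicted
    return min(["go_forward", "turn_left", "turn_right", "stop"], key=cost)

for d in (3.0, 0.8, 0.4):
    print(f"{d} m: reactive -> {reactive(d)}, planned -> {plan(d)}")
```

At 0.8 m the reactive rule still charges forward, while the planner foresees the collision one step ahead and turns instead, which is the benefit of evaluating predicted outcomes against a metric.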
Presently, most robots are programmed to perform specific tasks, and they perform them with absolute precision. But when precise movements are replaced with thoughtful ones, the robot will need to learn how to perform a task before carrying it out. It will no longer be the precise machine it once was, but one that is prone to errors, just as we humans are. Whenever there are choices to be made, there is a probability that a wrong choice will be made. A cognitive robot will need to learn by watching humans in specific conditions, or by being placed in trial-and-error situations, where it learns from its mistakes.
As mankind builds the ultimate humanoid, the real question is: Where will all this take us?
For additional information:
- http://www.cs.toronto.edu/~hector/Papers/cogrob.pdf
- http://www.cogniron.org/final/RA5.php
- http://cogprints.org/6238/1/tikhanoff-et-al-final-acm.pdf
- http://cdn.intechopen.com/pdfs/58/InTech-Cognitive_robotics_robot_soccer_coaching_using_spoken_language.pdf
- http://www.slideshare.net/IBMIndiaSS/watson-white-paper
- http://felipe.trevizan.org/papers/trevizan05:lowCostRC.pdf
- http://www-kasm.nii.ac.jp/papers/takeda/private/00/RSJAR98041.pdf
- http://www.robot.uji.es/EURON/pdfs/Lecture_Notes_IROS07.pdf
- http://link.springer.com/chapter/10.1007%2F978-3-540-74565-5_3#
- https://www2.lirmm.fr/lirmm/interne/BIBLI/CDROM/ROB/2010/IROS_2010/data/papers/0922.pdf