Brian Scassellati and Cog

photo: © Robo Sapiens: Evolution of a New Species (Peter Menzel and Faith D'Aluisio / The MIT Press)

 

Entrepreneur by Bill Fox

 

Part 3

MODULAR FRONTIERS


First posted Feb 2005


OVERVIEW

It is an interesting paradox in robotics that analogies with human body parts are at once immediately useful and severely self-limiting.

Unlike human body parts, robot parts can be mixed and matched. As one example, the same operating system that enables a small robot to clean sludge out of a pipe might also be used for a Hollywood entertainment robot. A robot company might have to get established in business by first making money in a conventional waste application before broadening to the Hollywood variety.

The future is in plastics, young graduate. Also unlike humans, robot dimensions have a high degree of plasticity (or malleability). Through re-engineering, robots can be shrunk, expanded in size, or chopped up into multi-bot systems. In addition to serving as self-contained systems, they can also be embedded inside existing machines, and even inside plants, animals, and minerals.

If we change metaphors from human body parts to a poker hand, let's look at the cards we are currently being dealt in the state of the art of robotics modules:

BRAINS:

The Neal Scanlan Studio Performance Animation Controller (PAC) used by Hollywood

No need to feel threatened quite yet, since robot intelligence currently rivals that of insects, and sometimes that of two-month-old human infants (except when a robot is hooked to a supercomputer, which I will get to later). It may be another ten to fifteen years before the average autonomous robot functions at the level of a dog, and another five to ten years after that before robots rival most humans. For folks who can't wait, there are four strategies worth mentioning: remote presence, layering, external database management, and (you guessed it) augmenting processing power via an external supercomputer.

Remote presence: In this approach, human and robot brains work together in partnership. Imagine, as one example, a human standing in front of his computer screen, connected over the Internet to the camera vision of a robot located far away. He wears a full body exoskeleton that extends over his hands, arms, torso, and legs, attached to his computer by a cord like a mouse. As he moves his hands, he sees the robot move its humanoid arms and hands in exactly the same way. As he guides a robot hand to grasp a pole, the haptics (touch) system allows him to feel resistance in his own hand as the robot grasps the actual object. SARCOS Robotics has an excellent demo video web page of humans with full-body and hand exoskeletons manipulating robots.
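
For tinkerers who want to see the bones of the idea, here is a minimal Python sketch of the forward half of such a control loop. Everything in it is an assumption for illustration: the read_exoskeleton() function, the UDP transport, and the robot.example.net endpoint are invented stand-ins, not SARCOS's actual system, which also streams haptic feedback in the reverse direction.

import socket
import struct
import time

ROBOT_ADDR = ("robot.example.net", 9000)   # hypothetical robot endpoint

def read_exoskeleton():
    """Placeholder: return the suit's current joint angles in radians."""
    return [0.0] * 24                      # e.g., 24 instrumented joints

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    joints = read_exoskeleton()
    # Pack the joint angles and stream them to the remote robot ~50x/second.
    sock.sendto(struct.pack(f"{len(joints)}f", *joints), ROBOT_ADDR)
    time.sleep(0.02)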

The Walt Disney Imagineering Company coined the term "animatronics" in the 1960s to connote "robotic puppeteering" for Disney theme park creatures and other entertainment applications, so the underlying concept has been around for a while. The Scanlan PAC depicted above is an example of a system that enables human hand and finger movements to bring fantasy creatures to life.

Where the future really becomes now in the remote presence concept is when an artificial intelligence program can remember all the moves made by a human and even fill in gaps in the instructions, giving the robot various measures of autonomy. As an example, if a human directs a robot to climb some stairs, the human does not have to guide each step. In addition, the robot is able to recognize on its own when it has reached the top of the stairs and return to a normal walking mode.

At the 11-14 Oct 2004 Carnegie Mellon Robotics Institute 25th Anniversary event, I listened to neuroscientist Dr. Mitsuo Kawato, Director of ATR CNS Laboratories in Japan, discuss a collaborative R&D project near Osaka using a SARCOS robot. He has called on the Japanese government to commit $446 million a year for the next three decades to the "Atom Project," an effort to build a robot with the mental, physical, and emotional capabilities of a five-year-old child and to mimic inside the robot all the different functional areas of the human brain. Already the SARCOS robot has the autonomous ability to play a drum and bounce a ball on a racket. Dr. Kawato showed us a video featuring a Japanese TV reporter having fun playing a game of Scrabble against the robot, with Georgia Tech researcher Dr. Chris Atkeson standing between the contestants.

One might imagine agricultural applications where people who live in deserts or economically depressed areas would be happy to offer their services over the Internet from their homes to operate robots in lush areas. Complex farm tasks might be broken into lots of little tasks handled by an “Ag Ant” or "coolie-labor" army of robots. This is another way one could bring factory assembly line concepts to the field. As individual robot behaviors become increasingly shaped with artificial intelligence programs, a human handler could supervise more robots simultaneously on a split screen on his computer.

Remote presence can not only mean one person controlling several robots, but also the reverse, where many people guide one robot from different locations. Dr. Robin Murphy, Director of the Center for Robot-Assisted Search and Rescue in Tampa, FL, made this point in her Oct 2004 talk at CMU. As search and rescue robots were going through the World Trade Center rubble following the 9-11 disaster, the grayed-out appearance of dust-covered objects made it hard to identify what they were and undermined depth perception. The more people with expertise in different areas watching the robot camera pictures, the better the interpretive process. In addition, when search and rescue robots are racing against time to save victims, overlapping viewers and operators eliminate robot downtime. According to Dr. Murphy, about 20% of collapsed building survivors are entombed, and their life expectancy rapidly diminishes after 48 hours. Also, in addition to looking for survivors, emergency personnel have an urgent need to assess the probability of further collapse, trace the origins of any smoke, and conduct other immediate surveys and assessments of the site. People in their homes or offices in faraway cities can watch the cameras and take over the controls just as easily as someone on site.

Lastly, getting back to the aforementioned "Ag Ant" concept, it may be better to have lots of little robots swarming around, each carrying its own cameras to view the same object from different angles, and each being monitored by many different people in different places, than to have everything concentrated in one robot.

For remote presence applications run wirelessly or over the Internet, the delay in round trip signal time (latency) can become a big issue depending on the distances and switching units involved. An MIT study in which humans controlled robots moving nuclear materials determined that when latency rises above half a second, a person is unable to keep track of the actions commanded. (Brooks, p. 133). The half second delay typically occurs when someone is trying to operate over the Internet from the other side of the planet.
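
Here is a rough sketch of how a teleoperation program might apply that half-second rule, assuming a hypothetical blocking echo() call that bounces a packet off the robot; when the measured round trip exceeds the threshold, the operator drops into a "move-and-wait" style of supervisory control:

import time

MAX_DIRECT_LATENCY = 0.5   # seconds, per the MIT finding cited above

def round_trip_latency(echo):
    """Time one round trip, assuming echo() blocks until the robot replies."""
    t0 = time.time()
    echo(b"ping")
    return time.time() - t0

def control_mode(latency):
    # Below the threshold, stream commands directly; above it, issue one
    # discrete command at a time and wait for confirmation.
    return "direct" if latency < MAX_DIRECT_LATENCY else "move_and_wait"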

Currently RoboDynamics of Los Angeles appears to be leading the charge to produce a lower-priced remote presence robot platform. Its MILO, the personal robotic assistant, is priced at $2,999. MILO can serve as a remote mobile sentry for home or industrial use. It consists of a camera and microphone mounted several feet above a mobile circular base. On each side of the camera eye are panels that hold various types of blinking lights that have been programmed to show twirl or star patterns to indicate certain moods. The operator uses a joystick to control the robot's movement and has a panel menu on his computer to select certain blinking light moods, make certain sounds, or click a digital snapshot of the current screen view. As part of its strategy to promote early adoption and innovative feedback, RoboDynamics offers a $500 discount to tinkerers who qualify for its early adoption program.

In the specialty high-end arena, Pittsburgh-based Mobot has produced socially interactive robots with remote-presence potential for museums and for a classroom "field trip" inside an aviary.

I personally think the really big mass market commercial applications will require certain features that have not yet come down far enough in price or in "intellectual load" (difficulty of operation). They will likely include remote hand manipulation, artificial intelligence behavioral shaping, a high degree of autonomous learning and sensing, extreme ease of control, easy navigation systems, a modularized ability to fit on a wide variety of exotic propulsion units, and two-way video-conferencing. Such applications may include snake robots that reach disaster victims amidst rubble, or spider bots that work beside oil workers on rigs.

 

Intuitive Surgical's Da Vinci System

On the commercial hand manipulation side, surgeons are currently using remote presence robots created by Intuitive Surgical Corp (Nasdaq: ISRG) for serious procedures. The surgeons insert their hands into loops that control robotic manipulators elsewhere in a room. A brain surgeon can damp down his precision movements by gearing the robot’s movements in ratios such as one to five relative to his own. The robot system costs about $1 million and can just as easily be located 100 km away as in the same room. (Brooks, p. 224).
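
The gearing idea itself is simple arithmetic. Here is a toy illustration (my own, not Intuitive Surgical's software): every hand displacement is multiplied by the chosen ratio before it reaches the manipulator.

SCALE = 1.0 / 5.0   # robot tool moves 1 mm for every 5 mm of hand movement

def scale_motion(hand_delta_mm, scale=SCALE):
    """Map a surgeon's hand displacement (x, y, z in mm) to a tool move."""
    return tuple(axis * scale for axis in hand_delta_mm)

print(scale_motion((5.0, -10.0, 2.5)))   # -> (1.0, -2.0, 0.5)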

RP-6 making rounds

 

 

In the mobile video-conferencing arena, InTouch Health has developed the RP-6, informally dubbed "Robo Doc" by some journalists and hospitals. It allows a doctor at home or in his office to control a robot in a hospital over the Internet. Currently the robots are being used in emergency rooms and intensive care facilities, and their use is growing. A doctor can navigate an RP-6 down corridors, into patient rooms, or onto elevators using a joystick. Sensors and safety software help avoid inadvertent collisions. The latency between commands and robot response over the Internet is about 200 milliseconds. InTouch Health's research suggests that this latency has been very acceptable, and does not become an issue until it approaches 400 milliseconds. (This is consistent with the half-second maximum latency determined by the MIT study mentioned earlier in this article.) Interestingly enough, InTouch is finding doctor acceptance without including such Robonaut-like features as hand manipulators. The company is focusing on making the system as user-friendly as a cell phone to speed its acceptance within its specialty niche.

 

The Muscle Suit

Another interesting variation of the remote presence concept in the propulsion and ease of control arena is the Intelligent Assist Device (IAD). In this case, the robot is actually on the human himself. As the human initiates a movement with an arm or leg, the robot augments it. As an example, Hiroshi Kobayashi, one of Japan's leading robot scientists, has designed a muscle suit to help disabled people move about. An exoskeleton can also be used by industrial workers and military personnel to help lift heavy objects.

Apart from these kinds of specialty niches, the most obvious use for current remote presence technology at current prices is security/surveillance. The problem here from a business viewpoint is that the robot competes with the static "sensor suite" concept. It is still much cheaper to rig lots of cameras, microphones, and other sensors in one location than to buy, operate, and maintain a mobile robot. But as processing power steadily increases and component costs steadily decline, this will some day change.

Layered intelligence: The human body actually consists of several "brains" in different locations that interface with each other. If all human motion had to be directed by "central planning" from the skull area, we would not be able to walk, digest food, or keep our hearts beating. Thanks to reflex circuits in the spinal cord and the autonomic nervous system, when we touch a hot object, our hands pull away faster than nerve impulses can travel to the brain and back.

Genghis

Subsumption architecture refers to layered and distributed intelligence in a robot, analogous to these lower "brains." One example is the robot Genghis, now in the Smithsonian Air & Space Museum, built by Dr. Rodney Brooks and Colin Angle in the late 1980s. Intelligence modules at lower levels allow the legs to react immediately to their environment in certain ways that are not controlled by the higher "brain."
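
The flavor of subsumption is easy to convey in a few lines of Python. This is a pedagogical sketch of the layering idea, not Brooks's actual code: each behavior is independent, and a higher layer overrides the ones below only when it has something to say.

def wander(sensors):
    return "forward"                    # lowest layer: always keep moving

def avoid(sensors):
    if sensors.get("obstacle_ahead"):
        return "turn_left"              # local reflex, no "brain" involved
    return None                         # defer to lower layers

def seek_light(sensors):
    if sensors.get("light_bearing"):
        return "steer_to_light"         # highest layer pursues a goal
    return None

LAYERS = [seek_light, avoid, wander]    # highest priority first

def act(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:         # first non-None output wins
            return command

print(act({"obstacle_ahead": True}))    # -> turn_left (avoid subsumes wander)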

Another example involves robotic "cockroaches" being built at Case Western Reserve University. The robot designers are trying to replicate nature as closely as their man-made materials will permit. Although cockroaches in nature appear to move very quickly over uneven surfaces, slow motion studies reveal that their legs actually stumble, bumble, and feel their way along, and are not precisely coordinated by whatever a cockroach has for a main brain.

Robo-Knee demonstration, photo © Yobotics

Last, but not least, layered intelligence can also be found in robot prosthetics (artificial limbs for humans). While attending the March 2004 robot show in Cambridge I saw a video of a girl with an amputated leg. The “before” video showed her pivoting sideways as she struggled to hobble up and down stairs with a long stiff artificial leg. The “after” video showed her using a robotic leg produced by start-up firm Yobotics of Boston, MA. The robot leg has an autonomous sensing capability to flex at the knee. The girl was able to walk up and down stairs looking fairly close to normal.

I was part of a large audience that watched this video. It drew a strong emotional response that led to spontaneous applause.

External Database Management: This is another important "brain" function for robots. ActivMedia CEO Jeanne Dietsch described an example of a data-gathering bot in an interview. “Hewlett-Packard uses our PatrolBot in their data centers. It's got temperature sensors. It drives around several times a day, and the data from these patrols is used to create 3-D models of the heat in the facility. If there's a problem in the facility, the computer or a human can send the robot out to check on it.”

According to David Hyams, chief technology officer of Seattle-based robotics systems integrator CoroWare, a robot can be the tip of the iceberg of a database management and retrieval system. Whether a task involves picking grapes or checking for metallurgical stress, the robot can sense, permanently store, and analyze everything that is relevant about the jobs it is doing. As an example, imagine a robot of the future that grows grapes. The robot/data system can monitor how much water and sunlight each plant is getting, and know exactly how much water or artificial light to add to produce the desired quantity and quality of grapes.
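
Here is a bare-bones sketch of what the data side of such a system might look like, with an invented table layout; the point is simply that every reading is stamped with time and position so that later queries can mine it:

import sqlite3
import time

db = sqlite3.connect("patrol_log.db")
db.execute("""CREATE TABLE IF NOT EXISTS readings
              (t REAL, x REAL, y REAL, sensor TEXT, value REAL)""")

def log_reading(x, y, sensor, value):
    """Store one position-stamped, time-stamped sensor reading."""
    db.execute("INSERT INTO readings VALUES (?, ?, ?, ?, ?)",
               (time.time(), x, y, sensor, value))
    db.commit()

log_reading(12.4, 3.1, "temperature_C", 31.7)   # one mid-patrol sample

# Later, a facilities model can ask for hot spots seen in the last day:
hot = db.execute("""SELECT x, y, value FROM readings
                    WHERE sensor = 'temperature_C' AND value > 30.0
                    AND t > ?""", (time.time() - 86400,)).fetchall()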

Cog plays with a slinky

photo: © Robo Sapiens: Evolution of a New Species (Peter Menzel and Faith D'Aluisio / The MIT Press)

If all else fails, connect to a supercomputer. When earlier I compared average robot intelligence to the level of insects, I was referring to what can normally be stored inside a mid-sized autonomous robot. Sony recently gave its 23-inch-tall QRIO humanoid robot a colossal "swelled head" by using broadband wireless to connect it to 250 personal computers. As another example, Dr. Rodney Brooks can boost his robot Cog (short for "cognition") into a mini-HAL in the MIT research lab by hooking it up to dozens of computers racked and stacked together. Many of them run different operating systems, but can still process data together.

Those 250 linked personal computers offer a preview of where the chip sets inside the average robot will probably be in about five to ten years.

Currently, the biggest constraint for these jury-rigged supercomputer systems involves developing robotic software that can effectively use all this capacity. In keeping with his "fast, cheap, and out of control" (or "bottom up") approach to artificial intelligence, Dr. Brooks, his project team leader Brian Scassellati, and other staff members are teaching Cog how to perform simple tasks in an order analogous to the way a human infant matures. One of Cog's more recent projects has been to learn how to play with a Slinky toy and follow people around the MIT lab with its camera eyes. According to Dr. Brooks, his robot "knows how to sling a coil, but can not compare one coil with another. To do that we would need to get to the intelligence level of a three year old, and we are not there yet."

NAVIGATION

To date, most mobile industrial robots have required such navigational tools as bar codes on walls, transmitters, or tape on floors.

For farming applications that use heavy equipment, which are not constrained by the size, weight, or cost of the navigation systems, agricultural engineers at the University of Illinois have already developed completely autonomous robotic tractors that can systematically go down one row, turn, and then go down the next row. A remaining issue is how to detect and avoid running over a pipe or some other item left lying in the field.
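
The row-by-row pattern is what path planners call a boustrophedon (ox-plow) path, and generating one is straightforward. Here is a toy waypoint generator, my own illustration rather than the Illinois team's software; a real tractor would fuse GPS fixes and obstacle detection on top of this:

def row_waypoints(num_rows, row_length, row_spacing):
    """Yield (x, y) waypoints: down one row, turn, back up the next."""
    for row in range(num_rows):
        x = row * row_spacing
        if row % 2 == 0:
            yield (x, 0.0)
            yield (x, row_length)        # head "north" on even rows
        else:
            yield (x, row_length)
            yield (x, 0.0)               # head "south" on odd rows

for wp in row_waypoints(num_rows=3, row_length=100.0, row_spacing=5.0):
    print(wp)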

Outdoor satellite GPS systems have played a major role in increasing autonomous operation. So have newly developed indoor GPS systems. Arc Second Navigation's 3-D grid coordinate indoor laser system enables mobile robots to approach aircraft or vehicles in a hangar and know exactly where to place rivets, perform scans, or do other useful work.

Still in drag mode. For many robotic applications, the navigation system is the core of the "robot." With certain types of heavy equipment, the propulsion systems and work devices come ready-made. This is why considerable robotic research has been focused on aircraft, ground vehicle, and farm equipment applications. At the March 2004 robot show, one of the speakers voiced concern that this has created a robotic development mindset overly focused on "dragging things around."

The Stanford Team robot vehicle
finished first in the DARPA race

Getting racy as well. A holy grail of navigation R&D has been the DARPA Grand Challenge. On Oct 9, 2005, the Stanford Racing Team won the $2 million grand prize. Its robotic vehicle traveled a 132 mile desert course near the California-Nevada border on robotic navigation alone. Four other vehicles out of 23 finalists made it across the finish line as well. This was a big improvement over the prior year, when none of the robo-vehicles got very far. In 2004, Carnegie Mellon's robotized Humvee traveled the greatest distance, about seven miles. In the 2005 race, Carnegie Mellon supplied the second and third place finishers.

A bigger potential prize than the $2 million or the prestige may be the prospect of future Army development contracts. A few years ago the U.S. Congress set a goal for the Army that one third of its operational ground combat vehicles be unmanned by the year 2015. So far the big money robotic navigation contracts have gone towards cruise missiles and Predator drones.

If you could see her through my eyes… In terms of building a robot system that tries to see in a way similar to humans, SEEGRID's NavSystem (a project of Dr. Hans Moravec mentioned in Part One) comes closer than almost any other system. It uses stereoscopic vision from cameras to build a point-by-point 3-D map using ambient light. At the Robotics Institute 25th Anniversary, Dr. Moravec told me that some time in the not too distant future this should allow robots to navigate their way around warehouses with reasonable accuracy on ambient lighting alone, using relatively cheap cameras.
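
The core geometry behind any such stereo system is that depth falls out of the disparity between where the same point lands in the left and right camera images. Here is a minimal sketch of that standard formula (the numbers are illustrative, not SEEGRID's parameters):

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d."""
    if disparity_px <= 0:
        return float("inf")              # point at (effectively) infinity
    return focal_px * baseline_m / disparity_px

# A 700-pixel focal length, 12 cm camera baseline, 20-pixel disparity:
print(stereo_depth(700, 0.12, 20))       # -> 4.2 meters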

Assistware, founded by Dr. Dean Pomerleau and Dr. Todd Jochem, has developed a robotic "drowsy driver" technology. It tracks driving patterns and helps alert drivers when they make unsignaled lane changes. According to various studies, driver fatigue is a major cause of traffic fatalities, and about one out of five drivers admits to having fallen asleep at the wheel.

Evolution vision software outlines Kermit the Frog

In a flash: Canesta Corp in San Jose uses chips that track the travel time of bursts of light reflecting off objects to build a 3-D image.
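
The arithmetic behind time-of-flight sensing is just the speed of light: distance is half the round trip. A minimal sketch (illustrative, not Canesta's chip logic):

C = 299_792_458.0                        # speed of light in m/s

def tof_distance(round_trip_seconds):
    """Convert a light burst's round-trip time to object distance."""
    return C * round_trip_seconds / 2.0  # halve it: the light goes out and back

print(tof_distance(20e-9))               # 20 ns round trip -> ~3 meters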

SICK has developed lasers that create 3-D maps of underground mines. This helps miners monitor extraction progress and avoid having one tunnel run into another. In my paper “Mining and Robotics” I discuss advanced systems developed by MD Robotics of Canada, which has partnered with Atlas Copco. A different firm called Workhorse Technologies, founded by Dr. William L. Whittaker of Carnegie Mellon, has created an interesting application that maps flooded or abandoned mines potentially suitable for reuse.

Getting back on the right path: "Seeing" and interpreting the environment are two different things. One test of a robot's navigational capability is whether it can reorient itself when a human picks it up and moves (or "kidnaps") it to a new spot. Evolution Robotics offers its vSLAM technology, which enables a robot to create a visual outline of its environment and performs statistical calculations to establish landmarks. Evolution can also overcome kidnapping with its NorthStar navigation system, which enables a robot to determine its location anywhere in a room by sensing two infrared spots projected on a wall.
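
The underlying math is classical triangulation: if the robot can measure bearings to two fixed landmarks at known positions, the intersection of the two sight lines pins down its location. Here is a sketch of that geometry, under the simplifying assumption of heading-corrected bearings; this is the general idea, not NorthStar's actual algorithm:

from math import cos, sin, radians

def locate(spot_a, spot_b, bearing_a_deg, bearing_b_deg):
    """Intersect the robot->spot sight lines to recover the robot's (x, y)."""
    ax, ay = spot_a
    bx, by = spot_b
    dax, day = cos(radians(bearing_a_deg)), sin(radians(bearing_a_deg))
    dbx, dby = cos(radians(bearing_b_deg)), sin(radians(bearing_b_deg))
    # Robot P satisfies P + t*da = A and P + s*db = B, so t*da - s*db = A - B.
    det = -dax * dby + dbx * day
    t = (-(ax - bx) * dby + dbx * (ay - by)) / det
    return (ax - t * dax, ay - t * day)

# Spots at (0, 5) and (4, 5), bearings as measured from a robot at (2, 1):
print(locate((0.0, 5.0), (4.0, 5.0), 116.565, 63.435))   # -> ~(2.0, 1.0)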

Evolution Robotics has also made impressive advances in object recognition. Its LaneHawk system recognizes products on the bottom of shopping carts, even when they are turned at odd angles or partially obstructed by the legs of the grocery shopper. Computer processing power has developed to the point that the vision system can recognize a bag of potato chips or a can of dog food turned at odd angles without using bar codes. Some experts estimate that grocery stores lose an average of $10 a lane a day by failing to ring up "bottom-of-basket" items, which works out to about $1.8 million a year for a 500-lane chain. Both this object recognition technology and the vSLAM technology are also currently used by Sony's AIBO in its "pet" application.

SENSORS:

This topic offers another paradox. On the one hand, humans have devised probes that can sense all of the invisible (to humans) as well as visible parts of the electromagnetic spectrum. Humans have developed mechanical devices that can not only simulate their other senses (taste, smell, feel, hearing), but can also exceed the sensing capabilities of various exotic animals in nature. It is relatively easy to mount myriad sensors on robots so that they can perform a wide variety of potentially profitable tasks ranging from surveillance to inspecting the structural integrity of various materials.

On the other hand, it can be relatively hard for autonomous robots to make sense of their sensor inputs, particularly to aid mobile robot navigation and manipulation.

Computers can detect people's faces in a scene, but have problems recognizing people from non-frontal views or as they age. Dr. Brooks points out, "The truth of the matter is that we have no computer vision system that is at all good at recognizing that something is a cup, or a comb, or a computer screen. Our computer vision systems can do a few things with great skill, but still after forty years of effort they are not good at the things we humans and many animals do effortlessly. Because of the increase in computer power over the last thirty years, we can no longer blame a deficiency there on our poor computer algorithms. It is clear that we must be missing something fundamental in the way that vision in humans is organized, although almost no one will admit that." (Brooks, page 90).

Dr. Brooks points out that when humans try to program machine vision with algorithms at the pixel level, it is hard to instruct a robot how to cognitively separate the outline of a pen sitting on a desk from the desk itself. He also talks about how the brain reconstructs a coherent field of vision to cover the blind spot where nerves and blood vessels connect to the back of our eyeballs (Brooks, p. 77). We need not only to be able to sense the images of things, but must also be able to mentally model, filter, and reconstruct what we are seeing so that we can separate the most important aspects from extraneous background.

I am aware of studies in which human infants become disturbed if shown human faces with third eyes. Also, no one needs to school most humans on how to mentally outline and sexually respond to erotica. Obviously quite a lot of outlining and reconstruction in the human brain is innately programmed. On top of that, I would guess that our brains must store hundreds of thousands, if not millions, of "mental movies" of 3-D rotating images that we have acquired from interactive experience since childhood. We also have vast mental libraries of symbolic connections, such as how a silhouette of a tree also signifies a tree, or how the kinds of subtle clues a detective might look for in a murder mystery can link back to the "tree" concept. Anyone who has observed the megabytes of memory consumed by a downloaded video file can appreciate how much processing power all of this must require. All of this may help to explain the other paradox I mentioned in Part One: that ditch digging utilizes vastly more processing power than adding a column of numbers.

END EFFECTORS AND MANIPULATORS

For certain business applications, developing a better end effector may be more important than developing a better robot. An end effector can be anything that does useful work, such as the plow unit of a tractor, an x-ray device to check for metallurgical stress, or a bulldozer blade that helps remove contaminated debris at a nuclear site.

Many end effector areas are undergoing their own steady evolution, with quality increasing as prices come down. As one example, according to Ralph Miller at General Lasertronics, the power of yttrium aluminum garnet (YAG) lasers has increased more than tenfold over the last seven years, while the size of laser heads connected by fiber optic cables has decreased along with costs per unit of output. Among other things, these lasers can be used to remove paint or to ablate rust and contaminants off various surfaces.

A good example of the state of the art in remote presence hand manipulation is NASA's Robonaut B project. This is a portable device that can be mounted on wheels for planetary surface mobility or on special legs that attach to the surface of a satellite. The sensitivity of hand manipulation is being enhanced through the use of micro components and other nanotechnologies.

PROPULSION

Certain men once looked at birds and decided to invent human-built flight. Certain men are now looking at other animals moving through the environment in unusual ways (relative to man), and with the aid of robotics, trying to do that too. Develop it and patent it, and show how this platform can carry an end effector that does useful work, and the world may beat a path to your door.

Inuktun Services Ltd. in British Columbia and RedZone Robotics in Pittsburgh are leaders in commercially successful robot configurations that crawl through pipes. This concept gets even more creative with snakebots that can roll, climb, swim, and move in sinusoidal patterns, or with "polymorphic" snake or spider-like robots that can break themselves apart into pieces and recombine into new forms. The latter would be particularly effective at penetrating rubble to bring aid to disaster victims.

At universities around the world, scientists are modeling robots on just about any kind of animal you can think of, ranging from lizards and birds to crabs and kangaroos. These robots serve as ongoing, interactive "lab experiments" and "show me" tinker toys regarding what scientists either know or still do not know about how various animals move. I would like to touch upon three sample areas that are profiled in the book Robo Sapiens: Evolution of a New Species by Peter Menzel and Faith D'Aluisio.

 

Quinn and Ritzmann with their robo-roach

photo: © Robo Sapiens: Evolution of a New Species (Peter Menzel and Faith D'Aluisio / The MIT Press)

Cockroaches can move fifty times their body length in one second, which on a human scale is the equivalent of 200 mph. Two leaders in cockroach mobility research are Roger Quinn, a mechanical engineer, and Roy Ritzmann, a biologist, at Case Western Reserve University. They have spent many years refining a 16:1 scale robot replica of a cockroach. As mentioned earlier, they watch slow motion videos of cockroaches in action for clues about ways to replicate the cockroach's decentralized nervous systems (subsumption architecture). They have run into weight and stress problems using steel springs and tubes to model the stumble-bumble fast motion of cockroach feet. They hope to find better answers in "plastic muscles" (or electro-active polymer "artificial muscles" as described later -author) which more closely replicate the properties of organic materials. (RS, pages 102-105)

Dr. Robert J. Full at UC Berkeley showed in 2002 how the millions of tiny hairs (setae) that cover gecko feet allow them to walk along walls and ceilings. Dr. Full has worked with Alan DiPietro at iRobot, who developed a tiny $600 robot that uses synthetic hairs to replicate gecko climbing behavior. (RS, pages 91-93). UC Berkeley now holds a patent on a material that works much better than most bandage adhesives for medical applications.

Certain fish such as pike can accelerate underwater at a rate of eight to twelve G's, an acceleration that rivals any NASA rocket. Scientists have not figured this one out yet. However, you guessed it, Dr. John Kumph at Draper Laboratories in Cambridge, MA has built a 150 kg robo-fish to try to get some answers. The US Navy is also interested. One can envision not only faster torpedoes, but "fish" that can conduct surveillance on enemy subs or minefields. (RS, pages 108-109)

In his October 2004 talk at CMU, Dr. Full pointed out that biological inspiration can be pushed too far. Some of man's most impressive engineering achievements, such as wheels and gears for motion, have no direct animal analog. Also, evolution often follows a zigzag path that leaves animals with structures that lack engineering sense, such as the pelvic bones in whales. Lastly, 85% of the world's animals are arthropods with an average length of 5 millimeters, which obviously creates a scale problem for humans.

The ultimate aim is to engineer beyond animals by first learning the secrets of their natural engineering and functionality. The latter can include redundancy, which allows animals to continue functioning when they lose an organ or limb, and self-assembly, which allows animals to grow or heal themselves. Once they learn the secrets, humans can scale up or scale down (that is, use nanotechnology to produce nanorobots) and mix and match capabilities to serve specific robotic purposes.

Dr. Cutkosky (left) and a student observe a robot set in motion by a plastic polymer

photo: © Robo Sapiens: Evolution of a New Species (Peter Menzel and Faith D'Aluisio / The MIT Press)

Reducing costs and mechanical complexity: Dr. Mark Cutkosky is one of many researchers at Stanford and SRI International developing electro-active plastic polymers that contract like muscles. They are activated electrically rather than by motors, and have no moving parts. Like muscles, they can not only produce motion, but also act as shock absorbers. In the paper he coauthored, "Fast and Robust: Hexapedal Robots via Shape Deposition Manufacturing (SDM)," he argues that SDM techniques (which create artificial skeletal structures) can be combined with artificial muscles to propel a robot at over four body lengths per second. Artificial Muscle, Inc. was founded in March 2004 to capitalize on a market estimated at over $4 billion.

If legs made of these polymers could be swapped for wheels and treads, we might see a substantial reduction in the number of parts and subassemblies. We might also see lower costs without a significant degradation of capabilities. The use of polymers might also enable more inherently fluid motion with less need for nervous system direction, a topic of great interest to the aforementioned cockroach researchers.

Walk, jump, jog the humanoid way. Sony's QRIO is a leader in independent, stable bipedal motion control. Yes, there are people at MIT's Leg Lab and other American research institutions with impressive mobility projects, but you cannot take anything away from Sony's elegant execution.

Images courtesy of Sony Corporation

Sony's QRIO has four pressure sensors on the soles of each foot to balance on uneven surfaces. It can cushion itself when tipped over, and pick itself up whether it falls forward or on its back. It has pinch detection sensors so that its limbs go limp when brushing against a person to avoid injuries. It has compliant grasping fingers that can throw a ball. It has precision-engineered joints that are quiet and dependable, as demonstrated in videos. It can even kick a small ball.

Safety in numbers: Another novel approach to propulsion is to exploit the design malleability of robots. Rather than try to get a big robot where you want it to go in one big piece, one might consider getting there with the surviving remnants of lots of little pieces, or with little pieces carried as adjuncts to the big piece like lifeboats. This dovetails with the midget submarine approach mentioned later, which is a variation of the "swarm bot" concept.

COMMUNICATION

A good starting question for this area is: “Who is communicating to whom?” I would like to touch on three areas: human to robot, robot to robot, and Internet to robot.

Human to robot: The easiest way to interact with a robot is to give voice commands. As mentioned earlier, Sony's AIBO can respond to 75 simple voice commands.
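
A fixed vocabulary like that amounts to a lookup table: the hard part (the speech recognition itself) happens upstream, and the robot simply maps an exactly recognized phrase to a canned action. A toy dispatcher with invented commands:

COMMANDS = {
    "sit": "sitting",
    "come here": "approaching",
    "take a picture": "snapshot",
}

def dispatch(phrase):
    """Exact phrase match or nothing: no syntax, no inference."""
    return COMMANDS.get(phrase.strip().lower(), "unrecognized")

print(dispatch("Sit"))                    # -> sitting
print(dispatch("fetch my slippers"))      # -> unrecognized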

The fundamental problem with voice communication is essentially the same one I described earlier with visual understanding. When we listen, we associate, filter, and reconstruct what we hear, just as we do with what we see. We draw upon mental libraries consisting of hundreds of thousands, if not millions, of thought models. Our speech draws heavily upon visual metaphors and visual interpretations of the environment. If a computer lacks our interactive visual understanding of the world, how can it say back to us, "I see what you mean"?

I am reminded of Anne Sullivan's struggle to connect the world of abstract thought to the young deaf, blind, and mute Helen Keller in The Miracle Worker.

Our ability to understand abstract thoughts beyond simple nouns, simple verbs, or simple noun-verb combinations (for example, "I eat" or "you run") is called syntax. Humans have syntax; other animals such as gorillas and chimpanzees do not. (Brooks, pages 3-4). It may take fifteen to twenty years or more for the average robot to have the internal capacity necessary to rival human capabilities in this area.

Currently, most human instructions to robots consist of writing lines of code. Evolution Robotics is developing software to help bring programming into a format reminiscent of the menu-driven, point-and-click Microsoft Windows environment. According to CoroWare's David Hyams, there are folks at Microsoft who are beginning to wake up to the need to make Windows XP Embedded more compatible with robotics. He himself likes to install wireless communication capabilities in all his robots so that he can flip open a hand-held wireless LCD device any time he is around them and query their programming.

Robot to robot: The ability of robots to immediately update each other by wireless or by some other means will create business opportunities for risky, exploratory situations. As one example, an Australian group argues that it is more cost effective to explore undersea regions with schools of midget robot submarines that swap information than to risk larger subs with human crews.

Internet to robot: Much of the current thinking in this area involves linking robots to databases available over the Internet, or creating robots that serve as mobile, Internet-connected personal computer platforms for humans.

I think that an especially exciting application will some day involve the ability to download behaviors, skill sets, and professional analytical capabilities over the Internet, either on a pay basis or as freeware. As one example, a real estate developer may design standardized rooms with transmitters embedded in the walls to aid robot navigation. As an inducement to buy or rent his units, he can develop downloadable behavioral packages that show home robots how to go about their cleaning chores within each unit. Since most household robots will probably be fairly stupid over the next ten years, they will need precise instructions regarding how to orient themselves within each room and how to go about each task. As another example, much further into the future, imagine that a couple wants to lead a relatively self-sufficient life in a wilderness area. Imagine that they can download behavioral freeware that shows their robots how to perform such tasks as building a wood frame house, creating and tending gardens, and helping to home-school their children. In regard to the latter, imagine a humanoid robot with a facial screen, similar to Robo Doc, that can assume the facial appearances, movements, gestures, and speech of some of the world's best teachers.
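
Here is a hedged sketch of what such a download might look like in practice, with an invented URL and package format; the essential idea is that the developer publishes the room-by-room instructions a "fairly stupid" robot cannot work out for itself:

import json
import urllib.request

PACKAGE_URL = "http://developer.example.com/unit-4b/cleaning.json"  # hypothetical

def load_behavior_package(url):
    """Fetch a published behavior package, e.g. {"tasks": [...]}."""
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def schedule(package, task_queue):
    for task in package["tasks"]:
        # Each task carries precise orientation and routine instructions
        # keyed to the unit's embedded wall transmitters.
        task_queue.append((task["room"], task["routine"]))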

POWER

Power remains a huge constraint for unplugged, autonomous robots. Batteries are not much different in size and weight today than they were ten years ago. When a robot designer tries to increase battery power, he typically increases battery weight. More weight consumes relatively more power for movement. There is obviously a rapid rate of diminishing returns here.

Roboticists are using a number of strategies to deal with the battery problem. One very obvious approach is to design the lightest and smallest robots possible. Another approach is to reduce electrical power consumption and heat generation by redesigning robot chips and other robot hardware, an area where VIA Technologies has been a leader. A third approach is to make robots autonomously rechargeable, either by carrying solar panels or by creating the ability to autonomously re-dock at recharging stations; the Roomba and AIBO both have this capability. A fourth approach is the most obvious of all: use an electric cord or some kind of hybrid power system that produces electricity.
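
The re-docking strategy boils down to a budget calculation: never commit to a task unless the battery can cover the task plus the trip home. A simple sketch of that decision rule (my own illustration of the general pattern, not Roomba or AIBO firmware):

def next_action(battery_pct, cost_to_dock_pct, task_cost_pct, margin_pct=10):
    """Work only if the task plus the trip to the dock fits in the battery."""
    needed = task_cost_pct + cost_to_dock_pct + margin_pct
    return "work" if battery_pct >= needed else "return_to_dock"

print(next_action(battery_pct=40, cost_to_dock_pct=8, task_cost_pct=15))  # work
print(next_action(battery_pct=25, cost_to_dock_pct=8, task_cost_pct=15))  # dock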

 

A summary remark regarding modular frontiers: Some areas of robotics are advancing at a rapid rate, while other areas (such as power) are only plodding along. In many ways the field seems disjointed. This is all the more reason for analyzing the different pieces of the technology jigsaw puzzle in detail. I address this and other issues from a business perspective in the next section.


Link to Part Four of the series.

Jump back to Part One or Part Two.

Jump forward to Part Five or Part Six.


Disclaimer: This report is for research/informational purposes only, and should not be construed as a recommendation of any security. Information contained herein has been compiled from sources believed to be reliable. There is, however, no guarantee of its accuracy or completeness.

Bill Fox is VP/Investment Strategist, America First Trust. Bill welcomes phone calls and email responses to this article. His most current contact information is at his web site: www.amfir.com.

Short URL for this web page: http://tinyurl.com/29xppdr




© William Fox. Sometimes William Fox offers viewpoints that are not necessarily his own to provide additional perspectives.