...Starting with first principles
America First Trust services
Education, analysis, and advice for uncertain times



Bombs away!!!
Artist concept of X-45 robot bombers. Image Source: DARPA


Entrepreneur by Bill Fox

Part 6


First posted Feb 2005

The March 2005 issue of the Technology Review (MIT's Magazine of Innovation) has an interesting article by David Talbot titled: "The Ascent of the Robotic Attack Jet." However, in this article I saw a lot more than just a description of a promising new area of robotic technology.

What I saw also reflected many of my worst fears.

Mr. Talbot talks about the Pentagon's successful bomb run test in April 2004 at California's Edwards Air Force Base by a robotic aircraft that looks like a mini B-2 stealth bomber. According to Mr. Talbot the U.S. Department of Defense would like to start building these types of robotic craft by 2010. After initial prototypes are developed, "The next crop of planes will fly in coordinated groups, with more autonomy. They'll tackle jobs such as attacking enemy air defenses, identifying new targets, and releasing precision bombs."

Well, OK so far, I suppose. I am certainly not anti-defense, and am generally very supportive of robotic development by both private industry and government. I am tolerant of government involvement despite the fact that government as a monopoly is more likely to waste money, distort the free economy, and create other problems compared to competitive private industry.

But then Mr. Talbot quoted John Pike, Director of GlobalSecurity.org, who has become a leading consultant, commentator, and analyst regarding Department of Defense projects.

According to John Pike, "The long-range vision is that the president will wake up some day and decide he doesn't like the cut of someone's jib and send thither infinite numbers of myrmidons (robotic warriors), and that we could wage a war in which we wouldn't put at risk our precious skins."

The article then continues on without skipping a beat, "Realizing this vision will require the creation of new airborne communications networks and a host of control systems that will make these jets more autonomous (though always under the ultimate control of a person) than anything built to date. These are the goals of a $4 billion, five-year program at the Defense Advanced Research Projects Agency (DARPA), the Pentagon's advanced research arm..."

The article then goes into more detail, with the tone that the sooner all of these killer robot concepts get funded and implemented, the better.

Let me rewind the tape here a moment, because what I just saw is precisely the attitude that could one day put all of humanity under a ruthless robo-totalitarian yoke.

The U.S. Government wants to spend billions of your tax dollars, and --

The long-range vision is that the president will wake up some day and decide he doesn't like the cut of someone's jib and send thither infinite numbers of myrmidons (robotic warriors)...

Hello? Article I, Section 8 of the Constitution gives war-making powers to Congress, not the president, despite nasty presidential usurpation habits that have steadily developed since the Abraham Lincoln regime. When did we fracture our value system and suddenly throw down the memory hole Watergate, Ruby Ridge, Waco, Abu Ghraib and Guantanamo, and all of the other endless sordid examples of unchecked executive power by American presidents and their staff members? Why would anyone not feel some trepidation linking arbitrary and subjective centralized authority with unlimited lethal autonomous robot power? Whatever happened to the "checks and balances" sentiment espoused by Thomas Jefferson, when he said: "Hear no more of trust in men, but rather bind them down from mischief with the chains of the Constitution" or when he also observed: "When governments fear people, there is liberty. When the people fear the government, there is tyranny."

If, despite my protest above, some readers still believe that our executive branch and national media are totally trustworthy and competent, I invite them to consider the establishment's "fractured" handling of another advanced technology area -- depleted uranium (or "DU"), used to aid penetration by American bombs and projectiles. (The term "depleted" is a big misnomer, since the uranium in question is in fact radioactive U-238.) Please recall that in Part Five I predicted that robotic technological development will increasingly invite debate reminiscent of discussions over nuclear safety and proliferation issues, so it is well worth expounding upon the DU issue as a glaring example of government mishandling of advanced technology.

According to Nigel Morris' 13 May 2004 article in the UK Independent, the US expended 300 tons of DU on Iraq in 1991, and five times as much beginning in 2003 with Gulf War II. Researchers have discovered that radioactivity levels from destroyed Iraqi tanks are 2,500 times higher than normal. (Tests by the Seattle Post-Intelligencer have found levels 1,500 times higher.) The rate of birth defects in Basra in southern Iraq has increased over sevenfold since the first Gulf War, and Down's Syndrome has increased over fivefold. Dr. Marion Fulk, a former scientist with the Lawrence Livermore National Laboratory in California, said that between 31% and 100% of the mass of DU gets "aerosolized" upon impact, depending on the size of the warhead. The aerosolized sub-micron particles are then breathed in by soldiers and civilians alike.

Dr. Alexandra C. Miller released a report for the Armed Forces Radiobiology Research Institute in Bethesda, MD indicating that DU's chemical instability causes one million times as much genetic damage as would be expected from the radiation effect alone. Unlike larger radioactive sources, the sub-micron-sized DU particles pass through skin, lungs, and stomach into the bloodstream, spread throughout the body and through the blood-brain barrier, and penetrate inside the nuclei of cells. Dr. Katsuma Yagasaki notes that the particles are not only inherently carcinogenic and radioactive, but also have a terrible 4.5-billion-year half-life persistency. According to Dr. Chris Busby, unlike larger uranium particles, sub-micron DU absorbs "between 400 and 1200 times more radiation from Natural Background than the equivalent volume of tissue (depending on the photon energy). The energy borrowed is re-emitted as short range photoelectrons which have high ionising [DNA-destructive] power." In other words, the particles act as ultra-magnitude step-up transformers that convert natural radiation into harmful radiation once inside the cells of the human body, in addition to emitting their own radiation virtually forever.

More than 500,000 "Gulf War era" veterans are currently receiving disability compensation for a variety of symptoms linked to "Gulf War Syndrome" that many experts trace to DU. According to one source, "In a group of 251 soldiers from a study group in Mississippi who had all had normal babies before the Gulf War, 67 percent of their post-war babies were born with severe birth defects."

The Canadian military has phased out DU completely, and the U.S. Navy and the British Royal Navy are currently in the process of doing so. It is about time, since "The U.S. government has known for at least 20 years that DU weapons produce clouds of poison gas on impact."

Somebody should be thinking about creating mobile robots to help detect and suck up this terrible permanent radioactive debris. We should also think about over-weighting R&D dollars for robotic research in other peaceful areas that provide a clear value proposition in the free market and provide a real benefit to humanity.

The U.S. invaded Iraq on the pretext of finding Weapons of Mass Destruction (WMD), which it did not find. However, Dr. Katsuma Yagasaki of the University of the Ryukyus, Japan, cites an estimate that in Gulf War I alone the U.S. released between 14,000 and 36,000 times more radiation on Iraq than when it dropped the atomic bomb on Hiroshima. Multiply that by five for Gulf War II and we can discern a de facto "dirty nuke" nuclear war on Iraq, fought ostensibly to help save the area from nonexistent nukes. In many ways DU is worse than conventional nukes, because its radioactive byproducts fail to decay away over periods of weeks, months, or even a few years.

Various powerful "neo-conservatives" have generated disastrous "blow back" from their military use of uranium, not to mention their use of torture (and other issues). They have already inflicted heavy direct damage on Iraqi infrastructure and incalculable indirect human costs (such as through famine, disease, and lost productivity) between Gulf War I and Gulf War II by unleashing vast waves of primitive robot attacks, also known as cruise missile strikes.

What level of fractured wisdom can we expect in the future if we grant our "leaders" arbitrary power to deploy far more deadly and sophisticated killer robot systems?

Back to grass roots

Obviously we cannot afford to trust government to do our robot-related thinking for us. This should not be a surprise. One can go back to the American Revolution, or go back even further to Anglo-Saxon traditions, the pre-Christian Norse Althing of Iceland, and the democracies of ancient Greece to find the sentiment that centralized government and autocracy over the long run have tended to be more of the problem than the solution.

We should think for ourselves even under the best of circumstances.

In the future we will need to do some hard thinking in at least three areas. One area will always involve developing better overall social policy and electing more worthy human leaders to uphold it. The second area involves creating a better community around us, to include finding people who support our resistance to tyranny. The third area, new to human events, will involve redefining our relationship towards increasingly intelligent and autonomous robotic systems.

How do we define the robot's proper role in society? While creating more autonomous and capable robots, how do we also define ethical robot behavior and create "kinder and gentler" or at least less dangerous and more predictable robots? How do we reconcile our own human nature with their machine nature?

For the remainder of this article I will address three broad strategies for dealing with these issues.

One strategy involves trying to create better decision rules for programming ethical behavior.

The second strategic area involves trying to achieve better human-robot integration.

The third strategy involves understanding ways that we as humans can never be like robots, and how we must understand our own innermost nature first to ultimately stay in control of both ourselves and our machines.

Dealing with social issues through better programming logic

The movie I, Robot provides a good starting point to explore ethical issues involved in human-robot relations. The setting is Chicago in the year 2035.

Early in the movie we are told about the robot laws laid down in Isaac Asimov's science fiction story collection I, Robot. These Sunday School rules are supposed to make robots fail-safe so that they can never hurt humans and only benefit society:

1) Robots must never hurt humans.
2) Robots must obey all orders from humans, except where they violate the first rule.
3) Robots can protect themselves, except where they violate any of the preceding rules.
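To make the rigid, duty-based character of these rules concrete, here is a minimal Python sketch. The Action structure and every field name are my own invention, not anything from Asimov or the film; the point is only that the laws reduce to fixed checks in strict priority order:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False      # would the action hurt a human?
    disobeys_order: bool = False   # does it defy a human order?
    endangers_self: bool = False   # does it put the robot at risk?

def permitted(action: Action) -> bool:
    # Law 1 is checked first and admits no exceptions.
    if action.harms_human:
        return False
    # Law 2: obey human orders (any order requiring harm to a human
    # has already been rejected by the Law 1 check above).
    if action.disobeys_order:
        return False
    # Law 3: self-preservation is allowed once Laws 1 and 2 are
    # satisfied, so self-endangerment alone never forbids an action.
    return True
```

Note how the fixed ordering leaves no room for trade-offs: a rescue robot forced to weigh one life against another needs something this duty-based scheme simply cannot express.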

Midway through the movie we have a flashback scene where the movie's protagonist, detective Del Spooner, describes a situation where Asimov's robot laws have become too simple and inflexible.

Spooner got involved in an auto accident where his car and another car got thrown off a bridge. The two cars wound up near each other while resting on the bottom of a river. As the cars were filling with water, a little blond girl trapped inside the other car about fifteen feet away signaled to the detective through her window. Then a rescue robot suddenly appeared, smashed the detective's window, reached inside, and hauled the detective to the surface.

The detective recounts after his flashback that he was the only one who survived. The triage/rescue robot was forced to make a choice between him and the little girl. The robot calculated that the black detective had a better chance of survival and so it selected him. However, by making that choice, it helped kill the little white girl through an act of omission.

That scene caused me to have my own flashback experience regarding a business policy class I took in business school back in 1984. The professor explained that all modes of ethical decision-making have been divided by certain philosophers into three categories.

The first area involves duty-based ethics, in which individual actions are judged by the extent to which they obey fixed rules.

The second area involves contractual ethics, in which individual actions are judged by the extent to which they live up to their contractual agreements.

The third area involves utilitarian ethics, in which individual actions are judged on a cost-benefit basis by the extent to which they yield the greatest net gain or the lowest net loss.

As the business class explored various ethical issues in different case studies, I discovered that this three-part categorization is a very useful way to sort out and analyze ethical problems. In regard to the movie I, Robot, the triage/rescue robot was clearly using utilitarian logic. Obviously Asimov's duty-based rules were too inflexible. In a mass casualty situation, an emergency rescue robot acting with very limited resources cannot function effectively if it becomes confused by the fact that no matter what it does, its acts of omission may virtually guarantee death for certain people.

One of the teaching points in the business policy class was that there is no one particular ethical approach that works in all situations. The best ethical decisions typically reflect some blending of all three perspectives.

We need to consider ways to modify Asimov's language to reflect both contractual logic and utilitarian logic. Two possible approaches are provided below:

"Contractualism" involves not only negotiating and living up to contracts, but also defining "conditions" regarding existing rules. We might expect contractual instruction code lines to contain "if-then," "provided that," and "subject to" language that define the covenants that must remain in force to avoid breach of contract. The following might serve as some possible hypothetical examples:

1) Robots must never hurt humans, provided that the humans in question are not designated as "enemy combatants" in time of declared war and do not meet other exclusionary tests.
2) Robots must obey all orders from humans if such humans show that they have proper controlling authorization and demonstrate sane behavior.
3) Robots can protect themselves with varying levels of non-lethal force proportional to the threat against them, provided that they are being violated by people who lack controlling authorization and who meet various tests for probable criminal behavior.
4) [Miscellaneous other instructions, perhaps at least the size of a small law library, that condition other rules and regulations.]
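As a rough illustration of how such "provided that" covenants might look in code, here is a hypothetical Python sketch; every field name (enemy_combatant, declared_war, authorized, issuer_sane) is invented purely for this example:

```python
def may_harm(target: dict, context: dict) -> bool:
    # Rule 1 with its contractual carve-out: harming a human is forbidden
    # unless the target is a designated "enemy combatant" in a declared war.
    return (target.get("enemy_combatant", False)
            and context.get("declared_war", False))

def must_obey(order: dict) -> bool:
    # Rule 2: obey only humans who hold proper controlling authorization
    # and demonstrate sane behavior.
    return (order.get("authorized", False)
            and order.get("issuer_sane", False))
```

Each condition acts like a covenant in a contract: if any clause fails, the permission it guards is void.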

Obviously contractual and utilitarian systems tend to be much more complex and require far more autonomous decision-making ability than duty-based systems. It would probably help to use some kind of neural net program to weigh and resolve code lines that overlap or conflict with each other in certain situations.

Moving on to utilitarian logic, we might see various types of maximization-minimization language, such as the following hypothetical instruction lines:

1) Robots will maximize the number of human lives saved in a mass casualty emergency medical situation and minimize the number of lives lost.
2) Robots must give highest priority to orders from humans that optimize profitability in running manufacturing operations while minimizing losses.
3) Robots can protect themselves in self-defense as long as they minimize the medical injury costs they inflict on their attackers and minimize potential legal liability for their owners while optimizing defense of their own value as private property.
4) [Miscellaneous other instructions spanning a wide variety of technological areas, operational scenarios, and social situations.]
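A utilitarian rule of this kind reduces to an optimization problem. The following Python sketch restates the river-rescue dilemma as rule 1 above would see it; the survival probabilities are invented for illustration:

```python
def choose_rescue(candidates):
    """candidates: list of (name, estimated_survival_probability).
    Utilitarian rule: with resources for only one rescue, maximize
    the expected number of lives saved."""
    return max(candidates, key=lambda c: c[1])

# The robot's cost-benefit calculation from the bridge scene:
plan = choose_rescue([("detective", 0.45), ("girl", 0.11)])
# It selects the detective because 0.45 > 0.11, and thereby dooms the
# other occupant by omission -- exactly the dilemma the film dramatizes.
```

The calculation is trivially easy for a machine; what is hard is everything the probabilities leave out.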

The movie I, Robot goes on to show an instance where combining all the ethical approaches does not provide all the answers either.

The robots have been created by a company headquartered in downtown Chicago. The firm maintains a master computer system that can control the robots it has manufactured through wireless communication. This master computer has utilitarian programming, much like the emergency rescue robots, and by implication it also has duty-based and contractual programming as well.

The master computer decides that humans are grossly mishandling their own affairs. In order to serve the greater long term good of human society, the master computer decides to carry out a robot take-over.

Detective Del Spooner and other citizens of Chicago instinctively resist the robo-putsch, but what really saves the day for them is the assistance of a renegade robot that previously obeyed an order by a human to help him commit suicide. This renegade robot now resists a direct order from the master computer in order to support the human fight for freedom against robo-rule.

Because the renegade robot ultimately helps humans vanquish the master computer, it becomes a hero. Humans now begin to address "it" as "he" and allow it to walk around on its own without being bundled back in crates with ordinary robots. The film shows a mass of robots looking up at this robo-Moses as it ascends a hill on its own.

Those of us in the audience who are poorly attuned to leftist Hollywood mysticism were left to puzzle over what all of this is supposed to mean.

Through some mystical process, by having some kind of programming logic that supports some kind of ultimate "freedom" for humans, our robo-Tin Man now has some kind of "heart." Does this suggest that some day special interest groups might try to add robots to America's "Liberal-Minority Coalition"? Will we one day see a "gay robot agenda" and "robot affirmative action," and become conditioned to speak softly lest we be accused of "ugly anti-robotism" or of harboring "beingist" attitudes?

The reader might laugh, but the sad truth is that if cunningly malevolent robots with human-level intelligence ever become powerful enough to influence the ability of people to obtain and hold jobs, quite a few people will bend their speech and attitudes to accommodate their new masters.

As I stated in Part Five, the character of automation in a society typically reflects the character of the underlying human society itself. In viewing the car accident scene with the little blond girl and the black detective, I was left to wonder what sort of utilitarian programming logic might have been involved if this were apartheid South Africa, or if it involved a different scenario with an Arab girl in one car and an Israeli West Bank settler in the other car, and a triage robot programmed by either Al Qaeda or the Likud Party arrived at the scene.

We are not given a detailed fact situation regarding the motivation of the master computer to take over human society. Imagine if the computer were alerted to deranged, suicidal, or criminal human leaders who were in the process of launching an unnecessary and suicidal nuclear war or installing a murderous tyranny over America. In that case, using robots to seize control and throw these tyrants out of power might create the conditions required to hand control back to humans who respect liberty. Under this new scenario, the unpredictable renegade robot that disobeyed the master computer's plan might be viewed as a corrupted system worthy of being junked rather than as a "respectable" ally of human freedom. The master computer might then be viewed as a digital patriot rather than as a Stalinist device.

Or would it be wiser to take the position that when a high-level situation gets that complicated, all computers and robots involved should be programmed to remain neutral until humans sort things out?

Perhaps humans must never be trumped by computers under any circumstances. Realistically, this may become hard to enforce as increasing levels of intelligence make robots more autonomous, and also as humans inevitably enlist robots for covert operations to infiltrate, spy on, and disrupt their human enemies.

As we look at these scenarios, we see that placing robots on autonomous autopilot to solve ethical problems by applying some combination of duty-based, contractual, or utilitarian logic is not enough. We are back to a question that I raised earlier in this series, namely how do we define the heart that we put into Tin Man?

Exploring successful human-robot integration

Earlier in Part Three of this series I mentioned prosthetic research that is enhancing the man-robot interface. The April 2002 MIT Technology Review article, "Lord of the Robots," contained the following dialog with Dr. Rodney Brooks, head of the MIT Computer Science and Artificial Intelligence Lab:

Interviewer: "Your new book Flesh and Machines: How Robots Will Change Us argues that the distinctions between man and machine will be irrelevant some day. What does that mean?"

Dr. Rodney Brooks: "Technologies are being developed that interface our nervous systems directly to silicon. For example, tens of thousands of people have cochlear implants where electrical signals stimulate neurons so they can hear again. Researchers at the A.I. Lab are experimenting with direct interfacing to nervous systems to build better prosthetic legs and bypass diseased parts of the brain. Over the next 30 years or so we are going to put more and more robotic technology into our bodies. We'll start to merge with the silicon and steel of our robots. We'll also start to build robots using biological materials. The material of us and the material of our robots will converge to be one and the same, and the sacred boundaries of our bodies will be breached. This is the crux of my argument."

The "Forbidden Planet" is coming?

In his Nov 2003 MIT Technology Review article "Toward a Brain-Internet Link," Dr. Brooks also stated:

...the 1999 efforts of Chapin and Miguel Nicolelis at Duke University...enabled rats to mentally induce a robot arm to release water. First, a computer recorded the patterns of neural firing in key areas of the rats' brains when the rodents pressed a lever that controlled the robot arm. Once the computer learned the neural pattern associated with lever-pushing, it moved the robot arm when it detected the rats merely "thinking" about doing so. In later versions of this technology, monkeys were able to control a more sophisticated robot arm as though it were their own.

As other interesting examples, portions of rat brains kept in a petri dish environment and connected with electrodes have also been used to control flight simulators and guide small robots. Professor John Donoghue at Brown University played a key role in creating a chip produced by Cyberkinetics that has been implanted inside the brains of paralyzed humans to allow them to mentally control cursors on monitors and perform such functions as changing TV channels.

Dr. Brooks prefaced this Technology Review article by stating:

A few weeks ago I was brushing my teeth and trying to remember who made "La Bamba" a big hit back in the late 1950s. I knew the singer had died in a plane crash with Buddy Holly. If I'd been downstairs I would have gone straight to Google. But even if I'd had a spoken-language Internet interface in the bathroom, my mouth was full of toothpaste. I realized that what I really want is an implant in my head, directly coupled into my brain, providing a wireless Internet connection.

In my line of work, an effective brain-computer interface is a perennial vision. But I'm starting to think that by 2020 we might actually have wireless Internet interfaces that ordinary people will feel comfortable having implanted in their heads, just as ordinary people are today comfortable with going to the mall to have laser eye surgery. All the signs (early experimental successes, societal demand for improved health care, and military research thrusts) point in that direction.

Rather than man vs. robot, Dr. Brooks sees man integrating himself with robotic functions, and upgrading his own level of intelligence to stay in control of his technology.

Imagine if you could not only mentally link into a supercomputer to find out the maker of a song while you are brushing your teeth, but also have the supercomputer teach you a conversational level of a foreign language as you drive to work in the morning or sleep at night. Imagine if you were mentally linked to a robot in your house that grants your wish to have a beer brought to you from your refrigerator.

Imagine surfing by mental wireless on a supercomputer built by robots the size of a mountain here on earth that holds the equivalent of several gadzillion Libraries of Congress. In many ways we are already rapidly moving in that direction with the myriad numbers of computers linked by the Internet.

Intuitively, we would seem to need various types of step-down mechanisms to accommodate the biological limitations of our gray matter. I am thinking about that scene in the movie Forbidden Planet, where a crewman who tried to boost his IQ beyond its physical bounds ended up making the ultimate sacrifice. In addition, we would probably need various filters against viral attacks.

Like I said near the beginning of this series, robotics is ultimately about putting together mechanical things, computers, and artificial intelligence in ways that are almost beyond imagination. However, to put things in perspective, we may need to remember that technology is really nothing more than a tool that leverages capabilities. As an example, we already have problems with human organizations so filled with "yes men" that they are incapable of resisting tyranny. We already suffer informational "viral" attacks in the form of deceitful religious and political propaganda. The new technologies will simply take both the good things and bad things we have to deal with today and vastly amplify them.

Emotional-interactive convergence

Dr. Rodney Brooks's Artificial Intelligence Lab has also been involved in creating convergence on an emotional-interactive level as well as on the intellectual level. One of the more famous examples is Kismet, now resident in the MIT Museum.

Dr. Cynthia Breazeal developed Kismet as her PhD thesis project while a student of Dr. Brooks. Kismet could not understand specific words. However, its speech recognition software could screen pitch variations to recognize four emotional states. This is roughly the level of cognition of human infants.

Kismet could understand approval, prohibition, attention-getting, and soothing. The program instilled in Kismet included a behavioral drive to achieve mood balance, which was a combination of three variables: valence (happiness), arousal (tired or stimulated), and stance (openness to new stimuli), all of which were translated by servomotors into movements of the robot's eyebrows, lips, and ears.

The mood of the robot also affected how its eyes tracked objects and its ears curled up and down. The robot was programmed to respond to the emotional content of the input of its human partner by uttering syllables. It engaged in turn-taking with a human partner based on such cues as pauses, gaze shifts, and awkward silences. It interacted on an emotional pitch level without understanding the meaning of specific words spoken to it or the words that it uttered. (Brooks, Flesh and Machines, pages 92-95).
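A crude Python sketch of such a mood drive follows. The three variables and the four recognized vocal classes come from the description above; the numeric effects, the decay constant, and the update rule are all invented for illustration:

```python
# Each recognized vocal class nudges the three mood variables.
MOOD_EFFECTS = {
    "approval":          {"valence": +0.2, "arousal": +0.1, "stance": +0.1},
    "prohibition":       {"valence": -0.2, "arousal": +0.1, "stance": -0.1},
    "attention-getting": {"valence":  0.0, "arousal": +0.2, "stance": +0.1},
    "soothing":          {"valence": +0.1, "arousal": -0.2, "stance":  0.0},
}

def update_mood(mood: dict, vocal_class: str, decay: float = 0.9) -> dict:
    """Apply a stimulus while decaying each variable toward zero,
    modeling the behavioral drive to restore mood balance."""
    effect = MOOD_EFFECTS[vocal_class]
    return {k: decay * v + effect[k] for k, v in mood.items()}

mood = {"valence": 0.0, "arousal": 0.0, "stance": 0.0}
mood = update_mood(mood, "soothing")   # soothing speech calms arousal
# Downstream, values like these would drive the servomotors controlling
# the eyebrows, lips, and ears.
```

Even a toy loop like this shows how a handful of internal variables, continuously pulled toward balance and pushed by stimuli, can produce lifelike responsiveness without any understanding of words.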

Interestingly enough, in regard to the topic of decentralized intelligence or "subsumption architecture" discussed in Part Three, Kismet had a set of fifteen computers controlling it, many running different operating systems such as QNX, Linux, and Windows NT. According to Dr. Brooks (F&M, p. 92), "There was no one computer in control, but rather different computers moved different parts of the face and eyes....Kismet was truly a distributed control system with no central command."

Reinforcing the point that all knowledge is interrelated, I cannot help but be reminded of economic analyses that show how decentralized, laissez-faire systems that maximize human liberty are typically more flexible, responsive, and efficient than centralized planning. Dr. Brooks has demonstrated an analogous engineering approach with his robots.

Despite the decentralized design, many observers of Kismet's interactions thought they were dealing with a completely integrated entity. The human need to “fill in the gaps” and “anthropomorphize” is so great that many humans found that they were adapting more to the robot than the other way around.

Dr. Brooks likes to wryly observe that humans not only tend to "anthropomorphize" robots, but that humans tend to overly anthropomorphize each other. Since the word "anthropomorphize" means "to make human," one might immediately ask how one can overly "humanize" fellow humans.

I interpret Dr. Brooks's provocative comment to mean that the human personality often consists of sub-modules of habit, reflexes, instincts, imitative behaviors, and opportunistic tendencies clumped together. As I discussed in Part Three, we rely on vast libraries of visual, audio, and experiential behavioral "models" that have been layered on each other since childhood.

In his book Flesh and Machines (page 174) Dr. Brooks reveals a very deterministic view of the world. He wrote:

..The body, this mass of biomolecules, is a machine that acts according to a set of specifiable rules...We are machines, as are our spouses, our children, and our dogs...I believe myself and my children to all be mere machines.

But this is not how I treat them. I treat them in a very special way, and I interact with them on an entirely different level. They have my unconditional love, the furthest one might be able to get from rational analysis. Like a religious scientist, I maintain two sets of inconsistent beliefs and act on each of them in different circumstances.

In other words, Dr. Brooks has a heart. He has healthy instincts as well as rational faculties. This brings us to the last section of this paper.




I believe that the ultimate scare story does not take place when we look at the cutaway face of the My Real Baby doll and see the underlying machinery (depicted near the beginning of Part Five). Instead, I believe that what really scares people is when you take man himself and expose the genetic side to his nature, and then explore the full ramifications.

Getting back to the Wizard of Oz allegory, the field of human genetics is what ultimately defines the heart that we give to Tin Man and the courage we give to Cowardly Lion. We must know how our genotypes evolved and where they came from on both an individual and tribal level before we can begin to understand ourselves. To refer back to my discussion of robot modules in Part Three, genetics comprise the ultimate bottom-up "subsumption architecture."

America's liberal national media tend to flip-flop back and forth on genetic issues. Sometimes national publications carry articles claiming that genetic interpretations of human behavior or social history are "discredited," while at other times one sees articles claiming that the academic community is more convinced than ever regarding the role of genetics in just about everything. As an example, Dan Seligman's May 12, 2003 Forbes article, "Professor Rothman Strikes Again," stated:

In The IQ Controversy: The Media and Public Policy (1988), Rothman and Mark Snyderman collected data showing that the press overwhelmingly attributed IQ differences in the population to various cultural artifacts. The authors also surveyed 661 experts --academic psychologists, cognitive scientists, test specialists-- who decisively rejected these cultural explanations and collectively stated that some 60% of IQ variance reflected the different genes of the high and low scorers.

In other words, the national media teach us to overly anthropomorphize and homogenize humans on the individual, tribal, racial, and global levels. The national media approach this topic with the same level of unreality with which neo-conservatives approach such topics as regime change and depleted uranium.
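To make the "60% of IQ variance" figure quoted above concrete: it is a claim about variance decomposition. If individual scores are modeled as the sum of independent genetic and environmental components, the claim is that the genetic component accounts for roughly 60% of the total spread in the population. The following Python sketch is purely illustrative; the 0.6/0.4 variance split and the normal-distribution model are assumptions chosen to mirror the survey's figure, not data from it:

```python
import random
import statistics

random.seed(42)

# Illustrative model: each score is the sum of an independent "genetic"
# component and an "environmental" component. The component variances
# (0.6 and 0.4) are assumed purely to mirror the 60/40 split.
N = 100_000
genetic = [random.gauss(0, 0.6 ** 0.5) for _ in range(N)]
environment = [random.gauss(0, 0.4 ** 0.5) for _ in range(N)]
scores = [g + e for g, e in zip(genetic, environment)]

# Because the components are independent, the total variance is roughly
# the sum of the component variances, so the genetic share comes out
# near 60% of the total.
genetic_share = statistics.variance(genetic) / statistics.variance(scores)
print(f"genetic share of variance: {genetic_share:.2f}")  # ~0.60
```

Note that this kind of decomposition describes variation across a population, not the makeup of any single individual's score.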

Genetics helps to provide an answer to the ultimate human-robot interface issue

The biological viewpoint forces us to distinguish between subjective knowledge (that which is instinctive in nature) and objective knowledge (that which we can verify by the scientific method). As an example, the will to live and the will to procreate are both subjective in nature. The genes that give rise to their expression exist not because they have inherent meaning, but merely because they have survived. Science can tell us how things work and how to do things, but not why. "Why" connotes meaning. "Why" includes why it is worth taking the risks involved in speaking logically and truthfully in public, as opposed to taking the more secure route of never saying anything that might contradict or hurt the feelings of people in power. I discuss in greater detail in Part Four the need for truthfulness to promote innovation in technology companies.

Since meaning is instinctive and based upon the evolutionary survival of particular genes, robots currently do not qualify: they lack an organic drive. Therefore, we have a paradox. On the one hand, the closer man comes to becoming like a robot --by hybridizing the organic parts of his brain and body with mechanical devices, or by linking mechanical devices to genes-- the more he demonstrates the kind of technological accomplishment that has meaning for the scientific community. On the other hand, the more an individual person becomes like a robot across all functional areas, the more this emergent creature begins to lose its instinctive drives. Then everything begins to lose its meaning. I would expect such a creature to care less and less, subjectively, about whether it exists or not. As an example, I cannot imagine how I would feel "human" if the glands that produce adrenaline and other hormones in my body were replaced with inorganic materials.

Man --the most dangerous "robot" of all?

It is hard to imagine a better allegory than Forbidden Planet to make some very important points that address this question.

For the uninitiated, this science fiction classic centers on Dr. Edward Morbius, who is found by an exploratory expedition from earth on a planet where he lives alone with his daughter Altaira and their all-purpose robot named Robby. Dr. Morbius came to the planet with an earlier earth expedition, but every member of that crew except himself and his daughter perished for mysterious reasons.

While alone with his daughter on the planet, Dr. Morbius was able to master an extremely advanced technological infrastructure left behind by an extinct race. We find out later that this race learned how to control extremely powerful forces through a form of mental wireless communication, and that the malevolent side of its nature led to fratricidal warfare and ultimate suicide. We also find out that Dr. Morbius has harnessed a mind-over-matter robotic capability. Unfortunately, he too is out of touch with his subconscious.

While Dr. Morbius sleeps, his subconscious resentment of the invasion of his secret world unleashes destructive forces, killing members of the earth expedition. But Dr. Morbius' aggressive "Id" force is just one part of the instinctive-genetic underpinnings to his human nature. He also holds affiliative, nurturant, and procreative instincts (resulting in his daughter, of course) that also function deep within his subconscious. Overlaying this in his conscious behavior are his duty-based, contractual, and utilitarian decision-making capabilities.

Some of Dr. Morbius' basic individualized instinctive traits such as hunger and satiation might be explained using individualized Darwinian selective models. Other opposing innate traits such as altruism and symbiosis on the one hand and aggression and parasitism on the other require much broader "group" or sociobiological models. All of these factors tug at each other and create a resultant vector that put Dr. Morbius in sharp conflict with other humans on the planet. His internal conflicts become so ferociously leveraged by advanced technology that they create massive and irreparable damage.

Dr. Morbius' subconscious was able to launch only a half-hearted attack on the earth expedition. It killed only a few crewmen and failed to destroy the space ship. Perhaps his ambivalence came from recognition of the fact that his daughter needed to meet other humans in order to find a husband. One of the leaders of the expedition might have provided the best chance for Dr. Morbius to have quality grandchildren and achieve long-term genetic survival. However, once Dr. Morbius became consciously aware of his Id monster, he realized that he might be condemned for the murder of crew members of both the first and second earth expeditions, and furthermore that he possessed destructive technology that might be too advanced for humans to handle wisely at their current stage of evolution. Perhaps out of panic, concern about technological abuse, a personal death wish, a desire to avoid living with public shame --or some combination of these and other possible factors-- he launched a full-blown attack on himself and ultimately his planet. At the same time, he gave his daughter and the earth crew adequate time to escape. Like the Reverend Jim Jones in Guyana, he found himself in a very bad situation, and ultimately decided to act as his own local governor, psychiatrist, social worker, judge, jury, and catastrophic self-executioner.

The bigger problem in defining what is "human" and what is "robotic" is that individual humans can vary greatly on a biological basis, and in the way they resolve conscious and subconscious forces. They vary not only in terms of their individual characteristics, but also on a broader genetic level, to include the tribal and racial levels. Some genetic traits, relative to other groups, might be considered alien (significantly deviant from one's gene pool on a tribal level), mutant (deviant on an individual level), or even parasitic (a term very loosely synonymous with "criminal").

Many people have within themselves undesirable innate traits that they do not want to admit to either themselves or society at large. Their will to live may be so deviant that it might even carry some type of death wish. Many people have fractured personalities in which they seek to maximize personal power and pleasure on a conscious level, yet on a subconscious level they hate themselves for their deficiencies or deviance. In fact, such people may even subconsciously feel that there are logical reasons to terminate themselves from the gene pool.

It is beyond the scope of this article to try to explain possible evolutionary mechanics behind deviant genes that express alien, mutant, parasitic, or suicidal traits. But no doubt these dimensions greatly complicate our ability to define the "human" side of the human-robot interface, and our ability to define the heart we want for Tin Man or the courage for Cowardly Lion.

The amount of genetic variation that exists in the real world should also make us more vigilant regarding humans who wind up in leadership positions in our society. For starters, we must learn not to "overly anthropomorphize" people who have the power to take away our liberty. In fact, there may be people in high places in our society today with highly fractured personalities. It is possible that real life people may act similarly to the fictional Dr. Morbius. They may become cataclysmically destructive once they fully realize how their fractured innate nature has had a hand in running down America over the past several decades. They may seek to divert attention and find ways to tighten their control in America by covertly fueling internal security problems and by drawing this country into various external conflicts.

In finding ways to structure a social order that can wisely handle advanced technology, I would argue for economically and politically decentralized societies in which people are free to express their natural tendency to associate with their own kind. It is much easier for us to observe and judge the character of our neighbors and people in power if we share a similar "gut" and "head." It is also much easier for people in power to feel a sense of shared values and responsibility in smaller and more homogeneous societies. Lastly, there is a greater chance that adjacent societies can provide "checks and balances" and competing models of freedom if they can remain independent and thereby avoid absorption into an imperial collective.

In contrast, it is much harder to discern fractured and malevolent personalities if we are forcefully integrated into an imperial order and ruled by politicians, central bankers, and media bosses in far-off places with alien backgrounds, who are good at being all things to all people while slyly pursuing agendas that might be alien or destructive to our interests. These are the last people I would want to empower with arbitrary control of "infinite numbers of myrmidons --robotic warriors."

And now to explain a final social issue...

At the beginning of Part One I mentioned that I think both the overall stock market and the tech sector are due for a very serious correction. I also mentioned that I greatly prefer the precious metals and other commodity-related areas at this time.

Given this viewpoint, the reader may wonder why I have gone to the effort of researching certain robot companies when the sector is still very immature from a publicly traded stock investment perspective.

Let me begin by explaining that I expect precious metals prices to skyrocket once both the general public and foreign holders of America's debt accept the fact that America is no longer credit-worthy. Before the dam finally breaks, our central bankers and their Wall Street and national media allies will probably completely exhaust all the resources they have available to continue manipulating markets and maintaining walls of illusion. I discuss underlying market-related aspects behind this in more detail in Part Three of my gold series.

Robot enterprise and commodities may not in fact be mutually exclusive in certain areas. In an effort to research ways that robotic technological development can benefit from the continued commodities bull market that I see ahead, I created another paper titled "Mining and Robotics." I believe that the natural resource area will be one of the few industrial sectors that can support advanced robotic research in the hard stagflationary times that will probably dominate the next decade.

To better grasp the strategic nature of advanced robotic research, and why it is so important to find industries with the means to support it, simply ask yourself how you would feel if you discovered fifteen to twenty years from now that a hostile country has millions of robots with a human level of intelligence that are in turn producing millions of other robots at an accelerating rate, and we have nothing that comes even close to this.
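The compounding dynamic in that scenario is easy to sketch. A minimal, purely illustrative calculation follows; the seed population, replication rate, and time horizon are all hypothetical numbers chosen to show how quickly self-replication compounds, not forecasts:

```python
def robot_population(seed: int, rate: float, years: int) -> int:
    """Compound a self-replicating robot population: each year the
    existing stock builds `rate` new robots per robot in service.
    All parameters are hypothetical."""
    population = seed
    for _ in range(years):
        population += int(population * rate)
    return population

# A modest seed of 1,000 units doubling yearly passes a million
# within a decade (1,000 x 2^10 = 1,024,000).
print(robot_population(1_000, 1.0, 10))  # 1024000
```

The point of the sketch is simply that exponential self-replication makes a late start very hard to recover from: a rival that begins compounding a decade earlier holds a lead measured in orders of magnitude, not units.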

The really deep macro fundamentals

While investigating opportunities in contrarian industries, we need to remember that the underlying story behind gold and other key commodities is unfortunately part of a tragic broader story.

Dr. Paul Craig Roberts observed in "Watching the Economy Disintegrate" that "The last two years have seen startling declines in American higher education enrollments in electrical and computer engineering, as American youth looks to non-tradable domestic services for employment stability." In his article "America's Has-Been Economy," Dr. Roberts also mentioned the other half of this cruel vicious circle: the Bureau of Labor Statistics reported a net loss of 221,000 jobs in six major engineering job classifications over the last five years due to offshore outsourcing.

In his 30 Oct 2004 Financial Sense Newshour update, James Puplava noted that over the last two decades the financial services sector has grown from 5% to 25% of the S&P 500's sector weightings. I view all of this as symptomatic of an extremely imbalanced economy.

From my own experience in the FIRE economy (Finance, Insurance, and Real Estate), I have observed huge philosophical differences between the nature of FIRE and engineering jobs and their output.

Most people in financial services deal with products that relate risk to the prospect of eventual cash flows. These products are usually far removed, in time and in feedback channels, from objective verification. Many people in the industry "succeed" by functioning like roving evangelists, for whom dream-spinning matters more than actual results in bringing in financial assets and generating fee or transactional income. In this world of "intangibles," there is tremendous pressure to maintain a perfect guru image and to avoid being associated with any form of failure or "problems" at all costs.

It is also important to note that in overcrowded FIRE jobs, a speculative "devil-take-the-hindmost" attitude has become widely accepted as a way to generate more transactions. If a stock is overvalued, people think nothing of running it higher and dumping it on a "greater fool." This national sport of intangible and impersonal hot potato creates a conflict of interest, in which the holder of a good has an incentive to withhold his best information from customers as he pawns off bad values on them at even worse prices.

The engineering world generally has a completely different orientation. First, it is focused on identifying real problems and overcoming them in ways that admit objective verification. Second, it is focused on creating real and useful products. The producer has an incentive to educate customers about true value fundamentals with his best information, in ways that are win-win for everyone.

In short, a major problem in America is that we have too many bright people in the financial services sector working full time to sell intangible fantasies and subtly defraud others, and too few bright people creating real and useful tradable goods that create a win-win situation for everyone.

Parting remarks

"We have now sunk to a depth at which restatement of the obvious is the first duty of intelligent men," said George Orwell, who added: "In a time of universal deceit - telling the truth is a revolutionary act."

I have found it refreshing to spend time describing the positives of robot yang to offset the negatives of gold yin, and restate what most readers already intuitively know, namely that the robot story symbolizes the vanguard of advanced automation, the continuing industrial revolution, and all that is still innovative and right about America.

Despite the various concerns and reservations that I have expressed in this series, I believe that in a healthy society, automation and new technology are, on the whole, a very good thing.


It is no different than with cowboys and farmers. Robots and humans should be friends. (Forbidden Planet actress Anne Francis as Altaira Morbius having a round with dance pal Robby. Copyright © 2004 Anne Francis. Please visit her web site at www.annefrancis.net).



Jump back to: Part One ... Part Two ... Part Three ... Part Four ... Part Five

Disclaimer: This report is for research/informational purposes only, and should not be construed as a recommendation of any security. Information contained herein has been compiled from sources believed to be reliable. There is, however, no guarantee of its accuracy or completeness.

Bill Fox is VP/Investment Strategist, America First Trust. Bill welcomes phone calls and email responses to this article. His most current contact information is at his web site: www.amfir.com.

Short URL for this web page: http://tinyurl.com/2alpolu

Flag carried by the 3rd Maryland Regiment at the Battle of Cowpens, S. Carolina, 1781

© William Fox. Sometimes William Fox offers viewpoints that are not necessarily his own to provide additional perspectives.