Amid all the hype about South Korea's proposed robot charter, let's not forget the more important question of whether robots should assume human roles in the first place
Ottawa Citizen, 04 May 2007
A few months ago, as part of its bid to put a robot in every household by 2020, the South Korean Ministry of Commerce, Industry and Energy announced its intention “to draw up an ethical guideline for the producers and users of robots as well as the robots themselves …”
Responsible computer programming, corporate accountability and consumer protection in the electronics sector — these are all good things.
Pause. Rewind. Replay.
What? An ethical guideline for the robots themselves?
Anticipating an event horizon — only one bar mitzvah away — in which intelligent service robots become a part of daily life, the South Korean call for a “robot ethics charter” smacks of the science fiction of Isaac Asimov.
When thinking through the South Korean agenda, Asimov is definitely worth considering. Intentionally or not, his fiction charted a path that has inspired the actual development and implementation of artificial intelligence (AI). Asimov was totally underwhelmed by Mary Shelley’s Frankenstein and the “dull, hundred-times-told tale” about humanly created, intelligent monsters that will rise up to destroy us. So he constructed a new narrative in which robots “were machines designed by engineers, not pseudo-men created by blasphemers.”
South Korea certainly seems to be taking its cue from Asimov’s writings, imagining friendly, intelligent robots that are dedicated to helping people. Asimov’s famous Robbie, for example, was a nursemaid tasked with caring for a child who loved the robot like a best friend. Asimov went to great pains in his storytelling to normalize robots — to undo a technophobia he dubbed the “Frankenstein complex.”
To further ensure that humanity would remain undaunted, the prolific Asimov articulated the Three Laws of Robotics, which he subsequently described as his most enduring literary contribution. Expressed in 61 words and examined in thousands of stories and letters over a period of more than 40 years, Asimov imagined what would happen if we were able to embed core morality into machine code and, by doing so, ensure that “it would never even enter into a robot’s mind” to intentionally break the following precepts:
- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings, except when such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
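Stripped of narrative, the Three Laws form a strict priority ordering: each law yields to those above it. A minimal sketch of that structure in code — purely hypothetical, with invented names throughout; nothing like this appears in Asimov's fiction or in any draft charter — might rank candidate actions lexicographically by which law they would violate:

```python
from dataclasses import dataclass

# Hypothetical illustration of the Three Laws as a lexicographic priority
# ordering: First Law violations are worst, then Second, then Third.

@dataclass
class Action:
    name: str
    harms_human: bool = False     # would violate the First Law
    disobeys_order: bool = False  # would violate the Second Law
    endangers_self: bool = False  # would violate the Third Law

def choose(candidates: list[Action]) -> Action:
    # Python orders False before True, so comparing these tuples checks
    # First Law violations first, then Second, then Third -- each law
    # matters only when all higher-ranked laws are tied.
    return min(
        candidates,
        key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self),
    )
```

Given a choice between letting a human come to harm and refusing an order, the ordering forces the robot to refuse the order; given a choice between refusing an order and obeying one that destroys it, the robot must obey.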
Leaving aside the thorny philosophical question of whether an AI could ever become a moral agent, it should be relatively obvious from their articulation that Asimov’s laws are not ethical or legal guidelines for robots but rather about them. The laws are meant to constrain the people who build robots of exponentially increasing intelligence so that the machines remain destined to lives of friendly servitude. The pecking order is clear: robots serve people.
And to the extent that it even contemplates a code “for robots themselves,” the Korean robot ethics charter is almost certain to follow suit.
It is interesting to ponder Asimov’s laws in the context of technological development in South Korea and elsewhere. For example, could Samsung’s Intelligent Surveillance & Security Guard Robot be programmed to correctly resolve the tension between Asimov’s first and second laws without abandoning its fundamental purpose? Funded by the South Korean government to overcome the limitations of human soldiers guarding the border to the north, Samsung’s machine-gun sentry robots (check ’em out — they’re on YouTube) use precision automation technologies to distinguish friendly from enemy activity and guarantee high shooting accuracy without the need for human presence. What will the South Korean robot ethics charter say about these?
When I began my academic career a decade ago, the Uniform Law Conference of Canada commissioned me to conduct a study on the far less ominous but related question of how to deal with computers that purport to negotiate and enter into contracts entirely independent of human review or interaction. Without a law resolving this novelty, there was concern that the future of e-commerce was uncertain. In the years since, as Canada Research Chair in Ethics, Law and Technology at the University of Ottawa, I have been gearing up for a book project tentatively titled Minding the Machine — a dual investigation involving (i) the AI project of putting minds into machines, and (ii) the corollary ethical and legal project of designing appropriate prohibitions and oversight mechanisms to mind those machines.
Until the silicon hits the sidewalk, I remain doubtful that South Korea’s robot ethics charter will live up to the media hype it has received. (One author went so far as to style it a “Hippocratic oath for androids.”)
My skepticism lies mainly in the subtext. Talk about burying the lede! In my view, the South Korean agenda has little to do with machine intelligence or roboethics proper. Once you sniff your way through the subterfuge of South Korea’s Jetsons-esque utopia, you will see that two very traditional drivers underlie all of this.
The first driver is financial. For better or worse, the South Korean government has identified robotics as a key economic strategy for the coming decades. The BBC and the New York Times report that millions of research dollars are being pumped into robotics in South Korea. Recognizing market saturation in industrial and military robotics, the strategy is to create a global market that does not currently exist — a market for domestic service robots. South Korea is hoping that if it builds them, we will come.
The second driver is social. With the lowest birthrate in the world, South Korea is predicted to face significant workforce shortages in the coming years.
The current strategy for making up the shortfall includes developing service “bots,” such as Asimov’s Robbie, that can perform a range of domestic chores and become companions and caregivers for the young and the old.
In any case, if you find the idea of using service robots to solve domestic labour issues somewhat exotic, it should be remembered that such proposals have longstanding precedents in North America. A nice example was offered to me by a brilliant cyberfeminist colleague in the form of a corporate slogan from the 1920s: “Clothes washing is a task for a machine, not for your wife. Turn the hard work into play. Buy her a Bluebird.”
In light of this slogan, it is intriguing to note the first of two central reasons that one of the charter’s drafters offered to the media for creating it. Recognizing the concerns that accompany the substitution of robots for people as caregivers and companions, the drafter ponders, “Imagine if some people treat androids as if the machines were their wives.”
Before we spend valuable resources commissioning working groups to invent “no-flirt” rules or other robotic laws to avoid inappropriate human-machine bonding, isn’t there a logically prior line of questioning about whether a declining birthrate is truly a problem and, in any event, whether intelligent service robots are the right response?
A headline in the Korea Times a little over a year ago proclaimed a more intuitive approach: “Gender equality essential to addressing low birthrate.”
It is no coincidence that the word robot itself derives from robota — a Czech word that connotes involuntary servitude. Aristotle was perhaps the first to recognize the politics of automation, speculating that “[i]f every instrument could accomplish its own work, obeying or anticipating the will of others, chief workmen would not want servants, nor masters slaves.”
Was he right? could robots be a technology of emancipation? or does automation just as easily reinforce existing gender stereotypes and an unjust status quo?
The answer to these questions surely depends on how those robots are designed and used. Not just the way they are programmed but, more broadly, the social roles and values that we ascribe to them.
Despite my Luddite sensibilities, I have always remained a reluctant optimist about the potential of ethically inspired automation technologies, AI and collective intelligence. I am an adamant believer in the general project of roboethics and ethical software design, and I commend much of the excellent research in these fields by groups like the Singularity Institute for AI and the European Robotics Research Network.
At the same time, I am concerned about robotic laws, charters and other sleight-of-hand that have the potential to misdirect us from the actual domains of ethics and social justice. Let us hope that I am mistaken in what I described as the true drivers of the South Korean robotics agenda and that its robot ethics charter will exceed its pre-release hype. Only time will tell.