ROBOT / 2001 ( Satoshi Kinoshita )
Series: Prints on canvas: Portraits
Medium: print on canvas
Size (inches): 20 x 20 (image size)
Size (mm): 508 x 508 (image size)
Catalog #: PC_038
Description: From an edition of 25. Signed, titled, dated, with copyright and edition number in magic ink on the reverse. Aside from the numbered edition there are 5 artist's proofs and 2 printer's proofs. See Catalog #PP_040 / Robot for more details.
Three Laws of Robotics
In science fiction, the Three Laws of Robotics are a set of three laws written by Isaac Asimov, which most robots in his fiction must obey:
1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Asimov attributed the Three Laws to John W. Campbell, from a conversation that took place on December 23, 1940. Campbell, however, claimed that Asimov already had the Laws in his mind, and that they simply needed to be stated explicitly.
Although Asimov pins the Laws' creation to one date, their appearance in his literature happened over a period of time. Asimov wrote two stories without the Three Laws being mentioned explicitly ("Robbie" and "Reason"); he assumed, however, that robots would have certain inherent safeguards. "Liar!", Asimov's third robot story, makes the first mention of the First Law but of none of the others. All three Laws finally appeared together in "Runaround". When these stories and several others were compiled in the anthology I, Robot, "Reason" was updated to acknowledge the Three Laws.
The Three Laws are often used in science fiction novels written by other authors, but tradition dictates that only Dr. Asimov would ever quote the Laws explicitly.
A trilogy situated within Asimov's fictional universe was written in the 1990s by Roger MacBride Allen with the prefix "Isaac Asimov's ---" on each title (Caliban, Inferno and Utopia). In it, a set of new laws is introduced. According to the introduction of the first book, these were devised by the author in discussion with Asimov himself.
Some amateur roboticists have evidently come to believe that the Three Laws have a status akin to the laws of physics; i.e., a situation which violates these laws is inherently impossible. This is incorrect, as the Three Laws are quite deliberately hardwired into the positronic brains of Asimov's robots. Asimov in fact distinguishes the class of robots which follow the Three Laws, calling them Asenion robots. The robots in Asimov's stories, all being Asenion robots, are incapable of knowingly violating the Three Laws, but there is nothing to stop any robot in other stories or in the real world from being non-Asenion.
This stands in stark contrast to the nature of Asimov's own robots. Although at first the Laws were simply carefully engineered safeguards, in later stories Asimov states clearly that it would take a significant investment in research to create intelligent robots without them, because the Laws had become an inalienable part of the mathematical foundation underlying the positronic brain.
In the real world, not only are the laws optional, but significant advances in artificial intelligence would be needed for robots to easily understand them. Some have argued that, since the military is a major source of funding for robotic research, it is unlikely such laws would be built into the design. Others have countered that the military would want strong safeguards built into any robot where possible, so laws similar to these would be embedded if possible.
The Three Laws are sometimes seen as a future ideal by those working in artificial intelligence - once an intelligence has reached the stage where it can comprehend these laws, it is truly intelligent.
None of Asimov's robot stories presents the Three Laws of Robotics as flawless. On the contrary, the stories expose their flaws and misconceptions through very serious glitches. Asimov once wondered how he had managed to draw so many stories out of the few words that made up these Laws. For a few stories, the only solution was to change the Laws themselves. A few examples:
The Three Laws were extended by a fourth law, the "Zeroth Law", so named to continue the pattern whereby lower-numbered laws supersede higher-numbered ones. It was supposedly invented by R. Giskard Reventlov in Robots and Empire, although it was mentioned earlier by Susan Calvin in "The Evitable Conflict". In Robots and Empire, Giskard was the first robot to act according to the Zeroth Law, although doing so proved destructive to his positronic brain, as he violated the First Law. R. Daneel Olivaw, over the course of many thousands of years, was able to adapt himself to fully obey the Zeroth Law.
0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
A condition stating that the Zeroth Law must not be broken was added to the original Laws.
Several NS-2 robots (Nestor robots) were created with only part of the First Law. It read:
1. A robot may not harm a human being.
This solved the original problem: robots would not allow anyone to be exposed to necessary radiation, even for proper time limits (the robots were rendered inoperable by doses reasonably safe for humans, and were being destroyed attempting to rescue the humans). However, it caused much other trouble, as detailed in "Little Lost Robot".
The Solarians eventually created robots with the Three Laws as normal but with a warped definition of "human". As in a short story in which robots were made capable of harming aliens, the Solarians told their robots that only people speaking the Solarian language were human. This way, their robots had no problem harming non-Solarian human beings (and in fact had specific orders to that effect).
In MacBride Allen's Caliban trilogy, the scientists of the Spacer world Inferno created robots with a new set of laws. These robots are no longer required to serve humans and are programmed to find their own reason for being; while they still cannot hurt humans, they are not required to prevent harm, which allows the leader of the New Law robots, Prospero, to plot the perfect murder. The title character, Caliban, is the only robot programmed without any laws at all.
The problem of robots considering themselves human has been alluded to many times. Humaniform robots make the problem more noticeable. Examples can be found in the novel The Robots of Dawn and the short stories "Evidence" and "The Bicentennial Man".
After a murder on Solaria in The Naked Sun, Elijah Baley claimed that the Laws had been deliberately misrepresented because robots could unknowingly break any of them.
A parody of the Three Laws was made for Susan Calvin by Gerald Black:
1. Thou shalt protect the robot with all thy might and all thy heart and all thy soul.
2. Thou shalt hold the interests of US Robots and Mechanical Men, Inc. holy provided it interfereth not with the First Law.
3. Thou shalt give passing consideration to a human being provided it interfereth not with the First and Second laws.
Gaia, the planet with combined intelligence in the Foundation novels, adopted a law similar to the First as their philosophy:
Gaia may not harm life or, through inaction, allow life to come to harm.
The Laws are not considered absolutes by advanced robots. In many stories, such as "Runaround", the potential and severity of all actions are weighed, and a robot will break the Laws as little as possible rather than do nothing at all. In another story, problems with the First Law were noted: for example, a robot could not function as a surgeon, since surgery inflicts damage on a human; nor could it write game plans for American football, since following them would lead to the injury of humans.
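This weighing of potentials can be pictured as a lexicographic cost comparison, where any First Law cost outweighs all Second Law cost, which in turn outweighs all Third Law cost. The following sketch is purely illustrative (nothing like it appears in Asimov's fiction); the action names and severity values are invented for the example:

```python
# Illustrative sketch (not from Asimov): a robot weighs the potential
# severity of each law violation and picks the action with the
# lexicographically smallest cost, bending a lower-priority law
# slightly rather than doing nothing at all.

def law_cost(action):
    """Return (First, Second, Third) law violation severities, each 0.0-1.0."""
    return (
        action.get("harm_to_human", 0.0),    # First Law
        action.get("order_disobeyed", 0.0),  # Second Law
        action.get("self_damage", 0.0),      # Third Law
    )

def choose_action(actions):
    # Python compares tuples lexicographically, so any nonzero First Law
    # cost dominates every possible Second or Third Law cost.
    return min(actions, key=law_cost)

# Roughly the "Runaround" dilemma: retrieving the selenium slightly
# endangers the robot (Third Law) but obeys the order (Second Law);
# retreating keeps the robot safe but disobeys the order.
actions = [
    {"name": "retrieve selenium", "self_damage": 0.4},
    {"name": "retreat to safety", "order_disobeyed": 1.0},
    {"name": "do nothing", "order_disobeyed": 1.0, "self_damage": 0.1},
]
print(choose_action(actions)["name"])  # retrieve selenium
```

Under this toy model, a small Third Law violation always beats any Second Law violation, which is why retrieving the selenium wins over retreating or standing idle.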
Roger Clarke wrote a pair of papers analyzing the complications in implementing these laws, should systems someday be capable of employing them. He argued that "Asimov's Laws of Robotics have been a very successful literary device. Perhaps ironically, or perhaps because it was artistically appropriate, the sum of Asimov's stories disprove the contention that he began with: It is not possible to reliably constrain the behavior of robots by devising and applying a set of rules."
John Sladek's parodic short story "Broot Force" (supposedly written by "I-Click As-I-Move") concerns a group of Asimov-style robots whose actions are constrained by the "Three Laws of Robish", which are "coincidentally" identical to Asimov's laws. The robots in Sladek's story all manage to find logical loopholes in the Three Laws, usually with bloody results.
-From Wikipedia, the free encyclopedia.
Gallery opening
500 Fifth Avenue, Suite 1820 (Between 42nd and 43rd)
...