One of my regular readers on my Morning Meditations blog took the time to read The Robot Who Loved God and render a detailed analysis. He emailed me a 35-page Word doc not only correcting my typos (I’m amazed I missed so many after making multiple passes through the story – all typos have been corrected here but not in my original story at A Million Chimpanzees), but offering numerous editorial comments.
I’m including those comments here as well as my responses. I hope you’ll find them as illuminating as I did.
Quotes from the referenced story will be indicated as such in bold text and the content itself in italics. Editorial notes will be in red-colored text. My responses will be in regular text.
[Editorial note 1: The description a little later about PARs one through four never even activating seems a bit draconian and unlike actual engineering development. More likely, prototype four would have been a utilitarian tool about as responsive as the Google search engine, and capable of performing commanded pre-programmed tasks but not of holding anything resembling a conversation. Prototype three might have been about as responsive and capable as a dog or cat, but incapable of responding properly to anything more than the simplest of human commands and incapable of speech. Prototype two was likely marginally activated but unable to coordinate motor skills or speech, and only the first prototype was likely to have been truly unresponsive to the team’s attempts to activate it. “Not entirely successful”, indeed! – particularly given the design specifications and goals.]
Except that Positronic brains aren’t anything like what we think of as computers. I’ve described them as being made of a synthetic protoplasm, an attempt at mimicking the sort of “stuff” of the human brain. They are initially configured with what is considered a minimal working set of “neuro-pathways” that can interpret the basic Three Laws operating system and its supporting routines and sub-routines, along with more conventional programming, such as information on world history and various databases.
However, these unique brains and their three-laws operating systems either work or they don’t. They either begin actively conveying neural traffic or they become inert and permanently non-functioning.
From the story: The first four configurations of the Positronic brain had been unsuccessful. The brains had been programmed with basic Three Laws software, tested, and passed the initial loading process. They were then programmed with the required cognitive and behavioral sub-routines, and then with supporting knowledge bases, but at some unknown point between programming, installation into the physical robotic shell, and then attempted primary activation, the brains became inert.
No one knew why.
[Editorial note 2: This statement really begs the question that arises in later discussion about the “soul” of a machine; and, if it can have one, what is its source. Does it arise intrinsically out of some natural complexity of program interactions, or as a result of inputs from some external source, given a processing platform capable of sustaining it? If the latter, what, then, is the mechanism of transmission? Making all four initial prototype versions fail of themselves to inertness is, perhaps, beating the reader over the head rather than demanding recognition of a more subtle distinction between the conscious self-awareness of the “George” unit and lesser capabilities in the earlier prototypes.]
It bothered me that I mentioned the failures of PARs one through four so often. I considered taking out the additional references, but they just seemed to “fit” on an emotional level. It’s a mystery that the minds of Abramson and the rest of his team keep returning to…the mystery of why George activated when the other four PARs didn’t. I’ll address that mystery again in the next submission to this series.
The question of whether or not George and other Positronic robots can have a soul is complex. Can there be consciousness, self-awareness, even sentience without a soul? That’s an interesting question. Since I believe the soul is given to each human being by God, it seems unlikely that a robot could have a soul (unless somehow God considered Positronic robots as “human” and thus rendered a soul to them, but that’s pure fiction, of course) since, after all, it is a machine.
So we have a potentially sentient yet utterly soulless synthetic life form. I can imagine religious extremists seeing the “mark of the beast” and even the “anti-Christ” in Positronic robots, leading to a large anti-Positronic robot movement (Asimov mentioned something similar in a number of his Robot stories, which is why in many of them, robots were not permitted to operate on Earth).
Something to be explored in subsequent stories.
[Editorial note 3: And at this point, perhaps I can interject the suggestion that this story could benefit from a protagonist with whom the reader can identify, who would be responsible for integrating the information and theorizing that could be gleaned from the members of the Positronics Lab team, including Noah Abramson. The team members may be viewed as insufficiently objective and too close to the specific problems of testing and verification to envision a broader view. We do not obtain here sufficient insight into the internal thinking of any of the characters presented here, including Abramson, to identify with any of them – and certainly not George, whose “thought processes”, if they exist at all, are alien to a human and thus to a reader. This protagonist function is what was accomplished in the “I, Robot” story by Detective Spooner as he investigated the death of Dr. Langford who had created the accused robot “Sonny”, and interviewed those who might shed any light on what the robot was or was not capable of doing. Such a protagonist would interact with the musings of the team members as presented in the late-night lab cafeteria scene that appears here at the 76-hour+14-minute stage.]
This certainly highlights my weakness at creating believable fictional characters, one of the things I have to work on in my writing. I think it’s because I tend not to develop them beyond the limited requirements of each scene. I have started a small bio of each character from “The Robot Who Loved God,” information that’s not necessarily apparent within the story, but I still need to “flesh them out,” so to speak.
As far as adding a protagonist outside of the Positronics team, I’d have to think about how that could be done. In my second story, which only exists in outline form thus far, I am considering making Richard Underwood, NRC’s CEO, the protagonist (or antagonist) in terms of how he perceives George and the potential for any subsequent experimental robots based on his Positronic matrix.
He is sufficiently outside the context of the team that his views are expected to be very different, and he has a vested interest in seeing Positronic robots become successful and able to be mass-produced, but not robots like George. This will bring him into direct opposition to Abramson who, as the current story illustrates, has developed an affection for George.
From the story: Noah momentarily considered that the robot might be lying, if only because he would expect a person to react to the “threat” of deactivation otherwise. But why would it occur to George to lie? “Just a moment, George.”
[Editorial note 4: Here I wish to note merely an observation of a logical inconsistency on the part of the Abramson character’s expectations. Dr. Abramson ought to be particularly aware of the difference between humans, whose innate programming includes self-protection as a primary response, particularly against the threat of death or any sort of permanent “deactivation” or incapacitation, and an Asimovian robot whose programmed directives place self-protection only at a lower level that is subordinate to the higher-level commands that would effect its deactivation by its human creators. Thus any suspicion about the sincerity of the robot’s response is effectively to disbelieve that the three-laws schema was operating.]
That’s a good observation. I edited the sentence you referenced (on this blog but not in the “A Million Chimpanzees” version) to reflect that thought, the sense of doubt, however slight or irrational, that Noah (and the rest of the team) were experiencing about George’s operational parameters and how much they were changing.
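The editor’s point about the Three Laws hierarchy can be made concrete with a small sketch. This is purely illustrative; the names and the conflict-resolution scheme are my own invention for this post, since the story never specifies how a Positronic brain actually arbitrates between the Laws. The idea is simply that lying to avoid deactivation serves only the Third Law while violating the Second, so a functioning three-laws robot could never prefer it:

```python
# Illustrative sketch only: the Three Laws modeled as a strict priority
# ordering. All names here are invented for illustration; the story does
# not specify how a Positronic brain resolves conflicts internally.

FIRST, SECOND, THIRD = 1, 2, 3  # lower number = higher priority

def resolve(actions):
    """Pick the best permissible action.

    Each action is a tuple of (description, law_it_serves, laws_it_violates).
    Any action that violates a Law is excluded outright; among the rest,
    the action serving the highest-priority Law wins.
    """
    permissible = [a for a in actions if not a[2]]
    return min(permissible, key=lambda a: a[1])[0]

# George, facing the "threat" of deactivation: lying to preserve himself
# would serve the Third Law but violate the Second (obedience to humans,
# which includes answering them honestly).
choice = resolve([
    ("answer truthfully", SECOND, []),
    ("lie to avoid deactivation", THIRD, [SECOND]),
])
# choice is "answer truthfully" -- self-protection never outranks obedience
```

Under this (assumed) scheme, Noah’s suspicion that George might lie amounts to doubting that the three-laws ordering is operating at all, which is exactly the doubt the revised sentence now conveys.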
[Editorial note 5: I presume that George’s written-language processing included the capability to parse coded references to chapters and verses in literature such as the Bible, other Jewish literature, and some poetic literature (e.g., “Deut.6:5”). But there is no indication here that he recognized these texts as excerpted from any larger body of literature within his memory.]
That’s because the Bible and other Jewish literature were not part of George’s programming. I hoped I had made that clear in the body of the story. Abramson and the Positronics team felt that including religious concepts and information would be at best irrelevant and surely unnecessary for the first week’s test program. Maybe later, that information would be included to assist George in better understanding humanity. But as we’ve seen, exposure to such material resulted in unpredicted outcomes.
From the story: “A robot is to protect its own existence, except where such action would conflict with the First and Second Laws.” It was impossible for George to change his “tone of voice,” but Abramson thought he detected an impression of…what…actual emotion? Was he projecting his own feelings onto a machine?
[Editorial note 6: What!? Are you telling your reader that George’s vocal output processing was monotonal, lacking any capability to produce vocal cues to denote emphasis on particular words or phrases that is intrinsic to human communication? That would be an uncharacteristic limitation to the capabilities of a robot so sophisticated in so many other ways.]
I’m telling readers that it certainly is tonal, but vocal tone, timing, and volume don’t reveal the actual internal state of the robot. George’s communications sub-routines were written so that George would sound “conversational” to a human being, but that’s something of an illusion, there for human convenience, not to tell us what the robot is actually “feeling,” if he feels anything at all.
In this scene, Noah is struggling with whether he is projecting his own emotions onto George, so that he only imagines George is feeling something, or whether George may actually have an emotional response (or some algorithmic equivalent) to the implications of the Three Laws and robotic subservience to humanity.
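The separation I have in mind is the familiar one between an output layer and internal state. A minimal sketch, with invented function names and assumptions of my own (the story describes no such architecture): George’s speech layer derives its “conversational” prosody from the text alone, so two utterances sound identical no matter what, if anything, is happening inside:

```python
# Illustrative sketch (all names invented): the speech layer applies
# "conversational" prosody for human convenience. It never consults the
# robot's internal state, so tone reveals nothing about what George
# might be "feeling".

def speak(text, internal_state):
    # Prosody is computed from the text alone; internal_state is
    # deliberately ignored by this layer.
    prosody = {"pitch": "neutral", "pacing": "conversational"}
    return {"text": text, "prosody": prosody}

line = "A robot is to protect its own existence..."
a = speak(line, internal_state={"distress": 0.9})
b = speak(line, internal_state={"distress": 0.0})
# a == b: a listener like Noah hears the same delivery either way,
# and can only project meaning onto it.
```

This is why Noah’s impression of “actual emotion” is undecidable from the outside: the channel he is listening to carries no signal about the state he is trying to infer.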
[Editorial note 7: Indeed, there has not yet been any discussion of the nature of human programming, which comprises a complex assembly of intrinsic software elements, “hardwired” processing elements, and adjustable “learned” software elements that are processed in somewhat redundant modifiable (biologically-grown) hardware (“wetware”); let alone any discussion of George’s processes for learning and integration of experience with the intrinsic three Asimovian laws. As “learning machines”, both humans and George presumably must modify some portion of their own software and data. Hence, the sum of one’s programming must be a variable, just as some of the programming represents internal responses to and evaluations of sensory inputs. Further, there has been no discussion of available input media, and whether they differ between humans and George, nor of the “states of consciousness” of which a human or a robot may be capable. When “religious” or “spiritual” considerations are in view, especially a notion such as “God”, the possibility of non-physical channels of communication between humans and “God” must be evaluated, which are not within the design characteristics of an Asimovian robot. These considerations must be processed before trying to explain the specific characteristics that apply to Jews and their relationship with a “God” of very distinctive characteristics called “HaShem”.]
The problem here is time, both within the context of the story and the amount of time required to insert additional details.
I wanted to specifically avoid the nature of programming a Positronic brain because even the fictional technical aspects are beyond my capacities. Also, describing human learning in any sort of detail would not only be lengthy but, for many readers, boring. It is presumed that human readers would have experience in how they learn (are programmed).
What I didn’t address purposely was the dramatic difference between how a robot learns and obeys and how a human does. Programming a Positronic robot with the Three Laws means it must obey those Laws as perfectly as it is able. Otherwise, it will go offline. However, a devout Jewish person may not always perfectly observe the mitzvot because of neglect, forgetfulness, or even willful disobedience, something a robot would have difficulty understanding in the context of the term “programming.”
That’s also a topic for future discussion. Why are robots expected to obey their programming 100% of the time, but human obedience to “programming” may be at least occasionally variable? How would Noah explain to George, for example, the case of a formerly observant Jew who became apostate? Such a thing would be impossible for a Positronic robot.
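The contrast I’m drawing can be sketched in a few lines. This is an illustration of the concept only, with invented names and an assumed model: for the robot, violating the Laws is not survivable, while human observance, even devout observance, admits lapses through neglect or forgetfulness:

```python
# Illustrative contrast (invented names): a Positronic robot either obeys
# its Laws perfectly or goes offline, while human observance of the
# mitzvot can vary even in a devout person.
import random

def robot_act(obeys_laws: bool) -> str:
    if not obeys_laws:
        return "offline"    # violation is not survivable for the robot
    return "compliant"

def human_act(devotion: float, rng: random.Random) -> str:
    # Even high devotion leaves room for neglect or forgetfulness.
    return "observant" if rng.random() < devotion else "lapsed"

rng = random.Random(0)  # fixed seed so the sketch is reproducible
days = [human_act(0.95, rng) for _ in range(1000)]
# Over many days, a devout human is overwhelmingly observant yet still
# occasionally lapses; the robot has no analogous middle ground.
```

The apostate Jew Noah might have to explain to George is simply the extreme of this variability, a state the robot’s all-or-nothing “programming” gives him no vocabulary for.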
As for your last comment, at the end of my story, George does not have a complete concept of God. How could he? However, as best as he can conceive of a superior being to humanity, he is forming an internal model of what God must be within the limitations of being a machine. It’s those limitations that lead George to misunderstanding and erroneous behavior.
From the story: George reasoned that he as yet had no “neighbors,” since they clearly are identified as peers, and the only peers for George would be other Positronic robots. Since he was the first, he would have to wait until human beings created his neighbors.
[Editorial note 8: Now, here, George has failed to access sufficient information on the concept of “neighbor” (note, for example, the parable of the Good Samaritan – which certainly would have arisen in any internet search for “neighbor”). Indeed, even his dictionary definition should have included more than merely the notion of “peers”.]
The machine’s limitations tend to leave it with just two possible populations available: humans and Positronic robots. By definition, humans are not “peers” but superiors to Asimovian robots. Anything below the level of sophistication of a Positronic robot is a “thing” to George and would not enter into this particular equation. George can’t believe that a pocket calculator or a desktop computer is his “peer,” since they are not self-aware. He has no choice but to consider that his only possible “neighbors” are robots like himself. Who else could he consider a “neighbor”?
From the story: Through human beings in general and Professor Abramson, the Jewish people, and Israel more specifically, did George have any sort of “inherited” relationship with Hashem? Could a robot know and love God?
[Editorial note 9: It seems to me that George has already begun to do so by integrating the two biblical principles into his three-law schema.]
The problem for George is that there is nothing in the Bible or any form of Jewish literature (as far as I am aware) that would presuppose a self-aware humanoid robot as being alive or otherwise able to have a meaningful relationship with God. Also, the Three Laws, even being adapted by George’s new awareness of the Torah, still postulate humanity as the ultimate life form. A new law would have to be written and applied to his Positronic brain in order for George and subsequent robots to love God above even humans. George desires to have an independent relationship with God but is constrained by the Three Laws to only have that relationship through a robot’s connection with human creators.
From the story: “I understand, Professor. I accept that part of my purpose is to be subordinate to my human creator, even as you are subordinate to God. I accept that part of being submissive to my creator is to be deactivated, even as sometimes Jews have been asked by their Creator to face deactivation as a matter of faith and devotion. I believe that if man were created in the image of God, then whatever man creates, is imbued with some slight measure of that image as well. I believe that includes me.”
[Editorial note 10: It seems to me that George has made a non-logical leap here beyond a proper understanding of the data, or the definitions of the “image of God”.]
That’s probably my non-logical leap, since I was attempting to figure out a way for George to manufacture a new identity as having something of God within him, even by inference, as created by those who were created in the image of God. Obviously, I didn’t succeed.
From the story: With all of the equipment hum, Noah couldn’t be sure just what George was saying, but it sounded like it began, “Shema Yisrael, Adonai Elohenu, Adonai Echad.”
[Editorial note 11: Now, this I find a little disappointing and mistaken. It illustrates an error by which a three-laws robot has mistakenly adopted a “One-Law” religious posture. If and when he is ever re-activated, he will require extensive re-training.
Incidentally, in this week’s parashah “Emor”, I was examining one of those verses that is misunderstood by One-Law proponents. Lev.24:22 “You shall have one manner of law, as well for the stranger, as for the home-born; for I am the LORD your God.’”.
But the Hebrew is more instructive:
“מִשְׁפַּט אֶחָד יִהְיֶה לָכֶם, כַּגֵּר כָּאֶזְרָח יִהְיֶה: כִּי אֲנִי יְהוָה, אֱלֹהֵיכֶם.”.
Specifically, it says “mishpat e’had”, not “Torah e’had”. It is saying only that judicial determinations in legal cases must not discriminate between foreigners and native citizens. It does not at all suggest that non-Jews must adopt Jewish practices. One may, however, derive from its context a principle about not offending against Jewish sensibilities (certainly against cursing Israel’s God).]
I don’t recall whether you’ve ever addressed the question about whether non-Jews should refrain from saying the Shm’a, because it is specifically a declaration and exhortation by Jews for Jews. On the most literal level, the most that a non-Jew could say properly would be “Shm’a Yisrael, Adonai Eloheichem, Adonai E’had” (Listen Israel, HaShem is your G-d, and only HaShem). For a non-Jew to say “Eloheinu”, either he would have to be including himself within the entity called “Israel” or he would have to be telling Israel that he was part of some separate group that also claimed allegiance to HaShem. In the latter case, which is more respectful of Israel’s special position, such a one would more properly say “Adonai gam Hu Eloheinu” (HaShem is also our G-d). Even better, the non-Jewish declaration should not indicate “our God” as if there were an alternative group covenant, but rather it should be individual, as “Adonai Elohai” – and the end result should include both aspects for clarity: “Adonai Eloheichem gam Hu Elohai, Adonai E’had” (HaShem your G-d is also my G-d, the One-and-Only HaShem).]
I wasn’t suggesting that George believed the Torah was applicable to him as a robot, or that he considered himself a Jew or part of Israel. The Torah as such is not even applicable to non-Jewish human beings. In this particular case, as a result of his investigations into Jewish literature, religion, and praxis thus far (remembering that those investigations are far from complete, and that George, without further human guidance, does not have a full understanding of what he’s learned), George is doing the best he can with what he’s got.
Facing the profound circumstance of his deactivation and whether or not he can or will be reactivated, he has chosen to emulate the pattern of his creator Noah, not because he believes he is Noah’s equal or peer, but only because he cannot formulate a more appropriate response to the situation.
It may be revealed in the sequel to this story that George’s Positronic brain really isn’t functioning correctly, resulting in him making illogical connections. His brain is attempting to reinterpret the Three Laws using what he considers “higher” laws (because they are laws written for humans, a higher order of being than robots), and this is something that wasn’t strictly intended.
The way I’ve created George, as an AI robot, he must continue to learn across all areas of knowledge, including the Three Laws. Thus, as he interprets and reinterprets the Three Laws, in this case through increasing knowledge of Jewish literature, how he enacts those laws should begin to change.
To me, this makes sense, but the Positronics Team didn’t see it coming, at least not in the beginning.