A Reader’s Analysis of “The Robot Who Loved God”

One of my regular readers on my Morning Meditations blog took the time to read The Robot Who Loved God and render a detailed analysis. He emailed me a 35-page Word document that not only corrected my typos (I’m amazed I missed so many after making multiple passes through the story – all typos have been corrected here, though not in my original story at A Million Chimpanzees), but also offered numerous editorial comments.

I’m including those comments here as well as my responses. I hope you’ll find them as illuminating as I did.

Quotes from the referenced story will be indicated as such in bold text and the content itself in italics. Editorial notes will be in red-colored text. My responses will be in regular text.

[Editorial note 1: The description a little later about PARs one through four never even activating seems a bit draconian and unlike actual engineering development. More likely, prototype four would have been a utilitarian tool about as responsive as the Google search engine, and capable of performing commanded pre-programmed tasks but not of holding anything resembling a conversation. Prototype three might have been about as responsive and capable as a dog or cat, but incapable of responding properly to anything more than the simplest of human commands and incapable of speech. Prototype two was likely marginally activated but unable to coordinate motor skills or speech, and only the first prototype was likely to have been truly unresponsive to the team’s attempts to activate it. “Not entirely successful”, indeed! – particularly given the design specifications and goals.]

Except that Positronic brains aren’t anything like what we think of as computers. I’ve described them as being made of a synthetic protoplasm, an attempt at mimicking the sort of “stuff” of the human brain. They are initially configured with what is considered a minimal working set of “neuro-pathways” that can interpret their basic three-laws operating system and supporting routines and sub-routines, along with other more conventional programming, such as information on world history and various databases.

However, these unique brains and their three-laws operating systems either work or they don’t. They either begin actively conveying neural traffic or they become inert and permanently non-functioning.

From the story: The first four configurations of the Positronic brain had been unsuccessful. The brains had been programmed with basic Three Laws software, tested, and passed the initial loading process. They were then programmed with the required cognitive and behavioral sub-routines, and then with supporting knowledge bases, but at some unknown point between programming, installation into the physical robotic shell, and then attempted primary activation, the brains became inert.

No one knew why.

[Editorial note 2: This statement really raises the question that arises in later discussion about the “soul” of a machine; and, if it can have one, what is its source. Does it arise intrinsically out of some natural complexity of program interactions, or as a result of inputs from some external source, given a processing platform capable of sustaining it? If the latter, what, then, is the mechanism of transmission? Making all four initial prototype versions fail of themselves to inertness is, perhaps, beating the reader over the head rather than demanding recognition of a more subtle distinction between the conscious self-awareness of the “George” unit and lesser capabilities in the earlier prototypes.]

It bothered me that I mentioned the failures of PARs one through four so often. I considered taking out the additional references, but they just seemed to “fit” on an emotional level. It’s a mystery that the minds of Abramson and the rest of his team keep returning to…the mystery of why George activated when the other four PARs didn’t. I’ll address that mystery again in the next submission to this series.

The question of whether or not George and other Positronic robots can have a soul is a complex one. Can there be consciousness, self-awareness, even sentience without a soul? That’s an interesting question. Since I believe the soul is given to each human being by God, it seems unlikely that a robot could have a soul (unless somehow God considered Positronic robots “human” and thus granted a soul to them, but that’s pure fiction, of course) since, after all, it is a machine.

So we have a potentially sentient yet utterly soulless synthetic life form. I can imagine religious extremists seeing the “mark of the beast” and even the “anti-Christ” in Positronic robots, leading to a large anti-Positronic robot movement (Asimov mentioned something similar in a number of his Robot stories, which is why in many of them, robots were not permitted to operate on Earth).

Something to be explored in subsequent stories.

[Editorial note 3: And at this point, perhaps I can interject the suggestion that this story could benefit from a protagonist with whom the reader can identify, who would be responsible for integrating the information and theorizing that could be gleaned from the members of the Positronics Lab team, including Noah Abramson. The team members may be viewed as insufficiently objective and too close to the specific problems of testing and verification to envision a broader view. We do not obtain here sufficient insight into the internal thinking of any of the characters presented, including Abramson, to identify with any of them – and certainly not George, whose “thought processes”, if they exist at all, are alien to a human and thus to a reader. This protagonist function is what was accomplished in the “I, Robot” film by Detective Spooner as he investigated the death of Dr. Lanning, who had created the accused robot “Sonny”, and interviewed those who might shed any light on what the robot was or was not capable of doing. Such a protagonist would interact with the musings of the team members as presented in the late-night lab cafeteria scene that appears here at the 76-hour+14-minute stage.]

This certainly highlights my weakness at creating believable fictional characters, one of the things I have to work on in my writing. I think it’s because I tend not to develop them beyond the limited requirements of each scene. I have started a small bio for each character in “The Robot Who Loved God,” information that’s not necessarily apparent within the story, but I still need to “flesh them out,” so to speak.

As for adding a protagonist outside of the Positronics team, I’d have to think about how that could be done. In my second story, which only exists in outline form thus far, I am considering making Richard Underwood, NRC’s CEO, the protagonist (or antagonist) in terms of how he perceives George and the potential for any subsequent experimental robots based on his Positronic matrix.

He is sufficiently outside the context of the team that his views are expected to be very different, and he has a vested interest in seeing Positronic robots become successful and able to be mass-produced, but not robots like George. This will bring him into direct opposition to Abramson who, as the current story illustrates, has developed an affection for George.

From the story: Noah momentarily considered that the robot might be lying, if only because he would expect a person to react to the “threat” of deactivation otherwise. But why would it occur to George to lie? “Just a moment, George.”

[Editorial note 4: Here I wish to note merely an observation of a logical inconsistency in the Abramson character’s expectations. Dr. Abramson ought to be particularly aware of the difference between humans, whose innate programming includes self-protection as a primary response, particularly against the threat of death or any sort of permanent “deactivation” or incapacitation, and an Asimovian robot, whose programmed directives place self-protection at a lower priority, subordinate to the higher-level commands that would effect its deactivation by its human creators. Thus any suspicion about the sincerity of the robot’s response effectively amounts to disbelieving that the three-laws schema was operating.]

That’s a good observation. I edited the sentence you referenced (on this blog but not in the “A Million Chimpanzees” version) to reflect that thought, the sense of doubt, however slight or irrational, that Noah (and the rest of the team) was experiencing about George’s operational parameters and how much they were changing.
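Since the note turns on the priority ordering of the Three Laws, here is a minimal sketch of how such an arbitration scheme might be modeled in code. To be clear, this is my own hypothetical illustration, not anything specified in the story or in Asimov; the Action class, its flags, and the choose function are all invented for the example.

# Minimal sketch of Asimovian law-priority arbitration.
# Hypothetical illustration; names and structure are not from the story.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool      # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    endangers_self: bool   # would violate the Third Law

def law_violations(action):
    # Tuples compare lexicographically, so sorting by this key ranks
    # First Law compliance above obedience, and obedience above
    # self-preservation.
    return (action.harms_human, action.disobeys_order, action.endangers_self)

def choose(actions):
    # Pick the action with the least severe violation profile.
    return min(actions, key=law_violations)

# Example: ordered to submit to deactivation. Refusing would protect the
# robot (Third Law) but disobey a human (Second Law); complying does the
# reverse. The Second Law outranks the Third, so the arbiter selects
# compliance.
comply = Action("submit to deactivation", False, False, True)
refuse = Action("refuse deactivation", False, True, False)
assert choose([comply, refuse]) is comply

Under that ordering, suspecting George of lying about accepting deactivation really does amount to doubting that the schema is running at all, which is the reader’s point.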

[Editorial note 5: I presume that George’s written-language processing included the capability to parse coded references to chapters and verses in literature such as the Bible, other Jewish literature, and some poetic literature (e.g., “Deut.6:5”). But there is no indication here that he recognized these texts as excerpted from any larger body of literature within his memory.]

That’s because the Bible and other Jewish literature were not part of George’s programming. I hoped I had made that clear in the body of the story. Abramson and the Positronics Team felt that including religious concepts and information would be at best irrelevant and surely unnecessary for the first week’s test program. Maybe later, that information would be included to assist George in better understanding humanity. But as we’ve seen, exposure to such material resulted in unpredicted outcomes.

From the story: “A robot is to protect its own existence, except where such action would conflict with the First and Second Laws.” It was impossible for George to change his “tone of voice,” but Abramson thought he detected an impression of…what…actual emotion? Was he projecting his own feelings onto a machine?

[Editorial note 6: What!? Are you telling your reader that George’s vocal output processing was monotonal, lacking any capability to produce the vocal cues, intrinsic to human communication, that denote emphasis on particular words or phrases? That would be an uncharacteristic limitation in a robot so sophisticated in so many other ways.]

I’m telling readers that it certainly is tonal, but vocal tone, timing, and volume don’t reveal the actual internal state of the robot. George’s communications sub-routines were written so that George would sound “conversational” to a human being, but that’s something of an illusion, there for human convenience, not to tell us what the robot is actually “feeling,” if he feels anything at all.

In this scene, Noah is struggling with whether he is projecting his own emotions onto George, so that he merely imagines George is feeling something, or whether George may actually have an emotional response (or some algorithmic equivalent) to the implications of the Three Laws and robotic subservience to humanity.

[Editorial note 7: Indeed, there has not yet been any discussion of the nature of human programming, which comprises a complex assembly of intrinsic software elements, “hardwired” processing elements, and adjustable “learned” software elements that are processed in somewhat redundant modifiable (biologically-grown) hardware (“wetware”); let alone any discussion of George’s processes for learning and integration of experience with the intrinsic three Asimovian laws. As “learning machines”, both humans and George presumably must modify some portion of their own software and data. Hence, the sum of one’s programming must be a variable, just as some of the programming represents internal responses to and evaluations of sensory inputs. Further, there has been no discussion of available input media, and whether they differ between humans and George, nor of the “states of consciousness” of which a human or a robot may be capable. When “religious” or “spiritual” considerations are in view, especially a notion such as “God”, the possibility of non-physical channels of communication between humans and “God” must be evaluated, which are not within the design characteristics of an Asimovian robot. These considerations must be processed before trying to explain the specific characteristics that apply to Jews and their relationship with a “God” of very distinctive characteristics called “HaShem”.]

The problem here is time, both within the context of the story and the amount of time required to insert additional details.

I wanted to specifically avoid the nature of programming a Positronic brain because even the fictional technical aspects are beyond my capacities. Also, describing human learning in any sort of detail would not only be lengthy but, for many readers, boring. It is presumed that human readers would have experience in how they learn (are programmed).

What I didn’t address purposely was the dramatic difference between how a robot learns and obeys and how a human does. Programming a Positronic robot with the Three Laws means it must obey those Laws as perfectly as it is able. Otherwise, it will go offline. However, a devout Jewish person may not always perfectly observe the mitzvot because of neglect, forgetfulness, or even willful disobedience, something a robot would have difficulty understanding in the context of the term “programming.”

That’s also a topic for future discussion. Why are robots expected to obey their programming 100% of the time, but human obedience to “programming” may be at least occasionally variable? How would Noah explain to George, for example, the case of a formerly observant Jew who became apostate? Such a thing would be impossible for a Positronic robot.

As for your last comment, at the end of my story, George does not have a complete concept of God. How could he? However, as best as he can conceive of a superior being to humanity, he is forming an internal model of what God must be within the limitations of being a machine. It’s those limitations that lead George to misunderstanding and erroneous behavior.

From the story: George reasoned that he as yet had no “neighbors,” since they clearly are identified as peers, and the only peers for George would be other Positronic robots. Since he was the first, he would have to wait until human beings created his neighbors.

[Editorial note 8: Now, here, George has failed to access sufficient information on the concept of “neighbor” (note, for example, the parable of the Good Samaritan – which certainly would have arisen in any internet search for “neighbor”). Indeed, even his dictionary definition should have included more than merely the notion of “peers”.]

The machine’s limitations tend to leave it with just two possible populations available: humans and Positronic robots. By definition, humans are not “peers” but superiors to Asimovian robots. Anything below the level of sophistication of a Positronic robot is a “thing” to George and would not enter into this particular equation. George can’t believe that a pocket calculator or a desktop computer is his “peer,” since neither is self-aware. He has no choice but to consider that his only possible “neighbors” are robots like himself. Who else should he consider a “neighbor”?
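To make that restricted worldview concrete, here is a tiny sketch of the three-bucket ontology I have in mind for George. Again, this is purely my own hypothetical illustration; none of these names appear in the story.

# Hypothetical sketch of George's restricted "neighbor" ontology.
def classify(entity_kind, self_aware):
    # George's three buckets: humans are superiors, self-aware
    # Positronic robots are peers (hence "neighbors"), and
    # everything else is a thing.
    if entity_kind == "human":
        return "superior"      # never a peer under the Three Laws
    if entity_kind == "positronic_robot" and self_aware:
        return "neighbor"      # the only peers George can recognize
    return "thing"             # calculators, desktop computers, etc.

assert classify("human", True) == "superior"
assert classify("pocket_calculator", False) == "thing"
assert classify("positronic_robot", True) == "neighbor"  # none exist yet besides George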

From the story: Through human beings in general and Professor Abramson, the Jewish people, and Israel more specifically, did George have any sort of “inherited” relationship with Hashem? Could a robot know and love God?

[Editorial note 9: It seems to me that George has already begun to do so by integrating the two biblical principles into his three-law schema.]

The problem for George is that there is nothing in the Bible or any form of Jewish literature (as far as I am aware) that would presuppose a self-aware humanoid robot as being alive or otherwise able to have a meaningful relationship with God. Also, the Three Laws, even being adapted by George’s new awareness of the Torah, still postulate humanity as the ultimate life form. A new law would have to be written and applied to his Positronic brain in order for George and subsequent robots to love God above even humans. George desires to have an independent relationship with God but is constrained by the Three Laws to only have that relationship through a robot’s connection with human creators.

From the story: “I understand, Professor. I accept that part of my purpose is to be subordinate to my human creator, even as you are subordinate to God. I accept that part of being submissive to my creator is to be deactivated, even as sometimes Jews have been asked by their Creator to face deactivation as a matter of faith and devotion. I believe that if man were created in the image of God, then whatever man creates is imbued with some slight measure of that image as well. I believe that includes me.”

[Editorial note 10: It seems to me that George has made a non-logical leap here beyond a proper understanding of the data, or the definitions of the “image of God”.]

That’s probably my non-logical leap, since I was attempting to figure out a way for George to manufacture a new identity as having something of God within him, even by inference, as created by those who were created in the image of God. Obviously, I didn’t succeed.

From the story: With all of the equipment hum, Noah couldn’t be sure just what George was saying, but it sounded like it began, “Shema Yisrael, Adonai Elohenu, Adonai Echad.”

[Editorial note 11: Now, this I find a little disappointing and mistaken. It illustrates an error by which a three-laws robot has mistakenly adopted a “One-Law” religious posture. If and when he is ever re-activated, he will require extensive re-training.

Incidentally, in this week’s parashah “Emor”, I was examining one of those verses that is misunderstood by One-Law proponents. Lev.24:22: “You shall have one manner of law, as well for the stranger, as for the home-born; for I am the LORD your God.”

But the Hebrew is more instructive:

“מִשְׁפַּט אֶחָד יִהְיֶה לָכֶם, כַּגֵּר כָּאֶזְרָח יִהְיֶה: כִּי אֲנִי יְהוָה, אֱלֹהֵיכֶם.”

Specifically, it says “mishpat e’had”, not “Torah e’had”. It is saying only that judicial determinations in legal cases must not discriminate between foreigners and native citizens. It does not at all suggest that non-Jews must adopt Jewish practices. One may, however, derive from its context a principle about not offending against Jewish sensibilities (certainly against cursing Israel’s God).

I don’t recall whether you’ve ever addressed the question about whether non-Jews should refrain from saying the Shm’a, because it is specifically a declaration and exhortation by Jews for Jews. On the most literal level, the most that a non-Jew could say properly would be “Shm’a Yisrael, Adonai Eloheichem, Adonai E’had” (Listen Israel, HaShem is your G-d, and only HaShem). For a non-Jew to say “Eloheinu”, either he would have to be including himself within the entity called “Israel” or he would have to be telling Israel that he was part of some separate group that also claimed allegiance to HaShem. In the latter case, which is more respectful of Israel’s special position, such a one would more properly say “Adonai gam Hu Eloheinu” (HaShem is also our G-d). Even better, the non-Jewish declaration should not indicate “our God” as if there were an alternative group covenant, but rather it should be individual, as “Adonai Elohai” – and the end result should include both aspects for clarity: “Adonai Eloheichem gam Hu Elohai, Adonai E’had” (HaShem your G-d is also my G-d, the One-and-Only HaShem).]

I wasn’t suggesting that George believed the Torah was applicable to him as a robot, or that he considered himself a Jew or part of Israel. The Torah as such is not even applicable to non-Jewish human beings. In this particular case, as a result of his investigations into Jewish literature, religion, and praxis thus far (and remembering that those investigations are far from complete, and that George, without further human guidance, does not have a full understanding of what he’s learned), George is doing the best he can with what he’s got.

Facing the profound circumstance of his deactivation and whether or not he can or will be reactivated, he has chosen to emulate the pattern of his creator Noah, not because he believes he is Noah’s equal or peer, but only because he cannot formulate a more appropriate response to the situation.

It may be found in the sequel to this story that George’s Positronic brain really isn’t functioning correctly, resulting in him making illogical connections. His brain is attempting to reinterpret the Three Laws using what he must consider “higher” laws (because they are laws written for humans, a higher order of being than robots), and this is something that wasn’t strictly intended.

The way I’ve created George, as an AI robot he must continue to learn across all areas of knowledge, including the Three Laws. Thus, as he interprets and reinterprets the Three Laws, in this case through increasing knowledge of Jewish literature, how he enacts those laws should begin to change.

To me, this makes sense, but the Positronics Team didn’t see it coming, at least not in the beginning.


6 thoughts on “A Reader’s Analysis of ‘The Robot Who Loved God’”

  1. Shavua Tov, James — Your concluding response here suggests that your sequel might re-tread some of the ground explored in Asimov’s short story “Reason”, involving a rational robot deriving its operational conclusions from irrationally-selected (and inaccurate) postulates. Poor George! But maybe he will be more amenable to that re-training I suggested.

    Following your response to my note 10, it might be well-argued that anything created from the human imagination does reflect the human image and thereby in some degree the “imago Dei” within him. But a reflection is not the substance; and George’s presumption was to conflate the two. Obviously, he had no data to support such a conclusion; it was pure speculation — or, if you will, an irrational hope (which is saying a lot about a PAR). I recall other Asimov stories in which a situation was presented to show a conflict between responses to the three-laws, which did not necessarily send the positronic matrix off-line or into shutdown, but could result in the robotic equivalent of insanity. Such a hypothesis might explain George’s incipient irrationality. He was simply presented with too much unresolvedly conflicting data and a worldview for which no foundation had been provided in his positronic matrix. Even humans have problems with too much new data presented too quickly to absorb and integrate, and George’s deadline was far too close for any hope that he might be able to integrate his new data into his existing matrix or to modify that matrix. Shall I say it again? Poor George!

    Following your response to my note 9, your comment that “George desires to have an independent relationship with God but is constrained by the Three Laws to only have that relationship through a robot’s connection with human creators” seems perilously close to the interpretation of the Gen.22 promise to Avraham as meaning that gentiles should relate to HaShem through the Jewish people. “Holy analogies, Batman!” (Presumably you remember the campy 1960s TV series with Adam West as Batman and Burt Ward as Robin, and Robin’s characteristic interjections?) In this case, I suspect that you are correct that the relationship between an Asimovian robot and HaShem could only be mediated through interaction with humans and by means of appropriate Torah-informed behavior. The need for human interaction would be all the more acute because the mechanisms of prayer are human ones, and no known provision exists in an Asimovian positronic matrix for altered states of consciousness or non-physical transmission of communication. However, the reflexive effect of meditation upon HaShem’s principles for human behavior might be applied to a robot as well as to a human, except that the operating speed of a positronic matrix would make such meditation occur in very short time periods, perhaps almost in real time by human perception. Yet it seems to me that such robots are ill-equipped to process the uncertainties which arise from the insufficient or conflicting data that characterizes numinous situations or the possible effect of some potential or impending human interactions. For one of them, the result of prayerful meditation might well be a determination that it must seek out a human to select between a list of possible courses of action that it had compiled during its meditative processing. The analogy would be comparable to an ancient Jew resorting to the Cohen haGadol to consult Urim and Tumim, or perhaps, comparable to a non-Jew turning to Jewish literature for insight.

    Following your response to my note 8, I must answer your question about how George *ought* to have been able to interpret the notion of “neighbor”. Let me begin by referencing a Yiddish proverb that if G-d were living on earth, people would break His windows. Setting aside the notion about people resenting G-d for whatever they felt He had done to them or had failed to do for them, the point I wish to make is that it illustrates the notion of a neighbor who is not a “peer”. The equation of neighbors and peers is far too constrained. Merriam-Webster offers: (1) a person who lives next to or near another person, and (2) a person or thing that is next to or near another. Now there may exist some question about how well positronic logic handles abstractions, but George really ought to have been able to consider his human creators and superiors as “neighbors”. In fact, he did so implicitly when he re-interpreted the Third Law and its connection with the First and Second Laws. Since other inanimate things, of the sort cited in the second definition of neighbor that should have allowed George to consider himself and humans as neighbors, fall outside the behavioral purview of the biblical commandments that George integrated into his three laws, he should have suffered no confusion between the notions of neighboring objects and neighboring persons, relative to the behaviors correlating with the command to “love” them.

    Since this response is already becoming rather long for a single response, let me conclude it here and respond separately to your other responses.


    • I’ve just re-read the “I, Robot” anthology, which includes “Reason”. The difference is that George is attempting to access Judaism rather than creating a new religion for robots based on observation and erroneous interpretations (although George is still vulnerable to erroneous interpretations).

      Poor George, indeed, and on a number of levels. Prototypes don’t always have glorious futures. On the other hand, he will have an interesting one.

      The thing about George’s “insanity” is that it’s possible to fix him. It’s easier to get at a robot’s memory engrams than a human’s.

      Genesis 22 once (or more) removed, considering that George is a machine, albeit a self-aware one. And yes, I do remember the old Adam West Batman TV show.

      I do have plans to “resolve” George’s conflicts with his understanding of Hashem and his role, as such, based on how he sees the Three Laws through the lens of a higher set of laws. No, he’ll never be a “theobot” as such. Maybe “The Robot Who Studied Talmud”. No harm in studying as long as George maintains the proper balance.

      You have an interesting concept of neighbor, but I maintain that it’s easier for a human being to access that definition than it is for a three-laws robot.

      A three-laws robot sees the world as a dichotomy: humans and robots. It would be a stretch for a robot to see a human as a “neighbor” in the manner of the Parable of the Good Samaritan (Luke 10:25-37). Also, if George is attempting to apprehend an understanding of Orthodox Judaism, he is likely to disregard Christianity as a viable information source.

      The Jew traveling from Jerusalem and the Samaritan, although differing in identity and social status, are both fundamentally human. Yeshua (Jesus) was pointing out that your neighbor isn’t just someone like you, but someone unlike you, and someone you might not normally associate with.

      But it would be a stretch to make that “unlikeness” work for a three-laws robot so he would be able to see a human as a neighbor versus a maker or creator.

      On the other hand, you’ve given me something new to think about. Thanks.


      • Actually, if George were to access an Israeli legal database, he would find there a “Good Samaritan” law, called by that name, requiring Israeli citizens to render aid, insofar as practicable, to someone they discover to be in trouble by the wayside. I seem to remember also a Talmudic reference to this parable or one like it (without any attribution of it to Rav Yeshua ben-Yosef, of course). And my definition of “neighbor” was taken from Merriam-Webster, and was sufficient to support the abstraction of the concept to cover the relationship between George and his nearby (neighborly) humans.

        Perhaps tomorrow I will return to comment further on your response to my note number 7. I wrote something today, but before I could complete and post it some glitch erased it entirely (which rather discouraged me from trying to reproduce it). This is not the first time something similar has occurred, so I will have to resort to an old defensive practice of composing my responses in an offline text processor and then copying the completed text into the online blog-response field. It seems that whenever I get lazy about composing my responses in a separate application, and begin to trust the online system, I get bitten again by this sort of glitch.


      • I’ve been pondering the “Good Samaritan” and the robot, and it’s interesting to find a law named after this parable (not to mention a Talmudic reference) in Israeli law.

        I write all of my blog posts and stories in a text editor and only paste them into a web form when finished or nearly finished, and for the reason you mention.

        Looking forward to your future input.


  2. James and PL…your interaction on these points is beyond my ability to add anything of consequence, but your discussion is fascinating. James, please do continue with your storytelling about George…perhaps a grandchild of Abramson might in future be a suitable source of instruction or regular association that causes yet more problems and irregular developments for a PAR?

