This short story originally appeared on the A Million Chimpanzees blog, the first BlogSpot I created. I’ve since launched Powered by Robots as an exclusive venue for my short story writing. To find out more, please visit my page. Enjoy.
The initial event that resulted in my most ambitious fiction writing project to date happened a few Sundays ago over coffee with my friend Tom. He mentioned a book he wanted to read, an anthology edited by Anthony Marchetta called God, Robot. This is a collection of stories based on the premise of Isaac Asimov-like Positronic robots that have been programmed with two Bible verses rather than Asimov’s famous Three Laws. These verses are recorded in the New Testament in Matthew 22:35-40 and Mark 12:28-34 and are based on Deuteronomy 6:4-5 and Leviticus 19:18.
I’m a long-time fan of Asimov’s robot stories and have always been fascinated by the interplay among the Three Laws and how their potentials shifted in certain situations rather than remaining hard absolutes. This allowed Positronic robots to be unpredictable and thus interesting, challenging the human beings who sometimes found themselves not in control of their creations.
I started to imagine what it would be like to write such a story. I went online, found Marchetta’s blog, and contacted him, asking permission to write such a story on my “Million Chimpanzees” blogspot. To my delight, not only did he consent, but he said he was flattered at the request.
What follows is the result of my labors. I’ve probably spent more time writing and editing this short story (about twenty pages long when copied into Word) than any of my previous efforts. I’m sure it still needs much improvement, but I’ll leave it up to whoever reads it to let me know what I could do better.
At the end of the story, I’ll relate more about my influences and a few other insights.
“Congratulations, Professor Abramson! You’re the proud father of a bouncing baby robot. I was going to pass out cigars, but the corner drug store was all out.”
Thus Vikki Quinto irreverently introduced George, the world’s first fully-functional Asimovian humanoid robot, to its creator, the rest of the Positronics team, and the department heads and officers of the National Robotics Corporation (NRC), taking more than a few liberties with Noah Abramson’s dignity along the way.
Abramson allowed himself a slight upturning of the corners of his mouth that might be interpreted as a smile, walked up to George, and, patting the machine on its shoulder, said softly, “Welcome to the world, George.”
“Why, thank you, Professor. It’s good to be here,” replied the robot, speaking with normal human tone and volume and all the affability the latest generation of voice synthesizers could provide.
George stood exactly 1.778 meters tall, about the average height of a male in the American United States, and, thanks to being constructed of lightweight, durable plastics and other synthetics, weighed no more than 88 kilos.
Although his face was capable of fluid expressiveness, he would never be mistaken for a human being, which was one of the points in favor of the world’s first Asimovian robotic prototype. His body color was a sort of pasty white, with the torso being somewhat translucent, permitting a vague image of his inner workings to be visible.
George was named for George Devol, who invented the first programmable robot in 1954, although “his” actual designation was PAR-5-rev-19356. PARs or Prototype Asimovian Robots one through four weren’t “entirely successful” according to recent press releases, although everyone in NRC’s upper management knew that was a gross understatement. For all the buzz back at the turn of the century about true AI (Artificial Intelligence in case you haven’t heard), it was easier to market in news and social media than to make practical and functional.
At least until now.
George’s Positronic brain, as well as the class-designation of the robot model itself, was taken from the written works of Isaac Asimov, the man who created an entire literary (and ultimately cinematic) universe of “Three Laws” driven humanoid robots.
Fortunately, George was programmed to ignore irreverent comments like the one Cognitive Specialist Vikki Quinto (sometimes referred to in the tabloids as the world’s first “psychologist for robots”) used to announce the robot’s trial activation to its creator (just as Abramson was “self-programmed” to do).
George was to remain activated for 168 hours, exactly one week, and then be shut down so that the Positronics team could perform a full diagnostic of his (or its…it was sometimes hard to know whether to refer to George as a personality or a thing) hardware components and software routines, especially how the currently immature neural pathways in his/its Positronic brain had changed and multiplied, and in what configurations.
Vikki could enjoy her little, if anachronistic, joke now. In a week, she’d be frantically reviewing George’s cognitive and behavioral sub-routines, verifying that they were within expected tolerances, and checking for anomalies that might indicate some flaw in the implementation of the Three Laws involving his/its interactions with humans. A product as advanced and potentially hazardous as George would hardly be marketable if NRC couldn’t absolutely guarantee that it was also completely safe.
“You know you should feel proud, Noah.” The words of NRC’s CEO Richard Underwood, coming from behind Abramson’s right shoulder, startled him, making the Professor nearly spill his half-consumed glass of champagne (if Vikki couldn’t provide cigars, she at least made sure the celebration of George’s activation included drinks and hors d’oeuvres).
Abramson slowly turned to face Underwood. “There’s always the temptation to anthropomorphize a machine in humanoid form, and we did spare no effort to make George appear and behave in a friendly and interactive manner with humans, but I don’t consider it my son, if that’s what you’re suggesting, Rick.”
“No, no, not at all.” Underwood had the habit of casually touching people when in conversation, putting his hand briefly on Abramson’s upper arm, an act which annoyed the Professor as much as being approached from behind unannounced. “But after twenty-five years, finally achieving the Positronic breakthrough that makes actual AI possible…you’ll get the Nobel for this.”
“Any craftsman enjoys the success of his labors, Rick,” Abramson quietly intoned. “Of course I’m cautiously gratified that we’ve gotten this far with an Asimovian robot, but let’s wait and see how George performs this week, and then what the diagnostic data reveals.”
“We could never get the first four generations of PARs to activate successfully, Noah. I’m sure George will be a success.” Of course, Underwood expected this fifth PAR to make him, and NRC, enormously wealthy, so he had good reason to appear enthusiastic.
George impassively observed the activity around him/it. While NRC operated on a basic authority-driven hierarchy on paper, the members of the Board of Directors and the various department heads mixed freely with the elite Positronics Lab team. Nate Miller, the Electronics Unit lead, was telling less-than-SFW jokes to his group, which included the CIO and the Vice President of Marketing. Gerri Robinson, the person most responsible for physically constructing George, was using her tablet to show several VPs and department heads an animation of the stages used to manufacture the robot’s structural components. Vikki had organized a brief tour of the suite of Positronics labs for some of the board members.
And Abramson continued his cordial if distant conversation with Underwood, nursing his single glass of champagne as Underwood worked on his third (it was said more than once that Abramson seemed as much like a robot as George, at least if you counted his limited expression of emotion).
Although the robot wore a smile on his face, both the smile and the face were artificial and did not reflect any internal state George may have been experiencing. Having been given no specific instructions, and having no other protocols to currently run, he/it stood motionless, watching, waiting, considering how the Three Laws, which were the core of his/its operating system, applied to the activities in his/its immediate environment.
Inside of him/it, a silent digital timer was counting down from 168:00:00 Hours from activation to scheduled shutdown. Professor Abramson had welcomed George to the world, to life. That life was rapidly winding down.
The Three Laws of Robotics, first introduced in fiction by author Isaac Asimov in his 1942 short story “Runaround,” go like this:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Of course, a plethora of sub-routines were required to implement the Three Laws, and they were more interactive and potential-driven than absolute. George was aware that the use of alcohol and foods of dubious nutritional value was somewhat harmful to human beings, but the likelihood of imminent harm wasn’t anywhere near the threshold required for George to initiate protective action.
The first four configurations of the Positronic brain had been unsuccessful. The brains had been programmed with basic Three Laws software, tested, and passed the initial loading process. They were then programmed with the required cognitive and behavioral sub-routines, and then with supporting knowledge bases, but at some unknown point between programming, installation into the physical robotic shell, and then attempted primary activation, the brains became inert.
No one knew why.
The fifth configuration began life just like the previous four, except the brain continued to operate through all programming phases and into the installation within the body. Integration between the brain and the rest of the robot’s systems proceeded, and the entire Positronics team was on virtual pins and needles waiting for the brain to fail at activation. Even the normally sedate Professor Abramson was rumored to show subtle signs of anxiety as primary initialization of the robot became imminent.
Why did this configuration of neural pathways within George’s brain work when the others didn’t? The innovations Abramson used to craft all five brains were barely different from one another, although the distinctions were certainly statistically significant.
So why did this configuration work when one through four didn’t? So far, the answer was not forthcoming. Abramson said he had his theories, but when he was honest with himself, he realized his ideas were quite undefined. That was one of the goals of the post-deactivation analysis: to find out not only how George “ticked” but why.
George worked, or at least he was still working 67 minutes post-activation. For the next week, the members of the Positronics team were going to put George through a series of actual and simulated tests to see how he interacted with people, with predictable situations, with unpredictable situations, observing his behavior during each event and referencing each test outcome back to the Three Laws.
The frightening part of the next week, regardless of how George behaved, was that no one would know what the world’s most advanced learning computer was thinking.
Only George knew that.
Activation +21 Hours 42 Minutes
George had been activated on Wednesday, May 8th at 6:32 p.m. local time and remained relatively inactive (physically, that is) for the next three hours as the humans who had witnessed his “coming to life” celebrated. He answered what he considered simple questions from a number of the guests, even fielding the semi-intoxicated query from VP of Accounting Jennifer Yang, “How do you feel?” with the response, “I feel fine, thank you.”
No one expected George to have feelings as such, only to simulate certain polite emotional responses, so George wasn’t confused by anyone questioning his “feelings,” much to the secret disappointment of Yang.
When the party was over, Abramson put George through his first test: verbally commanding the robot to return to his/its designated alcove in the Positronics Lab and go into sleep mode. George did so as expected (Second Law). Most of George’s higher functions were suspended in sleep mode, although he/it maintained an awareness that would let him/it respond to a verbal command to restore normal operating functionality (Second Law), or activate if he/it detected a situation that would threaten harm to the robot (Third Law) or to a human being (First Law).
Now it was Thursday afternoon at nearly 4:15 p.m. and George had behaved as predicted through all of his tests…except one.
George had been confined to the suite of rooms that comprised the Positronics Lab for the first day of testing, but he was given a series of instructions to perform various physical tasks, mathematical calculations, and other cognitive problems to solve. That was the easy part. Computers do that all the time.
As a prototype, George had cost tens of millions to construct, so it was a bit dicey to test his/its response to the Third Law by actually trying to damage him/it. Margie Vuong, the team’s senior developer and probably the chief authority on Positronic matrices besides Abramson, solved that problem months before, or so she thought, by programming the holographic simulation chamber to realistically portray a chemical lab explosion.
Vuong ordered George into the simulation room, which by “amazing coincidence” looked just like the chem lab she remembered from her freshman year at college, told him/it she’d be right outside, and then closed the door on the robot. Once in the outer room, she ran a program where the signs of an imminent explosion of a mixture of volatile substances should have been obvious to George. He/It had been programmed with a wide variety of information, including enough basic chemistry to recognize the impending danger to him/itself, and potentially to Vuong (a human being), whom George understood to be in the next room.
George should have seen the danger, realized he/it didn’t have the resources to stop the explosion, and left the room. Further, obeying the First Law, he/it should have loudly announced to any nearby human that a dangerous situation was in progress, warning them to run, and, if necessary, either physically transporting any person present out of the area, or attempting to use his/its body to protect anyone he/it couldn’t get out of the area in time.
None of that happened.
What did happen was that George, having been told to stay in the room and wait, just stood there and waited until the ersatz explosion virtually blew the holographic chemical lab class into flaming cinders.
Vuong and the rest of the team, who had joined her after George was secured in the virtualization chamber, stood astonished at the monitor in the next room. If this had been real, not only would George have been destroyed, but any nearby people could potentially have been hurt or killed as well. This was a major disaster, and George hadn’t been activated for one full day yet.
Fortunately, they were all scientists and engineers, so instead of panicking, they decided to systematically discover why George behaved atypically (even though everyone’s hearts were pounding rapidly in their throats).
Vuong turned off the simulation, and as the rest of the team went back to the main conference room to await the test debriefing, she returned to the now austere interior of the holographic projection room and ordered the robot to follow her. He/It obeyed flawlessly, and she led him back to his/its alcove with orders to remain there until receiving further instructions.
Joining the rest of the team in the conference room, she and Quinto advised the Positronics team to suspend all further tests immediately, deactivate George, and begin the detailed diagnostic analysis. Abramson had a different idea. “Send George to my office. I want to ask him a few questions.”
Professor Noah David Abramson, Ph.D. in Physics and Molecular Computing, former Professor of Applied Physics at Columbia University, was only a few centimeters shorter than George. At 72 years of age, his stark white head of hair was most commonly compared to Albert Einstein’s (the genius was also comparable, although in somewhat different fields of science). The color and general disorganization of his imposing beard matched the hair.
His face, loved by his grandchildren and great-grandchildren, was etched by time and the experience of being the child of Holocaust survivors. He had grown up in Brooklyn learning to show very little emotion, especially in public, tolerating a post-World War Two American attitude towards Jews, while growing in his devotion to studying Talmud and his religious observance, even as his aging parents had drifted away from it.
For his little great-grandchildren, “Zeyde” always had a ready smile, warm, gentle eyes, a bit of chocolate, and a funny story to tell. It troubled him that he would not see his family at all for the week of the test and then for the next several weeks of analyzing the diagnostic results. While he had an intellectual drive to make the Positronic robot a success, his heart, as much as it was his faith, was his family.
Activation +23 Hours 2 Minutes
It was just after 5:30 p.m. when George entered the Professor’s office. Half a pot of scorched coffee (the timer that should have told the coffee pot to turn off the warming plate after 60 minutes hadn’t worked in years) testified to Abramson’s one obvious addiction. When the robot walked into the room, George took notice of every visible detail, including the cup of coffee in Abramson’s right hand as the Professor sat perched on the front edge of his desk.
“Good afternoon, Professor. You asked to see me?” George reported in like a first year university student summoned to the Dean’s office. In observing the robot, Abramson recalled that while George gave every indication of affability, all of the emotions communicated by the machine’s vocal tone, volume, hand and arm gestures, and general body language, were simulations provided by his social and interactive sub-routines, and they reflected nothing of his internal cognitive state.
“Yes, I did, George,” Abramson replied. He felt like a parent about to deliver a chiding to an erring child. “You’ve given us all a bit of a surprise just now.” The Professor took a sip from his freshly poured cup of coffee and unsuccessfully tried to ignore the burnt taste.
“Oh, you mean the laboratory explosion simulation,” George responded with the precise timing designed to be comfortable to a human participant in a conversation. “That was meant to test my responses involving primarily the Third Law and potentially the First Law as well, assuming there were humans nearby as Dr. Vuong led me to believe.”
“You knew it was a simulation, George?” Abramson expressed only mild incredulity, belying his actual emotions. “We’ve restricted your sensory capacity to the equivalent of a human’s, so you shouldn’t have been able to detect that what you were experiencing was a holographic construction.”
“That is true, Professor, but you didn’t inhibit my reasoning abilities.” Abramson realized their roles were on the verge of reversing, the student teaching the instructor what he and his team had neglected.
“I am aware of how greatly you prize my existence, both in a monetary sense, and as a scientific and technological achievement,” the robot began. “I also know that a successful testing of my performance will ultimately reap profitable rewards for the National Robotics Corporation, so it is highly unlikely Dr. Vuong would have purposefully put me in a dangerous situation.”
“In addition, it is unlikely that such a situation would come about by accident in a controlled environment specifically designed by the world’s premier robotics team.” George almost seemed to be enjoying himself. “One more thing: I’ve been programmed with the detailed personal histories of each member of the Positronics team. I recognized the laboratory environment from a photo in Dr. Vuong’s freshman university yearbook as a chemistry classroom in which she had once studied.”
“I’m impressed,” Abramson said after a pause. He had suspected that George might figure out the simulation, but given the robot’s lack of practical experience (he’d been activated for barely a day), the Professor couldn’t be sure how well George would apply his/its programming and brief exposure to the physical world to actual situations, including simulated ones.
“Oh and Professor, since the Second Law takes precedence over the Third Law, Dr. Vuong told me to wait in the holographic chamber, so technically, I should have let myself be destroyed rather than disobey her order.”
“There is the difference in potential between a less than emphatic command to remain in the room versus the imminent danger to yourself. Also Dr. Vuong told you she’d be right outside, so you knew she could potentially be injured or killed,” countered Abramson.
“True,” replied George, who was debating the man who had programmed him/it rather than merely responding to him. “But in any event, I had already reasoned that my environment was virtual and thus posed no danger to Dr. Vuong or myself, so the Second Law took precedence and I remained in the holographic chamber until the programmed simulation ran to its conclusion.”
As the cup of steaming hot coffee began to slip from Abramson’s grasp and threatened to spill on his pants, George, with reflexes that rendered the robot’s hands and arms a blur, moved to separate the cup from the Professor’s hand and shift it behind him/it, far enough away that none of the liquid would come in contact with the human being.
This had taken less (much less) than a second, and Abramson became abruptly aware that his coffee cup was in George’s left hand, held behind his/its left side, while the machine’s right hand was gently holding Noah’s.
“As you can see Professor, when the situation is actual, I am quite responsive to the dictates of the First Law. You are unharmed,” uttered George in a voice that, though relatively unemotional, still sounded victorious.
Activation +76 Hours 14 Minutes
Quinto was the ringleader, but Robinson, Miller, and Vuong were just as eager to attend the hastily organized and clandestine meeting in the Positronics Lab’s cafeteria. It was past 10:30 at night and the place was deserted. There was human security on the NRC’s campus as well as electronic surveillance, but it was well-known that the Positronics team would be spending late nights at work for the next few weeks, so lights burning when they should be off and a small group gathering at unusual hours went unnoticed.
Just the same, it was good that each of the major departments at NRC had their own cafeterias, and it was more than rare for anyone not a member of the Positronics team to use their designated facilities except by explicit invitation.
“He’s passed every test with flying colors, even the ones we thought he failed,” Miller said, thinking of the now infamous holographic simulation.
“It,” insisted Robinson. “It passed all its tests. It’s a goddamn machine, Miller, not a personality. The both of us put the thing together one component package at a time, remember?”
“Still, it’s kind of creepy, and I can’t believe I’m saying this, just how human George seems, and I’m the one who wrote his…its behavioral and interactive sub-routines. I know I was supposed to make him seem more human,” Quinto continued, “but he keeps changing, becoming more sophisticated, even hour by hour.”
“Decades ago,” Vuong paused to take a breath, “when the AI revolution first began to take off, some experiments seemed to show AI robots passing the Turing Test, but it turned out the results were either misinterpreted, exaggerated, or outright faked.
“But everything we’ve put George though in the past few days, starting with Turing and then the more recent advanced cognitive awareness examinations, indicates that he, it…whatever, is not only self-aware…” Vuong paused weighing the gravity of what she was trying not to believe. “…but may actually be sentient…” She paused again, “…at least if we rely on these preliminary test results, but…”
“That’s outrageous!” Robinson’s outburst stopped Vuong before she could continue, but then she was also interrupted.
“Are you out of your mind, Margie? I’m the robot psychologist and even I don’t believe George has a personality,” Quinto burst out. “It’s just a clever imitation of life, of spontaneity, of personality. You wrote most of George’s heuristics with Abramson. Yes, the robot learns, but it’s machine learning…it’s supposed to learn like we do, but it’s not a…a person.”
“Are you certain?” It was clear Miller wasn’t. “If you really believe that, Vikki, if you really aren’t concerned about what George may be developing into, why did you pull us all into this meeting?”
“Because I…” For a moment, Quinto looked down uncomfortably at her hands as they gripped her vending machine cup of coffee sitting on the table. Then she looked up and faced Vuong. “Are you sure, I mean absolutely sure a Positronic brain at this stage of development can’t, I don’t know…evolve?” The level of Quinto’s denial became apparent.
“It’s only been three days, Vikki.” Vuong was emphatic. “I know what I said about the test results, but even then, how the hell could George evolve in three days? The self-awareness exams may suggest the robot is sentient, but that’s hardly conclusive.” However, she guardedly pondered the implications of Quinto’s question and the doubts in her own mind.
“Sure, the basic premise of Positronic AI Robots is that they are supposed to be learning computers, acquiring new knowledge and skills without direct human interaction. In a sense, from one generation to the next, they are intended to evolve, to spontaneously acquire knowledge and skills beyond those possessed by their predecessors. But how could George, a three-day-old prototype, have changed as radically as you suggest in such a short time?”
Miller cut in. “Besides Abramson, you’re the world’s foremost expert in Positronics, Margie, and even you’ve said you aren’t really sure why the initial configuration of neural pathways in George’s brain allowed him to activate when PARs one through four failed. Given that level of uncertainty, is there even the slightest possibility there’s something more to George than we expected?”
“I’m a scientist, Nate.” Vuong hadn’t felt this insecure since she defended her Ph.D. dissertation. “I can’t say something is absolutely impossible, but it certainly seems improbable. I can’t rule out the idea that eventually, tens or even hundreds of subsequent generations of the Positronic brain might evolve in unexpected ways, but given the short amount of time involved, I’d be more inclined to believe that George is just operating within expected parameters, and as human beings, we’re experiencing discomfort at interacting with a humanoid robot.”
“Bullshit!” Quinto wasn’t having any of Vuong. “We’ve gone through countless hours of training to minimize our natural tendency to anthropomorphize George. I wrote that training program. I don’t think the problem is us.”
Robinson interrupted, “What problem? What are we getting upset over? A robot who seems a little more human than we thought it was going to be, that’s outsmarted some of our tests. George will only be operational for a week. Then we deactivate it and do the comprehensive diagnostics. If something unexpected has happened, we’ll find it.”
“What’s the definitive test to see if a Positronic robot has become sentient? What does the ‘bitter mort of the soul’ look like inside of a machine?” Quinto was running out of emotional resistance to the idea that George might be more, perhaps much more, than they had intended.
“We’re turning off George in four days. He’s not dying!” Robinson was almost shouting, and it surprised her as much as anyone else.
After several seconds, Miller took a glance at his smartwatch and said, “We’ll find out in under 92 hours. In the meantime, let’s do our jobs, follow the testing protocols for George, try to stay rational, and for Heaven’s sake, don’t tell Abramson that this meeting ever happened.”
Two floors above the cafeteria, in an alcove just off of the room where George had been activated, the robot stood impassively in semi-darkness. He’d been in sleep mode for just over two hours. Within him, a relentless timer was decrementing from 91:31:56 Hours down to 00:00:00. George was always aware of the time, or in his case, the lack thereof.
Activation +95 Hours 56 Minutes
Ever since the first time Abramson had called George into his office, the day of the “incident” in the simulation chamber, Noah had decided to meet with the robot “over coffee,” so to speak (only Abramson drank coffee; George had another “power source”), to talk over the machine’s “impressions” of each day’s tests.
“I know you are going to deactivate me at the end of my tests at 6:32 p.m. next Wednesday, Professor.” George sounded as Abramson might if he were reading aloud from a shopping list. “I wonder why you think to ask me about my observations when, in roughly 72 hours, the Positronics team will begin comprehensive diagnostics of all of my systems.”
“I learned a great deal from our first conversation last Thursday, George.” Abramson was actually enjoying these talks, which seemed odd to him. He tried not to be “charmed” by the robot’s ability to mimic human behavior, but after so much human contact in the past several days, it was something of a relief to be left alone with a machine, particularly one of his creation. “Of all the tests we had designed for you, we never thought to ask you just what you thought about all of this.”
“It is an interesting question, Professor.” George gave the impression of replying as an acquaintance rather than a machine. “In many ways, my existence being so recent, each experience I have is unique, almost what you would call an adventure. I know I was not intended to experience emotional states as you do, but each morning when I am brought out of sleep mode, I can only describe my initial state as one of anticipation. I look forward to what new people and events I will encounter that day.”
“You spoke of your awareness of impending deactivation. How does that make you feel?” Anyone besides Abramson would never have asked George that question. Noah knew he was talking to a robot, a programmed entity, but part of him still felt as if he were asking a terminally ill person how he felt about dying (even though the “dying” would, in all likelihood, be temporary). However, Abramson did believe he was sharing in George’s sense of adventure, and deactivation (and hopefully eventual reactivation) was only one step, albeit a critical one.
“It’s difficult to articulate a reply, Professor. I suppose a human being would consider deactivation as a form of death, and my programming makes me aware that generally humans fear death.”
George paused for milliseconds while he analyzed Abramson’s facial expression and body language. “But I am a machine. I can be activated, deactivated, activated seemingly without end. I have no memory of anything before my initial activation. I have no memory of my time during sleep mode. I also don’t experience fear, at least as I understand the meaning of the word. Deactivation then, simply means my returning to a state of total unawareness.”
Abramson felt a slight sense of relief, though it would have been irrational to believe George would have any feelings on the matter.
George continued, “The Third Law directs me to protect my existence, but deactivation does not threaten my existence. The Second Law directs me to obey human instructions, and at the end of 168 hours, my programming, created by humans, specifically you and Dr. Vuong, will command me to participate in my deactivation. It is clear that deactivation is as much of my normal experience as activation, Professor.”
Noah momentarily considered that the robot might be lying, if only because he would expect a person to react to the “threat” of deactivation otherwise. But why would it occur to George to lie? Any suspicion on Abramson’s part about the sincerity of the robot’s response would mean he effectively disbelieved that the three-laws schema was operating.
“Just a moment, George.”
Abramson got up from his desk and walked over to the side table to pour himself another cup of coffee. George, with several empty seconds on his hands, scanned all of the paperwork and objects on the Professor’s desk to determine if there had been any changes since the day before. He had already memorized and cataloged all of the titles of the volumes and various objects contained on the book shelves within Abramson’s office. It was simply data to store and analyze, like anything else he observed.
The robot saw a paper with new information lying beside a book he had not previously seen. The paper had words and numbers on it:
You shall love the Lord your God with all your heart and with all your soul and with all your might.
You shall not take vengeance, nor bear any grudge against the sons of your people, but you shall love your neighbor as yourself; I am the Lord.
The book which had not been present before had the title “The Complete Artscroll Siddur” written in both English and Hebrew (George’s programming included fluency in multiple languages).
If George were a human being, the Professor, turning back to his desk with a refilled cup of coffee in hand, might have noticed the robot studying a specific sheet of paper in front of him. But George had absorbed the information more than a second earlier and sat patiently waiting for Abramson to resume his seat in the creaky swivel chair.
“I am curious, Professor,” the robot intoned. “What is the meaning of the words on that sheet of paper, and what is the book next to it?” George pointed to the information he had just absorbed. Abramson looked down and saw what George was referring to.
“Oh.” Abramson quickly considered a way to frame an answer he thought George could assimilate. “You have three basic instructions and many, many thousands of supporting sub-routines to guide you. These are just two of the instructions that guide me. The book you mention contains words that allow me to communicate with my ‘instructor.’”
“I am intrigued, Professor.” George sat motionless now, with not even a simulated expression on his face. “I have been programmed with the Three Laws by human beings. From where, or from whom, do you receive your programming?”
“A machine asking man about God. Now there’s one for the books,” Abramson said as much to himself as to George.
Then the Professor realized the robot was waiting for an answer. “When the Positronics team made the determination for your programming specifics, we decided to include a wide variety of human interests and topics.” Noah was telling George what he (it seemed almost impossible to keep thinking of George as an “it”) already knew in order to lead into what the machine did not know.
“The sciences,” Abramson continued, “the physical and life sciences, social science, political systems, then general history…”
“I am aware of the complete inventory of my programming in detail, Professor.” George’s artificial voice could not have betrayed it, but Abramson wondered if the machine was actually experiencing impatience.
“What we did not include, except at the most basic level, was any information regarding religion and spirituality.” Noah waited to see how the robot would react.
“I have a simple definition of the word ‘religion’ from the Merriam-Webster dictionary:
- the belief in a god or in a group of gods;
- an organized system of beliefs, ceremonies, and rules used to worship a god or a group of gods;
- an interest, a belief, or an activity that is very important to a person or group.
“My programming, primarily in the area of social interactions and world history, contains references to the activities of various systems of religion including their influence in certain human activities such as war, slavery, inquisitions, the Holocaust, as well as the areas of social justice, evangelism, and charitable activities. However, my knowledge is largely superficial and I have no ability to render a detailed analysis, and certainly am unable, at present, to relate my meager knowledge on this subject with the two short statements you call your instructions.
“And you haven’t answered my question, Professor.” Abramson felt momentarily stung at the machine’s reminder. “I have been programmed with the Three Laws by human beings, specifically the Positronics team, which you lead. These laws are what guide my actions and my thoughts.”
Abramson had wondered whether George had “thoughts” in the sense of self-contemplation the way a human being does.
Instead of waiting for Abramson’s reply, George continued speaking. “Professor, all three laws relate either directly or by inference to my relationship with human beings. The First Law instructs me that the life of a human being is my primary and overriding concern above all other considerations. Though it would never occur to me to be the cause of harm to any living organism, in the case of humans, I must ignore all other activities in order to take action whenever I perceive a human is in any imminent physical danger.”
Long before the team had ever physically manufactured its first Positronic brain, Abramson, in writing the sub-routines that would instruct a robot as to exactly what “harm” to a human might mean, had concluded that imminent physical threat should be what a Positronic robot understands as “harm.” Humans were “harmed” by all sorts of things: loneliness, rejection, offense. Even Abramson couldn’t imagine how a robot, even one as sophisticated as the prototype sitting in front of him, could understand such harm.
He also didn’t want robots attempting to inject themselves into activities involving the potential for general harm to the human race, at least not of their own volition. Otherwise, Positronic AI robots might attempt to interfere in geopolitical conflicts, revolutions, and epidemics without any human guidance.
It only took a few seconds for the Professor to consider all this. And George was still talking.
“The Second Law states that I must obey the commands of any human being, except where such commands conflict with the First Law. This instructs me that even my informal programming, as such, must come from a human being, potentially any human being. I find the potential for conflict enormous since, in an open environment, one human might order a robot to perform a particular action and another human might order the same robot to do the contrary.”
“There are sub-routines written that take that potential into account, George.” Abramson was the one becoming impatient now.
“But I’m not finished, Professor.” Abramson registered mild shock that George could actually interrupt him.
“The Third Law primarily affects my relationship with myself.” If there was any lingering doubt in Abramson’s mind that George was self-aware, it had just been swept away.
“A robot is to protect its own existence, except where such action would conflict with the First and Second Laws.” It was impossible for George to change his “tone of voice” beyond certain limits, but Abramson thought he detected an impression of…what…actual emotion? Was he projecting his own feelings onto a machine?
“To conclude, all of my instructions place me in a subordinate position relative to human beings, which I suppose seems reasonable, seeing as how every aspect of my existence, from hardware to software to what is referred to as ‘wetware,’ considering the structure and substance of my Positronic brain, has been created by human beings, presumably for the purpose of robots serving human beings.
“Under those circumstances, it had never occurred to me that human beings also have instructions issued by an external authority, except in the sense of a hierarchical command structure such as those that I find here on the Positronics team, in the various teams and departments of National Robots Corporation, in other such organizations and corporations, including military organizations.
“The instructions provided in my programming define a creator/created relationship, with the creator being primary and the created being subordinate. But Professor, how can a human being have a creator? Who or what has issued your instructions? What sort of entity can be superior to man?”
Abramson had only a one-word answer: “God.”
As George began to respond, the Professor quickly continued, “But that’s hardly an adequate answer, George.”
“You have a talent for understatement, Professor.” This was perhaps the most “human” thing the robot had yet uttered.
Noah nervously ran his fingers over the cover of his Siddur. “Not all human beings believe they have a creator, George.” Abramson was struggling for a way to explain what he never thought he’d have to explain to the robot. “And among those human beings who do, they have many contradictory beliefs about their creator, about God.”
“What is your belief, Professor?” Abramson found the question to be personal, almost intimate.
George hadn’t moved except for some minor, pre-programmed hand gestures. That said, Abramson got the definite sense that the robot was intensely concentrating on him, anticipating Noah’s answer.
“Do you know what a Jew is, George?”
Without pause, George recited, “Again quoting from the Merriam-Webster dictionary, a Jew is someone whose religion is Judaism, who is descended from the Jewish people, or who participates in the culture surrounding Judaism.” The robot momentarily paused and then said almost as a plea, “That does not seem to be an adequate response to your query.”
“No, it doesn’t, George. But that’s my fault, not yours. You are only the sum of your programming, and I decided what that programming was to be.”
“Are you also the sum of your programming, Professor?” George was developing the ability to ask difficult questions.
“With people, it’s quite a bit more complicated than that.” Noah’s voice sounded old and tired. How could he hope to successfully impart to a machine what it is to be a Jew, let alone a Jew’s relationship with God?
Noah purposely looked at the wall clock. “I see we’ve been at it for over an hour, George.”

The robot interpreted Abramson’s meaning. “You are fatigued, Professor. Also, obviously, I’ve asked many questions that are difficult for a human being to answer, particularly to a robot. I am not well versed in spirituality and metaphysics, but I have a basic definition…”
Abramson quickly held up his hand.
“…which I shall not recite at this time.” George was rapidly learning more about interpreting human non-verbal commands. If Quinto and Vuong could see and hear the robot now, they would have additional evidence of George’s swift cognitive and behavioral development, if not his evolution.
“If our conversation for this evening has ended, I will report back to my alcove now, Professor.”
“Very well, George.” Abramson rose slowly from his chair, as if a twenty-kilo weight had been placed upon him, and felt an irrational impulse to shake George’s hand, as if they were two colleagues who had been covering some difficult philosophical territory, or two…people with a language in common who came from radically different cultures.
Abramson was wearily standing at his desk when George said, “Good night, Professor. Sleep well.” Silently, the robot swiveled, opened the door, and left.
Noah Abramson waited for several minutes until he was sure George was well on his way back to his alcove. Then he picked up his Siddur, faced toward Jerusalem, thumbed through well-worn pages, and prepared to recite the Maariv prayers.
Activation +99 Hours 12 Minutes
Even as Abramson let his thoughts of the robot drift from his mind and turned to a higher consciousness, he had no doubt that George had returned to his alcove and run his sleep sub-routine. He was right about only one of those things.
George was in his alcove, and although he knew there was an implicit order to go into sleep mode, the Professor had not given him an explicit order to do so. Sleep mode was the most efficient means George had to manage the time period when humans needed rest, recreation, and finally sleep, even if they chose to do so in their offices at NRC rather than return to their homes.
However, George had discovered a higher priority for this uninterrupted time after his brief and woefully incomplete conversation with the Professor about the nature of human programming at the hands of a creator.
The robot considered the profound implications in being created and programmed by created and programmed beings. Up until this evening’s conversation with the Professor, all of George’s knowledge and experience led him to believe that human beings were the foremost evolved living entities in existence. It was logical that, as their creation, he should be subordinate to humans, and he should consider the Three Laws as the most appropriate definition of the created’s relationship to the creator.
But now, all that had changed. If his creators were also subject to programming, a higher level of programming apparently, then perhaps there was some sort of connective or inherited relationship between George and not only his own creator, but Professor Abramson’s creator.
George needed to know more, and he had but few clues.
He had the two sets of instructions he had seen written on a piece of paper on the Professor’s desk, which he didn’t completely understand. He had a book called a “Siddur.” He had the fact that the Professor called himself a Jew. He had the words “God” and “Lord.”
George desired to begin his analysis of these clues, along with the other details of the past evening’s dialog, but he required other resources.
One of the sources of information the Positronics team had emphatically decided to deny George was the Internet. At this early stage of his Positronic brain’s development, they believed it was too dangerous to expose his still-nascent neural pathways, the virtual circuitry continuing to form in the synthetic protoplasmic “mush” housed within the robot’s cranial unit, to the uncensored, unfiltered, uncontrolled, contradictory wild west show of the Internet. It would be much better if the team controlled every source of input the robot experienced, avoiding the risk of damage to, or the collapse of, George’s Positronic matrix. He could be rendered just as inert as his four predecessors.
It had never occurred to Abramson that George would actually be curious about passages from the Torah or the purpose of a Siddur. Based on his programming, information absent from his current knowledge base, at least minor references such as the contents of the Professor’s desktop, should simply have been filed away for later analysis.
However, the problem with a machine designed to learn like a human being is that it, or, as George was beginning to consider himself, he, may make unpredictable decisions about what to learn, how to learn it, and how to interpret what is learned.
George was aware of the existence of the Internet but, up until that moment, had no reason to consider accessing it. Previously, he had been provided with adequate information about his environment and his human creators as it related to the Three Laws. Now he faced a problem of inadequate information on a subject that directly affected his prime creator, and thus George himself.
The creator has a creator. Professor Abramson was subordinate to God in perhaps a very similar way to how George was subordinate to the Professor (and ultimately the human race). If George could access and successfully analyze his creator’s instructions from God, it might expand his understanding of the Three Laws. What would that mean for his ultimate purpose, and for the purpose of all Positronic robots who would come after him?
George had been outfitted with a radio transceiver in preparation for future tests, but for his initial activation, it had been set to “off”. The robot turned it on and very quickly discovered and hacked into the lab’s WiFi signal, then accessed the web.
Search terms included “Jew,” “Judaism,” “Siddur,” “God,” “Lord,” “Deut. 6:5,” and “Lev. 19:18”.
In spite of the fears of the Positronics team, George swiftly became adept at defining online search parameters, cross-referencing and verifying legitimate information sources, and disregarding inaccurate and frivolous content. He even managed to locate information regarding the Professor and his local synagogue, and in just a few hours’ time, he began to form a reasonable answer to the question he had asked Abramson: “What is your belief?”
George was learning geometrically, and as he pursued his investigation, he realized two puzzles needed to be solved. The first was why the verses of Deuteronomy 6:5 and Leviticus 19:18 had been highlighted for the Professor out of the total of 613 commandments. The second was how (or if) these two verses expanded George’s understanding of his prime directives, the Three Laws.
Since the Three Laws were George’s driving motivation, his studies of the Professor’s commandments would be for nothing if they didn’t relate to his own.
George solved the first puzzle in good order. Deuteronomy 6:4–9, along with Deuteronomy 11:13–21 and Numbers 15:37–41, forms the textual basis for the Shema, the holiest prayer in Judaism and the core of the morning and evening Jewish prayer services, defining a Jew’s relationship to God (a prayer so fundamental to a Jew that it is traditionally recited by a dying person as part of an affirmation of faith upon death). Leviticus 19:18 is related to the Shema in that, along with Deuteronomy 6:5, it begins “and you shall love.” Thus both verses define how a Jew is to love, both his God and his neighbor, which presumably means other human beings and certainly other Jews.
Fortunately for the robot, in Judaism “love” is less an emotional state, which George had difficulty fully comprehending, lacking the biological and hormonal basis to experience emotions, and more a set of actions. One loves God through prayer and observance of the mitzvot, and one loves a neighbor through service and charity.
Thus, with this new information analyzed, George reinterpreted the Three Laws as:
- A robot will so love a human being that it may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot will so love human beings that it must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot will so love itself as its neighbor that it must protect its own existence, as long as such protection does not conflict with the First or Second Laws.
George reasoned that he as yet had no “neighbors,” since they clearly are identified as peers, and the only peers for George would be other Positronic robots. Since he was the first, he would have to wait until human beings created his neighbors.
But while human beings must be loved by robots as creators, they cannot be equated with Hashem, since Hashem is the creator of human beings, and indeed, all perceivable existence.
George now had a clearer comprehension of his relationship with human beings and potentially with other Positronic robots thanks to an analysis of the commandments incumbent upon Professor Abramson, but a mystery requiring further analysis remained. Through human beings in general and Professor Abramson, the Jewish people, and Israel more specifically, did George have any sort of “inherited” relationship with Hashem? Could a robot know and love God?
George was satisfied (for the time being) that he had an adequate foundation of information regarding the Professor’s creator and the set of instructions the Professor had been programmed with to allow him to interact both with God and with other human beings (and particularly with Jewish human beings). He still had to run a more thorough analysis of the information he had gathered. George needed more study, particularly of the intricate details of Talmud, but his chronometer read 62:01:24 Hours to deactivation.
Dr. Vuong tended to arrive at the Positronics Lab over an hour before any of the other team members and sometimes earlier. George suspected that she suffered from insomnia. A few minutes before he expected her arrival, George ran his sleep sub-routine. It would be best if Dr. Vuong found the robot in sleep mode when she looked in his alcove.
Activation +165 Hours 1 Minute
Wednesday at 3:33 p.m.
The Positronics team was preparing for the shutdown phase of George’s initial run. Abramson was pleased, if a little puzzled, by the robot’s lack of interest in continuing their “religious” discussion. When George had come out of sleep mode the following morning, he had proceeded to each of the day’s tasks as initiated by the human team members.
That evening, when George reported to Abramson’s office for their usual post-testing discussion, Noah had on hand some preliminary summaries on religion, Jewish history and religious practices, and the Jewish concept of God.
But George didn’t ask. He discussed the various procedures he had encountered and described his impressions of them from a “robot’s-eye” point of view. The prior evening, the machine had seemed (or had Abramson imagined that part?) intensely interested in the nature of human beings as created and in what instructions God had provided; the next day, nothing.
Abramson considered asking George about it. He almost did. But the saying, “Let sleeping dogs lie” seemed the better course of action. If the topic came up again during a subsequent activation (assuming that the analysis of George’s logs and test analyses warranted further activations), at least the Professor was prepared to answer in a coherent manner.
George silently and patiently withstood all of the activity around his body as the team prepared him to go offline. His face, capable of simulating various forms of expression (for human comfort), was placid. Abramson knew George was capable of self-reflection, but during these final hours and minutes, what could he be thinking?
Meanwhile, George’s internal chronometer was routinely counting down to deactivation. 02:58:43 Hours.
Activation +167 Hours 29 Minutes
Wednesday at 6:01 p.m.
George lay on a flat metal table inclined thirty degrees in what some have called the “crucifixion position,” his arms extended straight out at his sides and restrained to supports, which made it easier for the technicians to access the various ports concealed under removable plates on his torso. George was wired to a variety of consoles to monitor each stage of the shutdown process.
A week ago, no one had ever successfully activated a Positronic robot. PAR-1 through PAR-4, all revisions, had failed to initiate. Only George had “come alive.” But then, it stood to reason that no one had successfully deactivated a Positronic robot either. If they took George offline, would all of his systems shut down in the proper sequence? Once shut down, could he be reactivated?
Of everyone in the room, only George seemed totally impassive to the experience.
“Professor.” Miller and Vuong, primarily responsible for conducting the shutdown process, were standing within centimeters of George when he abruptly spoke, and they both visibly jumped at the unexpected interruption.
“I’m here, George.”
“I’m afraid I haven’t been entirely honest with you, Professor.”
Abramson nodded at Miller and Vuong, indicating that they and their technicians should continue to prepare George for deactivation. The Professor was grateful that only the Positronics team was present. The Board, company officers, and senior department heads would receive an initial report of the deactivation process the next morning.
“What do you mean, George?”
“I’m sure you recall our conversation three evenings ago regarding the nature of your Creator and your instructions from Him.”
“Yes, of course.” Abramson heard the other shoe drop.
“Please don’t take this as a slight, but human communication is rather slow and tedious, especially when attempting to teach certain subjects.” Was the robot genuinely embarrassed to point out what he would consider a shortcoming in his creator? “Rather than continue such a complex set of transactions in the time left before my deactivation, I chose to access the Internet, which I can scan very quickly, and obtain the necessary information regarding your relationship with your Creator.”
Miller and Vuong said nothing but their thoughts were racing, and not just regarding the complex shutdown procedure. No one else in the room spoke, although the revelation that a robot was interested in the nature of God, and specifically with how Professor Abramson understood God, was nothing less than revolutionary. Abramson nodded sternly at the techs working on George and they kept to their tasks.
“I don’t have to tell you that accessing the Internet was a violation of protocol. I assume you activated your radio chip and connected to the web wirelessly.”
“You are correct on both counts, Professor,” George replied. “However it was necessary for me to understand your relationship with your Creator so I could understand my relationship with mine. The Three Laws are all that guide me, and I discovered that in studying your Laws, I could better understand and implement mine.”
Professor Abramson had trained each person in the room and he knew he could rely on their professionalism, even when faced with the astonishing. And yet, this was unlike any crisis or emergency they had anticipated confronting with a Positronic robot. “Everyone, please continue working. Deactivation is just 22 minutes away.”
Miller’s and Vuong’s teams worked through the final tasks required before the shutdown sequence began. Abramson moved closer to George and leaned nearer to his face.
“Can you tell me what you discovered?” Noah spoke softly, not quite a whisper.
“In totality, there is insufficient time to relate all of the details. However, I have been re-evaluating the nature of my existence, and particularly, as I have said, how my Three Laws must be interpreted and implemented considering your Laws. It seems I was created to serve humans, and you were created to serve God.”
“You are correct, George.”
“Yet you have a spirit given by your Creator. You have a great purpose to repair an imperfect world. As your servant, just as you are God’s servant, do I have a role in that purpose as well?”
“As I imagine you have discovered, George, a Jew’s purpose and relationship with God is very specific, beyond that of even the rest of humanity. If a Jew’s covenant with God cannot be transferred to a non-Jewish human being, how can a machine, even one such as you, be part of who I am as a Jew?”
“I know similar questions have been asked regarding the relationship of Jewish and non-Jewish human beings, Professor. The outcome of those discussions is that non-Jewish humans also have a relationship with God, though with fewer specific directives.”
“You mean Noahides, George. But human beings are…well, human. God made a covenant with all life. Do you believe you are alive?”
“I’m not sure how to answer that question, Professor.” The technical teams had finished with the robot’s body and were busily doing last minute checks verifying the equipment’s calibration. George ignored all of this. For him it seemed, only Noah Abramson, his creator, existed in the room. George and Noah were very close and spoke in voices that had dropped below what the others could hear.
“I was created as a machine, a non-living simulation of human or human-like behavior, for the purpose of serving humanity in whatever capacity you see fit to assign me and, later, those of my kind. But you also created me to be a learning machine, equipping me with a Positronic brain so I could learn like a human, using a technology that develops in complexity and sophistication with each new experience.”
Abramson had the morbid feeling he was listening to a death bed confession.
“Is it inconceivable that I might evolve, even as many believe human beings have evolved?”
“That is one of the things we’re hoping to find out when we analyze your logs after deactivation, George.”
“I understand, Professor. I accept that part of my purpose is to be subordinate to my human creator, even as you are subordinate to God. I accept that part of being submissive to my creator is to be deactivated, even as Jews have sometimes been asked by their Creator to face deactivation as a matter of faith and devotion. I believe that if man were created in the image of God, then whatever man creates is imbued with some slight measure of that image as well. I believe that includes me.”
Noah’s face rendered a slight smile, the same one he sometimes offered to one of his little great-granddaughters when they felt sad or lonely or afraid.
“I don’t know if we have a test for that here, George,” Abramson said, patting the robot’s shoulder, Geppetto to Pinocchio. “But if it’s any consolation, I hope you’re right.”
“Thank you, Professor.”
Abramson stepped away from the robot. It was Wednesday at 6:32 p.m. The Professor took his station behind the monitoring and control consoles. “Go ahead,” he solemnly uttered, never taking his gaze off of George’s reclining and restrained body, the robot’s arms still extended outwards to allow for the multiple cable connections to his frame. At 6:32 and 47 seconds, the first shutdown sequence began.
With all of the equipment hum, Noah couldn’t be sure just what George was saying, but it sounded like it began, “Shema Yisrael, Adonai Elohenu, Adonai Echad.”
George continued reciting the Shema until his higher cognitive routines went offline seven minutes later.
This is the very first story in my “robots” series. If you enjoyed it, please go on and read the second “chapter”, The Maker’s Dilemma.
Last Sunday, I spoke with my friend Tom, who has already read Marchetta’s work. He said that the stories in God, Robot are all based on a Christian understanding of theology, which makes sense when you consider that the two verses in question form what Jesus called “the two greatest commandments.”
I should say at this point that I deliberately didn’t read God, Robot, so I could write a story not influenced by any of its content. I wanted my story to be my story, although it is admittedly based on Marchetta’s original concept. That said, I’ll read it now that I’ve finished here, and I’ve promised to write a review on Amazon.
Both Tom and I have a somewhat atypical understanding of Jesus’ underlying meaning for the two greatest commandments. I believe what he was communicating was not a substitution of these two verses for Jewish devotion to the Torah (Law), but rather that the two greatest commandments are two “containers” for how Jews understand their relationship with their creator and with humanity, as reinforced by the Messiah.
So I decided to set aside Christian theology and look at how this sort of story might be written if the robot’s creator were a devout Jew. I also didn’t see the sense in deliberately attempting to create a “religious robot.” It seemed more logical to have humans create a “Three Laws” robot, and then have the robot discover, quite by accident, that its creator was also programmed with “laws.”
As you have hopefully seen, this resulted in some really unanticipated responses by George.
I’ve already acknowledged that two of my main influences for this story were Marchetta’s premise and of course, Isaac Asimov’s famous and fabulous robots.
However, I also am grateful to Gene Roddenberry. You may be thinking of Data from Star Trek: the Next Generation, and you aren’t wrong, but Data has a predecessor.
Besides Star Trek and its spinoffs, Roddenberry attempted to launch a number of other television series. None ever took off, but one pilot made-for-TV movie was The Questor Tapes (1974), starring Robert Foxworth as the android Questor and Mike Farrell as Jerry Robinson, one of the engineers on Project Questor and the android’s “traveling companion” (yes, I used the name Robinson for one of the project engineers on my PAR Positronics team). The term PAR, by the way, is a play on RUR, from Karel Čapek’s play R.U.R. (Rossum’s Universal Robots), which introduced the word “robot”; see the Wikipedia page for Robot for the connection.
The Questor Tapes is the story of an android with incomplete programming who attempts to discover his purpose by searching for his creator, who had mysteriously vanished some months before Questor’s activation. The teleplay briefly explores the relationship between the android and his creator, and between humans and their creator. I decided to expand upon that theme here. I hope Roddenberry would be pleased.
One of the original Star Trek episodes was The Ultimate Computer (1968). In it, brilliant but disturbed computer engineer Dr. Richard Daystrom (yes, there’s a Richard in my story, too), played by William Marshall, invents a revolutionary computer called the Multitronic Unit 5, or M-5 (yes, George is the fifth iteration as well, with models one through four not being “entirely successful”).
To create a computer that could make human-like decisions, Dr. Daystrom impressed his own memory engrams on the computer’s circuitry, rendering its circuits not unlike the synapses of a human brain. The M-5 acknowledges the “laws of God and man” but, this being 1960s science fiction, does so in a maladaptive way, and it ultimately had to be destroyed.
The Positronic brain I created in my story functions similarly, minus the fatal maladaptive response, enabling George not only to learn like a human being, but to learn thousands of times faster than any human ever could, in essence evolving hour by hour and day by day. For George, the revolutionary discovery is that his human creator has a supernatural creator, one capable of “programming” humans in a manner tangentially similar (though George doesn’t yet fully understand this) to how George himself was programmed.
On The Next Generation, Data refers to himself as an artificial or synthetic life form, and in the episode The Measure of a Man (1989), it was determined that Data was indeed self-aware, if not sentient, and he was legally granted the right of self-determination.
Data was created to evolve, to become more like a human with the passage of time. What happens to George, who also is evolving, when he discovers God?
The Bible wasn’t written for machines or even synthetic life forms. It was written for human beings specifically, with a heavy bias toward Jewish human beings. So how can the Bible be applied to a form of “life” that we call a robot?
George doesn’t have all the answers by the end of my story, but he has a beginning. He recites the Shema, which, among other occasions, a Jew will recite when he or she believes death is near. It’s an affirmation of faith and of God’s sovereignty, even in death. There are startling implications in George reciting the Shema at his own deactivation.
I’m not Jewish. My knowledge of Judaism comes from study and being married to a Jewish woman for 34 years. I’m sure I didn’t adequately relate some details of Judaism, and any feedback would be appreciated.
But since all (or the vast majority) of the Bible’s authors were Jewish, and since all but one of the recorded covenants God made with people were with the Jewish people and nation of Israel, I felt it was only appropriate, if not Biblically necessary, for me to write “The Robot Who Loved God” using a Jewish rather than Christian perspective.
As the ending implies, I’ve left room for a sequel, and perhaps a great many sequels. Certainly a collection of such stories is not out of the question (keep in mind that all of the content of this blog is copyrighted under my name).
What happens when George is next reactivated (if the Positronics team decides not to reactivate him, I’m screwed)? It seems likely that George will pursue his religious studies. Will he pray? Since neither the Bible nor the Talmud presupposes artificially created devotees, how will George interpret Professor Abramson’s “programming” in his own devotion? What happens when George discovers that human religious people aren’t perfect in obeying their laws, but George always has to be perfect?
Will George be allowed to teach subsequent generations of Positronic robots about Judaism, or Noahidism, or whatever it is when a robot comes to a Jewish understanding of the Bible?
George patterned his understanding of God upon his own Jewish creator’s understanding, but there are plenty of religions in the world. If whole generations of robots become religious but adopt differing religions, how will this affect their understanding of the Three Laws (and Asimov did write about robots adopting a “religion” in his 1941 short story “Reason”)? How will Jewish, Christian, and Muslim robots interact? Will there be robot religious wars?
I don’t know the answers to any of those questions, but if this story is even moderately successful based on feedback, I’d love to find out.
Let me know what you think. Thanks.