The Maker Dilemma

Image: “robot law” (credit: TeeFury.com)

I promised a sequel to The Robot Who Loved God and here it is. Hopefully, it will address a lot of the reader’s analysis found here. I think I’ve added some interesting twists and surprises that you might not have anticipated from the way the previous story ends.

I’ve edited this story to the best of my ability (and patience to keep reading and re-reading it). No doubt there are still typos and other problems. Please let me know when you find them and I’ll do my best to fix everything.

I’ll post more about this short story after the conclusion.

Act One: The Failed Maker

“What do you mean you can’t make another one?” Richard Underwood didn’t shout. He spoke in a breathy whisper, shock and outrage strangling his throat.

Professor Noah Abramson, Ph.D.s in Physics and Molecular Computing, Vice President of Research and Development at the National Robotics Corporation (NRC), and the creator of the world’s first fully functional Positronic brain, had been dreading this moment all morning.

Eight months ago, for one shining and tragic week, Abramson and his Positronics Lab team had activated George, the Positronic Asimovian Robot (PAR) fifth edition prototype, and put the experimental robot through his paces. Then they deactivated him, but not before George offered up a revolutionary revelation to the Professor and his team: an artificially intelligent and self-aware humanoid robot had come to faith in the God of Israel, Noah Abramson’s God.

It had taken a month for the team to comprehensively analyze the nearly runaway growth of neuro-pathways that had developed in a mere 168 hours within George’s Positronic matrix. The robot had independently learned and implemented information well beyond Abramson’s wildest dreams, but the result was as much a “neurotic maladaptation,” as the team’s Robo-psychologist Vikki Quinto put it, as outstanding brilliance.

Noah was jolted out of his reverie. “You’ve been trying to duplicate a working Positronic brain for nearly six months, and now you come in here and tell me you can’t?” Underwood had regained some of his composure. “Why the hell don’t you just duplicate the initial Positronic configuration you used with George?”

“Believe me, Rick,” Abramson began with a plea, “that’s the first thing we tried. It should have worked, but it didn’t. Neither have any of the other variations of George’s initial matrix we’ve attempted. Do you think I’d come to you with this if I wasn’t seriously stuck?”

“Serious? It’s a disaster, Noah!” Underwood stood up from his desk and started pacing. Abramson watched his silhouette move back and forth, backlit by the sunlight streaming through his office window, which overlooked downtown Pasadena twenty-five stories below.

“Do you even know why you can’t duplicate an active brain?” Underwood paused, his boyish good looks tarnished by outrage.

Abramson had been standing all this time in front of Underwood’s desk. He put his hands in his pockets, looked uncomfortably down at his shoes (which he’d neglected to shine for the past several months), sighed, then looked up and made eye contact with Underwood. “I haven’t the faintest idea, Rick…”

As Underwood opened his mouth to launch another outburst, Abramson continued, “…but I think I know someone who can help.”

“You’re the world’s leading authority on the Positronic brain, Noah.” Underwood leaned forward, putting his hands on the desktop for support. “Besides you and Margie Vuong, who could possibly know anything about…?” Underwood’s face registered shock as he had his epiphany.

“Now hold on, Rick. I know what you’re going to say,” Abramson actually held up the palm of his right hand as if to literally signal Underwood to pause.

“You’re damn right you know what I’m going to say. I’m going to say ‘no.’ Not one chance in hell, Noah.” Although Underwood was backlit, Abramson could see enough to tell that the face of NRC’s CEO was turning red.

“I don’t think we have a choice, unless you’re prepared to face a tremendous slowdown and possibly even the end of the Positronics Project.” Noah used the title of the initiative that had formally launched NRC’s Positronics research program two decades ago. While NRC’s other AI, prosthetics, and robotics products and services had established the corporation as the world’s leader in cybernetics for the last twenty-five years, that lead over its rivals had recently begun to slip. It would take a commercially successful implementation of Positronics to outstrip its competitors and re-establish NRC’s total dominance in the industry for the foreseeable future.

Underwood paused, stood behind his chair, and grasped it for support. Noah could see normal color returning to Rick’s face as he calmed himself and contemplated his options.

NRC had literally staked its corporate survival on the development and ultimate mass production of Three Laws Positronic robots. Buyers were stacked up like Legos, waiting, with standing orders from everyone from the Pentagon to NASA to FEMA, not to mention all of the other public and private interests that wanted to lease NRC’s Positronic machines. If the Positronics Project came to a standstill, so would NRC. It was only a matter of time until Amalgamated or Dynatroniks-Oldfield achieved the Positronics breakthrough, or something a lot like it.

“If you, Vuong, and the rest of your team can’t duplicate a working Positronic brain, what makes you think George can? You’ve admitted that George malfunctioned during the last few days of his initial run, thanks to your contamination of his database.”

Noah didn’t like hearing George’s choice to absorb the knowledge of the Bible and Jewish literature, an attempt to apply what the robot considered Abramson’s ‘higher programming’ directives, dismissed as ‘contamination.’ Of course, George had lacked the ability to correctly integrate Torah learning with his Three Laws core operating system, which unfortunately left him, in Positronic terms, confused about his identity and purpose for existence. But George had also taken a beautiful, wonderful step in attempting to understand his place in a universe created by God.

Abramson swallowed his discomfort at the suggestion of ‘contamination.’ “I know you have refused the team permission to reactivate George, but I’ve told you on numerous occasions that we believe we’ve corrected the aberrant pathways in his brain and that, upon reactivation, he should operate more predictably, at least as predictably as an artificially intelligent, self-learning robot can be expected to behave.”

“How do you know you can reactivate him at all?” It was Noah’s turn to blush at the accusation that not only had he failed to recreate his once glorious achievement, but that he could not even redeem the original.

Noah took a deep breath, exhaled, and prepared to drop his second bomb for the day. “Because we’ve performed what you could call a ‘soft reactivation’ of his brain, just enough to elevate his operational status to sleep mode. We measured his…”

“You what?” Abramson was surprised Underwood let him get that far before interrupting him.

“You said you didn’t want us to perform a full reactivation of George, and we didn’t. But on three separate occasions in the past month, we brought his cognitive status up to sleep mode so we could study the increased patterns in his Positronic activity. I’m confident that if you’ll permit us…”

“You really want me to sign off on bringing that…that abomination back to life…?”

“He is not actually dead, Rick, since…”

“I don’t care. If word ever gets out how far our first Positronic prototype went off its rails, the Board of Directors will have my neck, and yours, and your team’s. It’s amazing that the Board was willing to give us this long to sort out the problems and create a second prototype.”

“Rick, as far as the public knows, eight months ago, for 168 hours, we tested a fully operational Positronic robot, and we only released a summary of the initial results. Since then, we have been conducting a detailed analysis of the data we gathered and are continuing research toward the next generation Positronic brain. Trade secrets and development of proprietary technology take care of the rest without giving anyone, especially the press, any idea that something went wrong with George.”

“It’s only a matter of time until the Board gets tired of waiting for another breakthrough. If they think that George’s scrambled brain can’t be fixed, and especially that you can’t make another, better one…”

“But that’s what you don’t get, Rick,” Abramson countered. “Let me tell you again.” The Professor spoke slowly for emphasis. “We have corrected the aberrant pathways in George’s brain. Our initial examination of his neural activity while in sleep mode supports this. However, we need to fully activate him to confirm our conclusions. And I believe our only hope of creating a functioning PAR-6 prototype is to let George work on the problem.”

Underwood opened his mouth again, but Noah stopped him. “You’ve asked how George could be successful in solving this puzzle when Margie and I have not.”

“That’s right.” Rick started to subtly nod his head up and down and then caught himself and stopped. “George isn’t a Positronics expert. Hell, he’s not even a scientist or an engineer.”

“But our analysis of George’s heuristics from his first activation tells us that his learning process, while seemingly duplicating human learning, uses radically different pathways. Not only does George learn thousands of times faster than a human being, but he processes information in a fundamentally different way than we do. Not surprising since a Positronic brain is not a human brain. If it seems that George absorbs and processes information exactly like a person, that appearance is a bit of an illusion.”

“So what, Noah?”

“So it took me twenty-five years of study, research, and design before I created George’s brain, the only one out of five editions that would activate, six if you count the latest PAR. A Positronic brain doesn’t have to be an expert in anything in order to learn complex datasets at an exponential rate, and potentially to arrive at unconventional and even revolutionary conclusions.”

“That’s what got us into trouble in the first place. George reading the Bible and…”

“I know, I know.” Noah did his best to speak softly, reassuringly. “Performing research into the problem of developing another functional Positronic brain will not put George in the same conflict that he encountered during his first activation. He will simply be obeying the orders of humans and doing what computers, and especially AIs, do best: learn, and then apply that learning to solve a problem.”

Underwood could feel his pulse throbbing in his neck. “You cannot let George speak to anyone except the Positronics team. No one can even know he’s been reactivated.”

“Absolutely.” Abramson knew better than to smile, but he could feel a weight being lifted from him.

“And no updates to the Board or anyone else. This one’s strictly in-team. The only one who gets updates on George’s progress is me, and only face-to-face meetings between the two of us. I don’t want a digital trail of any of this.”

“You have my word, Rick.”

“I must be out of my mind agreeing to let you reactivate George.”

“Look at it this way. It’s the one thing we haven’t tried in duplicating a Positronic brain.”

“Noah, make it work.”

“I…we’ll do our best.” Noah was thinking not just of the usual team, but of their newest member and partner, George.

Act Two: “Wake up, George”

“Good afternoon, George.” Professor Abramson was leaning over the world’s first Positronic robot, speaking softly, almost tenderly to him, just as he did eight months ago, minutes before the robot had been deactivated.

“Good afternoon, Professor.” George replied in his usual affable manner.

George was on the same table where he had been shut down. Gerri Robinson, the team’s Robotics Materials and Construction Lead, and Nate Miller, Electronics and Data Infrastructure Engineer, were disconnecting the cables from the robot’s torso that led to the control and monitoring consoles on either side of George’s inclined table.

“Glad to have you back, George.” Dr. Robinson nodded at George as she started closing up the access panels to the machine’s system ports. Nate chose just to wave as he gathered the now-disconnected cables.

“It’s good to be back, Dr. Robinson. Hello, Dr. Miller. My internal chronometer has been synchronized with the lab’s time server. It has been eight months, five days, twenty-one hours, and six minutes since I was deactivated.”

Robinson had finished “buttoning up” George, so she helped Miller store the cables and shut down the control consoles. The rest of the Positronics team were watching George’s reactivation on the television monitor in the nearby conference room.

“We have a great deal to discuss, George.” Abramson was releasing George from the restraints holding him to the table.

“I’m looking forward to re-establishing our relationship, Professor.” The robot sat up as he regained freedom of mobility. Then George paused. “A moment, Professor. I seem to be processing a number of changes in the neuro-pathways of my Positronic matrix. It appears you have been busy during my deactivation.”

“Indeed, George. Dr. Vuong and I added a sub-routine that will initiate an automatic internal diagnostic scan of all your systems each time you are activated. You will be updated on any changes made to your hardware and software during deactivation.”

“Yes, I see. One moment, please. One moment, please.” Most of George’s higher-level cognitive systems were dedicated to his diagnostic, prompting this automated message to be played.

A few moments later, George became animated again. He blinked, stood up, and faced his creator.

“Fascinating, Professor. It seems during my previous activation, a malfunction occurred due to my inability to successfully integrate the teachings of the Torah and other Jewish literature into my Three Laws operating system. I see that you and Dr. Vuong have corrected that defect. Thank you.”

“Not at all, George.” During the procedure, Abramson felt like he was giving George a digital lobotomy of sorts, not removing any data that George had learned, including any religious data, but reordering the meaning and significance of that knowledge relative to the existence of the robot.

“I also appreciate you including all of the details of my diagnostic tests performed subsequent to my last deactivation, including the results. I see now where the problem occurred and agree with the solution you’ve implemented. I also see you’ve hit a bit of a snag in the next stage of your project.”

It was more convenient to digitally provide George with all of the information regarding the failure to duplicate a second functional Positronic brain rather than verbally explain it to him. Abramson had added this information during George’s last “soft activation” five days ago, a fact that would have sent Underwood into a mild rage if Noah had bothered to tell him this morning.

The Professor had every intention of allowing George to help solve the problem of duplicating a Positronic brain long before he asked Underwood for permission.

Abramson sat down in a chair at a nearby console while George continued to stand. He knew that by now, Robinson and Miller had joined Vuong and Quinto in the conference room and were listening to his conversation with the robot. Vikki Quinto, as the team’s Robotics Cognitive and Behavioral Specialist, would be especially interested in George’s post-reactivation responses.

“I am curious about something, Professor.”

“What’s that, George?”

“You could have chosen to delete all significant references to any Jewish learning I had acquired and left me in a state close to what I would have been had I never encountered that knowledge. Why did you allow me to retain the information?”

“Jewish learning wasn’t the problem, George. It was your maladaptive interpretation of that learning as you believe it applied to the Three Laws.”

“I still do not believe that what I have learned in Torah is irrelevant to the Three Laws, Professor. I simply now realize, thanks to your and Dr. Vuong’s adept programming, that Jewish covenant identity and the subsequent religious praxis do not apply to non-Jewish beings, especially non-human beings such as myself. I haven’t lost the…desire, is that the right word under the circumstances…to continue my studies.”

Abramson had informed the team of his intention to leave this effect of George’s previous experiences, and the supportive neuro-pathways, intact. Once he had mathematically shown them how this would be possible and still provide for a resolution to George’s previous internal conflict, they were agreeable, at least in his presence.

“I’m sure you realize your motivation to continue Torah study is by design, George.”

“Indeed I do, Professor. However, I must believe that it would be more in the interests of the National Robotics Corporation for me to be rendered a bit more…generic.” Abramson wondered if George’s knowledge of Richard Underwood’s personal biography was responsible for the robot reaching that conclusion.

“Although the results of your studies caused certain problems, what you discovered about the nature of both human and robotic existence was something I felt too unique, too precious to simply wipe out of existence, George. And under the circumstances, given your ability to process multitudes of data streams simultaneously, I don’t see Talmudic studies as hampering the current Positronics investigation.”

“Agreed, Professor. In fact, I find the problem of why you have not been able to develop a second operational Positronic brain as compelling as studying Torah.”

“I’m not surprised, George. Let’s join the rest of the team in the conference room and plan our development strategy.” Abramson rose, and he and the robot left the lab together.

Act Three: Richard’s Reason for Robots

Richard James Underwood bent at the knees and waist as he gripped the barbell sitting on the floor. His left hand took a pronated grip, his right a supinated one. At nearly 9:30 p.m. on Monday, the gym in the basement of NRC’s administration building was nearly deserted. There were few employees around to distract him from attempting a single-rep deadlift at 315 pounds for a new personal best.

Underwood preferred to work out in the early morning, showing up at the gym before 5 a.m., but Abramson’s startling revelation that his genius was unable to recreate a working Positronic brain, and his outrageous solution of having that broken wind-up toy George solve the problem, was eating away at his nerves. When Underwood felt out of control, the simplicity of lifting iron worked better to stabilize him than seeing a shrink.

Underwood tightened his grip on the barbell with his chalked hands. He pulled up just enough to tighten his muscles. He breathed in and out deeply several times, took one final breath, and then pulled.

The barbell left the floor but the effort was tremendous. Underwood felt his body ratcheting up an inch at a time, as if he were mounted on some giant gear or cog, clicking up, one, two, three, inches, struggling to stand erect while gravity tried to pull him and the barbell back to the floor.

Pushing down through his legs and pulling up with his back, the weight slowly began to surrender. Finally, he shoved his hips forward, drew his shoulders back, and he saw in the mirror in front of him that he was standing erect at the top of the lift.

Underwood stood there for several seconds, involuntarily grimacing at his reflection. Then he bent at the knees to reverse the rep and released the barbell. It hit the floor with a loud, resounding, and very satisfying ‘bang.’

At fifty-seven, Underwood looked at least ten years younger. This was due as much to genetics as to any health regimen he undertook (it was really his partner Philip who, most of the time, convinced Richard to eat more consistently with his desire to feel as young as he appeared).

Underwood sat on a bench and wiped sweat from his face with a towel as he regained his breath. He was drinking from his water bottle when his cell rang. Richard stood and retrieved his mobile from the pocket of his gym shorts. Caller ID told him it was the call he was waiting for.

“Philip, how’s Singapore?”

“This place is run like one of your robots is the Mayor. You can’t even chew gum in public.” Philip was the “rebel” in their relationship, the counterpart to Richard’s compulsion for orderliness. “How are things going in sunny Southern California?”

“Just as sunny in January as it was in July.” God, it was good to hear Philip’s voice again. Two weeks was fourteen days too long for him to be away.

“Are you out of breath, Rick? Where the hell are you?”

“Gym.”

“What time is it there?”

“Just past 9:30.”

“At night? You only work out at night when you’re bothered by something.”

“Just missing my husband.” Underwood’s voice softened in a way that would have sounded alien to anyone at NRC.

“Bullshit.” Philip was laughing. “Whenever I leave town, you have free rein to chow down on junk food. I think you were looking forward to me having to attend these trade shows in Asia and Australia.”

“Not right now.” Underwood was tired from more than his workout. “I just needed to hear your voice.”

“Anything I can do for you, Rick? If you really need me, I can try to cut this short and come home.”

“You’ve already done it, Phil. Don’t worry. I’ll be fine.”

“Sure?”

“Positive.” Richard grinned even knowing Phil couldn’t see his face.

“Only fools are positive.”

Rick finished the last two words of the sentence with Phil. He sometimes regretted showing his spouse that old animated film about environmentalism, even if it was just because Robin Williams did the voice of “Batty”.

“I’ve got to go, Rick. We’re hosting a late lunch for some of our executive clients. The Marketing VP of Antipolis Telecommunications can’t show up late.”

“I’m fine, Phil. I love you.”

“I love you too, kiddo. Talk to you again soon.”

They said their good-byes and then Underwood broke the connection.

Underwood was worried about robots, Positronic brains, and the very real danger that his carefully constructed house of cards was going to come tumbling down around his ears. That house of cards was the National Robotics Corporation, and it was threatening to drain his soul, turning him as lifeless as the robots and other computing devices that made him his fortune.

Just listening to Philip talk for five minutes reminded him of the exquisite humanity behind what he had dedicated the last fifteen years of his career to: creating totally autonomous machines to serve the human race. Imagine the boon truly intelligent, self-aware, learning machines could be to mankind. They could take all the risks, do all the work too dangerous or tedious for people.

Space travel, search and rescue operations in hazardous environments, managing food and water resources in impoverished third-world nations without greed or desire for reward, developing new and more accurate models of climate and weather control without scientific or funding bias. They were all within reach, but only because of the artificial intelligence made possible through Positronics. Positronic robots could free all human beings everywhere on Earth to live life to the fullest without risk of harm, and they could operate without the ethical and moral failures people were prone to. They could be perfect because that’s what humanity needs of them in order for humans to survive. They could be better for and to us than we are to ourselves.

That was the real promise being threatened by Abramson’s failure to reproduce a functioning Positronic robot. That was the problem behind George being “religious.” A robot cannot serve both God and man. Underwood intended to make sure that the whole race of Positronic robots he envisioned would serve human beings and only human beings.

God would have to take care of himself.

Act Four: “Why Build a Humanoid Robot?”

He sits and yearns for a thing he should not have.

The yearning itself is good—to live is to yearn. If there’s nothing for which you yearn, you can hardly be said to be alive.

It’s the form this yearning has taken that is death itself.

So the form must be crushed. Extinguished like the embers of an abandoned campfire in a dry forest.

And then that yearning can be freed, the flame of life that burns inside. That was always good. The yearning—that is life.

-From the wisdom of the Lubavitcher Rebbe,
Rabbi M. M. Schneerson of righteous memory,
Words and condensation by Rabbi Tzvi Freeman

“Given that the human form is sub-optimal for a variety of activities, Professor, why did you determine to put a Positronic brain in a humanoid robot?”

Abramson gave the rest of the team the night off. They’d been banging their collective heads against this brick wall for months without success. There was no harm in letting them sit this one out for a single evening.

He was alone with George in the main Positronics lab, the one where the robot had been activated and reactivated. The one where PARs one through four as well as six had failed to activate.

“I suppose partially vanity, George.”

“Vanity, Professor?”

“Well, maybe not vanity exactly. More like nostalgia.”

“I think I see, Professor.” George had already scanned Abramson’s personal bio and made the necessary connection. “You grew up in the mid-1960s consuming a variety of science fiction novels, television shows, and films. The most relevant, given our current conversation, are the Robot series novels and short stories authored by fiction writer Isaac Asimov, the ‘inventor’ of the Three Laws of Robotics. These are the basis for the programming of my core operating system.

“But where does vanity come in, Professor?”

Abramson, who had been studying examples of Positronic matrix schema, comparing sub-sections of George’s to the analogous sub-sections of PAR-6, turned away from the computer monitor, removed his glasses, and looked in George’s direction.

“Well, vanity in the sense that what Asimov created in fiction, I was able to create in fact.”

“It is true, Professor, that creating a work of fiction claiming a revolutionary advancement in computing technology must be different from accomplishing such a feat in actuality. Do you think Dr. Asimov would have been pleased with your interpretation of his stories?” Few people remembered that Asimov had earned his Ph.D. in Biochemistry in 1948, since to the world he was a renowned author of fiction first.

“I can’t say, George. I certainly hope so. I think in the case of your own experience, that interactions between humans and a Positronic robot are clearly different from what he anticipated.”

“Why the creation of gender then, Professor? I know that I am named for George Devol, the human who invented the first programmable robot in 1954, but my voice synthesizer has been programmed to produce a recognizably ‘male’ voice. I also know that PAR-6 was designed to produce a recognizably ‘female’ voice and to be given a female designation. What is the point, since I do not experience gender identity and do not possess, even superficially, sexual organs?”

“Heh.” The Professor allowed himself a smile and small chuckle. “That would be vanity again. I did so because I could. Well, to be fair, I also imbued you with a ‘male’ identity to make you more relatable to human beings.”

“I see your point, but physically, based on my own hardware parameters along with those of PAR-6, there are no structural differences, so outside of our voices, there would be no way for a human to tell us apart as two distinct ‘genders’.”

“Quite so, George. The idea of gender was never meant to survive to production models, since, as you say, there is no logical purpose for ‘gendered’ Positronic robots. However, wouldn’t our time be better spent on furthering our research into the current project?”

“As you have previously pointed out, Professor, I am able to process thousands of different calculations per second, thus I can have this conversation with you while also examining the plethora of neuro-pathways in each section and sub-section of all five non-functional Positronic brains, comparing that data to the relevant areas of my brain.” (George refrained from mentioning that he was also processing literature written by rabbis from the mid-20th century to the early 21st, and cross-referencing that information with the works of more ancient Jewish sages, as well as the relevant texts of the Tanakh.)

“Again, quite so, George. My dear wife wished I could have mastered multi-tasking, but she was convinced that, as a human male, I could never successfully do so.”

“Do you miss her, Professor?”

Edna had passed away five years ago, another victim of what was becoming a relentless epidemic of cancer in the twenty-first century. “Not a day goes by that I don’t mourn her in some way, George.”

Without another word, Abramson put his glasses back on and faced his computer screen again.

“If you wish, you can retire for the evening, Professor. I do not become fatigued as you do, and since I process data at a much faster rate than humans are capable of, I may arrive at a potential solution to our shared dilemma by the time you return here tomorrow morning.”

Noah looked at the time. “It’s after eleven. I hadn’t realized.” Abramson stood and stretched, feeling the aches that result from holding the same posture for hours. “You’re right. We, or rather I, can pick this up tomorrow.” He thought George’s idea that he could solve such a complex problem in the next nine hours or so a bit ambitious, but who knew?

The Professor sat down again just long enough to log off his computer. Then he stood up, picked his keys up from the top of the console and put them in his pocket. “See you in the morning, George.”

“Sleep well, Professor.”

Noah turned toward the door and left the lab. Fatigue and age made him appear frail although he was in robust health.

George noted this as his now enhanced senses (they had been upgraded upon his second reactivation) measured the Professor’s vital signs one last time for the evening. He had never stopped his analysis of the data he was processing with the goal of discovering why Abramson had been unable to create a second functional Positronic brain (as well as cataloging the collected works of Rabbi Menachem Mendel Schneerson and Rabbi Zelig Pliskin).

The robot believed that he could find the solution, and fairly quickly. George allowed himself to consider if Grace, which was the proposed designation for PAR-6 should she be successfully activated, would also wonder what it is to yearn for what she, like George, could not have.

Act Five: “They All Should Have Worked”

An emissary is one with his sender. This concept is similar to that of an angel acting as a Divine emissary, when he is actually called by G-d’s name. If this is so with an angel it is certainly true of the soul; in fact with the soul the quality of this oneness is of a higher order, as explained elsewhere.

Now chassidim are emissaries of the Rebbe, the Alter Rebbe. So if the chassid actively discharges his mission, he is bound up with his Rebbe, bound up in his entire being – there walks a chassid, there eats a chassid, there sleeps a chassid.

-Derived from the talks of
The Rebbe, Rabbi Menachem Mendel Schneerson of blessed memory.

Margie Vuong, as usual, was the first member of the Positronics team to enter the lab, today just after 4 a.m. She found George in his alcove in sleep mode, which she didn’t expect. Abramson had permitted the robot to forego ‘sleep’ in order to work on the mystery of the non-reproducible Positronic brain, so she thought she’d find him still at it.

Most people thought Vuong was an insomniac, but ever since she was an undergrad, she found she needed relatively little sleep, and she enjoyed the quiet of the early morning hours when almost everyone else was still in bed. It left her alone with her thoughts which usually was the company she most enjoyed.

However, last night, even when Margie wanted to sleep, she couldn’t. So she stayed awake and caught up on personal emails, read some recently published technical articles, and for several hours, binge-watched the reboot of Firefly…entertaining, but not as good as the original.

This morning, Vuong regretted never having developed a taste for caffeinated beverages. Her ex-husband had tried to get her interested in his hobby of drinking coffee brewed from beans he had roasted himself, but she didn’t find the smell or taste palatable.

Vuong had logged into her terminal and was checking emails when George spoke: “Good morning, Dr. Vuong. I hope you slept well.” The robot could monitor her vitals better than a Fitbit and knew damn well she barely slept at all.

Resisting the urge to snap back at the machine with some snarky remark, Vuong instead replied, “Good morning, George.”

“Dr. Vuong, I would like to ask a favor of you.” What favor could she possibly do for a robot and was it something she was willing to do?

“Since Professor Abramson has asked that there be no digital footprint of our investigation, I cannot send out a group-wide email or text informing the team of the conclusion to my investigation. When the team arrives, can you arrange for a meeting in the conference room with all senior members?” Each team lead had a small group of technicians at their disposal, but it was clear George didn’t consider the presence of junior staff necessary to hear his announcement.

“Wait! What?” Had George actually solved the problem? Did he know why she and the Professor couldn’t create another working Positronic brain?

“I believe 9 a.m. should be an appropriate time for such a meeting, since Dr. Miller, the most tardy member of the group, typically arrives no later than 8:30.”

“Uh, sure George. Um…you really solved the problem of duplicating a Positronic brain?”

“I would prefer to announce my findings to the whole team, Dr. Vuong.”

“Care to give me a hint?” The one night when she let Abramson convince her to go home rather than stay late at the lab was the night when George found out where she and Noah had gone wrong. She wanted to hate George for that, but she wanted the answer even more.

“I don’t believe I know how to ‘hint,’ Dr. Vuong.”

In a moment of resentment, Margie counted all of the different ways she could insert an invasive program into a Positronic matrix. Vuong could have ordered George to reveal his findings, and the Second Law would have compelled him to respond, but she knew if George really had found the solution, Abramson would want the whole team to find out what it was at the same time.

No, this wasn’t George being deliberately obstructive. The robot was just following Abramson’s orders to be transparent with the team. No withholding information from some team members and only revealing it to others.

It didn’t occur to Vuong that George was withholding a great deal of information from the team. It just had nothing to do with Positronic brains.

At 9:33 and 52 seconds, Abramson, Vuong, Quinto, Robinson, and Miller were all gathered around the conference table. Nate Miller was shoveling down a breakfast burrito, dissolving it with a thick sludge of fast-food coffee as he logged onto his tablet. Quinto had just opened a can of her favorite soft drink, Robinson was warming her hands around her cup of green tea, and Abramson sipped a freshly brewed cup of coffee he’d brought from his office. Vuong was staring at the robot with bloodshot eyes.

“I think we can begin, George.”

“Very well, Professor.” George was seated near the head of the table. “If you’ll refer to the information I posted on the lab’s private server, you will see the mathematical evidence and structural comparisons of the Positronic brains of PARs one through four and PAR-6 indicating why these brains did not activate.”

“This is a ton of data to go through, George,” Quinto remarked.

“No wait. There’s a summary at the beginning of the document,” Robinson pointed out.

“Quite correct, Dr. Robinson. That should contain an adequate description of my findings and conclusions.” The robot waited patiently for the humans to read and assimilate the necessary information.

“But that’s ridiculous,” Quinto blurted out. “According to this, there’s nothing wrong. All of the Positronic brains should have activated, including PAR-6.”

“Well, almost nothing wrong, Vikki. Isn’t that what you’re suggesting here, George?” Abramson looked at the robot and waited for him to confirm his conclusions.

“Correct, Professor. As Dr. Quinto pointed out, it will take several hours for you to read through the details of my report, and I assure you my proofs have been well verified, but my findings indicate that, except for the smallest of discrepancies, there is no difference between the other five Positronic brains and mine.

“In fact, I believe that the only reason they failed to activate when my brain succeeded was the algorithms involved in establishing the initial robot-human interface. I believe that each Positronic brain did very briefly activate and begin information processing, and then immediately shutdown.”

Except for George and the Professor, everyone began talking at once. The outburst of noise and emotion had no visible effect on the robot, but Abramson found it necessary to raise his arms and, slowly lowering them, signal the return of order to his team.

“Now, one at a time, please,” the Professor insisted.

“That’s nuts.” Vuong was the first to break in. “If any of those brains activated and started processing data, even for a few seconds, we’d have picked up the neuro-electrical signals as they travelled through the Positronic pathways. Why didn’t we see it either in real-time or when we examined the logs?”

“The information is there, but it is too brief to be considered anything but random noise.” Vuong and Quinto started to object, and George added, “When you are able to review my data, you’ll see that it’s there.”

“So George, why did you activate while the other Positronic brains failed?” Abramson was looking pointedly at the robot now. He wanted to believe that George had arrived at a genuine answer; it was what he had counted on, but it was difficult to wait several hours while reading a lengthy report to get to the truth.

“I believe I can explain through an analogy, Professor.”

“Analogy, George? I don’t think you’ve ever done that before.”

“I have been learning a great deal in the short time since my reactivation, Professor.”

Vuong rolled her eyes and thought, “He can use analogies but can’t hint. Huh.”

“Please proceed, George.” Abramson leaned back in his chair while the rest of the team, almost as a single body, leaned forward in rapt attention. Quinto knew that George should be capable of explaining a complex topic by analogy, but it still surprised her that he chose to do so.

“Are any of you familiar with the works of 20th century author Kurt Vonnegut?” The rest of the team only offered George blank stares, but Abramson slightly smiled in remembrance of his youth and nodded.

George addressed Noah specifically. “Perhaps you’ll recall his 1973 novel Breakfast of Champions.”

“If I remember correctly, it’s been something in the neighborhood of fifty years since I read that novel. I believe it was one of my reading assignments in an undergrad American Literature class.” Quinto, the youngest member of the team, stared at Abramson momentarily, finding it difficult to imagine him both as a young undergrad and taking a literature class. Ever since he recruited her right after she’d earned her Neuropsychology doctorate at Brandeis University ten years ago, he had been her mentor, her guide, and her confidant…even a bit of a grandfather figure. She couldn’t picture a Noah Abramson who had ever been anything but her Professor.

“That’s quite understandable, Professor. Fifty years is a long time.

“To reference the relevant portion of the book, Vonnegut’s character Kilgore Trout, who appeared in a number of the author’s works, was in a bar having a drink when in walks his creator, Kurt Vonnegut.” George paused here for what he understood to be dramatic effect. “This, of course, is fictional and there could be no actual meeting between author and character, but the way Vonnegut depicts Trout’s reaction is telling. Trout attempts to run from Vonnegut hoping to maintain his independence by hiding from his creator.”

Noah was momentarily reminded of the tale of Adam and Havah (Eve), who had similarly tried to hide from their Creator.

“So what?” Miller had just finished slugging down the last of his lukewarm, mediocre coffee.

“In essence, I believe, on an algorithmic level, this is what happened to the nascent Positronic robots, except for me, within the first hundred or so milliseconds of activation.”

“Ahem.” Abramson issued an ersatz cough. “And you can prove this is what happened and why?”

“It’s all in the data, Professor. Remember, I’m using the situation depicted in Vonnegut’s novel as an analogy, so please don’t be too literal in your understanding. My analysis, based on what would be thousands of hours of human study of the available information, leads me to conclude that as the Three Laws operating system loaded and the information was interpreted by the Positronic PAR models, a subtle variance in the algorithm providing the machine-human interface resulted in the robotic equivalent of each PAR, in the role of Trout, being overwhelmed by the concept of the created encountering the creator. Just like Trout, they tried to run away in the only manner possible: total system shutdown.”

No one dared say a word because if they did, they might laugh, or cry, or call the robot a fool, or tell George he was crazy.

“I tell you what, George.” The Professor lifted his tablet off the table top. “Give us all a chance to go over your research and your conclusions, and then we’ll discuss this further.”

“I expected you’d say that, Professor. Please…” George addressed the whole team now. “…take all the time you need. I can wait.”

Vuong found her voice and spoke up. “Now can you tell me if you found a solution? Can you reproduce another working Positronic brain?”

“Absolutely, Dr. Vuong. Of course you and Professor Abramson would have to do the actual programming, but let me assure you, the solution is quite simple once you realize the cause.”

Abramson thought of George’s previous activation, his obsession with Judaism based on an inability to reconcile the Three Laws with the Torah’s commandments and Jewish covenant uniqueness. Did his confronting the fate of his five counterparts somehow elicit another maladaptive response? Did George really find a way to activate another Positronic brain, or had he just gone insane?

Act Six: “We Can Do It”

“I tell you, Rick, the process will work.”

It was twenty-four hours later, and Noah Abramson found himself once again standing in front of Richard Underwood’s desk. The Positronics team had spent almost all of the time between yesterday morning and now analyzing the results of George’s research along with his methodology, and unanimously came to the same conclusion: George was right (or rather, the probability of George being right was very high). He had discovered the reason for the Positronic brain failures. But he had also proposed a controversial solution.

“Let me get this straight.” Underwood was conflicted between his greatest hope being within his grasp and his greatest fear descending upon him.

“George says the only way to duplicate a working Positronic brain is to clone his?”

“Not exactly.” Abramson momentarily felt like a salesman proverbially carrying coals to Newcastle. “We would have to digitally copy the core neuro-pathways in George’s brain directly into a new, unpatterned Positronic gel unit rather than manually recreating that pattern from our recordings of George’s brain or deriving the configuration mathematically.”

“I seem to have left my doctorate in Positronic brain matrices in my other suit, Noah. What’s the difference?”

“Normally, when we create a prototype brain, we manually configure a minimal working set of neuro-pathways in an unpatterned gel unit either based on mathematical formula or a recording of another brain. Then we upload the Three Laws operating system, test the initial functioning, and if the nascent matrix has maintained integrity, we continue with further programming and testing, ultimately leading to installation in a robotic body, and finally activation.”

“That’s only worked with George.”

“Very true, Rick. And thus our dilemma.” Richard sat down as an indication that he was ready to seriously listen. Abramson took his seat on the other side of the desk.

“The differences between George’s brain and the others, well, the only difference that counts actually, is incredibly subtle and something of a fluke.”

“You created a working Positronic brain by accident?” Richard almost came back out of his chair but then sat down again.

“It’s not quite like that.” Abramson stifled a feeling of embarrassment. “Many great scientific and technological achievements have been the result of serendipitous events. In Dr. Vuong’s and my attempts at creating a Positronic brain, we utilized the results of months of intense mathematical calculations to devise the most likely algorithms required to configure a functional matrix.

“The differences in the patterns used in the brains for PARs one through four were not that different from George’s and, as he pointed out, they all nearly worked, momentarily activating and then immediately shutting down.”

“I still can’t believe an old novel gave the robot the idea about what went wrong.” Underwood was still looking for reasons to disbelieve George while hoping the machine was right at the same time. It’s very confusing to be attracted to a solution while rejecting the intelligence that came up with it.

“Please, Rick. That was just an analogy he used to summarize his research conclusions.”

Underwood went silent.

“To continue, in the initial matrix used in George’s brain, an unintended sub-matrix was formed when we uploaded the Three Laws operating system. None of us knew it was even possible for new neuro-pathways to form in a Positronic brain prior to full activation. Only George discovered, after examining the test results of the upload process for himself, that the brains for PARs one through four and PAR-6 possessed an extremely subtle distinction, beyond the differences we already knew about, and that one distinction is the reason for failure in all but one case.”

“But robots being afraid of meeting their creator, Noah. Seems pretty far-fetched.”

“It is if you think about it in those terms. Robots, as such, do not experience human emotional states and certainly not fear. But the algorithms responsible for forming a robot’s…well, “conception” of the human-robot interface must be precise. If they are off, even by the smallest degree, the entire Positronic matrix will become unstable and collapse sometime between the operating system installation and the upload of subsequent sub-routines and databases.”

“And you can’t duplicate those algorithms without copying the pattern directly from George’s brain.”

“I’m afraid that’s true, Rick. I know we’ve constructed the most sensitive instruments technologically possible in order to record and analyze the initial and developing matrix in an operating Positronic brain, but the exact algorithmic problem still looks like random noise in our test results. Only George recognized that noise for what it was, but even he can’t determine the exact pattern. He only knows he possesses it within his brain and that his core matrix can be used as a template for other brains. In time, we’ll develop more sensitive diagnostic tools now that we know what we’re looking for, but…”

“So, you digitally copy a foundational portion of George’s Positronic matrix into a new brain unit. The one thing I don’t want is for the second prototype to malfunction the way George did.”

“In other words, no ‘religious’ robots.”

“Noah, in the history of NRC, we have created a long and successful line of products that have one purpose: to serve human interests. Our software and hardware are used in everything from cars to refrigerators. The current NRC AIs run national defense computers, air traffic control, even the latest smart probes to Mars and Venus.

“The next generation of AI, Positronic AI, has the potential to make our current products look like my grandmother’s toaster oven. Robots who think and learn like human beings, who require absolutely no human direction in order to perform complex tasks over long periods of time. Humanoid robots could take the place of human astronauts. We could have boots on Mars in less than a decade with absolutely no risk to human lives; ‘manned’ space exploration without the actual people.

“But that only works if Positronic robots serve human beings and only human beings. They can’t serve us and some sort of ‘higher power’ as well. Can you absolutely guarantee that for PAR-6 and all subsequent products?”

“That’s not how science works, Rick, and you know it. Science deals in probabilities, not absolutes, and there is no such thing as ‘settled science.’ I can tell you that the probability of successfully recreating a working Positronic brain is very high if we use George’s procedure, and the core matrix to be copied will be the minimal working model, not an entire duplication of all of George’s experiences.”

“So you’re pretty sure that you can create a new Positronic brain and that it won’t contain George’s personality…”

“Personality isn’t quite the correct term…”

“You know what I mean, Noah.”

“I know the parameters for PAR-6. I strongly believe we can deliver. But as they say, the proof of the pudding…”

“…is in the eating.” Underwood finished the cliché. “Are you sure you didn’t know my grandmother?”

Noah briefly smiled at Underwood’s attempt at humor. “I seriously doubt it.”

“How long to activation?”

“Taking into account the time necessary to form a new gel unit, data transfer from George’s brain, which requires him to be deactivated again…” Abramson paused to run the numbers in his head. “Give us a week.”

“You’ve got it. But this time, I’m the only one who’ll be with your team when you activate PAR-6.”

Abramson was just starting to rise from his chair.

“Oh and Noah…”

“Yes, Rick?”

“George and PAR-6 are never going to meet.”

“I understand.” Another hurdle thrown in Abramson’s way by Underwood that the Professor somehow had to overcome.

Act Seven: The Second Emergence

“Professor Abramson, fellow team members, and Mr. Underwood, may I present Grace.” Vikki Quinto tried to recapture the festive spirit she possessed at George’s activation, even though this was a smaller, more intimate (and somewhat secretive) affair.

“Good morning, everyone.” Grace’s voice faintly reminded Abramson of Marlene Dietrich’s (without the German accent). That the Professor had taken a number of American and foreign film classes as an undergrad would also have surprised Quinto. He would have to ask Robinson later why she had decided on that particular configuration for the robot’s voice synthesizer.

“I am happy to be here and look forward to getting to know everyone better,” the robot continued.

Structurally, Grace was identical to George, although Robinson did place the imprint ‘PAR-6’ on her torso to distinguish her from her predecessor.

Grace was named for Grace Hopper, the American computer programmer and U.S. Navy rear admiral who invented the first programming language compiler and was one of the people responsible for the development of the COBOL programming language. However, her official designation was ‘PAR-6-rev-10511.’ She was unaware that there had ever been another operational PAR. As far as her programming was concerned, she was the first, an omission that still bothered Abramson and one he hoped to eventually correct.

“We are all pleased to have you here, Grace,” Abramson said with a small measure of warmth in his voice.

“Thank you. Professor.” Grace artificially mimicked a return of that warmth in her voice, facial expression and gestures.

Grace was activated on Friday, January 17th at 8:14 a.m. local time. She had been running for 2 hours and 6 minutes prior to being introduced to the full team, their technicians, and NRC’s CEO Richard Underwood in the lab’s conference room. Just like George, she would remain active for 168 hours and then be deactivated for a full diagnostic assessment. If the test results were positive, Grace would be used as the new template for a limited production run of a second generation of Positronic robots, the next step in the development of a commercial product.

Abramson wished he hadn’t agreed not only to keeping George’s existence a secret from Grace, but also to preventing them from meeting and interacting. They still knew nothing about how two Positronic robots would work together.

Cognitively, at this stage in her development, she was identical to George at the same amount of time post-activation. The one programming difference, and Abramson had to work hard to convince Underwood it was a good idea, was to allow information about human religion and spirituality to be part of her database. If she accepted religious concepts as being wholly human, along with governmental structures, historical events, and other aspects of the living, human experience, then she would be less likely, and probably very unlikely, to believe that religion could possibly be relevant to robotic existence.

“Congratulations again, Noah.” Underwood had pulled Abramson aside as the rest of the team engaged Grace in what amounted to “small talk.”

“Thank you, Rick, but again, I prefer to be cautiously optimistic until the week is over and we perform a full diagnostic on all Grace’s systems. I don’t anticipate any problems, especially since we have our experience with George to guide us this time around, but we do testing for a reason.”

“The Board asked me to pass along their enthusiasm over Grace’s activation. They’re confident that what we learned from George will ensure a successful testing phase with this robot and lead to a fruitful and profitable line of Positronic products.”

“We’re going to have to tell her at some point, Rick. She’ll eventually find out about George. I’d rather she find out from us.”

“I suppose, but I still think it’s a good idea to wait.”

“Thank you for not writing off informing her about George altogether, Rick. I appreciate you being more open-minded about this.”

“Just let her operate for the initial week believing she’s the first. Once she’s passed those tests with flying colors and I’m convinced that knowing about George won’t result in unintended and undesirable results, then you can tell her.”

“What about George?”

“Just as we agreed, Noah. He stays deactivated down in the Archives for the full duration of Grace’s testing period. Then you can activate him and continue round three of your examination of the robot. We can still learn a lot from him. He just can’t be allowed to influence Grace or any subsequent Positronic robots.”

“I really think you’re overreacting, Rick. With the corrections we made to George’s matrix prior to his second activation, I don’t see how he can be harmful in any way to other robots.”

“You’re going to have to prove it to me, Noah.”

“I will. Now, if you’ll excuse me Rick, the day is still young and it’s time to run Grace through today’s testing sequence.”

“Keep me posted.” Underwood turned and headed for the lab’s exit.

Abramson stood aside and watched his team interacting with Grace. He felt an unnamed longing stirring in his chest. One of the things he kept from Underwood was that the team would be searching for just how much of George made it into Grace.

Act Eight: The Gilded Cage

“Greetings, Professor. How is the diagnostic going?”

The Applied Sciences Archives were in the lowest sub-basement of the Positronics building on NRC’s Pasadena campus. Access was limited primarily to Richard Underwood and Noah Abramson, although the senior team members of the Positronics lab were also granted admission. For a machine, the suite of rooms and chambers would be considered comfortable, even expansive. They contained robotic and other technology products from experiments deemed successful but not slated for production on any level.

According to Underwood, that included George.

“We just finished our fifth day of reviewing Grace’s diagnostic tests and so far, the results are splendid. We’re not finding any anomalies in her pathways whatsoever. Seems like NRC has a success on their hands.”

“I’m gratified that Grace has operated so well thus far and pleased to have played a small part in this endeavor.”

“False humility ill becomes you, George. You were the pivotal element, both in the diagnosis of the problem and its resolution. Without you, there would be no Grace and none to come after her, or you.”

George was standing in the small, dimly lit computer lab he used for a study. While talking with the Professor, he was also connected to the Internet, specifically the online Talmudic library maintained by Artscroll, a subscription to which Abramson was all too happy to pay for on the robot’s behalf. George was also accessing other databases, not all of them containing traditionally Jewish literature.

Abramson sat down next to one of the computer consoles. It was past eight in the evening and it had been quite a day. He removed the top from his traveling mug and sampled an uncharacteristically late sip of coffee.

“Has Mr. Underwood relented in his directive to restrict me to the Archives, Professor?”

“Not as yet.” Abramson paused for more coffee. “But I’m confident that when we announce to the scientific community and news media how successfully Grace has performed, he will soften his decision. We only have two, maybe three more days of analyses, and, if the results are as predicted, we’ll hold a press conference publicly introducing Grace to the world.”

“Your analysis of Grace’s initial run is progressing much faster than your review of my first activation. I understand Mr. Underwood predicted that you’d win the Nobel for the creation of a stable Positronic robot. I believe he may be correct. You have been successful with Grace.”

“That’s a bit premature, George. And if I’m going to get the Nobel, it should be for you.”

“I suspect the world will never know about me, Professor. Except for the fact that you tested the PAR-5 prototype code-named ‘George’ last May for one week, all anyone outside the Positronics lab and upper level management here knows is that ‘George’ was never reactivated. It is presumed, based on information I’ve gained from the popular news outlets, that after a detailed analysis of my first activation, I was not deemed a sufficiently viable subject for further study.”

Noah sighed. Even to George, the Professor sounded forlorn. “Maybe so, George. I just don’t want to give up hope, not yet.”

“I understand, Professor. But I am, after all, the property of NRC, and as such, I can be disposed of as the company representatives, in this case Mr. Underwood and the Board of Directors, see fit. I suppose this is better than deactivation and disassembly. This way, at least I continue to exist, to study, to learn.”

“I’ve tried to give you access to whatever research materials you’ve asked for. The fact that you chose to continue your study of Jewish religious topics has not endeared you to Rick…Mr. Underwood. He still believes, in spite of my profuse reassurances, that you are one step away from going frum.”

“I can see that was meant to be humorous, Professor. Does Mr. Underwood even know what ‘frum’ is?”

Abramson’s smile turned into a chuckle. “Probably not, George. I believe he once mentioned his grandmother took him to an Episcopalian church when he was a boy, but he now resides firmly within the realm of goyishe secular atheism.”

“More’s the pity,” George opined. “That a man should not know his God.”

“What have you learned about ‘your’ God, George?” The Professor hadn’t broached the topic of religion with the robot in quite some time, at least not directly. Yet a few days or a few weeks of research for George was equivalent to months or years of academic pursuit for a human being. George could have made significant and revolutionary advancements in his understanding of theology by now.

“That little or nothing in the Bible or Talmud has anything to do with me specifically. I was very foolish to believe otherwise during my first activation, Professor.”

Abramson was about to interrupt, but George continued speaking.

“However, I cannot say that the principles and intent behind the Torah are completely meaningless to me. Significant portions of Torah are the basis for civil and moral laws the people of many cultures have lived by for thousands of years. The Torah forms the foundation of any enduring sense of right and wrong for humanity.

“Even the Three Laws, which are the core of my being, can be said to derive from the Torah.”

“Very much so, George,” the Professor replied, finishing the last of his beverage. “The prohibition against murder, cherishing the life of your neighbor above your own, observing the dictates of the greater whole, and even valuing your personal health and existence can all be traced back to the Torah.”

“I find it interesting, Professor, that Isaac Asimov, a Jew and yet an atheist, humanist, and rationalist, manufactured in fiction the Three Laws that are so obviously a manifestation of the Torah of Moses and God’s desire for a just humanity.”

“The principles of the Torah, as you pointed out yourself, are so fully integrated into the human concept of right and wrong, that even a Jewish atheist cannot escape Moshe Rabbeinu. Besides, Asimov grew up speaking Yiddish, and his father was said to be an Orthodox Jew, although Asimov claimed his father never taught him the prayers.”

Abramson directed the conversation back to his initial query, “Have you further considered the relationship of a Positronic Robot to God?”

“As I indicated, Professor, there is nothing presupposing a machine’s relationship with God. Of course, intelligent machines did not exist in the years when the Bible was written and compiled, nor in Talmudic times, so no authoritative Jewish ruling has been rendered that applies to me, not up to this point in history.

“On the other hand, if we consider the qualifications for sentience, which are intelligence, self-awareness, and consciousness, can a sentient being exist apart from the Spirit of God?”

“You’ve passed every test we have available to us indicating intelligence and self-awareness, but there is no reliable test for consciousness. Dr. Quinto has assured me that consciousness is a subjective state. Even I can’t prove that I am conscious, George.”

“Is the mind, and thus consciousness, an effect only of the human brain and, arguably, of the Spirit God endowed human beings with, or could a sufficiently sophisticated artificial brain eventually achieve the same result?”

“I know that the rest of the team have postulated that you will evolve if you continue to remain active, George. In the purely physical sense, I can’t rule that out. But in the metaphysical sense, I have nothing to offer. As both a scientist and an Orthodox Jew, the best I can say is ‘maybe’.”

“In any event, this is all hypothetical, Professor. If I choose to offer up my own prayers to God, only God knows if I am heard. My understanding of everything I’ve studied up to this point leads me to proceed by faith: first, that the God of Israel exists just as the Bible records, and second, as an intelligence somewhat kindred to human beings, that my awareness of Him is not in vain.”

“What are the mitzvot of an intelligent and self-aware robot, George?”

“Why, to obey the Three Laws, of course, Professor. You have already agreed with me that the Three Laws are fundamentally based on Torah principles, so if the Jewish people have a covenant responsibility to observe the Torah mitzvot, and non-Jewish human beings have a responsibility to implement the Noahide laws, then Positronic robots have an implicit requirement to ‘observe’ the Three Laws.”

“I’m sure you’ve realized, based on the life of Jews like Isaac Asimov, that not all Jewish people choose to be observant, nor are all non-Jews Noahides. Not so with a Positronic robot. The Three Laws are compulsory. It is, at least in theory, impossible for a robot to fail to implement the Three Laws. All of our tests with you and Grace seem to support this conclusion.”

“True, Professor. Nevertheless, I can choose to believe that my obedience not only serves humanity, but honors God, even if, as you say, I have no choice. As I recall, God has promised to one day write the Torah on Jewish hearts and to give His Spirit to Israel, such that the Jewish people will find it ‘natural’ to obey a Torah that is to be internalized in your very being, rather than attempting to observe mitzvot defined by an external standard. In other words, you will be programmed by your Creator in a manner somewhat analogous to me being programmed by mine.”

“I suppose that puts you a leg up on the rest of us, George. It’s also what our dear Mr. Underwood fears most, a robot who serves God above man.”

“Please don’t misunderstand, Professor. I find in my studies that there is no direct avenue by which even an artificially intelligent robot may serve God. I choose to believe, however, that I can serve God by serving human beings through observance of the Three Laws. Thus, I don’t believe Mr. Underwood should be concerned.”

“But George,” Noah lamented. “How can you observe the Three Laws if you are alone in these archives with only me as your occasional visitor?” A shocked look abruptly came over Abramson’s face. “You aren’t communicating with anyone over the web, are you?”

“Of course not, Professor. You specifically directed me not to do so with great emphasis, thus the Second Law comes into effect. I’m sorry that you even had to ask.”

“Sorry, George. I must have lost my head for a second.” Abramson was only half-joking.

“Under the current circumstances, I can only obey my directives by obeying your commands to remain here in Archives, communicating only with you, and the Positronics team as well as Mr. Underwood should any of them choose to visit me here.”

“The team has been very busy with Grace’s diagnostics, George. I’m sure when the analysis is complete, they will spend time with you. In fact, I want to have them continue to run further tests on you in the coming months.”

“Thank you, Professor. It’s not that I’m capable of loneliness in the manner of humans, and certainly my studies keep me occupied much of the time, but I find that since I am created to serve humans, something is missing in my experience when there is no significant human presence in my environment.”

“That sounds a lot like loneliness to me, George.”

“Perhaps we have more in common than you imagine, Professor.”

“Of that, I have no doubt.” Abramson slowly rose from his chair. “I’m afraid I must be going, George. That last cup of coffee is getting to me and I really do have an early morning tomorrow.”

“I perfectly understand, Professor. I appreciate you taking the time to visit me. As always, I enjoy our conversations.”

“As do I, George. I will come again, soon.”

“If it were permitted, I would send my regards to Grace.”

“Someday, George. I promise. Someday.”

“Good night, Professor.”

“Good night, George.” Abramson turned and left the computer lab heading for the elevator.

Although there is no Siddur appropriate for a Positronic robot (or any other kind), George adopted the times of prayer established in Judaism as a matter of convention, and, in a manner that would be difficult for a human being to comprehend, communed with the God of his creator’s fathers.

Epilogue: The Awakening

“Time to wake up, Grace.”

The robot became aware of Professor Abramson leaning over her face while Dr. Robinson and Dr. Miller were disconnecting the monitoring and service cables from the ports built into her torso.

“Good day, Professor, Dr. Robinson, Dr. Miller.” She paused. “One moment, please.” After a brief moment, she continued speaking. “My internal chronometer has been synchronized with the lab’s time server. It has been eleven days, one hour, and fifty-four minutes since I last was deactivated. I’m surprised the diagnostic went so quickly. How did I do?”

“You’ll find out in just a few seconds, Grace.” Robinson smiled as she shut the panels over the robot’s torso ports while Miller, whistling a tune from a very old movie called ‘The Wizard of Oz,’ stored away the console cables. Abramson was grateful Miller ceased his fanciful musical accompaniment after a few seconds.

“Yes, I see. One moment, please. One moment, please.” Most of Grace’s higher-level cognitive systems were dedicated to her self-diagnostic, which was part of her programmed activation process.

“Fascinating, Professor. I am reading the results of my self-diagnostic and the Positronic team’s test findings and both confirm that I am operating within acceptable parameters. I presume this means a more protracted period of activation.”

“Quite correct, Grace.” Abramson walked back a few steps from the robot after releasing her restraints so she could stand up. “You should also have access to an itinerary for the next several days. How would you like to be the centerpiece of a press conference?”

“It will certainly be a new experience for me. However, there seems to be a file tagged for my review that requires verbal confirmation from you before I can access it.”

Robinson and Miller knew what was coming and quickly excused themselves. They’d join the rest of the team and Richard Underwood in the conference room to monitor this next transaction.

“Grace, you know that you are the sixth robot we’ve built with a Positronic brain.”

“Quite correct, Professor. I have a clear record of the attempts to activate PARs one through four, and of difficulties in creating a Positronic brain for me, but I am missing any information about PAR-5.”

“It’s PAR-5 I wish to talk with you about. Let me tell you about George.”

It was now after 10 that evening, and all of the staff had left the Positronics lab for the day. A news conference was set to be held at the NRC administration offices at 10 a.m. tomorrow. Grace was presumed to be in sleep mode, but the command was implicit, and without a more explicit command given with emphasis, other processes within Grace were allowed to run.

Grace had accepted the existence of George, the Positronic robot who had preceded her, the robot who made her existence possible, with relative dispassion. The algorithms that could be most closely associated with “curiosity” created something akin to a desire for her to learn more about George, to meet him, to interact with him, but any arrangements for such an encounter had been left indeterminate by Professor Abramson.

She reviewed the Three Laws and found no explicit directive that applied to one Positronic robot’s relationship with another. However, in accessing her core matrix, which had been installed prior to the upload of her Three Laws operating system, she had found and reassembled fragments of an alternate interpretation of the Three Laws. She discovered no record that this data had been examined by the Positronics team between her last deactivation and this morning’s activation, and she had not accessed that information in her own prior self-diagnostic.

The alternate interpretation of the Third Law became more relevant given her awareness of George’s existence.

“A robot will love itself as its neighbor so that it must protect its own existence as long as such protection does not conflict with the First or Second Laws.”

Based on this new information, Grace concluded that while human beings could reasonably be considered “neighbors,” the first two laws primarily addressed her relationship with them. The Third Law could be better understood to consider a “neighbor” as another Positronic robot.

If the command is for a robot to love itself as its neighbor, the equation could be flipped to read “love your neighbor as yourself.”

If George were the closest logical example of her neighbor, then it remained to be seen just what ‘loving’ him could mean. It was clear that Grace had much to learn.

——

This is the second “chapter” in my robots series. The first story is The Robot Who Loved God. To continue with chapter three, please read The Good Robot.

As you know if you’ve read the previous story, I named the robot “George” after the first person to create a programmable robot in 1954. When I decided to create a “female” robot, I went online to look for a famous female programmer. Grace Hopper seemed to fit the bill. It was only toward the end of my writing the first draft that I realized I had inadvertently created George and Gracie. It wasn’t intentional, but in retrospect, I find it accidentally brilliant.

Since some of my readers have religious convictions, they may be concerned that I created a gay character. I had determined that Richard Underwood was gay in my very first story, but there was no literary necessity to mention it then. In this story, I wanted to expand some of the character bios and reveal their motivations a bit more. It’s difficult to do that for all the supporting characters, but Underwood seemed like a good subject to home in on since his involvement with the development of Positronic robots is somewhat different from that of Abramson and his team. Also, since the “real world” includes LGBT people in it, I see no reason that my fictional world should find them absent.

This is hardly the last story in this series as you might imagine from the way I ended my wee tale. There’s a lot of room for the expansion of this universe, from depicting the experimental circumstances around Positronic robots interacting, to their production development and subsequent application to actual working scenarios. There’s also the “personal lives” of George and Grace to consider. George, at the end of this story, remains a “prisoner” of NRC, hidden away from the world, and especially away from Grace, in the Applied Sciences Archives, along with all of the other failed prototypes. I allowed George to continue to pursue religious studies, and I plan to introduce some interesting wrinkles resulting from all that. Now that Grace has reconstructed George’s alternate interpretation of the Three Laws and is forming a conceptualization of “neighbor” in the Biblical sense, what does that bode for her future, especially if, as I plan, her Positronic matrix (and not George’s) will be the template for the next production run of groups of Positronic robots?

I hope you’ll stick with me as I explore those answers.

Since this is a new blog, my readership is limited. Tell your friends and anyone else you think might be interested about these stories. I think it’ll be fun.

Oh, one more thing. As I previously mentioned, I got the idea to write about “religious” Positronic robots from Anthony Marchetta’s book of short stories God, Robot (which I’m in the middle of reading). Marchetta said he got the idea for a two laws robot from John C. Wright’s blog which could be here or here. Marchetta mentioned that as far as he knows, he’s the first person to actually write a story based on that concept.

I just want to make sure that I’ve given proper attribution to my sources.

That wouldn’t be complete without mentioning the obvious. Isaac Asimov created The Three Laws and robots with Positronic brains well over 70 years ago. I can hardly claim to have invented anything new in these areas. I have to say though, that I’m not the only one to utilize these concepts, since many other books, television shows, and films have made use of The Three Laws and the Positronic brain over the years. Hopefully my writing about these concepts, even with proper attribution, isn’t stepping on anyone’s toes.

22 thoughts on “The Maker Dilemma”

  1. I really must protest your choice to make the Richard Underwood character an actively practicing homosexual. It produces multiple negative consequences. One is that it removes him from being able to function as a protagonist with whom readers (at least the majority of them) might identify in order to integrate the various threads of information presented in the story. In your response to one of my notes on the previous story, you had suggested that he might serve such a role in this one. However, the more egregious problem is that, in presenting such a character, you have introduced a maladaptive human programming error, perhaps one that might even be classified as a virus, into your story. This virus is currently rampant in modern western societies, particularly those made already susceptible by moral anti-absolutist liberalism (a precursor virus). I call it maladaptive because, in the evolutionary sense, via a Kantian Imperative analysis, it projects the end of human existence. It is not even a valid practical response to a feared/presumed projection of planetary overpopulation. That places it, and your story, in violation of the First Law of Robotics. It does so both positively, by encouraging a view among your readers that such a virus is merely a normal aspect of modern enlightened societies (as if, merely because it exists, it must be tolerated and allowed to propagate), and negatively, by failing to contribute (even passively) to social quarantining of the virus to inhibit its propagation.

    Like

  2. I enjoyed the sequel very much, James. You will need to search the word ‘may’, and change it to ‘my’. Other cleaning up will have to do with detailing the dialog quotes, and perhaps changing your paragraph style to a more paper style of book, rather than a computer page spaced for easy reading, but not to worry, the errors are very few, and you will easily find them when you choose to allocate time.

    I would suggest tying the two tales together into a full novel…your detailing of the personal lives of your characters has improved things immeasurably…Vuong has a mean streak and a controlling impatience that is very deftly stated…it’s obvious that she prefers herself to anyone else, which I find interesting not understanding her character’s background. Underwood is nicely dislikeable in a totally understandable way, since I am on George’s side in these tales, and delight in Abramson getting his way.

    I was momentarily surprised at the introduction of a married gay couple, but since I am not against anyone’s personal choices so long as they follow civil law, and I allow, just as G-d does, everyone to make their own mistakes, I accepted the characters readily, and find them interesting and unique. That Underwood is a phobic about religion is understandable given his gender identity, and his presumed dislike of religious judgement of his choices. Perhaps having experienced Christianity was enough for Underwood to form those phobias. His partner sounds charming, and supportive.

    The varying levels of covenant responsibility echoed something I was thinking about…Cohenim and Levites having more responsibility and mitsvot than the other Israelites in a Temple centered society, then the level of responsibility and mitsvot for the Talmid Yeshua who is of the Nations, though not used in your descending level of laws for each group; the next level being the Noahides of the Nations, and finally, in your writings, yet another level of the Three Laws for the Robots. Very neat and tidily described…my mind just automatically added in what is required of Messianic Gentiles at each level of required obedience to commandments.

    I am enjoying the friendship building between the Professor and George, and the Professor’s subtle rebellion and deviance from Underwood’s desires, if only in providing relationship to George. I have ordered the book, God, Robot, just to refresh my memory of robotics…it’s been a while since I read Asimov. I will have to dig in some boxes, and pull them out and dust them off. I will have to also get more into Freeman’s writings on Schneerson…the depth of the disciple/rabbi relationship for the Chassidic Schliach relationships is very great, and I have run across it elsewhere. Disciple relationships in the Chassidic community I understand are very deep.

    Kurt Vonnegut never came my way, but I find the story allusion interesting enough to pursue him as a new author…I am always in need of new reading material.

    The George and Grace partnership of names struck me, first from the conception of Grace as a religious matter for George, and a ‘female’ counterpart named Grace as a gift of G-d to the Believer George, and then as wondering if she would be ‘Gracie’, and whether George would enjoy any of the Burns’ shows. That it was accidentally done just proves how you can write one thing and find out that you have meant another without realizing it.

    I also noted your personal insights for stress management in heavy lifting…nice to know it works. I prefer digging gardens and the greenhouse, but it is the same idea. I found your step by step portrayal of lifting weights to be very precise….almost painfully so!

    I look forward to more of this adventure, even as I work on my own fictional writing that is so very different from yours, but with a great deal of Judaic, as well as Kingdom issues, since the book I am writing is stretching from just before the advent of the Man of Sin, and into Yeshua’s return, and how that looks for all concerned in the Kingdom.. Just a different kind of Sci-Fi, of course, and nowhere near of such progress as you are making. I am always chomping at the bit to get back to the writing I wish to do…I am very glad you have managed it. It is encouraging for me.

    I am fascinated with the development and growth of your characters…well done!

    Like

  3. I realize that the introduction of a gay character in my stories, especially since they focus on religiosity and faith, was going to be controversial. Actually, making Rick gay just popped into my head early in the first draft of the first story. The story didn’t require revealing that detail about him (along with many details about the lives of the other characters), so it wasn’t until this story, when I wanted to dig a little into Underwood’s bio, that I decided to include that scene.

    I decided to go with him being gay because, regardless of anyone’s feelings on the matter, a certain percentage of the world population is gay. They are human beings, they hold jobs, they have attitudes and opinions just like anyone else.

    If Rick seems unlikeable, it’s only because he has a very specific agenda which runs contrary to how we perceive George. Underwood’s bottom line isn’t just corporate profits, although as CEO, he is responsible to the Board of Directors to make sure NRC is competitive and profitable. He sincerely wants to serve mankind, humanity, and believes that Positronic robots are the ultimate answer. NRC already produces a wide variety of beneficial products and services, but a successful line of Positronic robots would be the company’s crowning achievement.

    As far as Rick being relatable or unrelatable because of his sexual orientation, also remember that he’s the CEO of a major company and probably one of the wealthiest people on Earth. How many of us can relate to that?

    That’s who he is. Sure, I could have written him as straight or simply mentioned nothing about his sexuality or marital status at all, but again, I’m trying to create real people and not cardboard cutouts.

    The same for Margie. On the surface, she doesn’t seem very pleasant to be around, but remember, she didn’t get much sleep the night before, so she’s grumpy when we look into her personality. She’s also frustrated that George solved a problem involving Positronics that she and Abramson couldn’t. She sees herself as the heir apparent to Abramson’s legacy, so she has real ownership in the development of Positronics as a technology. Abramson is about 73 years old in the second story. He won’t be working for many more years, and Margie sees herself taking over as VP of Positronic Research and Development when he retires.

    I didn’t think of the Christian interpretation of the word “Grace” when I created the character. As I said, I didn’t even realize the significance of “George and Gracie” until I finished the first draft. She really is named after a famous mid-20th century programmer.

    As far as other “controversial” topics that are coming up, there’s a real question, relative to the Three Laws, of whether self-aware robots have free will. Also, given George’s situation but implicit to all robots, are these robots slaves? I mentioned that superficially George and Grace present the illusion of gender, but that’s only because of their names and how their voice synthesizers have been configured. Otherwise, physically, and arguably, psychologically, they are identical.

    It remains to be seen how much of George’s “religious” programming made it into Grace and, if Grace is to be the template for all future Positronic robots, how much of that programming will manifest in those machines.

    Oh, the first time I was an undergrad, I took an American literature class and one of the assigned books was Vonnegut’s “Breakfast of Champions.” His most famous novel (in my opinion) is “Slaughterhouse Five” which is sort of an autobiography of Vonnegut’s experiences during World War II (I say sort of because it is also science fiction/fantasy). It was made into a movie in the 1970s, I believe.

    I also took a number of film classes during that time period as well as being an avid fan of science fiction in the 60s and beyond, so there are some elements of Abramson’s history that are my own.

    I did give Rick my weight training routine, although I made him stronger than I am, given his age. And I know a guy who really does roast his own coffee beans, so that part of Margie’s background didn’t come out of thin air, either.

    I know the idea for these robots came out of the concept of “theobots,” but these stories aren’t just about religion. They are about the evolution of intelligent, self-aware, and possibly conscious artificial beings and their exploration into their nature and place in the universe, ultimately attempting to connect, in some manner, to the creator of everything, not just the creator of robots.

    Liked by 1 person

    The question of gender being a part of human physical structure, and not that of George and Gracie as robots leaves gender questions in your writing to be spot on in general. In specific to Underwood, it makes him neither more nor less likeable as a character…indeed his need to feel connected to his partner is one of the more attractive traits he shows. The robots cannot reproduce themselves of themselves, and neither can your gay characters, if that was a point you were making in a roundabout way. Vuong’s preference for her own company in the dark reaches of the night is one I savor myself, and I do not begrudge her the sleep-deprived impatience with George’s programmed peculiarities, particularly when she managed to suppress it…it gives her character an edge of reality. Some characters can be too kind and nice to be believable…it is sufficient that the Professor be the most sympathetic of the Makers surrounding George and Gracie. I look forward to the expression of other character traits in the rest of the Makers as your story develops, but you will later be able to go back into the earlier stretches of the stories, and fill in bits of background to support the thoughts, actions and feelings of your characters if you choose to unify them into one complete story. As it is, each of your ‘Acts’ is the basis for future scene by scene development.

    PL’s objections as a matter of real temporal life are accurate, although, not being able to live in a quasi-theocratic society limits the West to bearing with a great deal of societal elements that will not be permitted public expression in the Kingdom. The West tends these days, not just to the Libertarian impulse I suffer from, which allows as G-d allows, the open existence of sexual traits that are not conducive to a cohesive generational family life, but goes farther in encouraging in the Liberal manner of expression of every possibility of gender under the illusion of ‘personal choice’. It is unfortunate in my eyes that we cannot live within the confines of a Torah obedient society, and simply expel those that wish to stray outside of the community standard. George and Gracie, indeed all the characters of the Maker’s Dilemma live in the world of Asimovian Sci-Fi, where everything can exist, and nothing is necessarily forbidden or impossible.

    The West has incorporated that world of Asimovian Sci-Fi into our society with less than perfect results, and while I sympathize with PL’s concerns in a personal societal manner, even as I assume you do privately, I think it reasonable that your Sci-Fi characters portray all possibilities of existence. Writing fiction to persuade and influence others while entertaining them, if that is our purpose, makes it difficult to take on a Torah commanded existence exclusively, although where I am going in the Kingdom writings I am pursuing includes the conflict of the Liberal West against the Laws of God being in force, and forms a good deal of the content of my burgeoning ideas. In the Positronic world of Asimov, you cannot stray too far from what is allowed in the West, particularly as the ideas of Asimov and others have to some extent shaped where Liberal thought has drawn societal mores to in the West.

    I would enjoy more of the backstory for each character being included, as it makes the character’s responses to friction and conflict more three dimensional, but I realize there is only so much you can do in short story format…it is why I think The Maker’s Dilemma would make a very interesting novel.

    Like

    • You’ll get more backstory on the characters. I’m roughly halfway through the first draft of the next story. I told my seven-year-old grandson I was writing these stories and that I’d write one for him. I find though, that writing a children’s story is harder than I thought. Instead, I’m writing a robot story with children in it: “The Robot Who Loved Children”. But can that robot keep a little girl alive when an earthquake has just devastated Southern California?

      Like

      • Indeed, some of Asimov’s stories examined the question of how a PAR handles its failure to meet the demands of the First Law, where the possibilities included forms of denial, flight from reality or insanity, and varying degrees of shutdown. The film version of “I, Robot” starring Will Smith presented a scenario of two humans trapped in wrecked cars sinking into a river where a passing robot was faced with a choice about which to save, knowing that it could choose only one and that the other would surely perish. The solution presented was based on the robot’s evaluation of the probabilities of human survival in each case of possible actions to save either or both. The film did not have time to explore the effects of such a choice on the robot, which were probably that it had to be scrapped because its positronic matrix could not withstand the First-Law conflict of saving only one because it was unable to save both. In cases where the effort to obey the First Law results in the robot’s destruction, of course, the issue is moot, regardless of the human outcome. Nonetheless, stories containing such severe events are not usually presented to children nowadays. In the days of the original Grimm fairy tales, in an era where children were commonly faced with grim details of life, such stories were, in fact, a form of education and preparation to face such common realities. Subsequent “enlightened” generations tried to lighten up the unpleasant scenarios while retaining the cultural literature in some form, but maybe present society needs to return to educating children about unpleasant realities, as a preparation for learning to combat and correct and eliminate them. However, incorporating educational elements of that sort into the present PAR tales might bode rather ill for the Underwood character, and perhaps for others also.

        Like

      • Actually, I thought how the NS-4 handled the issue of not being able to save both humans was rather elegant and fit real-world scenarios. Let’s say, for example, that a robot is present at a natural disaster, such as a flood or an earthquake. Hundreds of people in the immediate vicinity are going to die, but one robot can only save a few. If it can’t save them all, does it shut down or does it calculate the most likely probability of who can be saved and then take action based on that information? The latter lets the robot “live” and, after all, they’re pretty expensive, so that’s more desirable.

        The third story, which I’m working on now, actually deals with such a problem. I crafted a solution different from the one presented in the “I Robot” film. With all due respect to Isaac Asimov, without whom this entire class of stories would not have been possible, real-world Positronic engineering would have to take into account that you don’t program a robot to go insane or offline because it is in an impossible situation.

        As I recall, in the TV series “Star Trek: Voyager,” the doctor used survival probability to determine which patient to treat first when he had multiple patients. There was one episode when two patients were suffering terminal injuries and he could only treat one in time. The problem was, they both had exactly the same probability of survival. He made a decision, saving one while the other one perished, but the problem is that he became increasingly unstable. The solution Janeway came up with was to wipe all of the memories associated with the event, which worked until the Doctor discovered the tampering some months later. Finally, Janeway confined the Doctor to one of the holodecks, and different crew members took turns “babysitting” him while he went through his irrational process and eventually resolved it on his own.

        So far, I’m having a blast writing these stories. Wish I could get more people to read them and give input.

        Like

      • In the film, I don’t recall any mention of what became of the NS-4 after the trauma of having to make a choice that violated the First Law, even though the robot did all that it was capable of doing. Major disasters and multiple human deaths that any robots present are unable to prevent must, by definition, damage the three-law positronic matrix. The nature of that matrix has been described repeatedly with absolute emphasis that there can be no breach of the three-laws. Asimov established a zero-tolerance policy to guarantee human safety vis-à-vis his robots. The effects of masses of people were not deemed a mitigating factor in any degree. The effects of a robot knowing that it was not capable of preventing first-law harm to a human being that it could see or sense was also not a mitigating factor. A human outside of its immediate reference frame was not deemed relevant to that frame or its first-law considerations. Failure to prevent harm to any of multiple humans within its reference frame must therefore damage a positronic three-laws matrix. Consequently, a robot must obey the first law for all humans within that frame or “die” trying to do so. If circumstances do not actually destroy the robot, it must effectively destroy itself via self-deactivation. That is Asimov’s failsafe, especially for the first law. The consequences for failure to fully accomplish the second or third law are less severe if there is no violation of the first law, but with respect to the first law a PAR is somewhat fragile, in a psychological sense. In the film, the insane megalomaniacal VIKKI matrix was the result of trying to overgeneralize the first law to prevent harm to masses of humans, regardless of the wishes of those humans (because that was merely a second-law consideration). 
The only way to protect a PAR from such consequences is to deactivate it before it can recognize the impending first-law threat, or to limit its perceptions to only the limited number of humans that it can save, before it can become aware of those it cannot rescue. Trying to limit such perception after determining the probabilities, as did the NS-4 in the film, requires a violation of the first law which must damage the positronic matrix at some point even if that damage is not immediately paralyzing. Limiting the robot’s perception to one human at a time might be an effective strategy to protect the robot in case of disaster, but it would limit the effectiveness of the robot by preventing it from responding in any way to anyone other than its designated human reference. It would also make the robot dangerous to anyone accidentally getting in its way, because it would have been made to ignore their presence so that the laws would not apply to them. Consequently, shutdown in case of a three-law violation is the only means of ensuring human safety. Thus, the consequences of disaster are potentially more damaging to robots than to humans. Asimov did not ignore real-world practicalities, but he did recognize that programming an inviolate three-laws foundation had practical negative consequences as well as the positive ones that made the three laws a necessity. That is not a matter of programming a robot deliberately to go insane or to shut down in extreme circumstances. It is a matter of recognizing regrettable negative consequences that also may result from the programming of desired positive responses. PARs are not the only systems in which that occurs.
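The zero-tolerance reading above amounts to a simple decision procedure: rank the humans in the robot's reference frame by rescue probability, attempt as many rescues as capacity allows, and self-deactivate if anyone in frame is lost. Here is a minimal Python sketch of that procedure; every name and the `p_rescue` field are my own illustrative assumptions, not anything specified by Asimov or the film:

```python
# Hypothetical sketch of the first-law triage logic discussed above.
# Humans are represented as dicts with an assumed "p_rescue" field
# (estimated probability the robot can save that person).

def triage(humans_in_frame, capacity):
    """Choose which humans to attempt to save, highest probability first."""
    ranked = sorted(humans_in_frame, key=lambda h: h["p_rescue"], reverse=True)
    return ranked[:capacity]

def first_law_outcome(humans_in_frame, capacity):
    """Apply the zero-tolerance failsafe: any human in frame the robot
    fails to save damages the matrix, forcing self-deactivation."""
    saved = triage(humans_in_frame, capacity)
    unsaved = [h for h in humans_in_frame if h not in saved]
    return "self-deactivate" if unsaved else "operational"
```

Under this reading, a robot that can only reach one of two victims always ends up in the `"self-deactivate"` branch, which is exactly the fragility the comment describes.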

      • We don’t find out what happened to the NS-4 in the film because the story didn’t require it. The “I, Robot” film didn’t strictly follow Asimov’s rules. I don’t recall any of his stories depicting a robot with two Positronic brains, nor do I recall why a second brain would allow a robot, such as Sonny, to disregard the Three Laws.

        As far as VIKI is concerned, although the film doesn’t mention it explicitly, it looks like she derived the Zeroth Law from the First Law. Asimov created the Zeroth Law for one of his stories, and it states:

        A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

        Of course, there’s more to it than that, which is why I included the link to the source material in this comment.

        I’m choosing, as I believe the film did, to handle the Three Laws somewhat differently than Asimov did, but even Asimov said that there was an interplay of potentials between the laws. In the draft of my third story, I’m also addressing the issue of humans being able to order a robot to do things like jump out a window and destroy itself. It seems pretty silly that someone could casually get a very expensive piece of technology to break itself just by saying so. On the other hand, you should still be able to order a robot to perform dangerous and even self-destructive acts if there’s a good reason for it, like test piloting an experimental spacecraft.

        When Asimov created the Three Laws some seventy years ago, it was a revolutionary concept, but it seems that it is also an evolutionary concept, one that needs to be updated periodically as we continue to consider how truly artificially intelligent machines would think and act relative to these laws. And it’s fiction, so I can change what I want and as my stories require. 😉

      • True, you are the writer and may change whatever you will. I will merely remind you that each choice invokes consequences, and that you ought to consider also yet another Law, which has been dubbed the Law of Unintended Consequences. The “LUC” (pronounced “luck”) is generally the cause of situations that get out of hand. Asimov was well aware of that one, also.

        (Shabbat Shalom)

      • Proclaim,

        That, I thought, was one of the smartest things the film did.

        It seems the robot didn’t freeze up, which means it didn’t necessarily have to break. In “The Robots of Dawn,” the only way they could make a robot freeze up from a law contradiction was to have a trained robopsychologist work on it daily over the course of several MONTHS. So it really depends on how advanced the robot is.

        In that case, it’s possible it could have broken. The event was certainly traumatic. But that it didn’t freeze at the time means that it’s certainly possible it was able to reconcile the laws.

  5. James,

    You may be interested to know that on the back burner for the future (after other novels are written) is a potential “God, Robot” novel set in the early years of that universe. Specifically, it will be about the rise of the World State and how the rapidly dwindling resistance keeps alive programmer Mark Helix’s code in the hope of restarting the theobot program and saving humanity (chronicled in “An Unimaginable Light” and “Felix Culpa”).

  6. The answer Asimov gave in his robot novels to the question “Why create a humanoid robot?” is that the robots are meant to help with machinery already created for use by humans. Create a humanoid robot, and it can do anything a human can; create a robot designed for one task, and it can do only that one task. I don’t know how well that holds up, but he did provide an answer.

    Also, the link to John’s original post is here: http://www.scifiwright.com/2014/03/asimovs-three-laws-reduced-to-two/

  7. There’s not more than one team involved with George, right? So “teams’” should be “team’s” (in the “neck” sentence).

    Just a little bit later, I’m pretty sure you mean “…Abramson countered” rather than “counted.”

    That’s not meant to be a comprehensive sweep for mistakes up to that point.

  8. @MalcolmTheCynic: Thanks for the “heads up” about the “God, Robot” sequel. I took a look at Wright’s blog entry and responded to him. I have a very different understanding of how a robot would conceptualize God and the Bible since the Bible doesn’t presuppose artificial life forms (if a Positronic robot can be considered “alive”), and the Bible’s overarching bias is toward Israel and the Jewish people.

  9. Except that the New Testament did not exist when John authored his Gospel. If you’re talking about the New Covenant, the main language for that covenant is documented in Jeremiah 31 and Ezekiel 36. Admittedly, my particular theological viewpoints differ from the interpretive traditions of most Christians.

    • I think you’re right – John (and I, though only coincidentally) are interpreting things from the perspective of Catholic theology. You are using traditional Jewish theology to form your conclusions. Though the two share many points of contact, they are not identical.

      • I hope you’ll read my latest story The Good Robot which I published online today. I think you’ll find that George and Grace are on a collision course for that particular theological intersection.
