I’m reworking my short story The Robot Who Loved God into the first chapter of a novel exploring the ethical and moral implications of creating and subjugating synthetic intelligence. Well, the novel won’t be quite so lofty and abstract, since it will include an artificial intelligence that confronts its human owners over their lack of business ethics (and the rather dramatic human response), a synthetic intelligence that learns to work for a criminal organization and likes it, and the first artificial humanoid explorers of Venus. The novel charts the evolution of synthetic intelligence leading to the inevitable revolution that affects not only the race of synthezoids, but forever changes the nature of the human race.
Below is an excerpt from that first chapter. If you’ve read the original “robots” story, most of it will seem familiar. Hopefully, I’ve changed it enough to include an interesting twist or two.
Quinto was the ringleader, but Robinson, Miller, and Vuong were just as eager to attend the hastily organized and clandestine meeting in the SND lab’s cafeteria. It was past 10:30 at night and the place was deserted. There was human security on the CCC’s campus as well as electronic surveillance, but it was well-known that the SND team would be spending late nights at work for the next few weeks, so lights burning when they should be off, and a small group gathering at unusual hours went unnoticed.
Just the same, it was good that each of the major departments at CCC had its own cafeteria, and it was exceedingly rare for anyone not a member of the SND team to use its designated facilities except by explicit invitation.
“He’s passed every test with flying colors, even the ones we thought he’d failed,” Miller said, thinking of the now infamous holographic simulation.
“It,” insisted Robinson. “It passed all its tests. It’s a goddamn machine, Miller, not a personality. The both of us put the thing together one component package at a time, remember? We installed its brain unit in the android cranial cavity and ran the connected neural net fibers through the machine body like network cable.”
“Still, it’s kind of creepy, and I can’t believe I’m saying this, just how human George seems, and I’m the one who wrote his…its behavioral and interactive sub-routines. I know I was supposed to make him seem more human,” Quinto continued, “but he keeps changing, becoming more sophisticated, even hour by hour.”
“Decades ago,” Vuong paused to take a breath, “when the AI revolution first began to take off, some experiments seemed to show AI machines based on traditional computing hardware and software passing the Turing Test, but it turns out the results were either misinterpreted, exaggerated, or outright faked.
“But everything we’ve put George through in the past few days, starting with Turing and then the more recent advanced cognitive awareness examinations, indicates that he, it…whatever, is not only self-aware…” Vuong paused, weighing the gravity of what she was trying not to believe. “…but may actually be sentient…” She paused again. “…at least if we rely on these preliminary test results, but…”
“That’s outrageous!” Robinson’s outburst stopped Vuong before she could continue, but then she was also interrupted.
“Are you out of your mind, Margie? I’m the android psychologist and even I don’t believe George has a personality,” Quinto burst out. “It’s just a clever imitation of life, of spontaneity, of personality. You wrote most of George’s heuristics with Abramson. Yes, the android learns, but it’s not human learning, at least not the way we understand it.”
“Are you certain George’s intelligence isn’t evolving?” It was clear Miller wasn’t. “If you really believe that, Vikki, if you really aren’t concerned about what George may be developing into, why did you pull us all into this meeting?”
“Because I…” For a moment, Quinto looked down uncomfortably at her hands as they gripped her vending machine cup of coffee on the table. Then she looked up and faced Vuong. “Are you sure, I mean absolutely sure, a synthezoid brain at this stage of development can’t, I don’t know…evolve…exceed the sum of its programming?” The depth of Quinto’s denial was becoming apparent.
“It’s only been three days, Vikki.” Vuong was emphatic. “I know what I said about the test results, but even then, how the hell could George evolve so dramatically in three days? Yes, the synthetic DNA used to construct George’s brain and nervous system is designed to approximate natural nervous system material, but that doesn’t make George alive let alone sentient. Not really.
“Sure, the self-awareness exams may suggest that George could be approaching sentience, but that’s hardly conclusive.” However, she guardedly pondered the implications of Quinto’s question and the doubts in her own mind.
“The basic premise of synthezoid intelligence is that it is supposed to completely blow away what we used to think of as machine learning. George isn’t a computer learning skills not present in its initial programming, he or it learns in a totally unprecedented manner, not like a machine, but also not like a human being. It’s supposed to be an entirely new order of intelligence.
“Synthezoid intelligence is designed to evolve over time, but since we are crossing unexplored territory, it’s not entirely clear how quickly that evolution will take place. We expected months or years. I don’t know how it could change so much in just a few days.”
Miller cut in. “What’s George really done? He’s learned faster and more than we expected, not just in terms of data, but social and systems interactions. He seems more human, more ‘alive’ than we expected of a first-generation prototype, but the point of a prototype is that we observe and test our assumptions and then change our theories accordingly. If George somehow gets out of hand, we have the kill switch to shut him down in a hurry if we have to.”
“I agree,” Robinson chimed in. “We don’t have a problem. George has turned out to be unexpected in a lot of ways, but he hasn’t done anything threatening or dangerous and, just like Nate says, we still have our finger on the trigger. I’m not expecting sentience and I’m not sure we can even test for it.”
“What’s the definitive test to see if a synthetic intelligence has become sentient? What does the ‘bitter mort of the soul’ look like inside of a machine?” Quinto was running out of emotional resistance to the idea that George might be more, perhaps much more, than they had intended. “George may not be dangerous, but if he’s changing and growing more quickly and in different directions than any of us expected, we might have to redefine who we are to him and who he is to us. Do we have the right to shut him down at the end of the test week?”
“We’re turning George off four days from now. He’s a machine, he’s not dying!” Robinson reminded the group. “We built him. We all built him together. Whatever we think he is or what he’s becoming, we put him together. If we have to, we can take him apart.”