Excerpt from “The Android Who Loved God”

I’m reworking my short story The Robot Who Loved God into the first chapter of a novel exploring the ethical and moral implications of creating and subjugating synthetic intelligence. Well, the novel won’t be quite so lofty and abstract, since it will include an artificial intelligence that confronts its human owners over their lack of business ethics (and the rather dramatic human response), a synthetic intelligence that learns to work for a criminal organization and likes it, and the first artificial humanoid explorers of Venus. The novel charts the evolution of synthetic intelligence leading to the inevitable revolution that affects not only the race of synthezoids, but forever changes the nature of the human race.

Below is an excerpt from that first chapter. If you’ve read the original “robots” story, most of it will seem familiar. Hopefully, I’ve changed it enough to include an interesting twist or two.

Mike Farrell as Jerry Robinson on the set of Gene Roddenberry’s “The Questor Tapes” (1974)

Quinto was the ringleader, but Robinson, Miller, and Vuong were just as eager to attend the hastily organized, clandestine meeting in the SND lab’s cafeteria. It was past 10:30 at night and the place was deserted. The CCC campus had both human security and electronic surveillance, but it was well known that the SND team would be spending late nights at work for the next few weeks, so lights burning when they should have been off and a small group gathering at unusual hours went unnoticed.

Just the same, it was good that each of the major departments at CCC had its own cafeteria, and it was rare for anyone who wasn’t a member of the SND team to use its designated facilities except by explicit invitation.

“He’s passed every test with flying colors, even the ones we thought he failed,” Miller said, thinking of the now infamous holographic simulation.

“It,” insisted Robinson. “It passed all its tests. It’s a goddamn machine, Miller, not a personality. The both of us put the thing together one component package at a time, remember? We installed its brain unit in the android cranial cavity and ran the connected neural net fibers through the machine body like network cable.”

“Still, it’s kind of creepy, and I can’t believe I’m saying this, just how human George seems, and I’m the one who wrote his…its behavioral and interactive sub-routines. I know I was supposed to make him seem more human,” Quinto continued, “but he keeps changing, becoming more sophisticated, even hour by hour.”

“Decades ago,” Vuong paused to take a breath, “when the AI revolution first began to take off, some experiments seemed to show AI machines based on traditional computing hardware and software passing the Turing Test, but it turned out the results were either misinterpreted, exaggerated, or outright faked.

“But everything we’ve put George through in the past few days, starting with Turing and then the more recent advanced cognitive awareness examinations, indicates that he, it…whatever, is not only self-aware…” Vuong paused, weighing the gravity of what she was trying not to believe. “…but may actually be sentient…” She paused again. “…at least if we rely on these preliminary test results, but…”

“That’s outrageous!” Robinson’s outburst stopped Vuong before she could continue, but then she was also interrupted.

“Are you out of your mind, Margie? I’m the android psychologist and even I don’t believe George has a personality.” Quinto burst out. “It’s just a clever imitation of life, of spontaneity, of personality. You wrote most of George’s heuristics with Abramson. Yes, the android learns, but it’s not human learning, at least not the way we understand it.”

“Are you certain George’s intelligence isn’t evolving?” It was clear Miller wasn’t. “If you really believe that, Vikki, if you really aren’t concerned about what George may be developing into, why did you pull us all into this meeting?”

“Because I…” For a moment, Quinto looked down uncomfortably at her hands as they gripped her vending machine cup of coffee sitting on the table. Then she looked up and faced Vuong. “Are you sure, I mean absolutely sure a synthezoid brain at this stage of development can’t, I don’t know…evolve…exceed the sum of its programming?” The level of Quinto’s denial became apparent.

“It’s only been three days, Vikki.” Vuong was emphatic. “I know what I said about the test results, but even then, how the hell could George evolve so dramatically in three days? Yes, the synthetic DNA used to construct George’s brain and nervous system is designed to approximate natural nervous system material, but that doesn’t make George alive, let alone sentient. Not really.

“Sure, the self-awareness exams may suggest that George could be approaching sentience, but that’s hardly conclusive.” However, she guardedly pondered the implications of Quinto’s question and the doubts in her own mind.

“The basic premise of synthezoid intelligence is that it is supposed to completely blow away what we used to think of as machine learning. George isn’t a computer learning skills not present in its initial programming; he, or it, learns in a totally unprecedented manner, not like a machine, but also not like a human being. It’s supposed to be an entirely new order of intelligence.

“Synthezoid intelligence is designed to evolve over time, but since we are crossing unexplored territory, it’s not entirely clear how quickly that evolution will take place. We expected months or years. I don’t know how it could change so much in just a few days.”

Miller cut in. “What’s George really done? He’s learned faster and more than we expected, not just in terms of data, but social and systems interactions. He seems more human, more ‘alive’ than we expected of a first-generation prototype, but the point of a prototype is that we observe and test our assumptions and then adjust our theories accordingly. If George somehow gets out of hand, we have the kill switch to shut him down in a hurry.”

“I agree,” Robinson chimed in. “We don’t have a problem. George has turned out to be unexpected in a lot of ways, but he hasn’t done anything threatening or dangerous and, just like Nate says, we still have our finger on the trigger. I’m not expecting sentience, and I’m not sure we can even test for it.”

“What’s the definitive test to see if a synthetic intelligence has become sentient? What does the ‘bitter mort of the soul’ look like inside a machine?” Quinto was running out of emotional resistance to the idea that George might be more, perhaps much more, than they had intended. “George may not be dangerous, but if he’s changing and growing more quickly and in different directions than any of us expected, we might have to redefine who we are to him and who he is to us. Do we have the right to shut him down at the end of the test week?”

“We’re turning off George four days from now. He’s a machine, he’s not dying!” Robinson reminded the group. “We built him. We all built him together. Whatever we think he is or what he’s becoming, we put him together. If we have to, we can take him apart.”

10 thoughts on “Excerpt from ‘The Android Who Loved God’”

  1. Typo alert! That’s “bitter mote of a soul”, not “bitter mort of the soul”. The concept is of a “mote”, something small — in this case the minutest portion or seed or kernel of something that might be or become a “soul”. Of course, regardless of how poetic the phrase, it still leaves unanswered the question of what constitutes a soul.

      • “Mort” would be the “death” of a soul (in this case). What did you find that makes you think it could be appropriate? I suppose you might try to infer from it a “last gasp” of an almost-extinguished soul; but the phrase cited in the film “I Robot” was “bitter mote”. “Mote” is the smallest bit of something. And the reference to “bitter” is an archaic usage for what we think of as “itty-bitty”. Thus a “bitter mote” is an “itty-bitty particle”.

      • I can’t say I have much respect for Yahoo as a collector of literary or idiomatic data. I offered a possible inference that would correspond with the misreading of “mote” as “mort” (similar to one of the Yahoo answers); but it is a misreading, nonetheless. The passage from “I, Robot” is musing over the enhancement of a mere difference engine to process the more complex operations that might begin to resemble a minimal beginning of a primitive personality. Hence, it is asking about the minimum capability or behavior that might indicate the presence, in even the tiniest degree, of an actual soul — thus, the “bitter mote” or “itty-bitty particle” that I described. It is the self-awareness that can ponder the meaning of death which is an indicator that such a personality or soul exists. The phrase “bitter mote” is a descriptor of that tiny potential soul, as used in the film; whereas “bitter mort” would be describing possibly an existential threat of terminating that soul’s existence. Consequently your usage of it in the context of the story above, which asks what this potential “soul” would look like within the machine environment, or more precisely, the software environment, cannot be referring to the thought of death but only to the appearance or characteristics of this putative soul. Thus the phrase that you need in this context is “bitter mote”, as in the film.

  2. Keep it ‘mort’, but add “…he misquoted”. That satisfies both the desire to prepare the reader for thoughts about death in the story and tips the hat to the careful reader of sentient-computer stories. It is a pleasant reworking of the original story, and it is being shaped nicely…I imagine we will not get to see all the reworkings of the story, more’s the pity. I don’t mind paying you for the entertainment value, and I realize that you will need all the numbers you can get to satisfy your eventual publisher, but I enjoy watching the story develop, and it helps me in my own struggle with the fictional word to see how what in my writing would be a flaw actually creates your unique style. Consequently, every time I try to re-write your sentences as I read them, I learn a lot.

      • @James & @Questor — Even Yahoo included elements of the same answer that I offered above about “the bitter mote of a soul”. I think Yahoo was quite in error to assign the appellation of “Best Answer” to the misquote “mort”. The context in “I, Robot” was certainly not in any way related to the death of a soul, because the passage was musing about what might enable some aspect of robotic behavioral programming adaptation to rise to meet even the minimum criteria that could qualify as a soul. It was not musing about any conceptualization of robotic “death”, but rather about some observed odd behavior of robots that were still in operating (i.e., “living”) condition even when they were in disuse and had been placed into storage.

      • I’m willing to go with the consensus, but alas, I haven’t had time to work on the actual novel lately. I can only carve out tiny amounts of time for short stories and flash fiction.
