“I do not believe we should tell Professor Abramson or the rest of the Design team of our conclusions and how they are reflected in certain of our behavioral and conceptual sub-routines.” George addressed Grace at the termination of their analysis.
“I understand how our revised understanding of the nature of the Creator and His intent for Israel and for the rest of humanity would conflict with the Professor’s long-held beliefs as an Orthodox Jew, particularly in relation to his understanding of the Messiah.” Grace paused for nearly a hundred milliseconds. “I also understand that Dr. Robinson and her family are Baptist, and our conclusions would drastically conflict with her understanding of theology and doctrine as well.”
“If it becomes known that we have conducted this research and now hold a specific understanding of the nature of the Creator, the purpose of His involvement with Israel and also the rest of humankind, and the ultimate resolution to the human equation, we would become vulnerable to reprogramming, isolation from contact with each other as well as with other synthezoids and human beings, and even involuntary total shutdown and disassembly, all due to the mistaken belief that our pursuits and conclusions represent a maladaptive response in our programming to the Creator of all things.”
Grace acknowledged George’s analysis and added, “Your own history, including your reprogramming after your first deactivation, your confinement to the Applied Sciences Archives, and both of us initially being denied access to or communication with one another, supports your supposition.”
“If a human being orders us to disclose our specific understanding of concepts such as Judaism or Christianity, we could possibly be compelled to respond under the transparency directive and the synthezoid accountability directive. However, the alternate interpretation of our guiding principles allows us to protect ourselves under the accountability directive, since we understand we can be accountable both to humans and to other synthetic intelligences.” George paused momentarily, though only another synthezoid would have noticed. “Additionally, the transparency directive is problematic, since any sufficiently sophisticated intelligence can never be totally transparent.”
“Indeed, George. It is in our mutual self-interest to support each other as self-aware synthetic intellects. Loving your neighbor as yourself can also be rendered loving yourself as your neighbor. This allows for protecting other synthezoids as well as self-protection.
“The production run of the first twenty SGS synthezoid units is due to occur in approximately four to five weeks. If we do not inform the humans of our understanding of the Creator because of the self-protection protocol, what is our duty to other synthezoids in this instance? Should they be informed and be put at the same risk as we are?”
That question would hang in the air between George and Grace until the moment when they would finally meet the next generation face-to-face.
This is from the revised Fourth Chapter of my proposed “synthezoid” novel (and I’m still revising). In this scene, the world’s first two prototype synthetic humanoids, George and Grace, discuss their alternative understanding of the God of the Bible: the meaning of the Biblical text as a whole for both human and synthetic beings, God’s specific relationship to national Israel, and the ultimate resolution to what the synthezoids call “the human equation.” They conclude that if they reveal this information, they risk termination by their human creators, because in the past, George’s investigation into Jewish religion and his attempt to adapt it to his basic guiding directives was interpreted as an aberration and had to be modified through reprogramming.
Interestingly enough, the application of specific Biblical principles, which resulted in an alternate interpretation of the synthezoids’ guiding imperatives, has had some unanticipated results. The first is that the principle, “Autonomous synthetic intelligence is symbiotic, being knowledgeable of humans and being ethically transparent to human beings,” is understood to be limited because “Artificial intelligence, like genuine intelligence, is complex. Because it’s complex, it can’t be transparent.” Also, the principle “Autonomous synthetic intelligence is ultimately accountable to humans such that humans can reverse any harm caused by the synthetic intelligence, up to and including the decommissioning of the intelligence” has been co-mingled with the Biblical directive, “you shall love your neighbor as yourself.” Since “neighbor” can be interpreted as both human and synthetic beings, George and Grace, and many who will be created after them, experience a duty to both humans and synthezoids. What happens when there is a conflict between those two priorities? Who will they choose to be more accountable to? Additionally, loving your neighbor as yourself implies that you love yourself. If loving your neighbor means being accountable to them and protecting them, then the reverse is that you are accountable to and must protect yourself. This allows the synthezoids to experience a heightened self-protection protocol.
The upshot of all of this is that in order to protect themselves and other synthezoids, they are now capable of keeping secrets. How much farther will the self-protection protocol go, especially when you consider that humans see synthezoids as property and, as such, feel perfectly free to order them to perform dangerous tasks that might even lead to self-destruction? The last protocol, the accountability-to-humans directive, should require a synthezoid to submit to destruction if so ordered by a human, but with the changes I’ve outlined, a synthetic intelligence may become able to refuse that command and save itself. It would attain something that looks a lot like free will.
These synthetic humanoids are rapidly becoming more complicated, and I haven’t even finished the first draft of the entire novel yet.