I just read a story at New Scientist called "Autonomous AI guards to stalk the internet fighting hackers." Apparently, earlier this month at the Black Hat conference in Las Vegas, with a $4 million prize hanging in the balance, different artificial intelligences were set up to hack each other while defending themselves from their opponents' hacking attempts.
I know, right? The machines are hacking each other.
This has a good side and a bad side in the real world. The good side is you can configure an AI to look for vulnerabilities in your own system, patching them as they’re found. The bad side is that malicious players can set up their own AIs as autonomous hackers, scanning the web looking for vulnerable systems and exploiting them when discovered.
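To make the dual-use point concrete, here is a minimal sketch of that "scan, then act" loop. The hostnames, service versions, and vulnerability table are all hypothetical illustration, not real software or real CVE data; the same scanning step serves a defender (who patches) or an attacker (who would exploit instead).

```python
# Illustrative sketch of an automated scan-and-patch loop over a system
# inventory. All hosts, services, versions, and vulnerability data below
# are hypothetical; a real scanner would probe live services instead.

# Known-vulnerable service versions mapped to their fixed versions (made up).
VULNERABLE = {
    ("webserver", "2.4.1"): "2.4.9",
    ("ssh", "7.2"): "7.4",
}

def scan(inventory):
    """Return (host, service, version) entries matching a known vulnerability."""
    return [(host, svc, ver)
            for host, services in inventory.items()
            for svc, ver in services.items()
            if (svc, ver) in VULNERABLE]

def patch(inventory, findings):
    """Defensive use: upgrade each vulnerable service to its fixed version."""
    for host, svc, ver in findings:
        inventory[host][svc] = VULNERABLE[(svc, ver)]

inventory = {
    "host-a": {"webserver": "2.4.1", "ssh": "7.4"},
    "host-b": {"ssh": "7.2"},
}

findings = scan(inventory)   # both attacker and defender run this same step
patch(inventory, findings)   # the defender patches; an attacker would exploit
assert scan(inventory) == [] # nothing known-vulnerable remains
```

The asymmetry the article worries about lives entirely in the `patch` step: swap it for an exploit routine and the identical scanning machinery becomes offensive.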
The New Scientist article ends with the somewhat humorous and ominous paragraph:
In a talk at Black Hat, Matt Devost of cybersecurity firm FusionX in Washington DC joked that the competition heralded the launch of Skynet, the malevolent AI in the Terminator films. "Everyone laughed," he says. "The humans were applauding their own demise!"
In science fiction, advanced AI is often depicted as evil and intent on destroying humanity, hence the reference to Skynet and the Terminator film franchise. But the ability of a computing system to learn and act autonomously doesn't imply intent or free will. The machines at the Black Hat conference attacked each other exactly as planned. They didn't do anything unusual or unexpected.
I’m not sure why some people, even some very smart people, think they can design machines that may eventually become self-aware when we don’t know what makes human beings possess self-awareness, consciousness, and sentience. How can we create something in a machine we don’t even understand in ourselves?
The real danger always comes from the human beings controlling the machines. Unless an AI develops consciousness and perceives itself as having interests and priorities independent of its human creators and programmers, it will simply learn to become better and better at whatever task it was created to perform.
Maybe I lack imagination. I don’t know.
It is true that I am writing a novel about AI: artificially intelligent humanoid and non-humanoid devices that evolve to become conscious, to have free will, and perhaps to achieve actual sentience. In my story, this has profound implications for the human race as well as for the "race" of these synthetic intelligences.
Science fiction is less about predicting the future and more about commenting on the human condition. The climax of my currently unfinished novel reveals more about the nature of humanity than it tells us about the ultimate evolution of AI.
In my opinion, any danger represented by artificial intelligence will be because of malicious people, not malicious machines.
But I’ve been wrong before.
Addendum: I just came across a couple of articles stating that a truly intelligent chatbot or personal assistant (think Siri) may be a lot further off than we think. Both zero in on how well, or how poorly, AI bots actually understand language.
The first comes from the MIT Technology Review and is called "Tougher Turing Test Exposes Chatbots' Stupidity." The second is Mark Baker's commentary on the first, called "Another demonstration that language is stories."
As a writer, this makes me feel safer from AI.