The Rise of Artificial Intelligence and the High-Wire Act of Its Regulation
My only extant memory of 1992 came when I was six, sitting on the couch between my older brother and my father. Kyle, nine at the time, told me a new movie had recently come out on VHS, and that it was awesome. I was curious. What piqued my interest was my brother's excitement to share something with me, an act out of line with his quiet character. As Terminator 2: Judgment Day rolled, I watched with grim fascination the panning landscape of a wasteland. Debris and destruction pulled along the crimson background of a torched sky, and smoke and human bones littered the rubble. The camera paused to show a human skull, up close. In a moment I will never forget, a metallic foot appeared from screen-right and, with a vicious and rigid insouciance, crushed the skull. The foot belonged to an armed robot, whose eyes glowed a steady red as the machine walked, with others exactly like it, across the scene. I had no idea what to think. I knew I was afraid.
James Cameron crafted the film (an Arnold classic, as it were) as a shocking, introspective look at the crashing tide of Artificial Intelligence (AI) against the will of the human race. This is no mere horror film, though. In the light of recent mechanical and computerized development, it can only be seen as a prophecy, and a warning. AI has already influenced a recent election in the United States, with Russian-backed bots churning out thousands of automated tweets: messages that, perniciously, were made to look as if they had come from the accounts of prominent lawmakers, actors, and other influencers of our culture.
Indeed, text generators like OpenAI's GPT-2 can now produce such messages at scale, and the bots that wield them could best be called, to quote Oren Etzioni writing in the Harvard Business Review, "rogue actors." What has proliferated since this terrorizing wave of forgery is alarming, and it needs to be regulated by society and government. I fear the implications of Russia's cyber-attack for its speed and believability. I, too, had found some of these messages credible, even relevant to my life.
The corner turned by AI scientists who had theretofore promised only health and convenience through automation was a dastardly shift, but one that had, amazingly, come cloaked in the legacy of Aldous Huxley and the fleets produced by Henry Ford's assembly line. Etzioni warns us, rightly, that these bots are now capable of much more than tweeting, or stomping through a papier-mâché skull on a movie set. At present, AI systems of all sorts have spawned robots that can march, do backflips, and parachute from planes. Artificial bots have been implicated in the surreptitious hacking of American companies and in the appropriation of their communicative channels, directed at consumers and at citizens.
To quote Etzioni: "[The] forgery of documents, pictures, audio recordings, videos, and online identities will occur with unprecedented ease," and his argument seems well-founded. There already exist systems, thunderous mutations of simple phone apps, that can replace the mouths of politicians as they speak, even in real time, with those supplied by voice and video actors.
Lamo and Calo, in a call to vigilance published in the UCLA Law Review, assert that AI bots "foment political strife, skew online discourse, and manipulate the marketplace." What is to prevent one of these bots (or even, to step down a rung, Cameron's fearsome prophecy) from imitating the president or another head of state, or anyone, issuing a call to war or announcing falsehoods of any lesser order? The answer, sadly, is "not enough." What is required is a cool approach to measuring the acceleration of such AIs, and to skillfully, and humanely, harnessing their speed and power.
In his essay for Quartz, Dave Gershgorn reports that, as of July 1, 2019, California's SB-1001 requires a bot that attempts to influence a purchase or a vote to disclose to the person scrolling the screen that it is not human. I think this a good first step, but more needs to be done. Governments need to slow, or at least supervise, the reach and accessibility of bots, not only vis-à-vis e-commerce but toward the sustenance of a credible, safely hierarchical system of government. In the rabid development of AI systems we can foretell, squinting, a future that is unreliable at best. Can we know what we are looking at, now? Can we trust "the news" anymore? Our situation amounts to a snake eating its tail, or, more accurately, to the puppet coming alive against its handler. The problem with AI, specifically, is that the handlers are no more immune to disinformation than anyone else. They are woodcarvers who, having barely finished crafting their dolls, find that fewer and fewer steps are required before the dolls pop to life.
Where does this end, logically, but on a planet occupied by clashing robots? Without interference that is swift, educated, and ethical, our society may not survive, and neither will the world that comes after.
Craig Malesra is a freelance writer and editor, and can be reached at email@example.com.