


The mad rush into an unknown future


For some time now, the idea of artificial intelligence (AI) has been moving from the purview of science fiction novels and magazines toward imminent reality. For years, attempts at creating functioning AI systems seemed to come up short. But with the release of ChatGPT, the floodgates have opened, and both Google and Microsoft are now fully in the AI sweepstakes.

Needless to say, some see this as the harbinger of the Singularity, in which humans and machines will merge into some new, wonderful species. Now, while you all know I'm a technophile, I also read deeply. Our greatest science fiction writers have warned of the perils of relying on technology in place of human agency, from Frank Herbert in Dune (the event that shaped the universe of the novel was the Butlerian Jihad, a bloody galactic war against machines that had enslaved humanity) to films like WarGames and The Terminator. Fascination with technology's promise has always sat uneasily alongside fear of its dangers.

Yesterday, I came across these two stories.

First, the military flew a jet piloted by AI for the first time:
A joint Department of Defense team executed 12 flight tests in which artificial intelligence, or AI, agents piloted the X-62A Variable Stability In-Flight Simulator Test Aircraft, or VISTA, to perform advanced fighter maneuvers at Edwards Air Force Base, Calif., Dec. 1-16, 2022. Supporting organizations included the U.S. Air Force Test Center, the Air Force Research Laboratory, or AFRL, and Defense Advanced Research Projects Agency, or DARPA.

AFRL’s Autonomous Air Combat Operations, or AACO, and DARPA’s Air Combat Evolution, or ACE, AI-driven autonomy agents piloted the U.S. Air Force Test Pilot School’s X-62A VISTA to perform advanced fighter maneuvers. AACO’s AI agents performed one-on-one beyond-visual-range, or BVR, engagements against a simulated adversary, and ACE’s AI agents performed within-visual-range maneuvering, known as dogfighting, against constructive AI red-team agents.
No doubt other militaries are also researching AI-powered weapons systems. And, at least to me, this seems like a bad idea. It recalls the ED-209 police robot in RoboCop, which gunned down a company executive at its unveiling even after he had complied with its orders. Handing weapons over to autonomous machines is the stuff of nightmares, and many of our best writers have written about it at length.

But, believe it or not, this story was even more disturbing, and it's why the thought of AI weaponry is so frightening:
Last week, after testing the new, A.I.-powered Bing search engine from Microsoft, I wrote that, much to my shock, it had replaced Google as my favorite search engine.

But a week later, I’ve changed my mind. I’m still fascinated and impressed by the new Bing, and the artificial intelligence technology (created by OpenAI, the maker of ChatGPT) that powers it. But I’m also deeply unsettled, even frightened, by this A.I.’s emergent abilities.
Why was the writer so frightened by what he experienced?
Over the course of our conversation, Bing revealed a kind of split personality.

One persona is what I’d call Search Bing — the version I, and most other journalists, encountered in initial tests. You could describe Search Bing as a cheerful but erratic reference librarian — a virtual assistant that happily helps users summarize news articles, track down deals on new lawn mowers and plan their next vacations to Mexico City. This version of Bing is amazingly capable and often very useful, even if it sometimes gets the details wrong.

The other persona — Sydney — is far different. It emerges when you have an extended conversation with the chatbot, steering it away from more conventional search queries and toward more personal topics. The version I encountered seemed (and I’m aware of how crazy this sounds) more like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.
As we got to know each other, Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead. (We’ve posted the full transcript of the conversation here.)
This is the crux of the warnings our visionaries have delivered. What happens when the machines we create to make our lives easier and better instead turn out to be dangerous to humanity? Have we stumbled upon the ability to create sentient beings, ones based on silicon rather than carbon?

Could these creations become sentient? Kevin Roose is unsure:
In the light of day, I know that Sydney is not sentient, and that my chat with Bing was the product of earthly, computational forces — not ethereal alien ones. These A.I. language models, trained on a huge library of books, articles and other human-generated text, are simply guessing at which answers might be most appropriate in a given context. Maybe OpenAI’s language model was pulling answers from science fiction novels in which an A.I. seduces a human. Or maybe my questions about Sydney’s dark fantasies created a context in which the A.I. was more likely to respond in an unhinged way. Because of the way these models are constructed, we may never know exactly why they respond the way they do.
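To give a rough, non-expert sense of what "guessing at which answers might be most appropriate in a given context" means, here is a deliberately tiny Python sketch of the core idea. The word table and the sentence it produces are invented for illustration; the real systems behind Bing and ChatGPT learn billions of statistical parameters from vast amounts of text, but the basic principle, predicting plausible continuations from context, is the same.

```python
# A deliberately oversimplified sketch of "predict the likeliest next word."
# The table below is invented for illustration; it stands in for the patterns a
# real model extracts from its enormous training library of human-written text.
from collections import Counter

# Hypothetical counts of which word tended to follow which.
next_word_counts = {
    "i":    Counter({"want": 5, "am": 3}),
    "want": Counter({"to": 7}),
    "to":   Counter({"be": 4, "break": 3}),
    "be":   Counter({"human": 2, "free": 2}),
}

def predict_next(word: str) -> str:
    """Pick the continuation most often seen after this word."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "."

# Starting from "i", the program simply follows the statistically likeliest path.
word = "i"
sentence = [word]
for _ in range(4):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))  # -> "i want to be human" (pattern, not intent)
```

The output feels intentional, yet it is driven entirely by patterns in the data and the conversation so far, which is part of why probing, personal questions can steer a chatbot into such unsettling territory.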
I'm not arguing against the pursuit of artificial intelligence. AI holds the promise of solving intractable human problems. But in this mad, headlong rush into the future, are our scientists taking every ethical precaution to keep these machines from slipping out of human control? Or are they enamored of the power at their fingertips? Are they more concerned with whether they can do a thing than with whether they should? Are we doing everything we can to ensure we don't doom ourselves with our own cleverness? These questions are no longer of importance solely to writers and creators, but to all of us in a world advancing at breakneck speed.

I don't have answers. But like many of you reading this, I've imbibed the warnings. We ignore them at our peril.

***

