I happened, on two separate occasions this week, to meet two technology engineers/scientists working on AI, one a corporate consultant for a leading global financial services firm, the other a scientist at one of the biggest tech companies in the world. We didn’t have in-depth conversations - I can’t play on that level - but I did press them on the subject of AI from one artist/writer’s point of view. They were quick to impress upon me that AI was simply there to make everyone’s life easier - that it was a tool to give options, to do the grunt work so that the artist, having had the opportunity to explore all possible avenues, could choose the one that was “best”. Of course, I leaned into the arguments about copyright infringement, the Writers’ Strike, and what it means to be an artist engaged in the creative process; a writer or artist doesn’t want or need a tool to explore the possible permutations of a creative endeavor - the creative act is the thinking, not just the doing.
I had spent about four months painting the Northern Lights in watercolor for a book and, for the most part, failing miserably. Around that time, Photoshop came out with a beta offering a generative AI tool. I’m terrible at Photoshop, but the tool was easy to use and the prompts (at that time at least) were simple. The Photoshop sample prompt was, to my chagrin, “Northern Lights.” I typed in the prompt and readied my paints, paintings, brushes, computer, and livelihood to be tossed out the window.
Below was the result: some bizarre heart-shaped burn mark and a cloud of dust (the black boxes are mine, as I can’t reveal the book images yet). Relieved, I did not toss everything out the window and finished my work, painstakingly and by hand.
Months later, for this article, I tried again and the results were a bit better, but not by much. At least they were green.
Granted, had I asked AI to generate an image within a photograph, as opposed to a watercolor, the results might have been spot on - who knows? Mine wasn’t a very involved prompt, and the images were generated from whatever the training inputs happened to be. But those inputs are a black hole, and the companies behind the technology guard them closely.
Whether or not AI ‘thinks’ has been heatedly debated by the likes of Gary Marcus, Professor Emeritus at NYU, Yann LeCun, VP & Chief AI Scientist at Meta, and Geoffrey Hinton, Professor Emeritus of Computer Science at the University of Toronto and formerly of Google Brain, and the arguments are technical and fascinating. You can read about it, along with Marcus’s response, but in a short, oversimplified, and incomplete summary: Hinton claims that LLMs (large language models that recognize and generate text, images, and other content) have at least a limited understanding of language, and that when they err they are confabulating, which is something people do too; Marcus responds that “wholesale unintentional fabrication” is not generally what people do at all, and is not equivalent to either making a flawed argument or trying to spin something so it goes your way.
In an interview with The New York Times and other news outlets, Hinton said he quit his role at Google because he was concerned that generative AI tools like OpenAI’s would overwhelm users with output that could be false and that they wouldn’t know the difference. He has even intimated that it poses an existential threat to humanity, that it has far surpassed the human brain’s ability to learn, that it will outperform humans, and that it will endlessly self-replicate. His argument for making this his life’s work:
“Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: ‘When you see something that is technically sweet, you go ahead and do it.’” … He falls back on what he calls “the normal excuse: If I hadn’t done it, somebody else would have.”
That excuse has driven some of the most consequential decisions of the last century: not only whether to build the atomic bomb, but also the hydrogen bomb, the Cold War arms race, and even the computer, which has made all of this, including OpenAI, possible.
And while we’re on the subject of the Cold War, recall the Russian proverb that scholar Suzanne Massie pressed President Reagan to take to heart in the mid-to-late 1980s, while the Cold War was still going strong: ‘trust, but verify’.
What does this mean in today’s world? How can you verify when the very outputs meant to aid the endeavor are untrustworthy?
I’m not a computer scientist, a cognitive psychologist, or an attorney. I can, if pressed, regurgitate some information, like the capital of New York or Picasso’s Blue Period, though at a much slower rate than an LLM. But I don’t really care to. Humans don’t operate on a cut-and-paste model, but on a thinking one. Yet back in 1950, Turing asked the very question: can machines think? At this moment, machines don’t seem to be sentient.
Google asked the question: what are people thinking? And the answer to that question became its own self-fulfilling prophecy and a determinant of human behavior. Facebook sought to suck up human life, and we threw open the doors to our homes, invited it along on family vacations, and offered up our innermost thoughts on issues large and trivial. Now it appears we’re asked to believe that OpenAI “will increase the power of the mind much more than optical lenses strengthen the eyes…”. But will it?
Actually, Leibniz said that in 1716, about his calculus ratiocinator, which anticipated the digital computer. And now the genie is most definitely out.
At least as early as 2002, Google engineers were scanning books to be read, not by humans, but by AI. That brings a dreadful question to the forefront: who is art for?
We teach young people to read and think at the same time. Trust and verify. From a developmental perspective, this process unfolds, to varying degrees of understanding, from roughly age three on. We mustn’t deceive children with the information we put out there, and we mustn’t coerce them with it. The way children develop critical thinking skills is by evaluating statements from others. Those statements matter.
Technology is developing rapidly, but humans are not. George Dyson asked, “What if the price of machines that think is people who don’t?”1 Yes, we have limitations (biases, cognitive distortions, faulty memories), and Marcus outlines others, as well as some of the things we’re still much better at than AI.
Data errors, copyright infringement, bio-warfare, and human enslavement on one side; human cognition, human understanding, the human condition, and humanity on the other. It’s a tough one. Only one thing is certain: “We’re not in Kansas anymore.”
Dyson, George. Turing’s Cathedral: The Origins of the Digital Universe. Pantheon, 2012.
As a WGA member, I’m not that excited about our strike victory. Whenever we get comfortable and think everything is going to be fine, we’re wrong. We’re being naive if we don’t have a long-run strategy, because the studios do, and it doesn’t include writers.