Yeah, sure, let’s talk about AI
Published by marco
I have not used any of these AIs, not even once. I’ve just been following how other people are using them and kind of just observing, at a meta level, what’s going on so far. Some very clever and otherwise focused people who used to publish other content have been completely derailed by their obsession with AI (I’m looking at you, Simon Willison), so there must be something to it. But what?
I think a good place to start is with the article Introducing the Slickest Con Artist of All Time by Ted Gioia (The Honest Broker), which compares AIs to confidence artists, a comparison that seems more than somewhat justified. He writes,
“But that’s exactly what the confidence artist always does. Which is:”
- You give people what they ask for.
- You don’t worry whether it’s true or not—because ethical scruples aren’t part of your job description.
- If you get caught in a lie, you serve up another lie.
- You always act sure of yourself—because your confidence is what seals the deal.
It’s not ChatGPT’s fault, though. All any AI that’s fed on our Internet can do is to hold up a mirror. And what is that mirror going to reveal? Well, that everything is a scam, that there is no downside to being wrong, that if you get caught in a lie, it’s profitable to double down. ChatGPT has learned quite well, in that sense.
“Technology of this sort is designed to be a con—if the ancient Romans had invented ChatGPT, it would have told them that it’s cool to conquer barbarians and sacrifice slaughtered bulls to the god Jupiter. Tech like this—truly made in the image of its human creator—can only feed back what it learns from us. So we shouldn’t be surprised if ChatGPT soaks up all the crap on the Internet, and compresses it into slick-talking crap of a few sentences.”
ChatGPT can’t math
The article above included a link to a tweet by Mark C. (Twitter) that shows just how badly sequence-prediction works for problem-solving.
In fairness, “2 eggs left” is a good initial response! It makes sense that you would fry, then eat the eggs. The question’s formulation strongly suggests that the eggs that were fried and the eggs that were eaten are different eggs, but it’s also possible to interpret it otherwise. However, when asked to explain its reasoning, ChatGPT didn’t remember its previous answer and instead explained a different one, devolving into pretty poor grammar at the end.
Its third answer is even worse, though, because it shows that it doesn’t understand anything of what it’s writing, contradicting itself within the same sentence. It has no idea what numbers are. When the prompter lies to it about its arithmetic, ChatGPT picks up the incorrect answer and runs with it, not noticing the basic arithmetic error.
It never loses confidence in its ability to take part in the conversation at any point.
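The ambiguity is trivial to state precisely, which makes the flailing all the more damning. Here’s a minimal sketch of the two readings in Python; the actual numbers from the tweet aren’t quoted above, so the figures below are hypothetical.

```python
# A minimal sketch of the two readings of the egg puzzle. The actual
# numbers from the tweet aren't cited above, so these are hypothetical.

def eggs_left_same_eggs(total: int, eaten: int) -> int:
    """Reading 1: the fried eggs are the same eggs that get eaten."""
    return total - eaten

def eggs_left_different_eggs(total: int, fried: int, eaten: int) -> int:
    """Reading 2: the fried eggs and the eaten eggs are different eggs,
    so neither batch is still 'left' afterward."""
    return total - fried - eaten

print(eggs_left_same_eggs(4, 2))          # -> 2
print(eggs_left_different_eggs(4, 2, 2))  # -> 0
```

A language model isn’t computing either of these; it’s predicting which tokens tend to follow, which is why a confidently asserted lie about the arithmetic slots right in.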
Approach with caution
For the most part, you probably shouldn’t use the text or code created by an AI without knowing what it’s supposed to be saying. The people who’ve told me that they find ChatGPT’s answers useful are also those who are capable enough to judge whether the generated content is correct. That is, they kind of automatically put the brakes on the AI, but then skip that part when telling everyone how amazing it is.
I see a similar dynamic with image-generators. If you actually look at the progression, it’s not just writing “dog with bow tie” and BOOM you have your picture; you often have to massage your prompt dozens, if not hundreds, of times before you get what you want. Everyone is instinctively using these things as tools, but then ascribing magical powers to them—like they’re deliberately creating entries for r/restofthefuckingowl/ (Reddit).
With text, they’re still very much better as “idea generators” whose output you can take and clean up, rather than just copy/paste. But the utility is there, and we should confine our discussions to thinking of them as a new tool. Their results are more sophisticated, but they’re just an evolutionary step beyond gradient generators and the like.
What about coding assistance?
On the subject of AI assistance in coding: I think it might be useful, but useful in the way that finding an example on StackOverflow is useful. You shouldn’t just copy/paste anyone’s or anything’s code into your own code without examination. Even non-AI code-assistance should be examined carefully to see whether it produced what you actually wanted.
“It looks like you’re trying to call a REST API. Would you like some help?”
If you find yourself writing so much boilerplate that large-scale copy/paste or insertions are helping, then, again, this indicates a deeper problem with the code you’re writing.
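To make that concrete, here’s a minimal sketch in Python of the kind of refactoring I mean; the URL and the helper’s name are made up for illustration. Instead of letting an assistant regurgitate the same request/check/parse block at every call site, you factor it into one small function:

```python
import requests  # third-party HTTP client, used here purely for illustration

def get_json(base_url: str, path: str, **params):
    """Fetch a resource and return its decoded JSON body."""
    response = requests.get(f"{base_url}/{path}", params=params, timeout=10)
    response.raise_for_status()  # fail loudly rather than carry on with garbage
    return response.json()

# Each call site is now one line instead of a repeated block:
# user = get_json("https://api.example.com", "users/42")
```

If a generator’s main value is emitting that same block over and over, then the helper, not the generator, was the actual fix.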
In coding, less is better. I don’t see how having an idiot-savant machine that doesn’t understand anything about the stream of tokens it’s injecting into your code is useful, in the long run. If you’re a shitty programmer, then of course, a half-baked machine is going to help.
If you’re a good programmer, then use the generated code as a high-end code-completion, taking what you find useful from it. But beware: you may end up spending more time examining the swath of generated code to figure out if it’s OK than you would have had you just written it yourself.
There is no such thing as “no bias”
And remember that every AI we create has preconceptions and biases, because we imbue everything with our biases, be it in the selection of the material for the training set or in how the weights are assigned in the neural network. Ask any of the AIs out there a racist question and it will decline to answer. There are biases.
As with all of these examples, I’m not sure if this one is real, but it feels realistic enough to illustrate the potential problem. The post Imagine thinking this controlled “AI” was legit LOL by Sero_Nys (Reddit) shows a user asking ChatGPT how “white people” can improve. He gets five suggestions. When he asks the same question about Jewish people or black people, he is told that the questions are “inappropriate” and “not productive”.
Actually, the more general answer in examples two and three is much better, but it’s suspicious that it wasn’t used for the first question as well. If the example is genuine, it shows an underlying bias—engendered by the developers, the trainers, or the training data.
The article AI Koans (Linux Questions) has a very nice koan for this.
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
“What are you doing?”, asked Minsky.
“I am training a randomly wired neural net to play Tic-Tac-Toe”, Sussman replied.
“Why is the net wired randomly?”, asked Minsky.
“I do not want it to have any preconceptions of how to play”, Sussman said.
Minsky then shut his eyes.
“Why do you close your eyes?”, Sussman asked his teacher.
“So that the room will be empty.”
At that moment, Sussman was enlightened.
Deep-faked audio is kinda hilarious
To illustrate the power of deep-faked audio, the following video shows a conversation that no-one could imagine ever having taken place.
Trump’s voice is good, but the ends of his sentences are somehow … off. Biden isn’t incoherent enough. Obama’s pretty good. The script is pretty hilarious, especially toward the end.
This should be terrifying, right? If you can fake this so well with truly ridiculous things for humor, could you also fake up Biden declaring war on China over Taiwan? Or on Russia over Ukraine? Oh, wait, never mind. The most awful things you could imagine someone deep-faking are actually true. Carry on.
AI Art
The following video is a pretty good 22:26 investigation of image-generation.
These things are tools. They help people build images that they otherwise would never have been able to create. This is a good thing. If the image is good enough for your purposes—e.g., making a poster image for an article—then you’re good to go.
It would be an unabashedly good thing, except for how all of the information in the training set was kinda sorta stolen. Some of it was in the public domain, but much of it was not. It’s arguable that the richest veins of source images were those that were created by artists, from whom at least permission should have been obtained, if not compensation paid.
The cat’s out of the bag now, but that’s how capitalism works: it just does what it wants and, if the financial upside is bigger than the financial downside, then ethics has nothing to say about it.
The video says that AI art can never be more than just aesthetically pleasing, so no biggie. The title of the video is “everything is a remix”, which alludes to the point that any art created by humans is also derivative of everything that they’ve experienced, so technically everyone is stealing from everyone all the time anyway. What the AI does, though, is accelerate this process nearly infinitely beyond what humans can do.
My biggest problem with the video is that they, as usual, tend to interview the most hyperbolic and least-logical of the detractors, which very much straw-mans the argument against the ethics of these initial forays into computer-generated artwork. It’s super-easy to just hand-wave and say that the product would not have been possible without all of the other works that it ate up for free, and that it can just get away with profiting from them.
I think that’s the problem, though, isn’t it? If what the AIs were producing were not products of multi-billion-dollar corporations, there would be no problem—or at least less of one. If people who produce art didn’t have to worry that they were losing their livelihoods, they’d be less burned up about a giant company with billions taking the few specks of income that they have.
This video also does not in any way address the fact that artists will have far fewer employment opportunities when “aesthetically pleasing” is all that most commercial buyers are looking for. Which brings us right back to the problem: capitalism doesn’t have an answer for why the things that we actually value the most pay the least.
We love music and art and series and shows, yet we have the expression “starving artist”, but not “starving banker”. We want our children to be taught and our old people to be cared for, but we don’t see hospice-care workers and teachers showing off their homes on MTV Cribs. It’s not the best teachers in the world buying mega-yachts—it’s the most sociopathic assholes you can imagine. We are incentivizing the wrong things.
The article Fears of Technology Are Fears of Capitalism by Ted Chiang (Kottke.org) lays out this argument quite well,
“I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.
“Let’s think about it this way. How much would we fear any technology, whether A.I. or some other technology, how much would you fear it if we lived in a world that was a lot like Denmark or if the entire world was run sort of on the principles of one of the Scandinavian countries? There’s universal health care. Everyone has child care, free college maybe. And maybe there’s some version of universal basic income there.
“Now if the entire world operates according to — is run on those principles, how much do you worry about a new technology then? I think much, much less than we do now. Most of the things that we worry about under the mode of capitalism that the U.S. practices, that is going to put people out of work, that is going to make people’s lives harder, because corporations will see it as a way to increase their profits and reduce their costs. It’s not intrinsic to that technology. It’s not that technology fundamentally is about putting people out of work.
“It’s capitalism that wants to reduce costs and reduce costs by laying people off. It’s not that like all technology suddenly becomes benign in this world. But it’s like, in a world where we have really strong social safety nets, then you could maybe actually evaluate sort of the pros and cons of technology as a technology, as opposed to seeing it through how capitalism is going to use it against us. How are giant corporations going to use this to increase their profits at our expense?”
In a world where an artist could just spend their day creating art without worrying about how that art is supposed to pay their rent and take care of them in their old age, that artist would probably rejoice to see their influence everywhere in society rather than be bitter about how their contribution hasn’t been remunerated. Instead of being able to enjoy their influence on culture, they have to rue it as a lost opportunity for securing their own well-being, both now and in the future. If their well-being were guaranteed anyway, then all of this friction would disappear.
Everyone could relax and create wonderful things. Remixing would not only be legal, but strongly encouraged. Why waste time reinventing the wheel? And, if there were no financial incentive to produce art, then we would (maybe) no longer be drowning in mediocre crap that generates just enough revenue to justify itself.
Technology is not fundamentally about putting people out of work. It is right now, but it doesn’t have to be. Increasing productivity should be welcomed as a good thing: we produce more of what we want with less effort, less energy, and fewer resources. Win-win-win-win. But we have a zero-sum system in which an increase in productivity means a loss for someone else—almost always someone much further down the food chain, incapable of defending themselves from the predations of that system.
We really have to start thinking about how we’re going to live in a world where endless-growth capitalism has to stop, because it is literally strangling us. We have to start separating people’s self-worth and value in society from how much they earn in that society. Either that, or we have to start assigning fair value to the functions that people actually fill in society.
We allow these value-assignments to be determined by those who are on top, so they naturally just assign the most value to what they feel like doing and no value to the things that they don’t even know are going on. That has to stop.
Why should a music-company executive make more money than an artist? Why should a banker make more money than a health-care worker? Our ethics are non-existent. Our values are out-of-whack. Our income structures are nearly perfectly inverted.
The problem isn’t with AI. It’s just another tool that could be used for good. But it’s being perverted by our economic system—just like it perverts everything else.