
Studio Ghibli and AI guardrails (a plea for free software)

Published by marco on

A little while ago, OpenAI released a tool that is much better at copying styles from other artists than previous models had been. This one was particularly good at copying the Studio Ghibli style.

So, people are generating all sorts of moments in history with ChatGPT in Studio Ghibli style. The tweet “no fucking way dude, this studio ghibli thing has gone way too far” (Twitter) provides a provocative example.

 9/11 in Studio Ghibli style

It’s pretty good, bro.

Be me.

Wanna try it.

So I went to Copilot and asked it to render the “Famous Challenger explosion in Studio Ghibli style.”

“Your request would contravene the designer guidelines.”

WTF. LET ME HAVE FUN.

So then I told it to “make a picture of four frogs frolicking in a field of flowers by a pond. One frog is much bigger and wearing a waistcoat and a monocle. Studio Ghibli style.”

IT WAS PRETTY GOOD.

I lost the page, so I don’t have it, but you can imagine it. It wasn’t Studio Ghibli, so I told it to make it more like that.

It was better but still not as good as the ones in the Twitter thread.

Then I told it to make the big frog hold a globe.

Bro’s holding a globe now. The whole picture had changed, but the frog had a globe.

“Now make the big frog be trying to hide an erection.”

“Your request would contravene the designer guidelines.”

THIS TOOL IS THE DEATH OF ART.

The guardrails are very, very narrow.

This holds true in my limited experience, especially if you’re logged in with a corporate account. I’ve used Copilot at work and it’s very limited. It won’t even suggest a “salacious” term. This time I used my teacher account (it was logged in and has Copilot). Also very limited.

Maybe if you pay ChatGPT $20, it’ll let you be a dirty, dirty boy. I dunno.

A friend suggested “Count Frog Hyper-Erection.”

“Your request would contravene the designer guidelines.”

😂 Sadly, this world is not for us.

He was more hopeful that the good times would come back. I responded that I, too, am hopeful. We will keep the flame alive. It will gutter and spit in the howling roar of corporate inanity and slack-jawed lumbering indifference, but we will keep that bloody flame alive.

Cue the rousing opening chords of the old Soviet national anthem…

Soon after, “Thoughts on setting policy for new AI capabilities” by Simon Willison cited some fools from OpenAI,

“[…] we’re shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm. The goal is to embrace humility: recognizing how much we don’t know, and positioning ourselves to adapt as we learn.”

F@&k your paternalism. Are we supposed to thank them for telling us that they’ve changed their opinion about how they’re going to use their tool to censor us? We should all be using free software, free models.

If we accept that this technology is incredibly useful and will usher in a new age for humanity—just bear with me—then it is absolutely ridiculous that a handful of tyrants at a handful of U.S.-American companies get to decide what these tools can do for us.

The OpenAI employees went on,

“AI lab employees should not be the arbiters of what people should and shouldn’t be allowed to create.”

No shit. And yet, there is no way to avoid this when the models are offered by a for-profit corporation.