AI morality

I cobbled this together from a few comments I’d made on Facebook – please forgive the lack of coherency.

Is it moral to ask an AI art program to create something “in the style of” a living artist?

Is it moral to ask an AI art program to create something in between styles of two artists?

What about three artists? Four?

Stable Diffusion and other AI art bots take this to the nth degree. But a very high number, n, of artists doesn’t make it any less immoral to profit off of their work. And remember – there are people who are using these artists’ work to get rich.

Also note that just because this use of the public domain hasn’t been challenged in a court of law doesn’t mean it never will be.

I think this is a brand new day for IP law.

And for those who might argue that “impressionism or cubism isn’t protected by law,” it’s not so much that the style of art is protected, but that the works made in those styles have been directly and algorithmically derived from artists without their consent. Today it’s “make a comic in the style of Shen Comix” and tomorrow it’ll be “make me a video of Jacob saying he hates tacos” – or worse. But whether it’s people’s public images, the works they put online, or data that’s scraped, I think the law will need to catch up in protecting the data that we produce and the impact of its derivations on us.

Another way to think of this problem is that intellectual property is supposed to protect the rights – and properties – of inventors. In the case of AI-produced works, who is the inventor? The person who wrote the prompt? The AI that responded? The owners of the data it was trained on? Or the author of the algorithm that produced the AI model?

Either way, I think it’s a valid argument to make that the owners of the training data should have a say in this.

I want to acknowledge that AI is really exciting and out of everyone in my circles, I’ve probably used it the most. I’ve used Stable Diffusion on my Mac to produce images and I’ve also used ChatGPT to produce code, or as a pair programming partner. The technology is really exciting and magical.

But I think we all underestimate what we’re owed here. Researchers have trained these bots on our data – likely our posts on Facebook, my public blog and open source code have all been used to train various AI. Media artists may have the most skin in the game here, with the amount of labor and the sheer amount of data they produce.

This is a data issue, but ultimately it is a labor issue. All the data we produce – and we produce it constantly simply through the labor of existing – should be our intellectual property and should be protected under the law. These kinds of AI wouldn’t be possible in jurisdictions with strong data privacy laws like those of the EU and California.

And it shouldn’t be, because the ill-gotten gains of those data violations end up, ultimately, in the hands of, like, three people who have been chosen mostly by privilege, racism, and heritage to amass wealth.

Now if there were an AI model that was trained using *only* fully consenting users’ data† and that model were free for everyone to use everywhere, I’d have a lot less to complain about. However, Midjourney, Lensa, ChatGPT, et al. are not that. Everyone should read Karla Ortiz’s post about the moral issues surrounding the image databases that were used to train Stability AI, Midjourney, and even Google.
