With Artificial Intelligence (AI) spreading across the arts and creativity landscape like a plague, the very definition of intellectual property is changing by the day.
The boundaries of what plagiarism actually is and how much AI-generated content can be considered proprietary artwork are becoming as thin as the pages of your latest comic book.
As the copyright war begins, new methods to “poison” creative work and stop data scraping look to be the solution to prevent machines from ‘stealing’ what humans created.
But aren’t machines just doing what humans tell them to do? Is this a war against thieving machines, against other humans, or just against the way technology is forcing art to evolve?
To understand this, we must first focus on what kind of changes AI is bringing to the table of creative work.
Stealing, Copying, and Plagiarizing, or Just “Drawing Inspiration”?
Before the advent of generative AI, artists were free (to some degree) to take inspiration from wherever they wanted. The line between “taking inspiration” and plagiarizing someone else’s work has always been fuzzy, sometimes requiring the opinion of a court; it is a hard rule to set in stone.
Still, creative human work was necessary to generate any kind of art – even plagiarism takes some skill.
Things became a little bit more hectic with the introduction of visual tools and aids like Photoshop since they were often seen as a convenient shortcut. But still, human work was human work.
Then, in came AI. Nowadays, the vastness of the AI’s reach is so enormous that “inspiration” can be taken, quite literally, from the whole Internet at once.
In broad terms, this means that every single piece of artwork can be drawn from and then transformed into something else in a matter of seconds. Not only does this defeat — or at least call into question — the whole meaning of “creativity”, but it also means that anyone’s legitimate effort to create something unique is now left without any safeguard.
Anything can be plagiarized, pretty much instantly, with no one to seek grievance against; the human work amounts to the ten seconds needed to write a prompt, and the rest is generated almost instantaneously. This hurts, especially when you see your piece of art being easily surpassed by something created by a machine.
Poisoning Artwork – Underhanded Fighting Methods or Necessary Weapons?
In a tale as old as time, when humans start squabbling for something, someone else profits by creating a new weapon to aid one side.
What if artists could finally protect their intellectual property without having to worry about what copyright actually covers and what it doesn’t?
Here comes Nightshade, a new tool created to “poison” artwork so that any training data using that art will corrupt the AI using it.
It’s a terrible yet clever approach: Nightshade invisibly modifies pixels in digital art so that when a generative AI model such as Midjourney, DALL-E, or Stable Diffusion ingests it, the model gets confused and effectively intoxicated.
In other words, it will read the picture of a cat as the picture of, say, an airplane, destabilizing text-to-image generative models and preventing them from generating meaningful pictures.
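Nightshade’s actual method is far more sophisticated (it optimizes the perturbation against a model’s feature extractor so a “cat” reads as an “airplane”), but the basic ingredient it relies on can be sketched in a few lines: a pixel change kept within a tiny budget, invisible to a human viewer. The function and parameter names below (`poison_pixels`, `epsilon`) are illustrative assumptions, not Nightshade’s real API.

```python
import random

def poison_pixels(image, epsilon=2, seed=0):
    """Return a copy of `image` (a 2D list of 0-255 grayscale values)
    with every pixel nudged by at most `epsilon` levels.

    This only illustrates the imperceptibility budget; a real poisoning
    tool would optimize each nudge to mislead a model's feature space
    rather than pick it at random."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    poisoned = []
    for row in image:
        new_row = []
        for px in row:
            delta = rng.randint(-epsilon, epsilon)
            # Clamp so the result stays a valid pixel value.
            new_row.append(max(0, min(255, px + delta)))
        poisoned.append(new_row)
    return poisoned

image = [[120, 121], [119, 122]]
out = poison_pixels(image)
```

To a person the poisoned copy is indistinguishable from the original, since no pixel moves more than two brightness levels; the damage only appears when a model trains on many such images.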
The idea is to bring the fight back to artists, and since you can’t prevent AI from using all that has been made so far, you can at least stop it from using what is going to be created from now on.
The alternative is filing a lawsuit, which is hit-and-miss since the entire situation is novel and largely untested, and you can’t expect tribunals to be properly equipped to deal with circumstances for which minimal precedent has been set.
Also, an individual artist might be fighting a tech behemoth such as Google, making the chances of winning even slimmer. However, Nightshade’s approach is a pretty aggressive one, since it literally damages the AI model: the model cannot scrape the poisoned data without risking permanent corruption.
Nightshade AI Poisoning: The Ethical Questions
We have an extremely complex landscape that is swept by more than just a wind of change – it’s a true hurricane.
If that’s our horizon, it’s time to take a deep breath and analyze the circumstances in the most objective way possible.
It is tough to define how much human work is actually necessary to consider a creation artwork deserving of intellectual property protection. A reasonable person will understand that “writing a few lines of script” is not on the same level as spending weeks on a canvas or behind a typewriter, especially since the skills required are more akin to those of a good programmer than to those of a painter.
However, art changes, as it always has and always will. Back in the 1800s, the introduction of photography revolutionized how we perceived visual arts. Painters were hired not just as artists but as artisans as well by anyone who wanted to preserve their own features through a portrait.
Photography made things easier, froze a moment in time forever, and took seconds to do what painters needed days to finish.
Elsewhere in history, the invention of molds and later synthetic resins disrupted the work of thousands of sculptors. Indeed, we now think of sculpting as something done by people who model 3D artwork rather than by masters of the chisel.
Still, who can complain today, when we can enjoy practical and awesome-looking resin miniatures for just a few bucks?
Nonetheless, one of the main aspects that sounds (and, we’d suggest, is) unfair is that when a piece of creative art is stolen, the blame falls on the AIs. But they’re not humans, so they can’t be held accountable!
There’s a human behind that AI, and while it may be argued that that human couldn’t know that the AI stole your specific piece of art, it’s still humans who are creating the exploitative tools and then feeding the prompts in.
It’s not about artists vs AI, but rather about how easily AI can be used by people who want to produce artwork without paying artists. As always happens in human societies, it comes down to some people wanting to make money more easily versus others suddenly being paid less (or not at all) for their efforts.
Nightshade is an extreme and destructive reaction to a controversy that is becoming more volatile by the day. Not that it is unjustified in itself, but it looks like a way to escalate the confrontation rather than defuse it by finding a compromise.
A suggestion that could bring some balance to the economy of generative AI artwork is paying a fee to any artist whose content is used to train these models, while also charging people for using the tools.
This will hardly make things fair overnight, but it might be a step in the right direction.
We must remember something we have been saying in tech for years: that AI is going to revolutionize our world. And “revolutionizing” means that things can’t and won’t stay the same, no matter how we struggle to keep the status quo.
Someone is going to profit, and someone is going to lose something in the process; that is something we all must accept.
However, as with everything, we should at least strive to find a middle ground.
And a compromise that leaves a few people a bit dissatisfied is much better than two sides angrily arguing against each other until, as those Swedish artists once wrote, the winner takes it all.