"Move fast and break things." Cool, but is it right? Or is it just the future?

Image 1

The first image is the work of a diligent artist. Surely we can consider using his/her art without permission wrong, even if only a very minor offense.

Image 2

The second image is generated from the work of the first. Some courts already argue that this use of AI violates the artist's rights. It seems hard to prove, and it still feels a bit wrong, but it does seem a more minor offense.

Image 3

The third image is generated from the second, this time adding some different prompt text such as "cyberpunk". The criminality here becomes genuinely vague.

Images 4 and 5

The last two images are iterations on n-1. It is extremely unlikely that any infringement issues would ever come up, but does it feel, even in the tiniest way, wrong?

I would say it's not ethical, but eating meat is also not ethical and I had a cheeseburger for lunch yesterday.

As I see it, the infringement is not in the artwork itself but rather in the process of training the neural nets. It's a different type of IP infringement than direct plagiarism, so it doesn't matter to what degree the output is "remixed" or iterated. What needs to be looked at are the images used for training. If the training dataset contains images whose rights holders didn't give permission for such usage, then the training itself, as well as financial exploitation of images generated with the network in question, should be considered an act of infringement, imho. In the case of derivative training datasets, the whole genealogy should be considered, tracing images back to their human origins. This is, of course, currently not regulated, so...
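
A minimal sketch of what "tracing the genealogy" might look like in practice, assuming every image carried machine-readable provenance metadata. The record structure and field names here (ImageRecord, license_allows_training, sources) are hypothetical; no such standard exists today.

```python
# Hypothetical sketch: walk an image's provenance chain back to human origins
# and flag any ancestor whose rights holder gave no permission for training.
from dataclasses import dataclass, field

@dataclass
class ImageRecord:
    name: str
    human_made: bool                      # informational only in this sketch
    license_allows_training: bool
    sources: list = field(default_factory=list)  # images this one was generated from

def unlicensed_ancestors(image: ImageRecord) -> list:
    """Return this image and/or any ancestor used without training permission."""
    flagged = []
    if not image.license_allows_training:
        flagged.append(image.name)
    for parent in image.sources:
        flagged.extend(unlicensed_ancestors(parent))
    return flagged

# A human painting (no training permission) feeds a generated remix.
original = ImageRecord("artist_painting", human_made=True, license_allows_training=False)
remix = ImageRecord("ai_remix_v1", human_made=False, license_allows_training=True,
                    sources=[original])
print(unlicensed_ancestors(remix))  # -> ['artist_painting']: the remix is tainted by its origin
```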

Full disclosure, I have a very strong opinion about AI art even by my own standards and despise it.

The whole AI art thing is just a scam. It is largely done by poorly patching together images from the internet, and I've tested this thoroughly: when you use certain keywords you can see that the generator is directly ripping images from online rather than making its own from scratch, as is the claim. There are two types of people who defend this: the lazy, who just want to steal from other artists online and claim it was the AI that did it, and the tech illiterate, who know nothing about AI but frequently post like they're experts on the topic and have convinced themselves they know everything and that AI is going to take over the world. To anyone who actually knows even a little bit about how this stuff works, the idea is laughable.

Yes, the artwork generated by these algorithms is stolen, shamelessly so, but good on you @Erich_L for spotting it and realising. I wouldn't use AI-generated art at all. By all means, take inspiration from artists and even credit them in your work if the style is similar, but don't use the art generators for actual projects; they're a tool for lazy and dishonest people. Procedurally generated textures etc.? That's fine, I use them myself and they help workflow dramatically. I don't have a problem with procedural content if it's used the right way, but generated art, to put it politely, can go die in a fire.

Hopefully I kept my rant professional enough for this forum, but LOL, it's not even this AI fad that's driving me mad; it's often the people who keep spamming about it.

I would say 4 & 5, and actually even 3, are fine, just like an artist taking inspiration from the works of others is fine, so long as they do enough to distinguish their own work from the works that inspired them. If some degree of idea mixing or remixing can't be done and isn't OK, then we are going to very rapidly approach a point where nothing new can ever be created at all.


    I did pay an artist to help me illustrate three children's books. The first thing she'd do for each scene is find an image online she wanted to approximate. She'd put that image (someone else's work) in the background and redraw it using her own style. I asked her about it and she said it's just the best way to do it.
    @xyz You say "images used for training", but I think this is dangerously close in meaning to "images seen". The mere act of seeing informs the model, much in the same way I can close my eyes and imagine a Hyundai car.
    Perhaps we're afraid or jealous: unlike us, the model can instantly print out what it imagines, while my mental renderings sit stationary and unshared.

    Edit: I guess what I'm getting at is this: what if these models were not built using images scraped from the web, but were instead built by robots walking around taking pictures with a camera? What if those robots could also sit down and look at pictures online?


      There's using reference, and then there's direct copying, and what that artist you hired did was definitely copying; ironically, it's very similar to what the AI generators are doing. Photograph reference for 3D modelling? I'm less bothered about that, because it's something real that you're just working up into a 3D model, and loads of people do that. It's amazing to me, though, how acceptable being lazy and scammy like this has become in these industries. No wonder, then, that most of the stuff we see being released now is terrible.

      Megalomaniak The problem is: image generators do not remix ideas. They remix image data, i.e. literal pixels. They just do it in such a way that it looks like they're juggling ideas (like humans do). I think it's more likely we'll reach a "nothing new" state if the world of images gets overpopulated with generated imagery, as generators can't really create anything truly new. They just chew existing stuff and spit out more of it. And if the practice of generating images from already generated images (possibly cascading into infinity 🙂) becomes widespread, we'll converge on hyperproduction of same-y mediocre images that nobody would give a dime about.
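
      The "cascading into infinity" point can be illustrated with a toy numerical analogy: if each generation of works is produced only by blending works from the previous generation, measured diversity collapses quickly. This is just a sketch of the convergence argument, not a model of how any real image generator works.

```python
# Toy analogy for the "generated images feeding future generators" loop.
# Each generation produces new "works" only by blending pairs of existing
# ones; the spread (diversity) of the population shrinks rapidly.
import random
import statistics

random.seed(42)

# "Human-made" works, represented as points spread over some stylistic range.
population = [random.uniform(0.0, 100.0) for _ in range(1000)]

for generation in range(1, 9):
    population = [
        (random.choice(population) + random.choice(population)) / 2  # remix two existing works
        for _ in range(len(population))
    ]
    print(f"generation {generation}: diversity (stdev) = {statistics.stdev(population):.2f}")
```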

      Tbh I have yet to see a good AI-generated image. The intangible qualities that make a visual artwork really stand out are uniformly missing. I'm talking about visual unity, notan, rhythm, compositional balance... AI generators are "clueless" about this stuff, probably because it's extremely hard to categorize image data in this way, if not entirely impossible.

        xyz The problem is: image generators do not remix ideas. They remix image data, i.e. literal pixels.

        For all intents and purposes, same difference.

        xyz I think it's more likely we'll reach a "nothing new" state if the world of images gets overpopulated with generated imagery, as generators can't really create anything truly new. They just chew existing stuff and spit out more of it. And if the practice of generating images from already generated images (possibly cascading into infinity 🙂) becomes widespread, we'll converge on hyperproduction of same-y mediocre images that nobody would give a dime about.

        Great, so human artists might still have a chance and a slight niche then.

        xyz Tbh I have yet to see a good AI-generated image.

        operative term being 'yet'.

        It's an interesting discussion to have. And it speaks to something fundamental about human culture.

        AI-generated art is, by its nature, derivative. There is no modern AI that is not derivative; we still haven't developed anything comparable to volition. We can feed a complex computer system data and have it produce randomly generated behaviors based on that data, but it is still just a reflection. AI only holds up a mirror to its creators.

        But this is also true to some extent for human culture. The ability to record information in its myriad forms is foundational to cultural progression. In many ways, the true benefit of digital technology is improved efficiency in the storage, organization, and retrieval of information and knowledge. This is a project humanity has been striving towards for thousands of years.

        The acquisition of information, and by extension inspiration, is easier today than ever before. The average person can now gain access to more art, literature, film, and other creative work than ever before in history, and can do so with a keystroke. Future artists can produce work while being inspired by thousands, even hundreds of thousands, of other artists' works from the past, and never have to leave the house to view them. In this sense, even original works by modern human artists are derivative, in much the same way that AI-generated art is.

        Value and ownership of something as ephemeral as art has always been a sticky issue. The sad problem is that artists often have to fight to be compensated for their works. Simply producing art is never enough.

        And the greatest reappropriate, refactor and refurbish. RRR, Rolling Rich in Resources.

        It seems to me that the ethics of using learning software are nothing more than an intellectual exercise. It is already cheaper to use software to generate reports, presentations, and artwork than to do it "manually". Most of our planet runs on greed capitalism, which means that the solution that makes the most money will eventually be accepted. John Henry does not win -- the steam drill does. (And we're seeing very primitive examples of "AI" so far.)

        The introduction of the power loom in the early industrial revolution turned weaving from a cottage industry employing most of the rural communities in England into a sweatshop industry employing orders of magnitude fewer people. It ruined quite a few lives and arguably made many more worse off. People hated it even at the time, but their anger made little difference in the end.

        My advice to artists is to roll with the punches and start using the new software. If you don't, your competitor will.

        And, in the end, learning software is just a force multiplier. It does nothing on its own. A human still has to decide if the result is useful or not. (This is why making decisions based on AI is so dangerous -- by the time you know if the result was useful, it's probably too late.) So, there will still be jobs for artists, just not as many.

        Now, on a side note. This is the outcome programmers have been working at for most of a century. This is our nirvana -- the machine does the work, with minimal intervention. Programmers are lazy by nature; that's why we're willing to spend days coding a utility that will save a few hours of effort. Be careful what you wish for. 🙂

          The topic is quite provocative…

          My question is, have the descendants of the Arabs who invented our numerals been paid royalties yet? All human culture is profoundly secondary. All the plots in literature were already formulated in Ancient Greece, and ever since we have only repeated them in slightly different interpretations.

          duane Programmers are lazy by nature; that's why we're willing to spend days coding a utility that will save a few hours of effort.

          Right, but the days spent are spent by one person, while the hours saved accrue to many, resulting in a total gain of years.
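
          A back-of-the-envelope version of that claim; all the numbers below are made up purely to show the arithmetic.

```python
# Hypothetical figures: one programmer spends days building a utility that
# saves each of many users a few hours. The net gain comes out in years.
days_spent_building_tool = 3        # one programmer's up-front cost
hours_saved_per_user = 4            # time the utility saves each person who uses it
number_of_users = 5000

hours_spent = days_spent_building_tool * 8
hours_saved = hours_saved_per_user * number_of_users
net_hours = hours_saved - hours_spent

print(f"cost: {hours_spent} hours, saved: {hours_saved} hours")
print(f"net gain: {net_hours / (8 * 250):.1f} working years")  # 8 h/day, 250 working days/yr
```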

            Related to this, and something that many of us may run into without realising: people are using ChatGPT to farm points on Stack Overflow and similar sites by answering as many questions as possible. So there's a growing number of answers that no human has verified, just machine-generated answers that may or may not be correct.

              Kojack I am not shocked by that at all. I feel like something similar is happening on Steam, where weirdos are trying to farm award points too, with lots of stupid meme reposting in the reviews, among other things. That's more bot behaviour than ChatGPT, but it will probably infest that store as well.

                Erich_L I guess what I'm getting at is this: what if these models were not built using images scraped from the web, but were instead built by robots walking around taking pictures with a camera? What if those robots could also sit down and look at pictures online?

                Well, these robots of yours, whatever they are 🙂, can take as many pictures as they like, on and off line, but that's not the real source of their "smarts", is it? The pixel data needs to be tagged in all sorts of ways. That's where the juice is, and it is very hard to automate. You'll always need humans to do it. Now you'll say: but AI can teach itself. Sure, but it can only operate inside the constraints of what it has already been "taught" via human-tagged data. Every time a new concept needs to be introduced into an AI system, you'll need humans to format the data; otherwise the system will just grind itself into derivative stagnation. That's good if you look at it as an automation tool, but bad if you want to hype it as a milestone towards AGI.

                To put it poetically: generative AI is vampiric in nature. It needs a constant influx of fresh human creativity to appear to be "getting smarter", and consequently to stay relevant as a cultural phenomenon. That's why it will never be able to beat humans at the game of concept invention. The real strength of AI is no different from the strength of regular digital automation: it can munch and burp out a lot of derivative, boring stuff in a small amount of time. This, as always with tech automation, means that people who do a lot of derivative, boring stuff get a strong incentive to stop doing it. Be it "art" or number crunching.

                  duane Programmers are lazy by nature; that's why we're willing to spend days coding a utility that will save a few hours of effort.