There's using reference, and then there's direct copying, and what that artist you hired did was definitely copying; ironically, it's very similar to what the AI generators are doing. As for photo reference in 3D modelling? I'm less bothered by that, because it's something real you're just working up into a 3D model, and loads of people do it. It amazes me, though, how acceptable this kind of lazy, scammy behaviour has become in these industries. No wonder, then, that most of what we see released now is terrible.

Megalomaniak The problem is: image generators don't remix ideas. They remix image data, i.e. literal pixels, though they do it in such a way that it looks like they're juggling ideas (like humans do). I think it's more likely we'll reach a "nothing new" state if the world of images gets overpopulated with generated imagery, since generators can't really create anything truly new. They just chew up existing stuff and spit out more of it. And if the practice of generating images from already generated images (possibly cascading into infinity 🙂) becomes widespread, we'll converge on hyperproduction of same-y mediocre images that nobody would give a dime for.

Tbh I have yet to see a good AI generated image. The intangible qualities that make a visual artwork really stand out are uniformly missing. I'm talking about visual unity, notan, rhythm, compositional balance... AI generators are "clueless" about this stuff, probably because it's extremely hard to categorize image data in this way, if not entirely impossible.

    xyz The problem is: image generators don't remix ideas. They remix image data, i.e. literal pixels.

    For all intents and purposes, same difference.

    xyz I think it's more likely we'll reach a "nothing new" state if the world of images gets overpopulated with generated imagery, since generators can't really create anything truly new. They just chew up existing stuff and spit out more of it. And if the practice of generating images from already generated images (possibly cascading into infinity 🙂) becomes widespread, we'll converge on hyperproduction of same-y mediocre images that nobody would give a dime for.

    Great, so human artists might still have a chance and a slight niche then.

    xyz Tbh I have yet to see a good ai generated image.

    Operative term being 'yet'.

    It's an interesting discussion to have. And it speaks to something fundamental about human culture.

    AI-generated art is, by its nature, derivative. There is no modern AI that is not derivative. We still haven't developed anything comparable to volition. We can feed a complex computer system data and have it produce randomly generated behaviors based on that data, but it is still just a reflection. AI only holds up a mirror to its creators.

    But this is also true to some extent for human culture. The ability to record information in its myriad forms is foundational to cultural progression. In many ways, the true benefit of digital technology is improved efficiency in the storage, organization, and retrieval of information and knowledge. This is a project humanity has been striving towards for thousands of years.

    The acquisition of information, and by extension inspiration, is easier today than ever before. The average person can now access more art, literature, film, and other creative work than at any point in history, and can do so with a keystroke. Future artists can produce work while being inspired by thousands, even hundreds of thousands, of other artists' works from the past, without ever leaving the house to view them. In this sense, even original works by modern human artists are derivative, in much the same way that AI-generated art is.

    Value and ownership of something as ephemeral as art has always been a sticky issue. The sad problem is that artists often have to fight to be compensated for their works. Simply producing art is never enough.

    And the greatest reappropriate, refactor and refurbish. RRR, Rolling Rich in Resources.

    It seems to me that the ethics of using learning software are nothing more than an intellectual exercise. It is already cheaper to use software to generate reports, presentations, and artwork than to do it "manually". Most of our planet runs on greed capitalism, which means that the solution that makes the most money will eventually be accepted. John Henry does not win -- the steam drill does. (And we're seeing very primitive examples of "AI" so far.)

    The introduction of the power loom in the early industrial revolution turned weaving from a cottage industry employing most of the rural communities in England into a sweatshop industry employing orders of magnitude fewer people. It ruined quite a few lives and arguably made many more worse off. People hated it even at the time, but their anger made little difference in the end.

    My advice to artists is to roll with the punch and start using the new software. If you don't, your competitor will.

    And, in the end, learning software is just a force multiplier. It does nothing on its own. A human still has to decide if the result is useful or not. (This is why making decisions based on AI is so dangerous -- by the time you know if the result was useful, it's probably too late.) So, there will still be jobs for artists, just not as many.

    Now, on a side note. This is the outcome programmers have been working at for most of a century. This is our nirvana -- the machine does the work, with minimal intervention. Programmers are lazy by nature; that's why we're willing to spend days coding a utility that will save a few hours of effort. Be careful what you wish for. 🙂

      The topic is quite provocative…

      My question is, have the descendants of the Arabs who invented our numerals been paid royalties yet? All human culture is profoundly derivative. All the plots in literature were already formulated in Ancient Greece, and they've only been repeated in slightly different interpretations ever since.

      duane Programmers are lazy by nature; that's why we're willing to spend days coding a utility that will save a few hours of effort.

      Right, but the spent days will go to one person, and the hours saved will go to many, resulting in a total gain of years.
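
The arithmetic behind that claim is easy to sketch. The numbers below are entirely hypothetical, just to show the shape of the trade-off:

```python
# Back-of-envelope for "days spent by one, hours saved by many".
# All numbers are made up for illustration.
days_spent_coding = 3
hours_saved_per_user = 2
users = 5000

hours_spent = days_spent_coding * 8          # assume 8-hour workdays
hours_saved = users * hours_saved_per_user   # spread across everyone
net_hours = hours_saved - hours_spent

# Roughly 2000 working hours in a year (8 h/day * 250 days).
net_years = net_hours / 2000
print(net_hours, round(net_years, 2))        # 9976 hours, ~5 years
```

Of course, as duane notes below, that's the best case: it only pays off if the utility actually gets reused.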

        Related to this, and something that many of us may run into without realising: people are using ChatGPT to farm points on stack overflow and similar sites by answering as many questions as possible. So there's a growing number of answers that are unverified by a human, just fake answers that may or may not be correct.

          Kojack I'm not shocked by that at all. I feel like something similar is happening on Steam, where weirdos are trying to farm award points too: lots of stupid meme reposting in the reviews, among other things. That's more bot behaviour than ChatGPT, but it will probably infest that store as well.

            Erich_L I guess what I'm getting at is: what if these models were not built using images scraped from the web? What if instead they were built by robots walking around taking pictures with a camera? What if those robots could also sit down and look at pictures online?

            Well these robots of yours, whatever they are 🙂, can take as many pictures as they like, on- and offline, but that's not the real source of their "smarts", is it? The pixel data needs to be tagged in all sorts of ways. That's where the juice is. This is very hard to automate; you'll always need humans to do it. Now you'll say: but AI can teach itself. Sure, but it can only operate inside the constraints of what it has already been "taught" via human-tagged data. Every time a new concept needs to be introduced into an AI system, you'll need humans to format the data. Otherwise the system will just grind itself into derivative stagnation. Which is good if you look at it as an automation tool, but bad if you want to hype it as a milestone towards AGI.

            To put it poetically: generative AI is vampiric in nature. It needs a constant influx of fresh human creativity to appear to be "getting smarter", and consequently to stay relevant as a cultural phenomenon. That's why it will never be able to beat humans at the game of concept invention. The real strength of AI is no different from the strength of regular digital automation: it can munch and burp out a lot of derivative, boring stuff in a short amount of time. This, as always with tech automation, means that people who do a lot of derivative, boring stuff will get a strong incentive to stop doing it. Be it "art" or number crunching.

              Megalomaniak Who knows, we might be chatting a GPT in here too.

              We should have a bot member. phpBB had (and maybe still has) that as an add-on. It's open source, so it shouldn't be that hard to port it.

              Personally I have to admit I'm all for it. When I had Wombo.art generate my sewage pipe piece, I started with my own crude drawing done in Paint, and a few iterations later I had a shaded image I was happy with. I think the trick to using such tools efficiently is to not rely on the text prompt alone. In game dev we often know what general shape or angle we want for an image, so you can save a lot of time by using the option to generate an image based on a pre-existing one. Previously I thought (as I assume many do) that it's all about the text prompt. I beg to differ: take a page out of the average artist's playbook and start with an image reference.
              @duane I like your take. I assume the answer is no, you don't/wouldn't feel any guilt at all, even if starting from a "protected" image.
              @xyz As far as I know, humans labeling data is largely already a thing of the past. "Otherwise the ai system will just grind itself into derivative stagnation." I have not yet experienced this even in the slightest. I think most of us, maybe besides our esteemed futurist cyber, are looking at these models in their capacity to replace human thought rather than to aid it.
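
The "start from a pre-existing image" trick Erich_L describes can be sketched in toy form. This is not any real generator's API, just an illustration of the usual design: instead of denoising from pure noise, an img2img pipeline starts from a partially noised copy of the reference image, with a strength parameter controlling how far from the reference the result may wander.

```python
import numpy as np

def img2img_start(init_image, strength, rng=None):
    """Toy sketch of image-to-image initialization.

    strength = 0.0 -> returns the reference unchanged (no freedom),
    strength = 1.0 -> pure noise (equivalent to text-to-image).
    A real pipeline would now run the remaining denoising steps,
    guided by the text prompt; here we only build the starting point.
    """
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(np.asarray(init_image).shape)
    return (1.0 - strength) * np.asarray(init_image) + strength * noise
```

In real tools this trade-off is usually exposed as a "strength" or "denoising" slider next to the init-image upload, which is why a crude Paint drawing is enough to pin down composition and angle.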

                xyz The pixel data needs to be tagged in all sorts of ways.

                Fortunately for the language models, humans are quite willing to do that for free, in our spare time -- as the existence of social media shows. Most of us don't even care how someone might be using our text.

                Tomcat Right, but the spent days will go to one person, and the hours saved will go to many, resulting in a total gain of years.

                That's the best case, but I never thought altruistically when I started banging together a utility, and most of them were forgotten before they ever got used again. If you've got a geeky manager, they'll excuse any delay when you show them a neat piece of code. 🙂

                Erich_L I assume that no, you don't/wouldn't feel any guilt at all even if starting with a "protected" image.

                I probably would feel guilty if I wasn't buffered from the original by the software, but I'd get over it. Anyone who is really worried about it could check this out.

                https://arstechnica.com/information-technology/2023/03/ethical-ai-art-generation-adobe-firefly-may-be-the-answer/

                Copyright doesn't work in the digital age. We've been trying to shoehorn it in for decades, but it's what programmers refer to as a kludge: it might be somewhat functional now, but you know it will break as soon as anything changes. The only reason the concept ever existed is the pre-industrial idea of patronage -- in this case, society going out of its way to encourage people to pay the author.

                It still exists because wealthy people bought lots of authors' work and want to continue to charge money for them. (*cough* mickey mouse) Make no mistake, the people with the power to make decisions don't care about struggling artists, except in the sense that a farmer cares about a potato plant. However, they will probably attempt to quash learning systems to protect their own portfolio.

                I'd normally say that such an effort is impossible -- the genie's out of the bottle, and it's not going back in -- but what if some hack puts together a legal "AI spotter". The ultimate copyright troll enforcer that tags millions of "offenders" every day. Then someone creates a framework of law that allows "AI judges" to rule on very limited cases of copyright infringement and redirect any attempts at payment to the copyright owner. (Sound familiar?) If you've read Melancholy Elephants, this is worse.

                I can think of much, much nastier things that this software could be (and probably is being) used for, but I don't want to give anyone nightmares tonight. 🙂 Anyway, it's more likely that the results of AI art will be declared original, since the money makers would be happy to dispense with the bothersome human artists and save a few bucks.

                @Erich_L I don't see how a language model could ever "replace" human thought. In essence it's just a database lookup. GPT, for example, can "say" or "code" only what humans have already said or coded. It can't, and never will be able to, implement a new algorithm from a textual description it sees for the first time. It doesn't do programming... or thinking. There's nothing even resembling conceptual thinking there. It's just an intricate statistical analysis of an enormous body of existing text, aggregated thanks to the internet and hordes of humans willingly typing all kinds of stuff into it. And as @duane pointed out, all of that text is implicitly tagged. That's the essence of what we currently call AI.

                Fueled by all the recent hype from companies who want to sell magical AI products to mass consumers, people have shown a strong tendency to project "ghost in the machine" entities onto what these software products output. It's a peculiar form of modern superstition; I call it "digital pareidolia". It's OK for common folk to think we're in some kind of B-grade sci-fi movie, but software engineers should know better.

                Erich_L As far as I know, humans labeling data is largely already a thing of the past

                I'd disagree here, at least in the case of pixel data for image generators. If you want AI to emulate humans, your best bet is to train it on human-tagged data. AI trained on AI-tagged data would only mimic itself, which is suboptimal if your goal is to incrementally improve the emulation.
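
The "AI trained on AI output only mimics itself" point can be illustrated with a tiny resampling experiment. This is a deliberately crude analogy, not a model of any real training pipeline: if each "generation" learns only from samples drawn from the previous generation's output, values can be lost but never invented, so diversity can only shrink.

```python
import numpy as np

def diversity_over_generations(data, n_generations, seed=0):
    """Each generation 'trains' on samples drawn with replacement from
    the previous generation's output. The unique values of the new
    sample are always a subset of the old ones, so the count of
    distinct values is non-increasing over generations."""
    rng = np.random.default_rng(seed)
    current = np.asarray(data)
    history = [len(np.unique(current))]
    for _ in range(n_generations):
        current = rng.choice(current, size=current.size, replace=True)
        history.append(len(np.unique(current)))
    return history
```

Real generative models add noise and generalization on top of this, but the subset effect is the intuition behind the "derivative stagnation" worry.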

                @duane I don't think we'll even need an AI spotter. By the mere power of hyperproduction, synthesized images will become boring and easily recognizable by anyone, losing all the allure they currently appear to have. Just like the lens flare effect did in the 90s 🙂

                I tried to add a ChatGPT bot to the forum, but it wasn't working. The developer of the extension said there was a bug he was looking into. But I like the idea; it should be working in a little while.