duane

  • Nov 10, 2023
  • Joined Jan 4, 2019
  • 40 best answers
  • I see a lot of straw men set up here. I don't remember anyone saying that LLMs have human intelligence. The question is, do they need to?

    And we all know intelligence can't create itself from simple components -- oh, wait, that happens in the womb on a pretty regular basis. Intelligence showed up on this planet as a result of the interaction of basic forces, and that was without an army of researchers pushing it along.

    To get back on topic though, the real question is whether or not people will make more money with AI-assisted games/game creation. You can argue about the morality of replacing jobs with AI, but it's all going to come down to the bucks. I suspect that those who embrace "AI" will outcompete those who don't, despite their lofty ideals. I don't know any gamers (who aren't also developers) who care much about how the game was created.

    • xyz There's one problem with hive minds - they are solipsistic and stagnant

      How many hive minds have you met?

      xyz robot fetishizers

      Really? ;-)

      I've never used an LLM yet, but I wouldn't have any problem with movie companies and game designers using them for actors and writers. How good was the acting in the last game you enjoyed (or for that matter, the last movie)? There aren't that many writers or actors making a decent living. If you want to worry about someone, worry about the huge number of professional drivers (and their support jobs) who could be ousted when self-driving cars improve a bit (and they will).

      The real niche for "AI" is in quests for role-playing games. I can't imagine ChatGPT doing any worse than the repetitive junk quests I find in most games. And anything would be an improvement on NPCs with ten canned responses as dialog. People love expansions for good games -- simple AI could generate them for pennies, since it's all remixing something that already exists.

      Edit: And, I'm not saying this is a good thing, but imagine how many big corps are drooling over the idea of using AI to produce junk video/games/books to do nothing but extend the copyrights on their "intellectual property". (Yeah, I'm not a fan of US copyright law.)

      • Genie without connecting each by hand

        You could assign them all to a "mouse" group and have a function connect the appropriate signals to every member of the group.
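        In case it helps, here's a rough GDScript sketch of that idea (the "mouse" group name, the `clicked` signal, and `_on_mouse_clicked` are all made up for illustration):

        ```gdscript
        # In each mouse's script -- join the group when the node enters the tree.
        func _ready() -> void:
            add_to_group("mouse")

        # In the script that needs the connections -- hook up every group
        # member in one pass instead of wiring each node by hand.
        func connect_mouse_signals() -> void:
            for mouse in get_tree().get_nodes_in_group("mouse"):
                mouse.clicked.connect(_on_mouse_clicked)

        func _on_mouse_clicked(mouse: Node) -> void:
            print("clicked: ", mouse.name)
        ```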

        • Have you tried running "C:\Users\Ryan\AppData\Local\Android\Sdk/build-tools/apksigner" from the shell?

        • Tomcat

          For my money, that deserves its own topic.

          Edit: Porting half a million lines of C# code from one engine to another in two days sounds amazing. I'm still struggling to port 7K lines from Godot to Godot. 🐢

          • retroshark

            I hope aiXplain understands what you're saying, because I don't. Feel free to tell me it's none of my beeswax, but was someone concerned about copyright, or about showing possibly sensitive information in the video?

            No offense intended, but the project just looks like a very specific terminal emulator to me. I can't imagine why it would worry anyone.

            • award important fixes aren't making it in as quickly as they could

              Then I'd say that your idea of important and the core developers' don't jibe. In fact, if people are moving on with their lives and leaving PRs behind, they must not think the PR is very important either.

              The normal way to fix this would be to get a lot of volunteers to do the bug catching, formal testing, responding to new issues, and documentation that bogs down development, so the project leaders could focus on checking pull requests. Of course since this is all volunteer work, that method assumes that the project leaders enjoy checking other people's work and won't be distracted by more interesting work of their own. 🙂

              Or, if you're really ambitious, you could fork the whole tree and approve pull requests yourself. Document the whole process heavily so that the project leaders can follow it easily. Then get a lot of people to test it, to show that the merged changes really do work. You could call it the warp-speed project, or some such.

              • xyz

                Yes, it was using the generic word, node, in the first case, and the specific term, Node, in the second, which no one should be expected to spot. I didn't see it myself until I read it again.

              • RPGguy In what ways can one set parameters to an autoload?

                Actually, I meant to say property. Just make the settings variables in the autoload script, which automatically makes them properties of its class. Then have any scripts that use fonts check those properties when they're created or updated.

                If you don't want to have the other scripts check settings repeatedly, use signals to call a method in any class that needs to change, as Toxe says.
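                Roughly, something like this (the autoload name `Settings`, the `ui_font` property, and the `font_changed` signal are all hypothetical):

                ```gdscript
                # settings.gd -- registered as an autoload named "Settings".
                extends Node

                signal font_changed(new_font)

                # Assigning to the property automatically notifies listeners.
                var ui_font: Font:
                    set(value):
                        ui_font = value
                        font_changed.emit(ui_font)
                ```

                ```gdscript
                # In any Label script that should track the setting.
                func _ready() -> void:
                    add_theme_font_override("font", Settings.ui_font)
                    Settings.font_changed.connect(
                        func(f): add_theme_font_override("font", f))
                ```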

              • While playing with AStarGrid2D, I made this gif of two hundred mobs using A* to solve a maze.
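                For anyone curious, the basic setup is pretty small (the grid dimensions and cell size here are made up; the wall positions would come from wherever your maze data lives):

                ```gdscript
                var grid := AStarGrid2D.new()

                func _ready() -> void:
                    grid.region = Rect2i(0, 0, 64, 64)  # grid size in cells
                    grid.cell_size = Vector2i(16, 16)   # pixels per cell
                    grid.diagonal_mode = AStarGrid2D.DIAGONAL_MODE_NEVER
                    grid.update()  # must be called after configuring
                    # Mark maze walls as impassable, e.g.:
                    # grid.set_point_solid(Vector2i(3, 5), true)
                    # Then each mob just asks for its own path:
                    var path := grid.get_id_path(Vector2i(0, 0), Vector2i(63, 63))
                    print(path.size(), " cells in path")
                ```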

              • I'd just set a parameter of an autoload, and have the label object set its font based on that.

                • xyz

                  Since we disagree on what knowledge is, we're not likely to agree on much in this discussion. I know that a car is a vehicle. That means that somewhere in my brain there are ideas of car and vehicle which are related by the activity of neurons. That sounds a lot like what a large language model does, and it's the simplest definition of knowledge. It doesn't seem to me that empirical data is necessary.

                  But that's all highly academic. Most people can only interact with ChatGPT by asking it questions, so its answers are the only metric to judge its intelligence by. I know the Turing test has fallen out of favor, but it's a very practical one that doesn't rely on trying to map every change in a giant piece of software. The value of an answer to the questioner is the only thing that matters.

                  If I ask a human what a real number is, and she paraphrases what she was taught about them (possibly based eventually on Principia), then I ask ChatGPT the same question and it paraphrases a similar body of knowledge, and they both produce similar results, is there any real difference? Not for practical purposes. It wouldn't surprise me if the software gave better results most of the time.

                  So suggesting that a large language model's answers shouldn't be trusted because it can't think seems really dubious. ChatGPT can produce nonsense (I can too, sometimes 🙂 ), but it's not the only model out there. As I've said before, the software doesn't have to think, it knows the answer already.

                • Vercte The window is transparent for the first half second as it fades in (when the project starts) but quickly becomes opaque

                  This is probably a stupid question, but does your window manager's compositor normally fade windows in when opening them?

                • xyz My point was that LLMs are not a reliable way to gather information or knowledge because they don't deal with knowledge. They deal exclusively with lexicographical tokens.

                  I tend to disagree with that statement. I'm not a neuroscientist, and I can't claim that I understand how the human brain works, but I think the difference between dealing with knowledge and shuffling tokens is questionable. If you get the same result, I'm not sure the difference is meaningful. (And before anyone says that we don't get the same results, consider asking all the people surfing the net, not just the smart ones.)

                  xyz Godot's official documentation is a book.

                  Now you're getting into the definition of a book. I usually think of a book as a printed work that I own.

                  I've never read a book on Linux, Python, or Lua, even though I've worked with the software for over twenty years. I've used online "books" as references, of course, but most of the time, when I want to learn something, I search the net for the answer first. I often find answers from real people that are wildly incorrect, but I can skip them and find better ones in minutes. Even a book in my hand would take a lot longer to pore over, and the knowledge I get would be whatever the author prioritized. Maybe that's good, maybe not.

                  The answer in the original post seems better than I could construct. I just didn't like the question.

                  • Tomcat You've just described the education of the vast majority of the Earth's inhabitants.

                    ...throughout the whole of history. Even extremely formal disciplines have a hard time verifying references. It's not cost-effective for most of us to attempt it.

                    A real artificial intelligence would have responded immediately, "That's a stupid question." It didn't specify how much automation the two humans had at their disposal, what resources they had available, or how much information they had on the process. (Kind of like asking "how long does it take to build an FPS?") More importantly, why would two people want to build a vehicle that would require a crew of hundreds (at least) to operate?

                    On the other hand, if you'd asked how many man-years were required to build one, you'd probably get a more interesting answer. I think the real problem with large language models is that people are still asking the questions.

                  • I did find a way to speed it up about 20%. If I run it in compatibility mode and limit the number of colors, Godot will use OpenGL sprite batching, which definitely helps. I'm not sure if Vulkan has anything like that, but Forward+ doesn't seem to batch at all for me.

                    bun-color-batch.zip
                    99kB

                    I can get 6000 bunnies on my laptop and 2000 on my phone this way. Unfortunately, it will probably slow things down on better hardware.
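
                    If anyone wants to try the same thing, switching renderers is just a project setting (these are the Godot 4 keys; the mobile override is only needed if you export to phones):

                    ```ini
                    # project.godot -- select the Compatibility (OpenGL) renderer.
                    [rendering]
                    renderer/rendering_method="gl_compatibility"
                    renderer/rendering_method.mobile="gl_compatibility"
                    ```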