orangelex44
Oct 11, 2012

Definition of orange:

Any of a group of colors that are between red and yellow in hue. Middle English, from Anglo-French, from Old Occitan, from Arabic, from Persian, from Sanskrit.

Definition of lex:

Law. Latin.
I think it's an impossible dream, and that 4X games need to realize that. Just have the AI play a different game from the player, and for gently caress's sake stop pretending that the AI can "win" in the classical sense. The player can lose for sure, but the AI shouldn't be bound to the same wincons.


Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.
I do not play a huge amount of multiplayer in these games, so I may be completely off-base, but I feel like the very nature of these games is that a player will eventually snowball past everyone else, because every single system is designed to turn small early advantages into bigger long-term ones. Like, at equal skill levels people might be able to keep up longer, but that's just delaying the inevitable snowball.
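(A toy illustration of that compounding loop, with all numbers invented rather than taken from any real 4X: each empire's growth rate improves with its share of total power, so a 5% starting edge turns into a runaway lead.)

code:
# Toy model of 4X snowballing (all numbers invented): each empire's growth
# rate improves with its share of total power, so a small early advantage
# compounds into a runaway lead.

def simulate(turns, start_a=105.0, start_b=100.0):
    a, b = start_a, start_b          # A starts with a 5% edge
    history = [(a, b)]
    for _ in range(turns):
        share_a = a / (a + b)
        # base 3% growth per turn, plus up to 4% more for relative strength
        a *= 1 + 0.03 + 0.04 * (share_a - 0.5)
        b *= 1 + 0.03 + 0.04 * ((1 - share_a) - 0.5)
        history.append((a, b))
    return history

if __name__ == "__main__":
    for turn, (a, b) in enumerate(simulate(200)):
        if turn % 50 == 0:
            print(f"turn {turn:3d}: A={a:12.1f}  B={b:12.1f}  A/B={a / b:.2f}")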

A Wizard of Goatse
Dec 14, 2014

Clarste posted:

I do not play a huge amount of multiplayer in these games, so I may be completely off-base, but I feel like the very nature of these games is that a player will eventually snowball past everyone else, because every single system is designed to turn small early advantages into bigger long-term ones. Like, at equal skill levels people might be able to keep up longer, but that's just delaying the inevitable snowball.

Usually, yeah. It's a mild annoyance in singleplayer and way worse when you're asking multiple people to either slog through hours of a no-win situation or give up 10% of the way through your game, but barely anybody bothers to change the formula because the losers should git gud or tab over to Discord to work out a deal to gang up on the winner

A Wizard of Goatse fucked around with this message at 03:34 on Apr 9, 2023

Lowen
Mar 16, 2007

Adorable.

deep dish peat moss posted:

It's more about interpretation of data. It can interpret data in ways that humans never could and spot patterns in it that humans wouldn't be able to. So while the responses might be synthesized based on human-written text, it's the connection between data points and response that separates its "intelligence" from the data it was trained on. Doctors are going to be one of the first jobs to be fully replaced by AI because it is literally impossible for a doctor to keep up with the speed of medical research.

Are you for real? Do you seriously believe this? Why? What did you hear or read that was so convincing that you looked at the current assortment of AI bullshit around right now and thought: "sure, this is going to replace doctors, they're going to be one of the first jobs to go"? What is the source of this incredible nonsense? I honestly want to know.

quote:

A ML-driven Civ AI would absolutely crush 99.99% of players right now if that was its goal, because it could correlate the data from across millions of games to devise an unbeatable strategy and prepare itself for every counter that players throw at it.

What are you basing this statement on? Has anyone made a good machine learning AI player for any Civ game that uses player game replay data, or done anything similar? If so, did it actually crush 99.99% of players?

quote:

That kind of AI wouldn't be fun to play against for the vast majority of players. That's not because it's a "tactical genius" or whatever, but just because this theoretical AI has watched every Civ game ever played. Some players would be able to beat it, sure, but over time the tactics they used would stop working.

There isn't any kind of ML that works this way at all. No one is going to get good results by just feeding something a large volume of training data and doing nothing more. Machine learning developers spend a lot of time and effort tailoring training data for a reason. Simple stuff like getting ordinary people to label images via Recaptchas is only part of that. When it eventually works it might feel like magic to the end user who doesn't know how it's done, but of course it isn't: there's a lot of human work, ingenuity and knowledge involved.

The typical 4X game has a huge state space (compared to most studied games) and a random, asymmetric map that is completely unexplored and invisible except for what's near you. It's played by one human player who plays in a distinct way compared to the AI players, and none of these players really has a concrete goal you can assume they're trying to optimize for, like winning. The human players all have wildly different skill levels, they're mainly just trying to have fun (whatever that means for them), and they'll probably change their strategies from game to game. They will often deliberately not play strategies that they know are the most effective, because those are too effective to be fun, or because they're how they won last game - and then they'll win anyway, even though they played in a deliberately suboptimal way. The AI players are designed to provide a fun challenge to the humans; they also aren't trying to win, except insofar as it makes the game more fun for the player if they try a little bit. You could just look at multiplayer matches instead, but then you run into some of the same problems plus some new ones, and there are a lot fewer examples to use compared to single player.

There's no number of games like that you can train an ML model on that is going to teach it to play the game well.

MAYBE if someone put a lot of work into isolating some small, specific task, and really carefully tailored the training data for that, then they might get something useful, like an AI that can manage to do an OK job moving an army in a small area with the goal of capturing a city. But someone needs to do the work of isolating tasks and grading examples. I'm guessing it would be faster and easier to make a conventional AI that does the same thing, and if the game changes a little it's easier to tweak that.
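(For comparison, a minimal sketch of the conventional, hand-scripted version of that isolated task - the grid, the unit and city types, and the rules are all hypothetical rather than from any actual game: each unit greedily steps toward the target city and attacks once adjacent.)

code:
# Minimal sketch of a conventional "march on that city" AI on a square grid.
# Unit, City and the movement/attack rules are all hypothetical.
from dataclasses import dataclass

@dataclass
class Unit:
    x: int
    y: int
    attack: int = 10

@dataclass
class City:
    x: int
    y: int
    hp: int = 100

def step_toward(unit, city):
    """Greedy movement: close the larger axis gap by one tile per turn."""
    dx, dy = city.x - unit.x, city.y - unit.y
    if abs(dx) >= abs(dy) and dx != 0:
        unit.x += 1 if dx > 0 else -1
    elif dy != 0:
        unit.y += 1 if dy > 0 else -1

def take_turn(army, city):
    for unit in army:
        if abs(city.x - unit.x) + abs(city.y - unit.y) <= 1:
            city.hp -= unit.attack       # adjacent: attack instead of moving
        else:
            step_toward(unit, city)

if __name__ == "__main__":
    army, city = [Unit(0, 0), Unit(1, 0)], City(5, 3)
    for turn in range(20):
        take_turn(army, city)
        if city.hp <= 0:
            print(f"city captured on turn {turn}")
            break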

Good AI of any kind, even ML AI, doesn't really play nice with computer game development, because unlike traditional games with stable rules, computer game rules change often in an iterative, experimental process. Game mechanics are constantly being added, removed, or tweaked, and balance values are constantly being changed around. ML for computer games is something ML researchers might do as a flex, and only with games that meet a rare set of criteria (stable rules, preset, symmetric maps, lots of good data from competitive, 1v1 matches... So basically, just Starcraft).

I think developers as a group could do a better job of making conventional AI players that are fun to play against, but that's only because I've seen a lot of mixed quality in 4X AI - a few games are actually good, but most are bad. The fact that so many fall short of what I've actually seen done by small, modest teams makes me think that the main problem holding things back isn't any sort of technical limitation or lack of funding. If Firaxis (as an example) thought that the quality of their Civ 6 AI was unacceptable, then they could have allocated resources to improve it by now. They haven't done that because it's working well enough according to their standards. And I think they kind of have a point. My standards are absurdly high due to the unusual amount of time I've spent playing these games, going back to Civ2 when I was a kid. Most players just won't have anything like that number of hours. Even so, Steam says I spent just short of 500 hours playing Civ6, mostly single player, and it's not as if I was bored and swearing at how bad the AI was the entire time - just for the last 100 hours or so :).

nrook
Jun 25, 2009

Just let yourself become a worthless person!

Megazver posted:

Not really a 4X game, though. I respect the design, but I didn't enjoy it. But Old World also had solid AI. Soren Johnson designs and balances all of his games by making MP versions first, which seems to be the trick.

I think OTC qualifies as a 4X game, though admittedly the first X isn’t really there in multiplayer. But playing it in a multiplayer environment with my friends felt similar to playing Civ V with my friends, despite the very different mechanics. Elements like evaluating the value of territory, balancing internal development and external attacks, predicting and taking advantage of other people’s play, and making decisions about when to engage in direct conflict are common to both OTC and more traditional games in the genre. It certainly felt more like a 4X game than it did anything else.

There’s no tactical war, but let’s be honest— we can all think of 4X games that would have been improved by an abstract war system.

Super Jay Mann
Nov 6, 2008

If you want to see how machine learning algorithms perform when tasked with playing a complex single-player game with consistent mechanics and movement rules but random maps and asymmetric and highly chaotic entity interactions, you should read up on what happened when competing teams were tasked with writing an AI to play Nethack.

Spoiler: It didn't go so well for the ML folks

Bremen
Jul 20, 2006

Our God..... is an awesome God

Super Jay Mann posted:

If you want to see how machine learning algorithms perform when tasked with playing a complex single-player game with consistent mechanics and movement rules but random maps and asymmetric and highly chaotic entity interactions, you should read up on what happened when competing teams were tasked with writing an AI to play Nethack.

Spoiler: It didn't go so well for the ML folks

To be fair, I think experienced Nethack players forget how hard a task it is to train a human being to play Nethack. Its mechanics are a pile of spaghetti with thousands of unpredictable bits added over a long dev history.

I do think machine learning has potentially large impacts in gaming - here's a video where someone combined voice synthesis and ChatGPT to get the NPCs in Skyrim to conduct a conversation in (not exactly) real time. But 4X AI is not going to be magically solved so easily.

Bremen fucked around with this message at 16:34 on Apr 9, 2023

Mayveena
Dec 27, 2006

People keep vandalizing my ID photo; I've lodged a complaint with HR
Somebody read it and tell us what it says

https://www.economist.com/business/2023/04/05/how-ai-could-disrupt-video-gaming

Orange Devil
Oct 1, 2010

Wullie's reign cannae smother the flames o' equality!

orangelex44 posted:

I think it's an impossible dream, and that 4X games need to realize that. Just have the AI play a different game from the player, and for gently caress's sake stop pretending that the AI can "win" in the classical sense. The player can lose for sure, but the AI shouldn't be bound to the same wincons.

This is why Colonization should have been the game emulated rather than Civilization.

SIGSEGV
Nov 4, 2010



It quotes some a16z people without mocking their boss for buying into Theranos and calling for the end of the scientific method, which is already a good sign. It's about AI generation of assets, replacing composers with AI, replacing object artists with AI, replacing the NPC idle chatter writers with AI, and how it will all make everything much easier and faster to make and all that.

It's surprisingly not bad for The Economist: they mention that workers fear it may take their jobs, that there is union action against it, and that there's also some copyright legal mess inbound, and it doesn't say to send in the marines to shoot those past loving losers.

Lowen
Mar 16, 2007

Adorable.


This is getting off topic for the 4X thread, but eh.

The premise hinges on the idea that AI can be improved so that it generates quality assets for video games, or can at least help in some way. This is supposed to change the video games industry more than any other entertainment industry.
There's nothing I've seen in this article or elsewhere that leads me to believe that AI can create quality assets.
I've seen lots of different cool stuff made using machine learning, like AI Dungeon, funny deepfakes, automatic rotoscoping of videos based on a single hand-drawn style prompt keyframe, and all sorts of other stuff. None of that translates to making quality video game assets.

For technical claims like this I would like to see some support from sources in the field of ML AI, game asset creation, or both. There is no such person quoted in this article, and I think anyone can guess the reason.

The article, paraphrased:

"Games need a lot of content so: AI content generation will change the games industry more than anything else."

...Then there's a paragraph that just lists companies that have added features to their software that they claim use AI in some way...
...And another paragraph just barfing out "company y has added feature z that is vaguely AI related in some way."...

"some unnamed executive predicts that small firms will invent new genres only possible with AI tech"

"AAA studios might have an advantage with big marketing budgets and use AI to generate several high quality games per year rather than just banking on one hit"

"Large publishers can work around copyright concerns in training data by using their large reserves of past assets that they own"

The final paragraph is pretty well described by SIGSEGV re: unions.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

Lowen posted:

Machine learning developers spend a lot of time and effort tailoring training data for a reason. Simple stuff like getting ordinary people to label images via Recaptchas is only part of that. When it eventually works it might feel like magic to the end user who doesn't know how it's done, but of course it isn't: there's a lot of human work, ingenuity and knowledge involved.
[...]
Good AI of any kind, even ML AI, doesn't really play nice with computer game development, because unlike traditional games with stable rules, computer game rules change often in an iterative, experimental process. Game mechanics are constantly being added, removed, or tweaked, and balance values are constantly being changed around. ML for computer games is something ML researchers might do as a flex, and only with games that meet a rare set of criteria (stable rules, preset, symmetric maps, lots of good data from competitive, 1v1 matches... So basically, just Starcraft).

Newer models focus more on building a good alignment and feedback system for their human element. They don't do a whole lot of tailoring of the training data - they just try to get as much of it as possible. To the extent data is tailored for a lot of these systems, it's mostly identifying and removing objectionable content that you don't want the model to know about or use.

We're also getting much better at developing few-shot systems that can extrapolate from a very limited dataset based on their previously developed understanding, which seems like it would actually be a pretty good fit for the sort of constant tweaking many games go through.
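(A purely illustrative example of what few-shot means in practice: rather than retraining anything, you hand the model a handful of examples of the current patch's tweaked rule and let it extrapolate. The combat examples below are invented, and actually sending the prompt to a model is left out.)

code:
# Illustrative few-shot prompt: teach a hypothetical balance tweak by example
# instead of retraining. How the prompt gets sent to a model is left abstract.

EXAMPLES = [
    ("Warrior vs Warrior on flat ground", "even odds"),
    ("Warrior vs Warrior, defender on a hill (+3 defense this patch)", "defender favored"),
    ("Archer vs Warrior, defender on a hill (+3 defense this patch)", "defender strongly favored"),
]

def build_fewshot_prompt(new_case):
    lines = ["Predict the likely outcome under the current patch rules.", ""]
    for situation, outcome in EXAMPLES:
        lines += [f"Situation: {situation}", f"Outcome: {outcome}", ""]
    lines += [f"Situation: {new_case}", "Outcome:"]
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_fewshot_prompt("Spearman vs Warrior, defender on a hill (+3 defense this patch)"))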

The problem is, as you say, that for the foreseeable future it's only going to be something researchers make as a flex (again, see CORSICA, which I keep bringing up and no one has even seemed to acknowledge yet), and for that the game should be as simple as possible in the areas where the AI improvements they want to show off aren't particularly relevant, and notable enough that people give a poo poo. Jeopardy, Chess, Diplomacy, Go - those have been done because they allow you to highlight a specific set of abilities in a form a lot of people are familiar with, where they can understand why they should be impressed. There's no chance of a game company ever investing the sort of resources it would actually take to accomplish a project like that.

No one in research is gonna give a poo poo about researchers developing any specific 4x AI, so the earliest we're going to see anything from that direction is an AI that is given a baseline understanding of how to play games *in general* to some extent, which can then be few-shot specialized by letting it play against itself for whatever games they want to throw at it. And there's definitely folks working on that! But it's not happening any time soon.

Super Jay Mann posted:

If you want to see how machine learning algorithms perform when tasked with playing a complex single-player game with consistent mechanics and movement rules but random maps and asymmetric and highly chaotic entity interactions, you should read up on what happened when competing teams were tasked with writing an AI to play Nethack.

Spoiler: It didn't go so well for the ML folks

This seems more like a recruiting event to snatch up anyone with promising ideas than a serious attempt to build something that could play Nethack. Considering it was a short-timeline, for-fun, hobby-side-thing event, and they acknowledge they misaligned the competition goal with what they were hoping to accomplish, I honestly think it turned out pretty well?

GlyphGryph fucked around with this message at 23:44 on Apr 9, 2023

Lowen
Mar 16, 2007

Adorable.



What's CORSICA specifically? It's not a very searchable term.

GlyphGryph posted:

No one in research is gonna give a poo poo about researchers developing any specific 4x AI, so the earliest we're going to see anything from that direction is an AI that is given a baseline understanding of how to play games *in general* to some extent, which can then be few-shot specialized by letting it play against itself for whatever games they want to throw at it. And there's definitely folks working on that! But it's not happening any time soon.

I don't think it's wise to believe a technology is possible until it's demonstrated; there have been too many cases of hype or outright fraud. What you describe is different from any machine learning technology application I've actually seen demonstrated.

LordSloth
Mar 7, 2008

Disgruntled (IT) Employee
The solution to the AI problem: make a dumber player. Bing Bong, so simple.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

GlyphGryph posted:

and they acknowledge they misaligned the competition goal with what they were hoping to accomplish, I honestly think it turned out pretty well?

Why does it matter that the competition goal doesn't match what the organisers were really hoping for? All the teams knew what the actual, measured goal was, and the goal in question was actually much easier for a NN-based AI to figure out than what an actual human would consider to be "best play". Even with that advantage, NNs still got absolutely crushed by traditional AI, and even the best "NN-based" entries actually mostly used a traditional hand-written AI and only delegated to a NN in specific scenarios.

That doesn't seem like it turned out "pretty well" at all if you think that NNs are some sort of magic crutch for producing good AI with way less effort than doing it by hand!

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

Lowen posted:

What's CORSICA specifically? It's not a very searchable term.

The AI that plays Diplomacy.

Jabor posted:

That doesn't seem like it turned out "pretty well" at all if you think that NNs are some sort of magic crutch for producing good AI with way less effort than doing it by hand!

Is that what people think? AI is hard, though, including neural nets.

Super Jay Mann
Nov 6, 2008

Jabor posted:

Why does it matter that the competition goal doesn't match what the organisers were really hoping for? All the teams knew what the actual, measured goal was, and the goal in question was actually much easier for a NN-based AI to figure out than what an actual human would consider to be "best play". Even with that advantage, NNs still got absolutely crushed by traditional AI, and even the best "NN-based" entries actually mostly used a traditional hand-written AI and only delegated to a NN in specific scenarios.

That doesn't seem like it turned out "pretty well" at all if you think that NNs are some sort of magic crutch for producing good AI with way less effort than doing it by hand!

Yeah, I should've been specific about the fact that the most damning thing about the Nethack competition isn't that the ML algorithms were woefully inadequate compared to humans, it's that they were woefully inadequate even compared to just writing a bot to play the game like people have been doing for decades.

Even in Chess, one of the most famous and most favorable demonstrations of the power of neural net AI, powerful chess engines still outperform those neural net AIs by a long shot.

GlyphGryph posted:

Is that what people think? AI is hard, though, including neural nets.

The fundamental philosophy underpinning all ML research is that these AIs will eventually be able to solve any problem so long as you throw enough data at it. No need for us to actually design algorithms to solve specific problems, the computer will figure that stuff out for us and we can just sit back and watch it all happen.

That sounds a hell of a lot like a “magic crutch” to me.

Super Jay Mann fucked around with this message at 01:01 on Apr 10, 2023

nrook
Jun 25, 2009

Just let yourself become a worthless person!

GlyphGryph posted:

The AI that plays Diplomacy.

Could you provide a link describing the Diplomacy AI named CORSICA?

The Chad Jihad
Feb 24, 2007


Cicero, he means Cicero

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.
Yeah, sorry, dunno how I drifted on the name there considering I got the name right in my posts yesterday, hah. It's Cicero, not Corsica.

Lowen posted:

I don't think it's wise to believe a technology is possible until it's demonstrated; there have been too many cases of hype or outright fraud. What you describe is different from any machine learning technology application I've actually seen demonstrated.

It's being developed by the OpenAI folks who did ChatGPT and DALL-E. That doesn't mean they're actually going to pull it off, but they haven't actually been hyping it, and their other stuff is quite real, so I doubt the attempt is fraudulent. That doesn't mean they're guaranteed to succeed, of course, just that it's a problem they're working on and seem to believe they will eventually solve.

GPT-4 is already doing novel tasks with few-shot training samples, for what it's worth, and their goal has always been to get things working on the other three fronts as well.

GlyphGryph fucked around with this message at 01:51 on Apr 10, 2023

Lowen
Mar 16, 2007

Adorable.

GlyphGryph posted:

The AI that plays Diplomacy.

Oh so this then: https://ai.facebook.com/blog/cicero-ai-negotiates-persuades-and-cooperates-with-people/? I remember this was mentioned earlier.

It looks neat, but Diplomacy isn't much like a 4X game. It's a step in the right direction sure, but there is still a long way to go before we're in 4X complexity land.

GlyphGryph posted:

It's being developed by the OpenAI folks who did ChatGPT and DALL-E. That doesn't mean they're actually going to pull it off, but they haven't actually been hyping it, and their other stuff is quite real, so I doubt the attempt is fraudulent. That doesn't mean they're guaranteed to succeed, of course, just that it's a problem they're working on and seem to believe they will eventually solve.

I mean personally I expect this to eventually work as well, it's just that my expectation of "working" is you get an AI that can play about as well and in the same style as an average human. That's pretty great, but it's not anything like deep dish peat moss saying that it's possible someone could train an AI using every single game of Civ, and that this would result in an AI that developed an uncounterable strategy that would crush 9999/10000 players until the AI learned to beat those too.

Maybe I was taking them too seriously and it's a case of source your posts.

Fangz
Jul 5, 2007

Oh I see! This must be the Bad Opinion Zone!
Diplomacy is an extremely poor test for a strategy AI because Diplomacy is dominated by social factors. Cicero can get pretty far by just being more polite than most humans in its auto-generated chat dialogue. The action space is also many orders of magnitude less complicated than a 4x game.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.
Ah, yeah, I mean, I'm not gonna predict how well it will eventually do. My main point was that no one capable of doing so was ever likely to specifically build a proper 4X-trained learning AI, so that sort of generalist game-playing AI being developed and then far more cheaply specialized and shoved into a 4X is the earliest we'd hypothetically see it.

I do tend to think it would do quite well, if and when it happens. Not expert-level play, but I don't think most 4X players are particularly good at the games they play (it's hard for humans to get good, considering the problem space and the fact that most have only ever played against, well... current 4X AI or similarly unskilled players), so crushing your average player is actually pretty likely? But that's pure speculation on my part.

sebmojo
Oct 23, 2010


Legit Cyberpunk

Lowen posted:

Oh so this then: https://ai.facebook.com/blog/cicero-ai-negotiates-persuades-and-cooperates-with-people/? I remember this was mentioned earlier.

It looks neat, but Diplomacy isn't much like a 4X game. It's a step in the right direction sure, but there is still a long way to go before we're in 4X complexity land.

I mean personally I expect this to eventually work as well, it's just that my expectation of "working" is you get an AI that can play about as well and in the same style as an average human. That's pretty great, but it's not anything like deep dish peat moss saying that it's possible someone could train an AI using every single game of Civ, and that this would result in an AI that developed an uncounterable strategy that would crush 9999/10000 players until the AI learned to beat those too.

Maybe I was taking them too seriously and it's a case of source your posts.

Idk, I think if ChatGPT is a dead end that's currently about as good as it's gonna get, then you're probably right; if we're at the bottom of the inflection curve and it's going to keep getting better for the next few years, then you're definitely wrong.

nrook
Jun 25, 2009

Just let yourself become a worthless person!

GlyphGryph posted:

Yeah sorry dunno how I drifted on the name there considering I got the name right in my posts yesterday, hah. It's Cicero not Corsica

I should say sorry too; my comment was a very snarky and mean-spirited way to make this point. I did refer to CICERO obliquely earlier when I was talking about possible applications.

nrook posted:

Neural networks have clear applications for game AI. This has been true for a while. I have no idea why people are talking about ChatGPT though. The input isn’t text, and the output isn’t text either, so what do LLMs have to do with generalized game playing?

I guess in theory you could write a 4X where diplomacy happens by conversing with a chatbot. That would be a cute gimmick, but I don’t think it would make sense in any game that isn’t similar to Diplomacy.

I'm also quoting this again because people keep bringing up ChatGPT, with what appears to be the following syllogism:

A technological advance in AI is necessary for 4X AI to improve.
GPT is a technological advance in AI.
Therefore, GPT will create improvements in 4X AI.

This is not a real deduction! If anyone wants to argue that LLMs are relevant to game playing for applications beyond the obvious one CICERO used (communicating with humans using text), they're going to have to make an explicit argument to that effect. One can't just say "well ChatGPT is very impressive" and expect that to convince anyone.

PerniciousKnid
Sep 13, 2006

orangelex44 posted:

I think it's an impossible dream, and that 4X games need to realize that. Just have the AI play a different game from the player, and for gently caress's sake stop pretending that the AI can "win" in the classical sense. The player can lose for sure, but the AI shouldn't be bound to the same wincons.

What we need isn't AI that can win but AI that truly understands their character. AI that really gets what Sister Miriam is about, you know? What she wants from life?

Jeb Bush 2012
Apr 4, 2007

A mathematician, like a painter or poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas.

Super Jay Mann posted:

The fundamental philosophy underpinning all ML research is that these AIs will eventually be able to solve any problem so long as you throw enough data at it. No need for us to actually design algorithms to solve specific problems, the computer will figure that stuff out for us and we can just sit back and watch it all happen.

That sounds a hell of a lot like a “magic crutch” to me.

for anyone who is wondering, this is not in fact the fundamental philosophy underpinning all ML research

Mayveena
Dec 27, 2006

People keep vandalizing my ID photo; I've lodged a complaint with HR
All an AI in Diplomacy has to do is find the stalemate lines and go after those. Game over. Having played the game for many years (long ago) I don't see how an AI could make Diplomacy better.

Panzeh
Nov 27, 2006

"..The high ground"
Honestly, if a game could ship with an AI that plays it well at release, the game probably isn't very good, because, IMO, the mark of a good game is when there's enough depth that the strategies go well beyond what the developer thought out. There are in fact game AIs that are quite good, but the business of the industry and the nature of the problem mean they rarely happen - the best game AIs are made by people working 15+ years after the game is released, who know the strategies and can develop an AI to execute them well. Of course, most games don't have much interest left in that timeframe, not enough to draw in a programmer who's also an enthusiast of the game.

Fangz
Jul 5, 2007

Oh I see! This must be the Bad Opinion Zone!
Honestly, the more I read about Cicero the less I am impressed by it. The main points:

1. Cicero never lies or backstabs any player. Arguably you could say this is smart play, but this is a technical limitation of the program, not something the AI arrived at.

2. The tests only played blitz (5 minute turns). This gave the program an advantage because the AI could conduct conversations with multiple human players simultaneously, monopolising other players' diplomatic time. It presumably also came across as a very attentive and thus trustworthy player. (In Diplomacy, if you are trying to talk to someone and they don't answer back for an extended period of time, they are clearly talking to someone else, which is suspicious.)

3. The actual strategic engine isn't really that different from a chess engine.
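(For readers who haven't poked at one, "chess engine" here basically means a depth-limited search over moves plus an evaluation function at the cutoff. A minimal sketch of that recipe, using a toy take-1-to-3-stones game so it stays runnable - nothing here is Cicero's actual code.)

code:
# Sketch of the classic chess-engine recipe: depth-limited negamax search plus
# an evaluation at the cutoff. To stay runnable it plays a toy subtraction game
# (take 1-3 stones, taking the last stone wins) rather than chess or Diplomacy.

def legal_moves(pile):
    return [take for take in (1, 2, 3) if take <= pile]

def negamax(pile, depth):
    if pile == 0:
        return -1.0          # the previous player took the last stone and won
    if depth == 0:
        return 0.0           # cutoff: a real engine would run its evaluation here
    return max(-negamax(pile - take, depth - 1) for take in legal_moves(pile))

def best_move(pile, depth=12):
    return max(legal_moves(pile), key=lambda take: -negamax(pile - take, depth - 1))

if __name__ == "__main__":
    print(best_move(10))     # takes 2, leaving the opponent a losing pile of 8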

Fangz fucked around with this message at 12:37 on Apr 10, 2023

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

nrook posted:

This is not a real deduction! If anyone wants to argue that LLMs are relevant to game playing for applications beyond the obvious one CICERO used (communicating with humans using text), they're going to have to make an explicit argument to that effect. One can't just say "well ChatGPT is very impressive" and expect that to convince anyone.

On the other hand, can you imagine an actual conversational AI layer handling the diplomatic components of a 4X? Even if it offloaded the evaluation to (or got given relative strength scores by) a traditional symbolic AI that was at best middling at the game, the ability to recognize and coordinate with both other computer players and the actual human players in natural language would be pretty incredible. In terms of AI, "basic diplomacy" has always been a huge, glaring shitheap in every 4X I've ever played, though, so it's not like it would be hard to improve on. But it's also one of those areas where appearances and how it "feels" can add a lot of value.
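(A rough sketch of that plumbing, with everything - the scores, the stance, the leader name, the chat() callable - hypothetical: the traditional AI does the actual evaluation and the language model only has to phrase the reply.)

code:
# Hypothetical glue between a traditional 4X AI and a conversational layer:
# the symbolic AI supplies relative-strength scores and a stance, the language
# model only phrases the reply. The model call is abstracted as chat() because
# no particular API is assumed here.

def build_diplomacy_prompt(leader, stance, scores, player_message):
    facts = ", ".join(f"{name}: {value:+.2f}" for name, value in scores.items())
    return (
        f"You are {leader}, an AI ruler in a 4X strategy game.\n"
        f"Your strategic stance toward the player is: {stance}.\n"
        f"Relative strength assessments from the game engine: {facts}.\n"
        f'The player says: "{player_message}"\n'
        f"Reply in character, in one short paragraph, without contradicting the stance."
    )

def diplomatic_reply(chat, player_message):
    # These numbers would come from the game's existing, non-ML evaluation.
    scores = {"military": -0.4, "economy": +0.1, "tech": -0.2}
    stance = "guarded" if scores["military"] < 0 else "assertive"
    return chat(build_diplomacy_prompt("Emperor Hypotheticus", stance, scores, player_message))

if __name__ == "__main__":
    echo_model = lambda prompt: f"[model reply to a {len(prompt)}-character prompt]"
    print(diplomatic_reply(echo_model, "Open your borders or face the consequences."))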

GlyphGryph fucked around with this message at 13:06 on Apr 10, 2023

Cabal Ties
Feb 28, 2004
Yam Slacker
These advances in AI will, I would hope, literally change the game, even for the 4X genre, which is notorious for terrible AI.

Go and watch some of the DeepMind videos to get an understanding of how to go about doing this (reinforcement learning, self-training AI) for games.

The big limit to scaling this for the gaming industry is computing power, but I believe even Unity has machine learning tools to help train AI built into it now, though I'm not sure how useful they are.

The technique does indeed require human input to get the output you desire; it's the "how it gets there" bit we don't fully understand. But what we see is that it applies new ideas to how it gets there when it does.

It's literally been proven on games of multiple complexities; however, as the complexity of the game increases, the AI needs to learn exponentially more to be effective, and that limits it resource-wise.
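(For a sense of what self-training means mechanically, here's a minimal self-play sketch: tabular Q-learning on a toy take-1-to-3-stones game, where one learner plays both sides against itself. Real DeepMind-style systems swap the table for a neural network and add search on top; this is just the core loop.)

code:
# Minimal self-play reinforcement learning sketch: tabular Q-learning on a toy
# "take 1-3 stones, taking the last stone wins" game. Both sides share the same
# learner, which is the essence of self-training; AlphaZero-style systems
# replace the table with a network and add tree search.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)
Q = defaultdict(float)                 # Q[(pile, action)] -> learned value
ALPHA, EPSILON = 0.2, 0.1

def legal(pile):
    return [a for a in ACTIONS if a <= pile]

def choose(pile):
    if random.random() < EPSILON:      # occasional exploration
        return random.choice(legal(pile))
    return max(legal(pile), key=lambda a: Q[(pile, a)])

def train(episodes=20000):
    for _ in range(episodes):
        pile = random.randint(1, 20)
        while pile > 0:
            action = choose(pile)
            nxt = pile - action
            if nxt == 0:
                target = 1.0           # we took the last stone and won
            else:
                # the opponent is the same policy; its best outcome is our loss
                target = -max(Q[(nxt, a)] for a in legal(nxt))
            Q[(pile, action)] += ALPHA * (target - Q[(pile, action)])
            pile = nxt

if __name__ == "__main__":
    train()
    policy = {p: max(legal(p), key=lambda a: Q[(p, a)]) for p in range(1, 13)}
    print(policy)                      # should learn to leave the opponent a multiple of 4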

It would be good fun to theorize about a perfect 4X AI though, one with Civ-like personalities injected into it, or traits that alter how it behaves or what actions it takes.

Fangz
Jul 5, 2007

Oh I see! This must be the Bad Opinion Zone!

GlyphGryph posted:

On the other hand, can you imagine an actual conversational AI layer handling the diplomatic components of a 4X? Even if it offloaded the evaluation to (or got given relative strength scores by) a traditional symbolic AI that was at best middling at the game, the ability to recognize and coordinate with both other computer players and the actual human players in natural language would be pretty incredible. In terms of AI, "basic diplomacy" has always been a huge, glaring shitheap in every 4X I've ever played, though, so it's not like it would be hard to improve on. But it's also one of those areas where appearances and how it "feels" can add a lot of value.

I can imagine it being absolutely loving terrible. What I want out of diplomatic AI is explainable decisions and predictable responses to my actions, as well as lore-relevant flavour in text conversations. All of these are things AI text generation is worse at than the status quo.

I do not want to wade through paragraphs of ChatGPT boilerplate to try to figure out whether I got a +5 or a +10 diplomacy modifier.
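(For contrast, the status quo being asked for is roughly this - named, additive modifiers the game can show you directly. The modifier names and values are invented.)

code:
# The kind of explainable, predictable diplomacy the post is asking for:
# named additive modifiers that can be listed in a tooltip. All names and
# values here are invented for illustration.

def attitude(modifiers):
    return sum(modifiers.values())

def explain(modifiers):
    lines = [f"{name}: {value:+d}" for name, value in modifiers.items()]
    lines.append(f"Total attitude: {attitude(modifiers):+d}")
    return "\n".join(lines)

if __name__ == "__main__":
    toward_player = {
        "Shared religion": +5,
        "You settled near our borders": -10,
        "Long-standing trade route": +5,
        "You declared war on our friend": -15,
    }
    print(explain(toward_player))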

Fangz fucked around with this message at 13:41 on Apr 10, 2023

Jack Trades
Nov 30, 2010

Even IF current AI tech could theoretically be used to improve video game AI (it can't, but let's assume), there's absolutely no incentive for any company that has enough money to pull that off to actually do it.
You can't use improved AI to sell more copies.

Panzeh
Nov 27, 2006

"..The high ground"

Jack Trades posted:

Even IF current AI tech could theoretically be used to improve video game AI (it can't, but let's assume), there's absolutely no incentive for any company that has enough money to pull that off to actually do it.
You can't use improved AI to sell more copies.

This is generally why you don't see companies try to use experience to develop the AI post-release; it's really not all that good a selling point. Most people seem to be playing these games more as elaborate SimCity-type sims than as competitive games.

Orange Devil posted:

This is why Colonization should have been the game emulated rather than Civilization.

Col is significantly more interesting in a competitive sense than Civ, to be fair - it actually has a fairly well-defined goal and a race to it. It could be developed into a decent MP game if it got shortened up a bit. It also worked, to some extent, as a pure single-player experience (the AI was indeed a bad joke, but more of a sort of obstacle than a competitor). Col is actually very easy though, once you understand it. It just gets tedious.

In general, 4X games go on way too long from a game design perspective. Just take the first hundred turns that you actually playtest, find a rubric of victory, and end it there, rather than tacking on another two hundred turns that are just a victory lap. Unfortunately, the market expects that victory lap to be there. And yes, ending early is a little bit unsatisfying, but that's the point - you can win, or lose, and you move on to the next one.

A Wizard of Goatse
Dec 14, 2014

sebmojo posted:

Idk, I think if ChatGPT is a dead end that's currently about as good as it's gonna get, then you're probably right; if we're at the bottom of the inflection curve and it's going to keep getting better for the next few years, then you're definitely wrong.

There's definitely exactly equal odds of either of these outcomes, just like with the blockchain, or the metaverse, or any of the other dumb-sounding crap the same bunch of guys were prognosticating exponentially world-changing big numbers for

Demiurge4
Aug 10, 2011

I dunno why people think chatbot AIs mean video game AI will suddenly become good at 4X games. They're wildly different programs, and the chatbots require constant moderation of their bias weights to keep them from becoming racist, because they can't discriminate on their own; they can only repeat things.

I think a good chunk of people still think of artificial intelligences when they hear AI, but that's no longer true. The term AI has been appropriated for marketing purposes and now applies to anything automated.

Panzeh
Nov 27, 2006

"..The high ground"

Demiurge4 posted:

I dunno why people think chatbot AIs mean video game AI will suddenly become good at 4X games. They're wildly different programs, and the chatbots require constant moderation of their bias weights to keep them from becoming racist, because they can't discriminate on their own; they can only repeat things.

I think a good chunk of people still think of artificial intelligences when they hear AI, but that's no longer true. The term AI has been appropriated for marketing purposes and now applies to anything automated.

Yeah, I think the Operational Art of War games called it the 'programmed opponent' which is a much more precise term.

Mayveena
Dec 27, 2006

People keep vandalizing my ID photo; I've lodged a complaint with HR

Demiurge4 posted:

I dunno why people think chatbot AIs mean video game AI will suddenly become good at 4X games. They're wildly different programs, and the chatbots require constant moderation of their bias weights to keep them from becoming racist, because they can't discriminate on their own; they can only repeat things.

I think a good chunk of people still think of artificial intelligences when they hear AI, but that's no longer true. The term AI has been appropriated for marketing purposes and now applies to anything automated.

I would certainly agree that there's a large difference between automating something and using artificial intelligence. The video game Astroneer is great at automating tasks, and players have made some fairly sophisticated productions, but there's no 'intelligence' in them. AI Wars is the game that I think (in my limited experience) comes closest to an actual artificial intelligence; it reacts appropriately to what the player is doing. The problems with the game are that, frankly, it's too abstract for most, and that it plays by entirely different rules than the players.


Veryslightlymad
Jun 3, 2007

I fight with my brain and with an underlying hatred of the Erebonian Noble Faction
Separate the AI victory condition from the player's entirely. Make the game continue after the AI has "won" to see if you also win, and make the AI player absolutely insufferable with gloating if they achieve their victory. Then make a separate difficulty mode that, in addition to your normal victory conditions, also requires you to prevent every AI from reaching their objectives.
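(A small sketch of how that could hang together, with all condition names invented: the AI's objective is checked separately from the player's, an AI "win" doesn't end the game, and a hard mode also requires denying every AI its objective.)

code:
# Sketch of asymmetric victory checks: an AI "winning" its own objective does
# not end the game, and hard mode additionally requires denying every AI.
# All names and conditions are invented.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Empire:
    name: str
    objective_met: bool = False        # e.g. "control 40% of the map"
    gloated: bool = False

@dataclass
class GameState:
    player_victory: bool = False
    hard_mode: bool = False
    ais: list = field(default_factory=list)

def end_of_turn_check(state) -> Optional[str]:
    for ai in state.ais:
        if ai.objective_met and not ai.gloated:
            ai.gloated = True          # insufferable gloating; the game continues
            print(f'{ai.name}: "Behold my inevitable triumph, worm."')
    if state.player_victory:
        if state.hard_mode and any(ai.objective_met for ai in state.ais):
            return None                # hard mode: every AI must have been denied
        return "player wins"
    return None

if __name__ == "__main__":
    state = GameState(hard_mode=True, ais=[Empire("Zephyria", objective_met=True), Empire("Borgrad")])
    print(end_of_turn_check(state))    # None: Zephyria gloats, play continues
    state.player_victory = True
    print(end_of_turn_check(state))    # still None in hard mode; Zephyria was not denied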
