Xerophyte
Mar 17, 2008

This space intentionally left blank

Zaphod42 posted:

That's different though; that's not really AI at all, that's just a solved problem. Games like Chess, Tic-tac-toe, Connect 4, etc. have such a limited set of board states and moves that the computer can simply calculate every possible move to the end and always choose the best one.

The last of the 'traditional' games that we're struggling to "solve" fully like that is Go, because there are so many board states, but even then AIs are now beating pros at Go, and it'll be solved fully in a few Moore's generations.

Solving perfect information games is a lot harder than writing an AI for them.

We're still a very long way away from solving chess, barring weird breakthroughs using quantum computing or similar. Current endgame databases apparently have solutions for up to 7 pieces, and take 140 terabytes of storage. Storing the full game tree would take more bytes than we'll have on Earth for a while. Maybe in a century or so.

Go has a state complexity that is one hundred orders of magnitude larger than chess. There's literally not enough entropy available in the universe (we have around 10^120 bits of universe) to store a solution to all of go, even though AIs will happily be trouncing human players from here to heat death. It's a fundamentally intractable problem for conventional computing techniques, not something that'll be solved fully in a few Moore's generations. It may be that finding perfect play from an even opening is doable, but even that's probably going to be on the order of "my nanobot swarm turned this spare galaxy into a supercomputer".

I think 15x15 gomoku is about the hardest game that's been solved to date, and that's "only" for the typical free opening rather than for every position or the swap2 tournament rules. Gomoku is pretty fun to actually code an AI for: the rules are simple to encode, the game is hard enough that you can't just brute force search to victory and your first AI will be easy to beat, but it's also easy enough that you can feasibly code a mean bastard search-based AI in a couple of days that will beat the poo poo out of you and everyone you know.
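For illustration, a minimal sketch of the kind of search-based gomoku AI described above: plain negamax with alpha-beta pruning, a crude evaluation, and candidate moves restricted to cells near existing stones. It's Python, every name is invented for the example, and a real "mean bastard" AI would add threat detection and move ordering on top.

```python
# Hypothetical sketch of a search-based gomoku AI: negamax with alpha-beta
# pruning on a 15x15 board. Board is a dict {(row, col): +1 or -1};
# the evaluation and move generation are deliberately crude placeholders.
import itertools

SIZE = 15
DIRS = [(1, 0), (0, 1), (1, 1), (1, -1)]

def line_length(board, pos, d, player):
    """Count consecutive stones of `player` starting at pos in direction d."""
    r, c = pos
    n = 0
    while 0 <= r < SIZE and 0 <= c < SIZE and board.get((r, c)) == player:
        n += 1
        r += d[0]
        c += d[1]
    return n

def winner(board, last_move, player):
    """True if the stone just placed at last_move completes five in a row."""
    r, c = last_move
    for dr, dc in DIRS:
        total = (line_length(board, (r, c), (dr, dc), player)
                 + line_length(board, (r - dr, c - dc), (-dr, -dc), player))
        if total >= 5:
            return True
    return False

def candidate_moves(board):
    """Only consider empty cells adjacent to existing stones (keeps branching sane)."""
    if not board:
        return [(SIZE // 2, SIZE // 2)]
    cells = set()
    for (r, c) in board:
        for dr, dc in itertools.product((-1, 0, 1), repeat=2):
            cell = (r + dr, c + dc)
            if 0 <= cell[0] < SIZE and 0 <= cell[1] < SIZE and cell not in board:
                cells.add(cell)
    return list(cells)

def evaluate(board, player):
    """Toy heuristic: sum of squared run lengths, mine minus the opponent's."""
    score = 0
    for pos, owner in board.items():
        for d in DIRS:
            run = line_length(board, pos, d, owner)
            score += (run * run) * (1 if owner == player else -1)
    return score

def negamax(board, player, depth, alpha=-10**9, beta=10**9, last=None):
    if last is not None and winner(board, last, -player):
        return -10**6, None            # the previous player just won
    if depth == 0:
        return evaluate(board, player), None
    best_move = None
    for move in candidate_moves(board):
        board[move] = player
        score, _ = negamax(board, -player, depth - 1, -beta, -alpha, move)
        score = -score
        del board[move]
        if score > alpha:
            alpha, best_move = score, move
        if alpha >= beta:
            break                       # alpha-beta cutoff
    return alpha, best_move

# Usage: ask the AI (playing +1) for a move on an empty board, 2 plies deep.
if __name__ == "__main__":
    print(negamax({}, +1, depth=2))
```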

If you want a somewhat classic board game that's also monstrously hard to even write competent AI for, try stratego. 19x19 go has around 10^360 possible games, which is a fairly big number. Stratego has around 10^535. It's a bit different from the other games mentioned since it involves hidden information and bluffs so it's not solvable as a result, but the bluffing aspect is also why AIs aren't very good at it.

Xerophyte fucked around with this message at 10:43 on Aug 17, 2017

darkpool
Aug 4, 2014

Linear Zoetrope posted:

I'm a PhD student in AI (working with RTS games, actually, but I'm using them as a testing environment, making them play well isn't my focus) so I have the background to make something like this and I've toyed around with the idea but it would be hard to make fun. I think the only really good implementation of machine learning in a game is the creature in Black & White which... was fairly dumb, but seeing it learn was cool. Even then, they had to get really hacky and hand-tailory with the algorithms to get it to work.

The game ideas I've toyed around with are things like A-life games and city builders. Like, a city builder where the citizens have to learn how to do various tasks and the player controls things like the rate at which they try new things vs. going with what they know (set on a per-building basis). It's an interesting idea, but a big problem is that city builders are actually built very heavily around knowing how your citizens will do things so you can optimize. I think the idea of picking the "most optimal randomness" will hurt a lot of people's experiences.

I know there was a horror game which used some rudimentary machine learning that tried to "learn how to scare you" by seeing how you react to various horrors early in the game but I don't recall anyone being particularly impressed by it.

Some games do use more advanced techniques than standard games do. Total War uses Monte-Carlo Tree Search, and Planetary Annihilation uses Neural Nets in some capacity, but they're not too specific on what their structure is (direct prediction? value network?) or how they were trained. It works, but isn't super impressive. I think Democracy 3 also uses Neural Nets in some inscrutable way to simulate voter behavior?

I think ultimately, in order to make machine learning work in a game, it has to be a relatively lighthearted, relaxing game with low (or nonexistent) consequences where you portray the things that are driven by the learning algorithms as simpletons or cute animals prone to folly or something. That'd prime players to forgive their flaws (within reason). Even if you're making a more complex game, you need to restrict your AI to a relatively non-critical layer; some learning AI might be good for making things look natural and thoughtful, and for giving the impression they're improving at things, without having to program special cases.

E: The Starcraft 2 stuff is mainly of technical interest; it's not intended to make a good in-game AI.

Fun Fact: Bullfrog and Lionhead's AI programmer was Demis Hassabis, founder of DeepMind. Peter Molyneux hasn't achieved much without him. DeepMind will be taking on human StarCraft players at some time.

https://deepmind.com/blog/deepmind-and-blizzard-release-starcraft-ii-ai-research-environment/

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

TheresaJayne posted:

I don't yet have a solid idea - some places suggest that Racing Games are the easiest games to start with.

I am just looking for general "Game Networking 101" info, to provide options and suggestions. I have a good idea of how networking actually works, but then there is the question that arose back in the Commodore Amiga / Atari STfm days - I still fondly remember MidiMaze, which networked Ataris using the MIDI port for a multiplayer maze shooter.
I get that LAN-based games could use broadcast to tell all players the current game state from the server, but that wouldn't work with internet gaming etc., so this is why I am asking: what are the benefits of UDP vs. TCP, and what data should be sent? Understanding this level will help with the "development" going forward, as understanding what can and should be done will maybe mold and improve any software going forward.

There are lots of things that could be sent over the wire; what should be sent, and what can internal code track on its own?

IE I shoot a bullet at the enemy player: do I send the path and speed of the bullet in realtime, or just send a vector and let the enemy player's game calculate the trajectory? (This is a derp example as it's obvious you do the latter.) But it's not just that - movement, bullets, mobs, sound triggers etc. I have seen a game where I hear one voiceline and other players hear a completely different voiceline (I think that was League of Legends but it's been a while).

How many times do we have to say "it depends"? If you're asking about the differences between TCP and UDP, I don't believe you have a good idea on networking generally. If you're really interested, you should probably read some of the resources mentioned.

TCP is reliable and ordered, UDP is neither. The mechanics behind those guarantees impose significant drawbacks for some types of games. I have literally sat through multiple two-hour lectures detailing the specifics of what and why that is the case. Those lectures were non-exhaustive and made reference to other documentation on things left out or glossed over.

While in some ways it's not terribly difficult, it's not a topic that can be easily introduced in the format of a message board. It requires a lot of specialized knowledge, and a deep understanding of extant protocols is incredibly useful in building your own. We've listed resources detailing a lot of extant protocols. If you actually care about what you're claiming to care about, you should read them.

If you have specific questions, I'm sure all of us will be happy to help. This is a fairly specialized subject matter, though, so we can't give good advice unless you meet us half-way. And people around here generally strongly dislike giving advice they know is bad.

Sending broadcasts for your game state is almost certainly worse than sending directed messages unless you already know the topology of your network. Though it does solve the discoverability problem. If you were going to use broadcast for some reason, it's generally better to use multicast instead.
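To make the "reliable and ordered" distinction concrete, here is a hedged sketch (Python standard library only) of one common pattern for state updates over UDP: stamp each datagram with a sequence number and simply drop anything older than what you've already seen, instead of stalling on retransmits the way TCP would. Illustrative only; real game protocols layer acks, deltas and congestion control on top of this.

```python
# Hypothetical sketch: unreliable-but-fresh state updates over UDP.
# Each datagram carries a sequence number; the receiver ignores stale packets
# instead of waiting for retransmits the way TCP would.
import json
import socket
import struct

HEADER = struct.Struct("!I")          # 4-byte big-endian sequence number

class StateSender:
    def __init__(self, addr):
        self.addr = addr              # (host, port) tuple
        self.seq = 0
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_state(self, state: dict):
        payload = HEADER.pack(self.seq) + json.dumps(state).encode()
        self.sock.sendto(payload, self.addr)   # fire and forget
        self.seq += 1

class StateReceiver:
    def __init__(self, port):
        self.latest_seq = -1
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(("", port))
        self.sock.setblocking(False)

    def poll(self):
        """Return the newest state seen so far, dropping stale/out-of-order packets."""
        newest = None
        while True:
            try:
                data, _ = self.sock.recvfrom(65535)
            except BlockingIOError:
                return newest
            (seq,) = HEADER.unpack_from(data)
            if seq > self.latest_seq:          # old packets are simply ignored
                self.latest_seq = seq
                newest = json.loads(data[HEADER.size:])

# Usage (two processes):
#   StateSender(("127.0.0.1", 9999)).send_state({"x": 10, "y": 4})
#   StateReceiver(9999).poll()
```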

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Xerophyte posted:

If you want a somewhat classic board game that's also monstrously hard to even write competent AI for, try stratego. 19x19 go has around 10^360 possible games, which is a fairly big number. Stratego has around 10^535. It's a bit different from the other games mentioned since it involves hidden information and bluffs so it's not solvable as a result, but the bluffing aspect is also why AIs aren't very good at it.
It seems like it's not that hard to incorporate bluffing into an AI strategy, it'd just require a layer of "apparent signals about unknown piece type" and "opponent's apparent/expected beliefs about my piece" with weights. Randomizing the AI's level of willingness to bluff would suffice to offset the "know your opponent" problem (the AI doesn't know its opponent's bluff-ness so it has to make it so its opponent doesn't know its level of bluffoonery either.)
The start of the game would attach a certain level of signal too, like the front line is probably not the flag.

It would be tricky to get machine learning involved though, since "this exact same thing was a winning move one game and a losing move the next" is not very good for training.
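A toy sketch of the "beliefs with weights" idea above, purely to show its shape (Python; every name and number is invented): each move gets its plain tactical value plus a bonus for how far it pushes the opponent's modeled belief away from the truth, with the bluff weight randomized per game so there is no fixed tendency for the opponent to learn.

```python
# Hypothetical sketch: score moves by tactical value plus a "deception" bonus.
# The bluff weight is drawn once per game so the AI's bluffing tendency varies.
import random

def belief_shift(prior: dict, posterior: dict, true_piece: str) -> float:
    """How much the move pushes the opponent's belief away from the truth."""
    return prior.get(true_piece, 0.0) - posterior.get(true_piece, 0.0)

def score_move(tactical_value: float, prior: dict, posterior: dict,
               true_piece: str, bluff_weight: float) -> float:
    return tactical_value + bluff_weight * belief_shift(prior, posterior, true_piece)

# One bluff weight per game, drawn at game start.
bluff_weight = random.uniform(0.0, 2.0)

# Example: advancing a Scout aggressively might look like a Marshal, lowering
# the opponent's (modeled) belief that it's really a Scout.
prior = {"Scout": 0.5, "Marshal": 0.1, "Bomb": 0.4}      # belief before the move
posterior = {"Scout": 0.2, "Marshal": 0.5, "Bomb": 0.3}  # modeled belief after it
print(score_move(tactical_value=0.1, prior=prior, posterior=posterior,
                 true_piece="Scout", bluff_weight=bluff_weight))
```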

Stick100
Mar 18, 2003

Mata posted:

The lockstep model, for when the whole gamestate cannot be synchronized, e.g. when you have 1500 archers. This works by carefully ensuring each client simulates the exact same state, sacrificing latency for improved accuracy. Even the smallest floating point error will eventually cause a desync because the gamestate is never corrected; errors just compound on each other. I think all RTSes and MOBAs use this model, which hasn't really changed much since the Age of Empires days, except for some games allowing for stuff like joining in-progress games.

Thanks, that explains why, when you join a game of Heroes of the Storm, they make you wait while it catches up. It must be replaying the lockstep game state before they can let you play.
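A minimal sketch of the lockstep model being described, assuming a deterministic, integer-only simulation: clients exchange only each tick's commands, everyone applies them in the same order, and a late joiner has to replay the whole command log to catch up, which is presumably that Heroes of the Storm wait. Python, with all names invented for the example.

```python
# Hypothetical lockstep sketch: clients exchange only per-tick commands and run
# the same deterministic simulation. Integer math avoids float divergence.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Command:
    player: int
    unit: int
    dx: int   # fixed-point / integer deltas keep every client bit-identical
    dy: int

@dataclass
class Simulation:
    tick: int = 0
    units: dict = field(default_factory=dict)   # unit id -> (x, y)

    def advance(self, commands):
        # Sort commands so every client applies them in the same order.
        for cmd in sorted(commands, key=lambda c: (c.player, c.unit)):
            x, y = self.units.get(cmd.unit, (0, 0))
            self.units[cmd.unit] = (x + cmd.dx, y + cmd.dy)
        self.tick += 1

def catch_up(command_log):
    """What a late joiner does: replay every past tick before playing live."""
    sim = Simulation()
    for tick_commands in command_log:
        sim.advance(tick_commands)
    return sim

# Two clients applying the same log end up with identical state.
log = [[Command(1, 7, 1, 0)], [Command(2, 7, 0, 1), Command(1, 9, -1, 0)]]
assert catch_up(log).units == catch_up(log).units
```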

OtspIII
Sep 22, 2002

roomforthetuna posted:

It seems like it's not that hard to incorporate bluffing into an AI strategy, it'd just require a layer of "apparent signals about unknown piece type" and "opponent's apparent/expected beliefs about my piece" with weights. Randomizing the AI's level of willingness to bluff would suffice to offset the "know your opponent" problem (the AI doesn't know its opponent's bluff-ness so it has to make it so its opponent doesn't know its level of bluffoonery either.)
The start of the game would attach a certain level of signal too, like the front line is probably not the flag.

It would be tricky to get machine learning involved though, since "this exact same thing was a winning move one game and a losing move the next" is not very good for training.

I'm not sure that bluffing will ever be fully fun against a non-human. I feel like a lot of the fun in bluffing comes from trying to find involuntary clues that the opponent is letting spill about their mental state/strategy/etc, and computers don't have tells.

The stuff you're talking about would probably go a long way towards improving on what's generally done now, though.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

OtspIII posted:

I'm not sure that bluffing will ever be fully fun against a non-human. I feel like a lot of the fun in bluffing comes from trying to find involuntary clues that the opponent is letting spill about their mental state/strategy/etc, and computers don't have tells.

The stuff you're talking about would probably go a long way towards improving on what's generally done now, though.

I would imagine that if you're designing an AI to be fun by having bluffing, having tells would be part of that.

Zaphod42
Sep 13, 2012

If there's anything more important than my ego around, I want it caught and shot now.

Thermopyle posted:

I would imagine that if you're designing an AI to be fun by having bluffing, having tells would be part of that.

Yeah but the problem is generating tells. The system that OtspIII was responding to, that roomforthetuna proposed, would have to have a few hard-coded tells which, once you picked up on them, you couldn't really ignore.

Actually generating unique tells would require a really complex neural-net system.

But I disagree about tells being the end-all-be-all of bluffing. I think tells are really overrated. Just a basic emotional state or sense of 'tilt' would be enough to get basic gambling going well enough, which is probably what good poker AIs do. That, combined with the stuff roomforthetuna said about having a sense of 'what the opponent thinks my cards are'.

Zaphod42 fucked around with this message at 20:27 on Aug 17, 2017

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

OtspIII posted:

I'm not sure that bluffing will ever be fully fun against a non-human.
I think that's probably more the reason why people don't make AI that's good at bluffing games. Not because it's hard, but because it's pointless. Not fun, and bluffing games don't tend to have AI contests to enter.
(Sure, bots for online poker are a thing, but they make money already playing just on raw odds so there's not really any incentive to make them capable of bluffing or understanding bluffing.)

Jo
Jan 24, 2005

:allears:
Soiled Meat

Ranzear posted:

Anyone have a stance on Godot 3.x yet? It got mentioned further up and now I'm interested in it having global illumination. The mention pertained to being more code-focused too; Unity is too GUI-centric for me.

Just comments on its physics system would be helpful. I need (most basically) raycast vehicles on terrain meshes. I want to dig into having code-accessible framebuffers for visual tricks too, which UDK always fell short on.

I can share my anecdotes from the 2.x era. Skip to the bottom for the code-centric bits.

The good: they do a lot of things more elegantly than Unity. In particular, I found that having nodes as composable and inheritable units made reasoning about the world a lot easier. Being FOSS is a nice bonus, too. I like that the editor is built in Godot itself; it means you have a lot of UI tools at your disposal.

The bad: the learning curve is rather peculiar. You'll pick up a lot very quickly, but there's usually some nuanced bit of funk in how something is handled and it's not always clear what the right approach is. The UI still has a surprising number of rough edges and feels more like a very adroit student project than a professional engine.

The ugly: GDscript is all the bad parts of Python and none of the good parts of anything else. It's close enough to Python that someone like me who is cozy in it will try to do poo poo that doesn't compile well. I was able to compile the entire engine in Windows with Visual Studio, but it was a profoundly nontrivial endeavor. I don't know the extent to which framebuffers are exposed in the new version, but I think you'll be SOL as far as any half-way decent shaders are concerned.

Stick100
Mar 18, 2003

roomforthetuna posted:

I think that's probably more the reason why people don't make AI that's good at bluffing games. Not because it's hard, but because it's pointless. Not fun, and bluffing games don't tend to have AI contests to enter.
(Sure, bots for online poker are a thing, but they make money already playing just on raw odds so there's not really any incentive to make them capable of bluffing or understanding bluffing.)

Oh no, absolutely: online poker bots totally bluff and understand bluffing. Of course they see it as a percentage/heuristic and make decisions based on it. High-level poker requires a certain percentage of bluffing no matter what.

The percentage is low against extremely skilled opponents, but no matter what, you have to occasionally bluff to make your max profit.
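For the curious, the standard toy calculation behind "you have to bluff a certain percentage" (a game-theory textbook exercise, not a claim about how any real bot works): on a polarized river bet, you bluff just often enough that calling and folding have the same expected value for your opponent.

```python
# Toy game-theory calculation, not a description of any real poker bot:
# bluff just often enough that calling and folding have equal EV for the opponent.
def indifference_bluff_fraction(pot: float, bet: float) -> float:
    """Fraction of your betting range that should be bluffs.
    The caller risks `bet` to win `pot + bet`, so they are indifferent when
    your bluff fraction f satisfies f*(pot + bet) = (1 - f)*bet."""
    return bet / (pot + 2 * bet)

# A pot-sized bet should be a bluff about a third of the time.
print(indifference_bluff_fraction(pot=100, bet=100))  # 0.333...
```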

Absurd Alhazred
Mar 27, 2010

by Athanatos
If you're using Unity you might want to patch/upgrade/mitigate immediately.

Linear Zoetrope
Nov 28, 2011

A hero must cook

darkpool posted:

Fun Fact: Bullfrog and Lionhead's AI programmer was Demis Hassabis, founder of DeepMind. Peter Molyneux hasn't achieved much without him. DeepMind will be taking on human StarCraft players at some time.

https://deepmind.com/blog/deepmind-and-blizzard-release-starcraft-ii-ai-research-environment/

The fact that the creature guy and the DeepMind guy are the same person is one of those facts I continuously forget and rediscover, thanks.

Xerophyte
Mar 17, 2008

This space intentionally left blank

roomforthetuna posted:

It seems like it's not that hard to incorporate bluffing into an AI strategy, it'd just require a layer of "apparent signals about unknown piece type" and "opponent's apparent/expected beliefs about my piece" with weights. Randomizing the AI's level of willingness to bluff would suffice to offset the "know your opponent" problem (the AI doesn't know its opponent's bluff-ness so it has to make it so its opponent doesn't know its level of bluffoonery either.)

I don't mean to say that it's not possible. Hidden information is just a considerably harder problem in general, since in principle you need to search every possible permutation of the hidden data at every branch of the game tree, so the branching factor tends to be very high. You absolutely need heuristics for the likely distribution of the hidden data. You could probably make a stratego AI by doing something like using ML to predict likely piece distributions given the current state + past play, then sampling that distribution and Monte Carlo searching the game tree for minmax lines. It's just generally going to be difficult to make the program play robustly and without any exploitable weaknesses.

You can contrast with connect6 or arimaa which were intentionally designed to be computationally challenging and have a similar state complexity. Top AIs for them are pulling ahead of humans even with just traditional search techniques, and a decent part of that is just that perfect information games are a lot easier to prune.
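A hedged sketch of the sampling idea described above, with the predictive model, move generator and search all left as placeholder parameters: sample concrete assignments of the hidden pieces, evaluate each sample with an ordinary perfect-information search, and keep the move with the best average result.

```python
# Hypothetical determinization sketch for a hidden-information game like stratego:
# sample complete assignments of the opponent's hidden pieces from a predictive
# model, run an ordinary perfect-information search on each sample, and pick the
# move with the best average outcome. `predict_hidden`, `legal_moves`,
# `apply_move` and `search_value` are placeholders for real game/ML code.
import random
from collections import defaultdict

def choose_move(state, predict_hidden, legal_moves, apply_move, search_value,
                samples=32, depth=3):
    totals = defaultdict(float)
    for _ in range(samples):
        # A "determinization": one concrete guess at every hidden piece,
        # drawn from the model's belief given the current state and past play.
        full_state = predict_hidden(state, rng=random)
        for move in legal_moves(full_state):
            # With hidden info filled in, this is an ordinary minmax/MCTS value.
            totals[move] += search_value(apply_move(full_state, move), depth)
    # Best total across samples: robust moves beat ones that only look good
    # under a single lucky guess about the hidden pieces.
    return max(totals, key=totals.get)
```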

Ranzear
Jul 25, 2013

Jo posted:

GDscript is all the bad parts of Python and none of the good parts of anything else.

There's an interface for 3.0 for plain-rear end Python. UI feel is almost nothing to me. If I can code terrain generation and object placement I could probably avoid touching it entirely.

All I really need for my minimalist visual junk is framebuffer access, or even just a little extra renderer access. Mostly I want to light the scene flat-shaded with point lights, render to a texture, threshold that into white/black areas, then reapply it as a saturation mask to the proper scene, so that anything that wasn't lit up by line-of-sight gets grayed out as a 3D fog of war effect.

Workaday Wizard
Oct 23, 2009

by Pragmatica

Ugh, I hate it when security advisories say there is an RCE but don't mention the RCE source.

e: According to the mitigation tool, the vulnerability has to do with opening the asset store from other programs. Probably a bug in the URI parser.

Workaday Wizard fucked around with this message at 18:35 on Aug 18, 2017

BirdOfPlay
Feb 19, 2012

THUNDERDOME LOSER

Shinku ABOOKEN posted:

Ugh, I hate it when security advisories say there is an RCE but don't mention the RCE source.

e: According to the mitigation tool, the vulnerability has to do with opening the asset store from other programs. Probably a bug in the URI parser.

Yeah, they say they want users to have time to patch. Also, why do all of Unity's patches just seem to be full installations? Did I miss something where piecemeal patches went out of vogue? Granted, it could be only patching one version, but 500 megs seems a bit heavy for this.

baby puzzle
Jun 3, 2011

I'll Sequence your Storm.

Xerophyte posted:

Solving perfect information games is a lot harder than writing an AI for them.

We're still a very long way away from solving chess, barring weird breakthroughs using quantum computing or similar. Current endgame databases apparently have solutions for up to 7 pieces, and take 140 terabytes of storage. Storing the full game tree would take more bytes than we'll have on Earth for a while. Maybe in a century or so.

Go has a state complexity that is one hundred orders of magnitude larger than chess. There's literally not enough entropy available in the universe (we have around 10^120 bits of universe) to store a solution to all of go, even though AIs will happily be trouncing human players from here to heat death. It's a fundamentally intractable problem for conventional computing techniques, not something that'll be solved fully in a few Moore's generations. It may be that finding perfect play from an even opening is doable, but even that's probably going to be on the order of "my nanobot swarm turned this spare galaxy into a supercomputer".

I think 15x15 gomoku is about the hardest game that's been solved to date, and that's "only" for the typical free opening rather than for every position or the swap2 tournament rules. Gomoku is pretty fun to actually code an AI for: the rules are simple to encode, the game is hard enough that you can't just brute force search to victory and your first AI will be easy to beat, but it's also easy enough that you can feasibly code a mean bastard search-based AI in a couple of days that will beat the poo poo out of you and everyone you know.

If you want a somewhat classic board game that's also monstrously hard to even write competent AI for, try stratego. 19x19 go has around 10^360 possible games, which is a fairly big number. Stratego has around 10^535. It's a bit different from the other games mentioned since it involves hidden information and bluffs so it's not solvable as a result, but the bluffing aspect is also why AIs aren't very good at it.

Tell a dummy why a game has to be "solved". Can't AI beat humans in Go?

dupersaurus
Aug 1, 2012

Futurism was an art movement where dudes were all 'CARS ARE COOL AND THE PAST IS FOR CHUMPS. LET'S DRAW SOME CARS.'

baby puzzle posted:

Tell a dummy why a game has to be "solved". Can't AI beat humans in Go?

It's kind of an academic point, but by solving a game you can say exactly what to do to win (or draw) from any position (see tic-tac-toe). For an unsolved game like Go or Chess, the computer can calculate a best move for the situation -- better than a human can -- but it's not provably winning, and it's only found by brute-force searching through moves.
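To make the distinction concrete, here's a toy tic-tac-toe solver: rather than picking a decent move, it computes the exact game-theoretic value of any position under perfect play, which is what "solved" means and what's hopeless to do for chess or go.

```python
# Toy illustration of "solving": exhaustively compute the exact value of any
# tic-tac-toe position under perfect play (+1 X wins, 0 draw, -1 O wins).
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

@lru_cache(maxsize=None)
def solve(board: str, to_move: str) -> int:
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return 1 if board[a] == "X" else -1
    if "." not in board:
        return 0                                    # draw
    values = []
    for i, cell in enumerate(board):
        if cell == ".":
            child = board[:i] + to_move + board[i+1:]
            values.append(solve(child, "O" if to_move == "X" else "X"))
    return max(values) if to_move == "X" else min(values)

# The empty board is a draw with perfect play; that's the "solution".
print(solve("." * 9, "X"))   # 0
```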

Xerophyte
Mar 17, 2008

This space intentionally left blank

baby puzzle posted:

Tell a dummy why a game has to be "solved". Can't AI beat humans in Go?

"Solved" in the context of traditional games AI has a very specific meaning of computing who will win the game from a specific position. AIs can beat the tar out of humans in a lot of games that are not remotely solved since AIs suck marginally less than humans at computing 10^120 possible board positions, but that still leaves a lot of game to look at.

Solving a game is not a requirement for anything. It's just that when someone says that chess is solved, they're not just saying that chess AIs are really good, which those AIs sure are; they're saying chess AIs are theoretically perfect, which those AIs definitely are not.

munce
Oct 23, 2010

Ranzear posted:

All I really need for my minimalist visual junk is framebuffer access, or even just a little extra renderer access. Mostly I want to light the scene flat-shaded with point lights, render to a texture, threshold that into white/black areas, then reapply it as a saturation mask to the proper scene, so that anything that wasn't lit up by line-of-sight gets grayed out as a 3D fog of war effect.

Pretty sure you could do that in Godot already. You can set a camera to render to texture. Then use the texture as a sampler in a shader on your real scene camera. Do the black/white/saturation whatever in the shader and it should work.

I'm doing something similar using 2.1.2.
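For illustration only, the per-pixel math of that threshold-and-desaturate step written out in NumPy; in practice it would live in the shader on the real scene camera, and the function name and threshold here are invented for the example.

```python
# The per-pixel math of the threshold + saturation-mask idea, written out in
# NumPy for clarity; in practice this would live in the shader munce describes.
import numpy as np

def fog_of_war_composite(scene_rgb: np.ndarray, lit_rgb: np.ndarray,
                         threshold: float = 0.1) -> np.ndarray:
    """scene_rgb: HxWx3 final scene colors. lit_rgb: HxWx3 render of the same
    view, flat-shaded by the line-of-sight point lights only."""
    luminance = lit_rgb.mean(axis=-1, keepdims=True)        # crude brightness
    mask = (luminance > threshold).astype(scene_rgb.dtype)  # white/black visibility
    gray = scene_rgb.mean(axis=-1, keepdims=True) * np.ones_like(scene_rgb)
    # Lit pixels keep full color; unlit pixels collapse to grayscale "fog".
    return mask * scene_rgb + (1.0 - mask) * gray

# Usage: a 2x2 test image where only the left column is "lit".
scene = np.array([[[1.0, 0.2, 0.2], [0.2, 1.0, 0.2]],
                  [[0.2, 0.2, 1.0], [1.0, 1.0, 0.2]]])
lit = np.array([[[0.9, 0.9, 0.9], [0.0, 0.0, 0.0]],
                [[0.8, 0.8, 0.8], [0.0, 0.0, 0.0]]])
print(fog_of_war_composite(scene, lit))
```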

Stick100
Mar 18, 2003

Xerophyte posted:

"Solved" in the context of traditional games AI has a very specific meaning of computing who will win the game from a specific position.

Solving is a bit more than that. A solved game means that we know every possible variation, which includes all actions from both players from any possible state. Tic-Tac-Toe was probably the first non-trivial game solved, and we know that two players playing perfectly will always end in a tie. Checkers was solved in 2007.

Wikipedia https://en.wikipedia.org/wiki/Solved_game#Solved_games posted:

Draughts, English (Checkers)
This 8×8 variant of draughts was weakly solved on April 29, 2007 by the team of Jonathan Schaeffer, known for Chinook, the "World Man-Machine Checkers Champion". From the standard starting position, both players can guarantee a draw with perfect play.[10] Checkers is the largest game that has been solved to date, with a search space of 5*10^20.[11] The number of calculations involved was 10^14, which were done over a period of 18 years. The process involved from 200 desktop computers at its peak down to around 50.[12]

Edit to correct numbers in quote as described below, thanks roomforthetuna!

Stick100 fucked around with this message at 20:36 on Aug 19, 2017

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Stick100 posted:

The number of calculations involved was 1014.
A few more than that. 10^14. Copy-paste can be a bitch.
(And 5*10^20.)

darkpool
Aug 4, 2014

baby puzzle posted:

Tell a dummy why a game has to be "solved". Can't AI beat humans in Go?

So, "solved" in this context means a formal logical proof exists of the game's outcome under perfect play from a certain state. For example, citizens of Königsberg used to play at trying to find a route that crossed each of the city's 7 bridges only once, until Leonhard Euler took up the challenge and developed graph theory to show it was impossible.

But finding logical proofs is hard and sometimes impossible; two of the most famous proofs are proofs that not everything can be proven, Gödel's incompleteness theorems. So sometimes the best option is to just check every possible combination to find a winning strategy; it's just that the search space can be really, really large. The first theorem to be proven using computer checking was the Four Color Theorem, which states that no more than four colors are required to color the regions of a map so that no two adjacent regions have the same color. This happened in 1976, when computers became powerful enough for the reasonably small search space.

The problem with the game Go is that the search space is huge, with far too many positions to check in a reasonable time. So what DeepMind does to play the game is prune away as many bad moves as possible based upon analysing past games; this is a heuristic measure, and it is not "proven" in the logical sense that the machine will win.

Even naive heuristic approaches to cutting down the combinations to check can have excellent results, such as Simulated Annealing. http://katrinaeg.com/simulated-annealing.html
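For reference, the linked technique fits in a few lines; this is a generic hedged sketch of simulated annealing (minimizing an arbitrary cost function), not tied to any game or to the linked article's code.

```python
# Generic simulated annealing sketch: accept worse neighbors with a probability
# that shrinks as the "temperature" cools, which is what lets it escape local optima.
import math
import random

def simulated_annealing(initial, cost, neighbor, temp=1.0, cooling=0.995,
                        min_temp=1e-3, rng=random):
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    while temp > min_temp:
        candidate = neighbor(current)
        candidate_cost = cost(candidate)
        delta = candidate_cost - current_cost
        # Always accept improvements; sometimes accept regressions early on.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        temp *= cooling
    return best

# Usage: minimize (x - 3)^2 by jittering x.
print(simulated_annealing(0.0, lambda x: (x - 3) ** 2,
                          lambda x: x + random.uniform(-1, 1)))
```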

EDIT:

The wiki page posted earlier says the following about Go: "The 5×5 board was weakly solved for all opening moves in 2002. The 7×7 board was weakly solved in 2015.[27] Humans usually play on a 19×19 board which is over 145 orders of magnitude more complex than 7×7."

darkpool fucked around with this message at 19:01 on Aug 20, 2017

Zaphod42
Sep 13, 2012

If there's anything more important than my ego around, I want it caught and shot now.

baby puzzle posted:

Tell a dummy why a game has to be "solved". Can't AI beat humans in Go?

It's a slight academic difference, basically an argument of semantics, but he is correct, so I appreciate him being specific and correcting me.

I mean nothing "has" to be. It's just what we choose to do. And like I was laying out, there are degrees. I was putting it at 3 degrees but it's more like 4. There's Hard AI, Weak AI, brute force strategies and full-on solutions. All 4 are automatic machine ways to play a game.

I was presenting the brute force strategy as the full solution but he's right that the full solution involves completely mapping the problem space which is even more costly.

But TLDR: nothing has to be 'solved'. It's just a thing we can do. Science!

darkpool posted:

Even naive heuristic approaches to cutting down the combinations to check can have excellent results, such as Simulated Annealing. http://katrinaeg.com/simulated-annealing.html

Annealing is a cool strategy, great way to avoid local maxima in hill climbing and such a simple concept.

Zaphod42 fucked around with this message at 18:46 on Aug 20, 2017

Doctor Soup
Nov 4, 2009

I have nothing but confidence in you, and very little of that.
Welp, cert is about to rear its ugly head and we're officially in Next Level Crunch. I saw a bug on Jira today where the game goes partly unresponsive in a menu and my first thought was "well that's lower priority, I'll come back to it."

Our biggest bugs right now are all coming from fixes for long-standing smaller bugs and it feels like we're on some kind of bug treadmill trying to get back to the original list. And that list is pretty sizeable. We've got everything from weird camera moments to weird platform-based shader issues to weird redundant asset loads that hang the game on scene activation.

I honestly have no idea if we're in a death march or not, this is the first real game job I've had so I don't really have a good basis of comparison.

Doctor Soup fucked around with this message at 03:55 on Aug 21, 2017

KillHour
Oct 28, 2007


Unity's 2D text doesn't have an easy way to make it ignore z testing when in world space and Unity's 3D text doesn't have an easy way to do outlines or word wrap. :bang:

Stick100
Mar 18, 2003

KillHour posted:

Unity's 2D text doesn't have an easy way to make it ignore z testing when in world space and Unity's 3D text doesn't have an easy way to do outlines or word wrap. :bang:

Use TextMesh Pro, don't use Unity's built-in text. Unity bought it from the asset's creator because their own solution was so bad. It's now free on the asset store (it was one of the biggest sellers on the store) and will eventually be brought into Unity proper.

KillHour
Oct 28, 2007


Stick100 posted:

Use TextMesh Pro, don't use Unity's built-in text. Unity bought it from the asset's creator because their own solution was so bad. It's now free on the asset store (it was one of the biggest sellers on the store) and will eventually be brought into Unity proper.

Oh God, this is so much better. How did I survive without this?



(looks much better at native resolution, but still needs tweaking for readability)

baby puzzle
Jun 3, 2011

I'll Sequence your Storm.

Doctor Soup posted:

Welp, cert is about to rear its ugly head and we're officially in Next Level Crunch. I saw a bug on Jira today where the game goes partly unresponsive in a menu and my first thought was "well that's lower priority, I'll come back to it."

Our biggest bugs right now are all coming from fixes for long-standing smaller bugs and it feels like we're on some kind of bug treadmill trying to get back to the original list. And that list is pretty sizeable. We've got everything from weird camera moments to weird platform-based shader issues to weird redundant asset loads that hang the game on scene activation.

I honestly have no idea if we're in a death march or not, this is the first real game job I've had so I don't really have a good basis of comparison.

How many people are working on bugs? Many hands can make lighter work in crunch time, assuming everyone is careful. Just worry about one thing at a time.

mbt
Aug 13, 2012

How does one protect their single-player game against cheat engine / save scumming? (without making it online)

is it ultimately futile and one should design their game around that possibility?

The Fool
Oct 16, 2003


Meyers-Briggs Testicle posted:

How does one protect their single-player game against cheat engine / save scumming? (without making it online)

is it ultimately futile and one should design their game around that possibility?

Embrace the fact that people like tweaking things and don't go out of your way to make it hard to do.

OtspIII
Sep 22, 2002

Meyers-Briggs Testicle posted:

How does one protect their single-player game against cheat engine / save scumming? (without making it online)

is it ultimately futile and one should design their game around that possibility?

It depends a bit on the type of game you're making, but I really wouldn't worry about it.

If anything, I'd lean into it a little. I really like the idea of Ironman mode in, like, X-Com. Let people scum pretty easily as their 'default' way of playing the game, but make an opt-in thing that makes it the way you want it to be played. If people are opting into ironman mode then they've built up enough of a self-image of integrity that they're not going to cheat no matter how easy it is.

KillHour
Oct 28, 2007


Add a bunch of cheats like old-school games and don't worry about it. People play games to have fun and everyone has fun their own way. If it's offline singleplayer, who cares? People who want the challenge of playing it the "right" way will do so regardless. I hate when a game gets in the way of me having fun with it.

Linear Zoetrope
Nov 28, 2011

A hero must cook
The only way it makes sense to go out of your way to prevent cheating is

1. You have some weird ARG-level poo poo hidden in there and you don't want people boundary breaking and finding it early.
2. You plan to have some speedrunner or other single player tournament community

(Unless you really care about the purity of your game's achievement distribution I guess, but it's been shown from the existence of Steam achievements that even with cheating possible they follow a realistic curve of difficulty/rarity)

But 1. you're fighting an insane battle against dataminers and crazy hackers here. If anybody cares, the only way to keep any mystery is to keep the content in patches or do encryption shenanigans or make the series of steps so cryptic normal cheats wouldn't come close to revealing the secret (and don't like... actually put the loading zone behind a locked door someone can glitch to until they achieve the nonsense).
2. Speedrunners and other high level players can probably spot a cheater a million miles away, and if your tournament has ~stakes~ of some sort you can require some sort of tournament legality verification process like a video or something which will likely cull 95% of cheating.

baby puzzle
Jun 3, 2011

I'll Sequence your Storm.
I have a spline issue.

I have cubic splines, fourth order, containing 6 dimensions (a 3D position and a 3D up vector). These splines have ~500 points that are all kinda equidistant from each other.

My problem is that the "lengths" between percentage points on the spline aren't all equal. Like, the positions at location 0.4 and 0.5 will not be the exact same distance apart as the positions at 0.5 and 0.6 (a generalization). It is good enough generally, but there are some parts of the game where these fluctuations become apparent and ugly.

Another way to phrase it is that some parts of the spline are "denser" than others.

Is there a kind of spline where the actual distances between percentage points along the spline are consistent? Or a fast way to get accurate points from my cubic splines?

Doctor Soup
Nov 4, 2009

I have nothing but confidence in you, and very little of that.

baby puzzle posted:

How many people are working on bugs? Many hands can make lighter work in crunch time, assuming everyone is careful. Just worry about one thing at a time.

We're pretty much all hands on deck for bugs, but I'm one of two programmers so that does bottleneck things a fair bit.

In any case, that's a good attitude to take, thanks. I'll just keep working on problems until our publisher says pencils down.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

baby puzzle posted:

I have a spline issue.

I have cubic splines, fourth order, containing 6 dimensions (a 3D position and a 3D up vector). These splines have ~500 points that are all kinda equidistant from each other.

My problem is that the "lengths" between percentage points on the spline aren't all equal. Like, the positions at location 0.4 and 0.5 will not be the exact same distance apart as the positions at 0.5 and 0.6 (a generalization). It is good enough generally, but there are some parts of the game where these fluctuations become apparent and ugly.

Another way to phrase it is that some parts of the spline are "denser" than others.

Is there a kind of spline where the actual distances between percentage points along the spline are consistent? Or a fast way to get accurate points from my cubic splines?

Are you looking for spherical arcs?

Xerophyte
Mar 17, 2008

This space intentionally left blank

baby puzzle posted:

Is there a kind of spline where the actual distances between percentage points along the spline are consistent? Or a fast way to get accurate points from my cubic splines?

The search term you're probably looking for is "arc length parametrization". The idea is to redefine the spline f(t) with a parameter substitution s = g(t), where g is some function that maps from time to arc length traversed. Finding such a reparametrization is in general hard. I'm not sure if the 6D case makes it harder; obviously you'd only care about the arc length in the spatial dimensions.

I've never actually implemented this or looked very hard at the problem. Some googling led to a lot of people linking to https://www.geometrictools.com/Documentation/MovingAlongCurveSpecifiedSpeed.pdf , maybe that's helpful?

Possible crude approach to dodge doing actual math: can you assume that the curve is approximately linear for a single time step and do a binary search in t? I.e. given a previous sample point P0 = f(t0), search for the next sample point P1 = f(t1) where |P0 - P1| = C for some constant length step parameter C.
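Concretely, the cheap version of that reparametrization is usually an arc-length lookup table: sample the spline densely once, accumulate chord lengths, then binary search the table to turn an "X% of the way along the curve" request into a t value. A sketch, assuming some spline(t) that returns a 3D position:

```python
# Hypothetical arc-length reparametrization sketch: sample the spline densely,
# build a cumulative chord-length table once, then binary search it so that
# evenly spaced "percentages" map to evenly spaced distances along the curve.
# Assumes `spline(t)` returns the 3D position for t in [0, 1].
import bisect
import math

def build_arc_table(spline, samples=2048):
    ts = [i / samples for i in range(samples + 1)]
    pts = [spline(t) for t in ts]
    lengths = [0.0]
    for a, b in zip(pts, pts[1:]):
        lengths.append(lengths[-1] + math.dist(a, b))
    return ts, lengths

def t_at_fraction(ts, lengths, s):
    """Parameter t at which fraction `s` (0..1) of the total arc length is reached."""
    target = s * lengths[-1]
    i = bisect.bisect_left(lengths, target)
    if i == 0:
        return ts[0]
    # Linear interpolation between the two bracketing samples.
    span = lengths[i] - lengths[i - 1]
    frac = 0.0 if span == 0.0 else (target - lengths[i - 1]) / span
    return ts[i - 1] + frac * (ts[i] - ts[i - 1])

# Usage with a dummy "spline" that moves slowly at first, then speeds up.
def squish(t):
    return (t * t, 0.0, 0.0)

ts, lengths = build_arc_table(squish)
print(t_at_fraction(ts, lengths, 0.5))   # ~0.707: halfway by length is not t = 0.5
```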

baby puzzle
Jun 3, 2011

I'll Sequence your Storm.

leper khan posted:

Are you looking for spherical arcs?

No. At least I don't think so. But I think I see how that would make things easier...? If I could actually calculate the distance between control points. But that isn't what I'm doing.

Xerophyte posted:

The search term you're probably looking for is "arc length parametrization". The idea is to redefine the spline f(t) with a parameter substitution s = g(t), where g is some function that maps from time to arc length traversed. Finding such a reparametrization is in general hard. I'm not sure if the 6D case makes it harder; obviously you'd only care about the arc length in the spatial dimensions.

I've never actually implemented this or looked very hard at the problem. Some googling led to a lot of people linking to https://www.geometrictools.com/Documentation/MovingAlongCurveSpecifiedSpeed.pdf , maybe that's helpful?

Possible crude approach to dodge doing actual math: can you assume that the curve is approximately linear for a single time step and do a binary search in t? I.e. given a previous sample point P0 = f(t0), search for the next sample point P1 = f(t1) where |P0 - P1| = C for some constant length step parameter C.

I think that "arc length parameterization" is something I actually tried to invent on my own, but I didn't get good enough results for it to be worth the trouble. I'm not smart enough to follow anything in that link. I think my method was to measure the distance between a lot of samples.

A binary search is something I considered but I think it will be too slow. I'm using a binary search to find closest points on the spline, but I'm not using that anywhere that needs to be fast.

I think I will just continue to kinda hack my way around the issue, wherever it is a problem.
