|
Sex Bumbo posted:Why are you so worried about race conditions with asset loading? This is a really straightforward scenario. Load stuff in on a thread and let the main thread know it's ready. Everything needs to eventually go to the main thread because that's the only safe place to upload data on GL/DX.

I thought it would be straightforward, but I'm getting replies saying it's more complicated than that. I think I would still lock it because it shouldn't be much work and would prevent rare problems.
|
# ? May 16, 2016 19:43 |
|
|
|
22 Eargesplitten posted:I thought it would be straightforward, but I'm getting replies saying it's more complicated than that. I think I would still lock it because it shouldn't be much work and would prevent rare problems.

Just do it and see what happens. You've got people arguing about it in general terms and specific terms and I think you're getting all confused by that. Just. do. it.
|
# ? May 16, 2016 19:54 |
|
I'd argue that until you're doing something sufficiently complex (which asset loading is not) you actually don't need locks. Use higher level containers like concurrent/thread safe queues written and tested by other folks. Internally they probably use locks of some sort but as long as you write against their interface, you're not going to bungle up your synchronization primitives. This should ideally be like an afternoon project.
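Roughly that kind of interface, sketched in C++ (the class and method names are illustrative, not from any particular library): the loader thread pushes, the main thread pops, and the synchronization primitives never leak out of the class.

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>

// Minimal thread-safe queue: all locking lives behind this interface, so
// callers can't misuse the mutex or condition variable directly.
template <typename T>
class ThreadSafeQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        cv_.notify_one();
    }

    // Non-blocking: returns std::nullopt if nothing is ready yet.
    std::optional<T> try_pop() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty()) return std::nullopt;
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }

    // Blocking: sleeps until an item is available.
    T wait_and_pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<T> queue_;
};
```

The main thread would typically call `try_pop` once per frame so it never blocks on the loader.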
|
# ? May 16, 2016 19:56 |
|
Well, it's probably going to be more than an afternoon since this is my first delve into this game's guts. I'll try it, though. It seems like it should work, and I'm getting a lot of people saying it will work, so I'll give it a shot.
|
# ? May 16, 2016 20:12 |
|
If you're going to get in trouble with multithreading, it will usually come in two forms: 1. A data race. 2. The subordinate thread tries to call something only the master thread can call.

The first can be reasonably managed by making sure whatever is loaded doesn't reach the main engine until a point where the engine can safely use it, and that it only goes there once it is coherent enough to be safely used. It would be bad news to give the engine new stuff at an unknown point in the loop, because it might show up for rendering but not physics, or similar. It might only be an issue for one iteration, but things can get badly messed up from it. Solving this is pretty basic synchronization, so long as you can control that part of the engine. Say, when the engine gets some new geometry, it might immediately try to hand it to the physics subsystem mid-loop. You'd have to guard against that.

The second will come up if, say, the engine tries to load a texture and then ram it down to the video card from something other than the thread that took the device handle. Quite a few APIs, especially graphics-related ones, don't like to be used from another thread, so you have to schedule the main thread to run the stuff for you. This is really common in GUI programming, for example; it's just that a lot of APIs hide it away. A good clue you are in this situation is if the main way of using the framework is to set up some basic stuff and then enter a blocking infinite loop of some kind. It's been a few years since I doodled on a multithreaded engine concept, but I vaguely recall OpenGL didn't like to be kicked around by multiple threads.
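A hedged C++ sketch of the first point: loaded things are parked in a pending list, and the engine integrates them at exactly one well-defined point in its loop, so nothing can be visible to rendering but not physics within a frame. The `Asset` type and all names here are made up for illustration.

```cpp
#include <functional>
#include <mutex>
#include <vector>

struct Asset { int id; };  // placeholder for whatever gets loaded

// Loader threads call deliver(); the main thread calls integrate() once per
// frame, before physics/render run, so assets join the world atomically.
class PendingAssets {
public:
    void deliver(Asset a) {
        std::lock_guard<std::mutex> lock(mutex_);
        pending_.push_back(a);
    }

    template <typename Fn>
    void integrate(Fn&& register_asset) {
        std::vector<Asset> batch;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            batch.swap(pending_);  // take everything in one go
        }
        for (auto& a : batch) register_asset(a);  // lock not held here
    }

private:
    std::mutex mutex_;
    std::vector<Asset> pending_;
};
```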
|
# ? May 16, 2016 20:40 |
|
Okay. So if I'm understanding you correctly, the second part is mostly an issue if I try to render something in the second thread, but if I'm just loading stuff and handing it over to the main thread in whole units when it's ready, that shouldn't be a problem. I'm planning on using an event queue if I can figure out / find how to implement it in C++. That should keep the second thread limited to loading, since it will be asleep until I feed something into it, and then put back to sleep afterwards. That way I can also make sure that everything is going in and out in the order I want.

So far I'm reading through one version of multithreading code that the project lead found. From what I can tell, it seems like it should work, although about 75% of the folder names are in Cyrillic, which is a pain in the rear end to navigate. When he gets back I'm going to check out the other code that he found before this stuff. Even if I gently caress this up, that's what a previous commit is for.
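A rough C++ sketch of that sleeping loader thread: it blocks on a condition variable until a request arrives, services requests strictly in queue order, and goes back to sleep. The callback stands in for the real loading work; all names are illustrative.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// The destructor drains any remaining requests before shutting down.
class LoaderThread {
public:
    explicit LoaderThread(std::function<void(const std::string&)> load)
        : load_(std::move(load)), worker_([this] { run(); }) {}

    ~LoaderThread() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            quit_ = true;
        }
        cv_.notify_one();
        worker_.join();
    }

    void request(std::string path) {   // called from the main thread
        {
            std::lock_guard<std::mutex> lock(mutex_);
            requests_.push(std::move(path));
        }
        cv_.notify_one();              // wake the loader
    }

private:
    void run() {
        std::unique_lock<std::mutex> lock(mutex_);
        for (;;) {
            cv_.wait(lock, [this] { return quit_ || !requests_.empty(); });
            if (requests_.empty()) return;  // only empty here if quitting
            std::string path = std::move(requests_.front());
            requests_.pop();
            lock.unlock();
            load_(path);               // slow work without the lock held
            lock.lock();
        }
    }

    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::string> requests_;
    bool quit_ = false;
    std::function<void(const std::string&)> load_;
    std::thread worker_;               // constructed last, so run() is safe
};
```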
|
# ? May 16, 2016 20:54 |
|
Obsurveyor posted:Just do it and see what happens. You've got people arguing about it in general terms and specific terms and I think you're getting all confused by that. Just. do. it.

This is probably the best advice for people learning new things in code. Write something that you think should work, test it, then fix whatever breaks. That's basically the common programming loop in a nutshell; few things you write will ever work right on the first try, and you pretty much have to deliberately try to break things to screw up your computer in any meaningful way. Generally speaking, the worst thing that can happen is your game crashes and you have to figure out what went wrong to cause that.
|
# ? May 16, 2016 23:12 |
|
Yeah, if you have a vague idea of how to do whatever you want to do, just do it, then ask questions when poo poo starts breaking and ask for code reviews or whatever. Don't spend tons of time in analysis paralysis or you won't get anything done. Just make sure it's easy to replace the poo poo you wrote in case it's completely broken (no one ever does this).
|
# ? May 17, 2016 01:19 |
|
I have git. I think my paralysis is because I have never worked with a code base this big. I have spent today looking through the code for where it loads. 15,000 files is a lot.
|
# ? May 17, 2016 01:54 |
|
"at least you have source code"
|
# ? May 17, 2016 06:23 |
|
Stick100 posted:https://www.assetstore.unity3d.com/en/#!/content/32647

Thanks, got it working. Now to figure out the rest.
|
# ? May 17, 2016 11:02 |
|
22 Eargesplitten posted:Okay. So if I'm understanding you correctly, the second part is mostly an issue if I try to render something in the second thread, but if I'm just loading stuff and handing it over to the main thread in whole units when it's ready, that shouldn't be a problem. I'm planning on using an eventqueue if I can figure out / find how to implement it in C++. That should keep the second thread limited to loading, since it will be asleep until I feed something into it, and then put back to sleep afterwards. That way I can also make sure that everything is going in and out in the order I want.

There are some pretty heavyweight ways of handling this where you synchronize on just about every event queue operation. There are lighter ways that sound like premature optimization, but are lazier to write. One type of event queue just wraps your favorite sequential data structure with a master lock. Keep two of them around. When the primary thread is ready, the queue manager hands over the one the secondary thread was just using, and then rotates in the other queue so the secondary thread can keep chewing on things. Then you only need to synchronize on the queues themselves. As for premature optimization: depending on the usage model, you might even be able to use lockless queues; I think there is a basic producer-consumer one that might be good enough if you just have one producer and one consumer.
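The lockless single-producer/single-consumer queue alluded to at the end can be sketched as a fixed-size ring with two atomic indices. This is an illustrative C++ version, not a production one; the capacity, names, and memory orders are my assumptions.

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

// SPSC ring buffer: only the producer writes head_, only the consumer
// writes tail_, so no lock is needed. One slot is sacrificed to tell
// "full" apart from "empty".
template <typename T, std::size_t N = 64>
class SpscQueue {
public:
    bool push(T value) {                       // producer thread only
        std::size_t head = head_.load(std::memory_order_relaxed);
        std::size_t next = (head + 1) % N;
        if (next == tail_.load(std::memory_order_acquire))
            return false;                      // full
        buffer_[head] = std::move(value);
        head_.store(next, std::memory_order_release);
        return true;
    }

    std::optional<T> pop() {                   // consumer thread only
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return std::nullopt;               // empty
        T value = std::move(buffer_[tail]);
        tail_.store((tail + 1) % N, std::memory_order_release);
        return value;
    }

private:
    std::array<T, N> buffer_{};
    std::atomic<std::size_t> head_{0};
    std::atomic<std::size_t> tail_{0};
};
```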
|
# ? May 17, 2016 17:38 |
|
Shalinor posted:Can anyone remember how planar reflections work (or find a writeup)? I'm finding mostly writeups for "all the tech that came way, way after planar reflections that makes it look cooler" when, no, I really do just want a perfectly reflective floor that puts up incredibly unrealistic mirror reflections. Specifically, what is throwing me off is remembering which axes change when creating a mirrored wall vs. a mirrored floor. Most especially though, the distortions/UVs involved in mapping the resultant offscreen RT to the floor look like black magic. Surely there's a simple mathematical explanation for those, right? I mean I know the effect is a bit of a hack, but man.

It's just reflection in a plane. If Z is the vertical axis and the floor is the plane defined by Z = 0, then inverting the Z coordinate is exactly right. If the floor is the plane defined by Z = a, then you want to replace the Z coordinate with 2a - Z. Non-axis-aligned planes would be trickier, but it's still just reflection in a plane, so you could do it by rotating the world so that the mirror plane is axis-aligned, doing the reflection, and then rotating back. (You get 2a - Z the same way: -(Z - a) + a.)

seiken fucked around with this message at 19:41 on May 17, 2016 |
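The arithmetic above can be checked with a couple of plain functions (C++ here; `Vec3` and the names are just for illustration). The general version reflects a point across any plane `dot(p, n) = d` with unit normal `n`, which is the "rotate so the plane is axis-aligned, flip, rotate back" trick in closed form.

```cpp
struct Vec3 { double x, y, z; };

// Reflection across the horizontal plane z = a replaces z with 2a - z;
// the floor case is just a = 0.
double reflect_z(double z, double a) { return 2.0 * a - z; }

// General reflection across the plane dot(p, n) = d (n must be unit length).
Vec3 reflect_point(Vec3 p, Vec3 n, double d) {
    double dist = p.x * n.x + p.y * n.y + p.z * n.z - d;  // signed distance
    return {p.x - 2.0 * dist * n.x,
            p.y - 2.0 * dist * n.y,
            p.z - 2.0 * dist * n.z};
}
```

Applying either function twice gives back the original point, which is a handy sanity check that the plane offset is handled right.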
# ? May 17, 2016 19:35 |
|
Crosspost from the Doom 2016 thread, but: Is there a comprehensive guide to or review of the original Doom source code, such that a relative newbie (~1 year of professional software development) can understand it? I've never delved into a large C project before, and the lack of a directory structure is throwing me off on where the entry point into the application is, what the program flow is, what everything even is, and how it's organized.
|
# ? May 18, 2016 13:28 |
|
The closest you're probably going to get is Fabien Sanglard's Doom source review. It's neat and relatively easy to understand. http://fabiensanglard.net/doomIphone/doomClassicRenderer.php
|
# ? May 18, 2016 13:57 |
|
Cool, thanks! That covers a good chunk of what I was wondering about re: rendering. I'm not sure how much I need to worry about assembly-level RAM/cache optimization in modern development, but I'm guessing I'd need to worry about it if I'm trying to replicate this in Clojure or something...?

I also have a kind of related question. When rendering game state, do you want to do rendering concurrently with updating the game state, or do you want to wait until the rendering is finished before advancing the game state, accepting input, etc.? Basically, do you want the update->render->update loop to be sequential, or should rendering be a process that's called separately?

Pollyanna fucked around with this message at 15:22 on May 18, 2016 |
# ? May 18, 2016 14:52 |
|
Pollyanna posted:I also have a kind of related question. When rendering game state, do you want to do rendering concurrently with updating the game state, or do you want to wait until the rendering is finished before advancing the game state, accepting input, etc. Basically, do you want the update->render->update loop to be sequential, or should rendering be a process that's called separately?

It can be done either way. You would essentially double- or triple-buffer the game state, so that the game thread is working on updating game state A while the render thread is working from game state B. The tricky part is making sure you aren't stalling either pipeline. If the render thread is done, you don't want it looking around for a new game state that isn't ready yet.
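The triple-buffer handoff can be sketched like this in C++. The index-plus-flag trick is one common way to do it lock-free: the simulation thread owns one slot, the render thread owns another, and they trade through a single atomic "middle" index whose high bit marks "a new state was published". `GameState` and the names are illustrative.

```cpp
#include <array>
#include <atomic>

struct GameState { long frame = 0; };  // stand-in for the real state

class TripleBuffer {
public:
    GameState& back() { return slots_[back_]; }  // simulation writes here

    void publish() {   // simulation thread: swap finished state into middle
        int prev = middle_.exchange(back_ | kFresh, std::memory_order_acq_rel);
        back_ = prev & kIndexMask;     // reuse whatever slot was in middle
    }

    const GameState& front() {         // render thread: newest state, if any
        if (middle_.load(std::memory_order_acquire) & kFresh) {
            int prev = middle_.exchange(front_, std::memory_order_acq_rel);
            front_ = prev & kIndexMask;
        }
        return slots_[front_];         // otherwise re-render the last state
    }

private:
    static constexpr int kFresh = 4, kIndexMask = 3;
    std::array<GameState, 3> slots_{};
    int back_ = 0;                     // only touched by the simulation thread
    int front_ = 1;                    // only touched by the render thread
    std::atomic<int> middle_{2};
};
```

Neither side ever waits: the simulation always has a free slot to write, and the render thread falls back to the last published state.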
|
# ? May 18, 2016 17:43 |
|
You can render game state in all sorts of ways. It almost always comes down to a question of latency vs. throughput vs. frame spikes, though. If you render everything ASAP, that's the lowest latency but also the lowest throughput, because throughput benefits from batching things together. Batching commands helps throughput, but batches that are too big also create stalls. Adding extra buffered frames can smooth out frame spikes but further increases latency. So a twitchy FPS requires low latency and therefore doesn't want to buffer any extraneous frames. A VR game requires extremely low latency and needs to pull tricks to get as little buffering as possible. A walking simulator mostly wants to look nice and have a smooth experience, so it would have a bigger buffer -- there's no need to rush draw commands to the GPU.
|
# ? May 18, 2016 19:27 |
Suspicious Dish posted:Threading is also difficult because Direct3D / OpenGL are not thread happy at all. You can rarely upload geo and textures from anything but your GPU thread. So naively throwing threads at it is not going to solve things.

I ran into this while working on The Rage of Painting. I tried to preload resources in a different thread from the OpenGL context and couldn't allocate memory for the textures correctly. Might have been a libGDX limitation, but be advised.
|
|
# ? May 18, 2016 23:29 |
|
So for you guys who use Unity, any must-have assets? I've already bought Simple Waypoint System because I needed my enemies to follow a path and it is working pretty great (for 20 bucks I can't code it myself). All in all I am pretty impressed with Unity and the ease of development. Having C# as a language sure helps.

One thing I was wondering though: is there a limit to the amount of scripts I can hang off a game object? Or does all this stuff get compiled into one big script?
|
# ? May 22, 2016 08:38 |
|
Mr Shiny Pants posted:So for you guys who use Unity, any must have assets?

Nope! I do everything myself and it serves me well. I primarily use Unity as a graphical engine and mostly just tell it where to put the pretty pictures. The rest of the game logic I just write myself.

Mr Shiny Pants posted:One thing I was wondering though: is there a limit to the amount of scripts I can hang off a game object? Or does all this stuff get compiled into one big script?

As far as I can tell, any amount, and I don't think it does. You probably want to keep each object limited to as few scripts as possible to avoid update-order problems, but I'm pretty sure Unity also lets you dictate what order scripts are called in, so that's not a huge deal. You can also create classes that aren't inherited from MonoBehaviour and do traditional classes and objects within those scripts just fine.
|
# ? May 22, 2016 09:35 |
|
ToxicSlurpee posted:Nope! I do everything myself and it serves me well. I primarily use Unity as a graphical engine and mostly just tell it where to put the pretty pictures. The rest of the game logic I just write myself.

Thanks. Even for something as basic as drawing splines? I've looked at doing those myself but it got pretty complicated pretty fast. Well, I still need to write the logic, just the movement is taken care of now. Seemed like a basic thing that would already be in the engine.

Mr Shiny Pants fucked around with this message at 10:16 on May 22, 2016 |
# ? May 22, 2016 10:14 |
I don't see a Unity-specific thread, so I am going to take a shot in here. I have a scene in Unity, with an orthographic main camera. I have a GUI canvas in screen space. The scaler for the canvas is set to scale with screen size. The canvas has several GUI elements (images, text) as children. The setup is intended to be 2D, but I haven't done anything special with the newish Unity 2D toolset. I just have some sprites in the scene. The GUI is always visible. Imagine a few big buttons down the left-hand side of the screen.

I want to find out how far the actual GUI elements extend into the scene, so I can calculate the size and location of the playable area to the right of the GUI. The canvas always reports itself as being its reference resolution, and I get wacky results when trying to grab renderers and transforms from the individual GUI elements. What is the proper way to do what I am trying to do?
|
|
# ? May 22, 2016 13:01 |
|
Centripetal Horse posted:I don't see a Unity-specific thread, so I am going to take a shot in here.

What you could do is make an invisible UI Box that is set to auto-scale in the RectTransform component, and then use the dimensions of that RectTransform, which the UI will calculate itself, to know how much playable area there is. To make it scale to the proper size along with your buttons, you can use Content Size Fitters or other similar auto-scaling components from the Component->Layout menu.
|
# ? May 22, 2016 13:30 |
|
Mr Shiny Pants posted:So for you guys who use Unity, any must have assets?

TextMeshPro if you want stylized text (gradients/shaders/what have you). It's generally pretty great and has decent documentation and a bunch of videos. Haven't run into a limit on scripts on an object, but I never tried to go crazy with it. They're still separate class objects, just attached to the same parent in the scene graph. As such, there's some minimum memory overhead per class object, so at some point there's an implicit maximum; you're doing bad things if you get near that, though. Having C# as a language sure would be nice; instead we get Unity-C#-3.5.

e: ToxicSlurpee posted:Far as I can tell any amount and I don't think it does. You probably want to keep each object limited to as few scripts as possible to avoid update orders but I'm pretty sure Unity also lets you dictate what order scripts are called in so that's not a huge deal. You can also create classes that aren't inherited from MonoBehaviour and do traditional classes and objects within those scripts just fine.

Nope, update orders are a persistent pain if you're doing things where they matter. The common solution is to write a TotallyNotUpdate() function and write a UnityIsDumbUpdateManager class that holds an ordered list of classes to call TotallyNotUpdate() in Update().

leper khan fucked around with this message at 14:14 on May 22, 2016 |
# ? May 22, 2016 14:07 |
|
leper khan posted:Nope, update orders are a persistent pain if you're doing things where they matter. The common solution is to write a TotallyNotUpdate() function and write a UnityIsDumbUpdateManager class that holds an ordered list of classes to call TotallyNotUpdate() in Update().

I've been doing that because it turns out doing manual updates improves performance something fierce. It also lets you absolutely, always, totally 100% control which order things happen in, which is very useful. Some things still use Update, but yeah... that's a good habit.
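The manager pattern itself is language-neutral; here is a sketch in C++ (in Unity the `update_all` call would live inside the one real `Update()`). The names mirror the posts and are purely illustrative.

```cpp
#include <algorithm>
#include <vector>

// One engine callback drives an explicitly ordered list of objects, so call
// order is fixed by registration order instead of by the engine's whims.
struct Updatable {
    virtual ~Updatable() = default;
    virtual void TotallyNotUpdate(float dt) = 0;
};

class UpdateManager {
public:
    void add(Updatable* u) { objects_.push_back(u); }

    void remove(Updatable* u) {
        objects_.erase(std::remove(objects_.begin(), objects_.end(), u),
                       objects_.end());
    }

    // Called once per frame by the single real engine update callback.
    void update_all(float dt) {
        for (Updatable* u : objects_) u->TotallyNotUpdate(dt);
    }

private:
    std::vector<Updatable*> objects_;  // iteration order == call order
};
```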
|
# ? May 22, 2016 16:26 |
|
ToxicSlurpee posted:I've been doing that because it turns out it improves performance if you do manual updates something fierce. It also lets you absolutely, always, totally 100% control which order things happen in which is very useful.

Could you expand on this? I guess I get the gist of what it does, but why do you need to do it?
|
# ? May 22, 2016 20:38 |
|
I find threading easy so long as it's well managed (which isn't always easy). Document ahead of time where threads will interact. Always do this at well-defined points and never just wing it. C# in particular makes this easy due to the TPL, and each thread having a SynchronizationContext allows task continuations to be posted to a specific thread. I also like the dispatcher model from WPF. It forbids direct access to an object from another thread; instead you need to use the object's dispatcher to invoke actions on the object's thread. This is where C#'s async syntax makes things easy. Threading in gaming is pretty simple because most work that needs to be handed out to another thread is a simple job, either IO or a computation that only needs to give you a result.
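The dispatcher model is easy to sketch outside C# too. In this illustrative C++ version, other threads never touch the object directly; they post closures to its dispatcher, and the owning thread drains the queue at a point it chooses. (WPF's Dispatcher and C#'s SynchronizationContext work along these lines; the names here are made up.)

```cpp
#include <deque>
#include <functional>
#include <mutex>

class Dispatcher {
public:
    void invoke_async(std::function<void()> action) {  // any thread
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push_back(std::move(action));
    }

    void drain() {                                     // owning thread only
        std::deque<std::function<void()>> batch;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            batch.swap(queue_);                        // grab everything at once
        }
        for (auto& action : batch) action();           // run without the lock
    }

private:
    std::mutex mutex_;
    std::deque<std::function<void()>> queue_;
};
```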
|
# ? May 22, 2016 20:45 |
|
Mr Shiny Pants posted:So for you guys who use Unity, any must have assets?

I don't think there are any 'must have' assets, but they can really speed up prototyping / development if you use them well. This might be horrifically unethical, but I am a fan of torrenting assets for prototyping and then buying them if I use them in a released game. A person named Zoro uploads hundreds of them to a pretty kickass site you might be able to locate.

Not sure what sort of game you're making, but Chronos is really fun to mess with (control speed of objects or within areas), and it has really good documentation and the developer is friendly! (YouTube / asset store). Also, it sounds like your waypoint system is working, but A* Pathfinding Project has a free version that works really well. You can set a collision mesh of objects it ignores, it'll scan your scene to see what's traversable terrain, and you can set waypoints and everything (also it's all shown in the editor scene).
|
# ? May 22, 2016 22:04 |
Mercury_Storm posted:What you could do is make an invisible UI Box that is set to auto-scale in the RectTransform component, and then use the dimensions of that RectTransform which the UI will calculate itself to know how much playable area there is.

It's been a long time since I used Unity even semi-seriously. I am not familiar with the UI stuff. What do you mean by "UI Box?" The only thing I see that sort of fits that description is the Panel, which I already tried. The Panel and the Canvas both have RectTransform components, but they report nonsense values. At least, I don't know how to properly use the values. For example, no matter where the panel is, it reports xMin, xMax, and x as -(half its width). A panel that is on the canvas, and set to 380 pixels wide, reports all three of those values as -190, and reports its center as "0, 0" no matter where I drag it. Do you mean something else, like making a plane object, and slapping UI stuff on that?

Edit: Am I on the right track with this? It seems dumb, but at least gives me values I understand. When I position things according to those values, I am still not getting what I expect, but it's better than it was before.

Edit again: Nope. This breaks with screen size changes. code:

Edit: Before I gave up and went to bed, I saw some stuff about anchoredPosition, and sizeDelta. I don't know if those are what I'm looking for, either. Ultimately, I would like to just get the far-right coordinate of the element that extends the farthest into the scene.

Centripetal Horse fucked around with this message at 00:12 on May 23, 2016 |
|
# ? May 22, 2016 23:30 |
|
Centripetal Horse posted:It's been a long time since I used Unity even semi-seriously. I am not familiar with the UI stuff. I don't even remember it being in there, although it could just be that I never had to use it in this manner. What do you mean by "UI Box?" The only thing I see that sort of fits that description is the Panel, which I already tried. The Panel and the Canvas both have RectTransform components, but they report nonsense values. At least, I don't know how to properly use the values. For example, no matter where the panel is, it reports xMin, xMax, and x as -(half its width). A panel that is on the canvas, and set to 380 pixels wide, reports all three of those values as -190, and reports its center as "0, 0" no matter where I drag it.

By a UI box, I just mean make a UI element that doesn't have any visible content in it, and that will denote the play area as being the rest of the screen assigned to the UI screen space, as opposed to all the space taken up by your buttons. Here's a function I wrote a while ago that can determine if any given point is inside a RectTransform. It also demonstrates how to get the dimensions of it, but it doesn't support rotations of the Rect or other fancy stuff: code:
Mercury_Storm fucked around with this message at 00:18 on May 23, 2016 |
# ? May 23, 2016 00:07 |
|
Mr Shiny Pants posted:Could you expand on this? I guess I get the gist of what it does, but why do you need to do it?

http://blogs.unity3d.com/2015/12/23/1k-update-calls/

Performance is the big one. It also lets you explicitly and tightly control what gets called, where, and in what order. Unity can get a bit finicky at times when it comes to what order things happen in. If your game is small this might not matter, but once you start getting piles and piles of objects it can matter a great deal.
|
# ? May 23, 2016 00:34 |
Mercury_Storm posted:By a UI box, I just mean make a UI element that doesn't have any visible content in it, and that will denote the play area as being the rest of the screen assigned to the UI screen space as opposed to all the space taken up by your buttons.

Thanks for the explanation. Does this look right? code:
Edit: Nope. poo poo. Centripetal Horse fucked around with this message at 00:59 on May 23, 2016 |
|
# ? May 23, 2016 00:41 |
|
Centripetal Horse posted:Thanks for the explanation. Does this look right?

Yeah, that looks right. I don't think you need to set the pivot to the top left, but I think my code might be too specific to my UI to be used for any variation.
|
# ? May 23, 2016 01:40 |
Mercury_Storm posted:Yeah that looks right. I don't think you need to set the pivot to the top left, but I think my code might be too specific to my UI to be used for any variation.

It was giving me some odd results until I moved the pivot to the top left. Specifically, it was giving me numbers like "-104.20802," which is not half of the width, not half of the height, not the entire width, not the width minus the screen position, and so on. I don't know where the number was coming from.

I found a solution, based loosely on what you posted. It seems stupidly verbose for what I am trying to do, but it works. code:
|
|
# ? May 23, 2016 05:46 |
|
I started writing shaders in Unity, and I'm trying to learn more about the information that is flowing inside of them by debugging a shader. My problem is that the source for the shader doesn't load correctly. This question gets a little blurry because I can't tell if it's a Unity thing or a VC++ thing, or a combo. So I followed this guide http://forum.unity3d.com/threads/debugging-shaders-in-visual-studio.322186/ and did all the stuff it says, and I can debug, but it loads the source wrong. It's the source combined with assembly. Have any of you had similar problems debugging shaders in VC++? How do I get it to work right?

e: Oop, sorta figured it out. I have to rename the auto-genned file in the temp directory.

KoRMaK fucked around with this message at 18:39 on May 23, 2016 |
# ? May 23, 2016 18:17 |
|
Mr Shiny Pants posted:So for you guys who use Unity, any must have assets?

Probuilder is a free tool that's really amazing for quickly blocking out level geometry. If you do 3D games it's way easier to use than slamming together primitives.

e: There's a Pro version that has some features that are annoyingly missing from the free version, but you can get a lot of mileage out of the free version without committing / while you wait for a sale.
|
# ? May 23, 2016 22:16 |
|
RoboCicero posted:Probuilder is a free tool that's really amazing for quickly blocking out level geometry. If you do 3D games it's way easier to use than slamming together primitives.

Just to be pedantic, the free version of Probuilder is now referred to as Probuilder Basic; it used to be called Prototype.
|
# ? May 24, 2016 15:58 |
|
Cool, thanks for the replies guys. Seems like I have some "researching" to do. Being able to write stuff in .NET is so drat handy. Need to store something? SQLite, etc.
|
# ? May 24, 2016 17:45 |
|
|
|
Xposting from the games forum since this is probably a better place to ask? Anyone have any idea why my player is moving when it shouldn't be? In Unity, I'm trying to keep them at the origin. On every FixedUpdate() I have player.transform.position = Vector3.zero (or player.transform.position = player.transform.position - player.transform.position) after my actual movement code (which uses Rigidbody.AddForce), so the player should stay at the origin, right? But it doesn't; it still moves, just slower. I've tried searching but haven't really come up with any answers. There are no animations. I read about some discrepancies between an object's transform and its rigidbody position, but setting playerrigidbody.position = Vector3.zero instead makes no difference. Before and after that line I print out the value of the transform; they're both (0,0,0).

I know that physics are applied after FixedUpdate, so what I would expect (aside from the transform that I print matching what the editor shows me) is that the first print statement will show me some non-zero vector (due to the movement being applied in the previous loop's physics step) and the second print statement will show (0,0,0) because I just zeroed it out in the line before. Also, if I comment out the movement code entirely or fix the player's position, when I press play, the player's position gets slightly offset from the origin for no apparent reason. What the heck is going on?

edit: I simplified the movement code. Previously my player was moving under the effects of gravity and thrust. I got rid of the gravity component. My print statements are showing what I expected now, but my player is still moving when it should be being reset to (0,0,0) every FixedUpdate().

Subyng fucked around with this message at 01:24 on May 26, 2016 |
# ? May 26, 2016 01:20 |