|
roomforthetuna posted:The first block is just warnings (that don't appear to be related), so can probably be ignored.

I tried linking against the 32-bit GTK when I downloaded other versions, but that just caused the linker to crash right out, so I don't think that's the case. Also yeah, for some reason it still names them all with 32 even though it's the 64-bit libraries. Those link arguments are actually generated by winGTK itself - early in the compile process, the makefile has it run pkg-config, which spits out all this stuff: code:
|
# ? Sep 20, 2011 20:35 |
|
|
|
Could it be something really simple like you need to do a make clean after having tried it with the wrong libraries?
|
# ? Sep 20, 2011 22:40 |
|
roomforthetuna posted:Could it be something really simple like you need to do a make clean after having tried it with the wrong libraries?

Nope, I've already run "make spotless" (which essentially gets rid of everything that's not one of the basic .c or .h files) a bunch of times before running a compile - same thing happens every time. It gets through fine up until that point, but falls apart when it tries to link GTK.

*edit* Okay, I've solved the problem - it DOES manage to link if I use the 32-bit GTK libraries, and the reason it was crashing before was that the 64-bit binaries are on my PATH, so when it tried to run one of the utilities created by the compiler with the 32-bit library, they were incompatible (I got around this by just dumping all the 32-bit GTK binaries in the source folder while it compiled).

I do have a slight issue now that isn't THAT serious since I already have a workaround, but I'm wondering if there's a more elegant solution. Essentially, slashem.exe has the same problem as the utilities used by the makefile: it will crash on startup because it's trying to load the wrong .dlls. Is there a way to direct it to the correct folder without adding it to my PATH? The reason I can't just add it to the PATH is that, because all the .dlls have the same filenames, I'm thinking it will probably confuse them and mess up other GTK applications. I can use the same workaround as before, dumping all the binaries in the slashem folder, but it uses quite a few of them so it really clutters up that folder, and it seems pointless since it's just duplicating stuff that's already on my machine elsewhere. Is there a way to compile it so it knows where to look for the binaries? Even better, is there a way to modify the executable header to just point it in the right direction? (I have software that lets me do this.)

Alternatively, what would be required to make the program compile correctly as a 64-bit application? Would it require changing the code at all, or could I do it entirely within the makefile, through gcc flags? The Cheshire Cat fucked around with this message at 00:09 on Sep 21, 2011 |
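One per-process answer to the PATH question, sketched in Python purely for illustration (the paths in the comment are hypothetical): Windows resolves a program's DLL imports by searching the executable's own directory first and then the directories on PATH, and an environment handed to a child process is visible to that process only, so other GTK applications keep seeing their own libraries.

```python
import os
import subprocess

def env_with_dll_dir(dll_dir, base_env=None):
    """Return a copy of the environment with dll_dir prepended to PATH.

    Only a child process launched with this environment sees the change;
    the user's global PATH is left alone.
    """
    env = dict(os.environ if base_env is None else base_env)
    env["PATH"] = dll_dir + os.pathsep + env.get("PATH", "")
    return env

def launch_with_dlls(exe_path, dll_dir):
    # e.g. launch_with_dlls(r"C:\games\slashem\slashem.exe", r"C:\gtk32\bin")
    return subprocess.Popen([exe_path], env=env_with_dll_dir(dll_dir))
```

There are also manifest-based DLL-redirection tricks on Windows, but a tiny launcher like this is usually simpler and less fragile than editing the executable header.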
# ? Sep 20, 2011 22:45 |
|
The easiest way is still putting the executable and all of the DLLs in the same directory as the libraries. This may or may not be a good idea with system-installable frameworks, but Gtk has been known to break compatibility with version updates.
|
# ? Sep 21, 2011 00:41 |
|
I just recently started trying to use GLUT. I got everything set up in Visual Studio 2010 using the instructions I found here, and to test everything out I downloaded a sample program from OpenGL's site (namely aaindex.c), but now when I try to run it, it tells me "pixel format with necessary capabilities not found". Does anyone know what that means?
|
# ? Sep 21, 2011 00:56 |
|
Puppet Pal Claudius posted:I just recently started trying to use GLUT and I just got everything set up in Visual Studio 2010 using the instructions I found here, and to test everything out I just downloaded a sample program from OpenGL's site (Namely aaindex.c) but now when I try to run it it tells me "pixel format with necessary capabilities not found". Does anyone know what that means?

Amateur answer, take with a grain of salt. Quick googling indicates that this error crops up when you use color index mode, which has been deprecated for a good number of years. So on the line with "glutInitDisplayMode (GLUT_SINGLE | GLUT_INDEX);" you might want to instead try "glutInitDisplayMode(GLUT_RGB);" (switching your computer's display mode to 16-bit color might let you use index mode, though). Oh, and maybe find samples newer than 1997, even though OpenGL resources in general seem to be pretty hard to find (or were, when I was trying to dabble with it). Che Delilas fucked around with this message at 02:18 on Sep 21, 2011 |
# ? Sep 21, 2011 02:15 |
|
Che Delilas posted:Amateur answer, take with grain of salt. Oh hey, now the console has hella warnings saying "glutSetColor: current window is RGBA", but it works! Thanks dude!
|
# ? Sep 21, 2011 02:21 |
|
GLUT has some major glass ceiling problems and will teach you bad habits to the point that I'd recommend using SDL instead. Basically the GLUT utility functions are not something you should rely on since they only work in the GLUT framework, the GLUT framework sucks, SDL doesn't suck and is barely any more difficult.
|
# ? Sep 21, 2011 02:36 |
|
If you must use something like Glut, GLFW is a quite good substitute. Does exactly the same job but in a much better way.
|
# ? Sep 21, 2011 04:13 |
|
Alright. I got my take on a basic Entity/Component model running in C#. I would like to know what you guys think of it. It works as follows: I have an Entity class that does nothing but manage its Components. After its creation, it gets Components added to it. Once all Components are added, a finalize() method is called in which it checks for dependencies between components. The Components are derived from an abstract Component class that defines a few methods and attributes. This is all the Entity will ever know.

Communication
This is of course the most important part of an Entity. For all communication so far I use delegates that are defined in the Entity. When a component gets added to an Entity, it registers some of its functions with those delegates. I classify the delegates in Entity in 3 groups:

Out of Entity calls
These are calls from outside the Entity that may or may not be of interest to some of its components. Examples are onUpdate and onRender. A Renderer Component will want to register with the onRender delegate. Current OEC delegates: onUpdate, onRender.

In Entity Requests
These are for communication inside an Entity. A renderer Component needs a position and rotation to render its model. A position Component will register with a position request delegate that is then called by the renderer Component to get the current position. Current IER delegates: reqGetPosition, reqGetRotation.

In Entity Notifications
Sometimes a Component will want to let other Components know about something. These delegates are for sending information to whom it may concern. I have none of those for now.

Out of Entity Events
These delegates do not reside in an Entity but rather in some outside system. For example, the Input Manager could expose delegates that a movement Component could subscribe to. It would thus get input messages without the need to go through the Entity first. That is actually what I plan to implement next.

For Components that could do with more direct coupling, I plan to implement functions that allow Components to receive other Components from the Entity directly, by requesting them by role. This should stay the exception though.

Dependencies
Certain Components need other Components to work. To make sure they are all present, every Component declares its own role in the form of a string, and a string array with the roles it depends on. The Entity checks during finalize() whether this is consistent. If not, an exception is raised.

Currently all this renders a rotating chair using these Components: PositionComponent, RotationComponent, RenderComponent, MoverComponent.

The obvious drawback of all these delegates is an increased memory footprint per Entity and a little overhead per call. The bigger problem is that I will probably be drowning in delegates by the time this is anything like a game. So what do you guys think?
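For concreteness, here is a minimal sketch of the wiring described above - in Python rather than the post's C#, with every name invented except the ideas taken from the post (finalize(), an update delegate list, a position request delegate, and role/dependency strings):

```python
class Entity:
    def __init__(self):
        self.on_update = []           # "out of entity" delegate list
        self.req_get_position = None  # "in entity" request delegate
        self.components = []

    def add(self, component):
        self.components.append(component)
        component.attach(self)

    def finalize(self):
        # dependency check: every role a component depends on must be present
        roles = {c.role for c in self.components}
        for c in self.components:
            missing = set(c.depends_on) - roles
            if missing:
                raise ValueError(f"{c.role!r} depends on missing roles {sorted(missing)}")

    def update(self, dt):
        for handler in self.on_update:
            handler(dt)


class PositionComponent:
    role, depends_on = "position", ()

    def __init__(self):
        self.pos = [0.0, 0.0]

    def attach(self, entity):
        # register a getter with the entity's request delegate
        entity.req_get_position = lambda: tuple(self.pos)


class MoverComponent:
    role, depends_on = "mover", ("position",)

    def __init__(self, velocity=(1.0, 0.0)):
        self.velocity = velocity

    def attach(self, entity):
        self.entity = entity
        entity.on_update.append(self.step)

    def step(self, dt):
        # the "more direct coupling" variant: fetch a sibling component by role
        pos = next(c for c in self.entity.components if c.role == "position")
        pos.pos[0] += self.velocity[0] * dt
        pos.pos[1] += self.velocity[1] * dt
```

An entity built from these two components moves when update(dt) is pumped, and finalize() raises if a mover is added without a position - the same consistency check the post describes.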
|
# ? Sep 21, 2011 09:51 |
|
I wonder about your finalize system since I pondered something similar for dependency resolution. Can you think of any situations where you might want to dynamically add or remove components? I imagine that would give you some grief if you're baking the entities. Maybe it won't come up for you, so you're fine and good. Edit: An example I had where I could imagine myself adding or removing components was with having more than one playable character between which I might switch. So there I'd switch out an AI component for a human control, or vice versa.
|
# ? Sep 21, 2011 19:17 |
|
Rocko Bonaparte posted:I wonder about your finalize system since I pondered something similar for dependency resolution. Can you think of any situations where you might want to dynamically add or remove components? I imagine that would give you some grief if you're baking the entities. Maybe it won't come up for you, so you're fine and good.

For what it's worth, it's not uncommon for me to add and remove components on the fly, after object creation, in Caves of Qud. When I do, it's often to specialize a base object. For instance I might clone an object and then add a "temporary" component that manages destroying the object after a few turns - that's how I actually implement mutations like "temporal fugue" that copy the mutant for a few turns. Another example is skills and mutations, which are each just components I add to objects that have that skill. So to give something flaming hands, I just add the flaming hands component. Since you can gain mutations or skills on the fly, it's important for me to have a mutable component system that doesn't require pre-baking.

I use string-based event IDs, since I didn't want to be drowning in delegates. One thing I wish I had added was a way to prioritize message handling between components. It'd be nice to be able to tell a component to handle an event before a different component, without having to add nested "BeforeX", "BeforeBeforeX", "AfterX", "AfterAfterX" (etc.) messages/delegates. Unormal fucked around with this message at 19:32 on Sep 21, 2011 |
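The prioritization wished for in that last paragraph can be had without any Before/After message nesting by keeping handlers per event ID in priority order. A hedged Python sketch - every name here is invented, not from Caves of Qud:

```python
import bisect

class EventBus:
    """String event IDs, handlers dispatched lowest priority value first."""

    def __init__(self):
        self._handlers = {}  # event id -> sorted list of (priority, seq, fn)
        self._seq = 0        # tie-breaker: equal priorities keep registration order

    def subscribe(self, event_id, handler, priority=0):
        entry = (priority, self._seq, handler)
        self._seq += 1
        bisect.insort(self._handlers.setdefault(event_id, []), entry)

    def fire(self, event_id, payload=None):
        for _, _, handler in self._handlers.get(event_id, []):
            # a handler may consume the event by returning False
            if handler(payload) is False:
                break
```

The seq counter also guarantees the sort never has to compare two handler functions, since (priority, seq) is always unique.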
# ? Sep 21, 2011 19:26 |
|
Relaxodon posted:So what do you guys think?

Conceptually it seems overengineered. For example, why are Position and Rotation separate components? For that matter, why are they components at all, when you have getRotation/getPosition delegates in the entity? Is it useful to have an entity without a position and rotation?

Second, your use of delegates for inter-component communication means any time you add a new type of component you'll probably also need to expand the entity class to accommodate it. Let's say, for example, you wanted to add a scaling component - you'll also have to add a getScaling delegate to the entity.

Finally, is all of this actually helping to solve any problem in your program, or is it creating new ones? If you're just playing around because you enjoy the engineering problem and this isn't part of a real project, ignore this - but if this is a real project I do have to wonder if any of this is helping you, or if it's just an advanced form of procrastination. I definitely have a tendency to do this myself.
|
# ? Sep 21, 2011 19:39 |
|
Does anyone have experience distributing games in Python (using OpenGL) on multiple platforms (Linux, OS X, Windows XP/7)? I'd like each platform to have a one-click application that simply runs the app and doesn't gently caress with the user's system, and for the Python to be compiled, as at least one level of obfuscation to prevent users from simply seeing the source code.
|
# ? Sep 21, 2011 19:55 |
|
Personally I have found that most pure Entity/Component models degrade into the typical OOP hierarchy and the inevitable blob class that you were trying to avoid in the first place. Once you start putting logic into components, you want to start subclassing those components to extend the logic for special cases. Instead I prefer the Entity/Component/System model, where all game logic lives in System classes (RenderSystem, PhysicsSystem, etc.) which operate on Components that function mostly as Plain Old Data containers. Event ordering problems then become much easier to manage, since you can guarantee that every system exists and can handle an event for a given entity in any way you deem necessary, without worrying about which components it has and what state they are in. Unfortunately I don't have any good resources explaining this approach besides a few over-engineered blog posts from T=Machine. http://t-machine.org/index.php/2007/11/11/entity-systems-are-the-future-of-mmog-development-part-2/ http://entity-systems.wikidot.com/rdbms-with-code-in-systems
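A bare-bones illustration of that split, in Python (all class names invented): components are pure data, and a system pulls in every entity that has the right combination of them.

```python
from dataclasses import dataclass

@dataclass
class Position:          # plain-old-data component
    x: float = 0.0
    y: float = 0.0

@dataclass
class Velocity:          # plain-old-data component
    dx: float = 0.0
    dy: float = 0.0

class World:
    def __init__(self):
        self.entities = {}  # entity id -> {component type: instance}
        self._next_id = 0

    def create(self, *components):
        eid = self._next_id
        self._next_id += 1
        self.entities[eid] = {type(c): c for c in components}
        return eid

    def with_components(self, *types):
        # yield the component tuple for every entity that has all of them
        for comps in self.entities.values():
            if all(t in comps for t in types):
                yield tuple(comps[t] for t in types)

class PhysicsSystem:
    """All the movement logic lives here, not in the components."""
    def update(self, world, dt):
        for pos, vel in world.with_components(Position, Velocity):
            pos.x += vel.dx * dt
            pos.y += vel.dy * dt
```

Entities without a Velocity are simply skipped by the physics system, so there is nothing to subclass when a special case only needs a different combination of data.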
|
# ? Sep 21, 2011 19:59 |
|
Rocko Bonaparte posted:I wonder about your finalize system since I pondered something similar for dependency resolution. Can you think of any situations where you might want to dynamically add or remove components? I imagine that would give you some grief if you're baking the entities. Maybe it won't come up for you, so you're fine and good.

Yes, I have thought about that, and I find it very likely that I will have to change the setup of an Entity. Of course, if only a single component is being added or removed, I can simply check for dependencies right then. The only problem is when multiple interdependent Components must be added. For that I was thinking about a mechanism to temporarily open up an Entity again and check for dependencies afterwards. Single-threaded, this should be no problem.

Paniolo posted:Conceptually it seems overengineered. For example, why are Position and Rotation separate components? For that matter, why are they components at all, when you have getRotation/getPosition delegates in the entity? Is it useful to have an entity without a position and rotation?

Generally, yes. Light sources may only have a rotation or position; scripts and other logic Entities may have neither. I agree though that it may be more trouble than it's worth. I could just slap two properties onto Entity and be done with it. But for now I want to be strict, to see how far I can take the concept.

Paniolo posted:Second, your use of delegates for inter-component communication means any time you add a new type of component you'll probably also need to expand the entity class to accommodate it. Let's say, for example, you wanted to add a scaling component - you'll also have to add a getScaling delegate to the entity.

This is what I meant by "drowning in delegates". I will probably implement delegates for the most common cases and use something more general like string messaging for the rest. For script components I will have to do this anyway. To strike the right balance here will be an issue.

Paniolo posted:Finally, is all of this actually helping to solve any problem in your program, or is it creating new ones? If you're just playing around because you enjoy the engineering problem and this isn't part of a real project, ignore this - but if this is a real project I do have to wonder if any of this is helping you, or if it's just an advanced form of procrastination. I definitely have a tendency to do this myself.

No, this is just a hobby. I have no problems to solve. I have never made a serious game architecture, so I have never run into any problems. I was looking into game engine design to get started on it. When I read about Entity/Component here on the forum it appealed to me, so I decided to implement it and see how it goes.

Btw, I have just implemented the functionality to serialize a whole scene with all Entities, Components and their Properties into an XML file. It worked out surprisingly smoothly. I could have just let .NET serialize the whole thing, but I opted for more control by letting the classes write themselves out manually. This gives me complete control, but also seems pretty cumbersome and error-prone in the long run. Does anyone have any experience with this? Also: Goddamn is this fun, I am such a nerd.
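On the serialization question: a middle ground between letting the framework serialize everything and having every class write itself out by hand is one generic walker over the entity/component/property structure, so there is a single place to get wrong. A Python sketch using only the standard library (the element names are invented, not from the poster's format):

```python
import xml.etree.ElementTree as ET

def scene_to_xml(entities):
    """Serialize [(entity_name, [(component_role, {prop: value})])] to XML.

    One generic walker instead of per-class save code: components only
    expose their role and a property dict, and this function does the rest.
    """
    scene = ET.Element("Scene")
    for name, components in entities:
        e = ET.SubElement(scene, "Entity", name=name)
        for role, props in components:
            c = ET.SubElement(e, "Component", role=role)
            for key, value in props.items():
                ET.SubElement(c, "Property", name=key, value=str(value))
    return ET.tostring(scene, encoding="unicode")
```

The per-class manual approach gives the same control, but every new component is another chance for a copy-paste bug; centralizing the walk keeps the error-prone part in one function.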
|
# ? Sep 21, 2011 20:35 |
|
Fren posted:Does anyone have experience distributing games, in Python (using OpenGL) on multiple platforms (Linux, OS X, Windows XP/7)? I'd like for each platform to have a one-click application that simply runs the app and doesn't gently caress with the user's system; and for the Python to be compiled as at least one level of obfuscation preventing users from simply seeing the source code. py2exe As for the component stuff, I agree with Unormal. Have components be completely independent of one another and mutable and attached to the Entity in whatever way you wish. Personally, I also register actions with the Entity where all of the logic sits. Any dependencies I have are within the Action and it's my responsibility that I have the correct components attached for any actions I attach.
|
# ? Sep 21, 2011 22:04 |
|
poemdexter posted:py2exe

A good safety net here would be that in your EntityFactory, the final step could be to scan through the list of actions you've attached to the entity and automatically add any required components for those actions that the entity is missing, whether or not those components are specified in the entity description. You could document it as: those actions imply those components, so any entity that performs those actions can implicitly be said to have those components - just make sure you set up your dependencies so that it always makes sense. This could also save you the trouble of manually specifying components in your entity descriptions when the actions imply them anyway. Then when you design a new kind of entity, you only need to designate A) its actions and B) any components not implied by those actions, but which make sense for the particular entity.
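That "actions imply components" step might look something like this - a Python sketch where each action declares the component roles it requires along with a default factory for each, and the entity factory fills in whatever the description left out (all names are hypothetical):

```python
def build_entity(actions, explicit_components):
    """Return role -> component for an entity.

    Explicit components win; any role an attached action requires but the
    description didn't mention gets a default-constructed component.
    """
    components = dict(explicit_components)
    for action in actions:
        for role, default_factory in action.requires.items():
            components.setdefault(role, default_factory())
    return components

class WalkAction:
    # role -> zero-argument constructor producing a default component
    # (plain dicts stand in for real component classes in this sketch)
    requires = {"position": dict, "sprite": dict}

class TalkAction:
    requires = {"dialogue": dict}
```

So an entity description that lists WalkAction and TalkAction plus a custom position automatically gets sprite and dialogue components too, without restating them.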
|
# ? Sep 21, 2011 23:54 |
|
Unormal posted:One thing I wish I had added was a way to prioritize message handling between components. It'd be nice to be able to tell a component to handle an event before a different component, without having to add nested "BeforeX", "BeforeBeforeX", "AfterX", "AfterAfterX" (etc.) messages/delegates.

This somehow makes me feel better. I did post earlier about my problems turning control messages into actionable activities. This was because the player may want to walk to the right, but they're in the middle of an attack animation. I kind of did the same thing of implying precedence in the event IDs. Since I now have two data points on this, I might put more thought into how it could be done better.
|
# ? Sep 21, 2011 23:56 |
|
The Cheshire Cat posted:A good safety net here would be that in your EntityFactory, the final step could be to scan through the list of actions you've attached to the entity, and have it automatically add any required components for those actions that the entity is missing, whether or not those components are specified in the entity description (you could document it as those actions imply those components, so any entity that performs those actions can implicitly be said to have those components - just make sure you set up your dependencies so that it always makes sense). Oh that's a good idea. I'll definitely do this!
|
# ? Sep 21, 2011 23:57 |
|
GPUs are loving amazing. If any of you remember the procedural level generator I was working on a few months ago, I recently ported it to run on the GPU using D3D10 geometry shaders + stream out. Previously, generating the entire level on load took around 7 seconds, and that was using threadpools and every optimization I could squeeze out. Right now, I'm generating the entire level every frame on the GPU and getting 20 FPS.
|
# ? Sep 25, 2011 21:30 |
|
Paniolo posted:GPUs are loving amazing. If any of you remember the procedural level generator I was working on a few months ago, I recently ported it to run on the GPU using D3D10 geometry shaders + stream out. As someone who has written a terrain generation algorithm, and wondered how I might make it more realtime myself, can you point me to the resources you used to learned how to do this?
|
# ? Sep 25, 2011 21:34 |
|
To be honest, just a lot of trial and error. The most frustrating thing is that PIX does not work with stream out geometry shaders, so debugging was a nightmare. Documentation on stream out is also poor at best. However porting marching cubes to the GPU was straightforward enough, and required very little modification. In fact I stripped out a lot of the optimizations which weren't necessary on the GPU, so the code ended up being simpler and more readable as a shader.
|
# ? Sep 25, 2011 21:47 |
|
How are you generating your normals on the GPU? A couple of months ago you posted some screenshots of a marching cubes world, and when I asked about normals you mentioned that you were using the local derivative of the density function to generate the components and then normalizing the result. In a GPU-generated version it doesn't seem like you could do that since you wouldn't be able to look at the nearest-neighbor density values (or is this possible in a geometry shader?) Also, just while I'm on the subject - when I made my own marching cubes world I tried that local derivative method to generate the normals and wasn't super happy with the result. It worked fine in regions with relatively smooth geometry, but if you had a sharp bend or a point the resulting normal could get really strange and the resulting shading would get ugly. Instead I ended up compressing the triangle soup from the marching cubes algorithm into reduced index/vertex buffers with no duplicate entries, and then generated normals for each vertex entry from a weighted average of the normals of all triangles connected to that vertex. It looked a lot better, even for pointy or sharply-bent geometry.
|
# ? Sep 25, 2011 22:18 |
|
Normals are generated exactly the same way they were on the CPU. The voxel field which specifies the basic layout of the world (or at least the subset being processed) is stored in a 3D texture, which the sampling function combines with noise to generate fine detail. So there's no problem with the geometry shader sampling neighbor points. I haven't had too many issues with normal artifacts. One thing that's really important is how much you offset the vertex's position by in each dimension, if it's too little or too much you can run into issues.
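For readers following along, the density-derivative normal being described works the same on CPU or GPU: sample the field a small offset along each axis, take the central difference, negate the gradient (it points toward increasing density, i.e. into the surface), and normalize. A toy CPU version in Python, with a spherical density field standing in for the 3D-texture-plus-noise sampler; eps plays the role of the per-dimension offset mentioned above:

```python
import math

def density(x, y, z):
    # toy field: positive inside a unit sphere centered at the origin
    return 1.0 - math.sqrt(x * x + y * y + z * z)

def normal_at(x, y, z, eps=0.01):
    """Approximate the surface normal as the negated, normalized gradient."""
    grad = (
        density(x + eps, y, z) - density(x - eps, y, z),
        density(x, y + eps, z) - density(x, y - eps, z),
        density(x, y, z + eps) - density(x, y, z - eps),
    )
    length = math.sqrt(sum(g * g for g in grad)) or 1.0
    return tuple(-g / length for g in grad)
```

As the post notes, eps is the sensitive knob: too small and the differences drown in noise or precision error, too large and sharp features get smeared.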
|
# ? Sep 25, 2011 22:57 |
|
Fren posted:Does anyone have experience distributing games, in Python (using OpenGL) on multiple platforms (Linux, OS X, Windows XP/7)? I'd like for each platform to have a one-click application that simply runs the app and doesn't gently caress with the user's system; and for the Python to be compiled as at least one level of obfuscation preventing users from simply seeing the source code. Py2Exe as mentioned. Just a warning though, it can be a bit finicky with pygame. Offhand I know you have to do something with the script to force it to include the pygame.mixer library, and there was something else I had to do to get it to compile.
|
# ? Sep 26, 2011 03:29 |
|
So I'm going back and cleaning up Psychopomps for the purposes of making a real and proper game from it, and I'm discovering that the code written when under a time crunch like that isn't very pretty. And it's super horrifyingly coupled. Step one? Tear out everything. Seriously. The level loader I had in place? It touched nearly everything. Enemies, player, it even controlled the main game loop. What a clusterfuck. Good news is I almost have it completely ripped out and replaced with a much cleaner DAME solution so SyntheticOrange doesn't have to fight with level format. Also, Paniolo, I actually meant resources for getting started with the GPU. You are talking with someone who has very limited knowledge of shaders, let alone doing anything fancy. Can you suggest some search terms to investigate further?
|
# ? Sep 26, 2011 05:51 |
|
Oh man, that's kind of tough since that's a pretty broad question. I really did learn almost everything via trial and error. Best advice I can give is learn how to use a debugger like PIX and take it one step at a time - a shader doesn't have to be anything super complicated, it can be as simple as passing through values without modifying them at all. So start from there and then add stuff and when it breaks use the debugger to figure out why. That's how I learned anyway. I would highly recommend working with DirectX over OpenGL when you're learning, as it has much, much better documentation.
|
# ? Sep 26, 2011 06:19 |
|
Anyone else trying to make games for Metro/Windows Developer Preview? I managed to implement Pong using C#/WinRT and it works alright, though it's a little jerky because there isn't an obvious way to hook into the underlying graphics rendering and do double buffering or match to vsync. I guess I'm better off waiting for the Xbox API they're hinting at. I'm not trying to do anything 3D though, so it seems kinda overkill. I'm handling animations right now by having a timer generate an event every 33 milliseconds, and then using that event to move all the objects on screen and update the game state. It works as expected, but it visibly drops frames because the timer isn't synchronized with vsync at all. Am I missing something obvious here? The other approach I see: WinRT offers an advanced animations library for window transitions, etc. which is very smooth. However it's based on keyframes and/or endpoints, so it's more suited to static games. I'm not quite sure how to implement Pong with it. Unless I did something like this:
I guess there are some benefits if I could get this to work; it'd be framerate-independent, for one thing. Please tell me if I'm making any sense here.
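Whatever per-frame hook the framework ends up providing, the usual pattern behind smooth, framerate-independent movement is a fixed-timestep accumulator: render every frame, but advance the simulation in constant-sized steps, carrying any leftover time to the next frame. A framework-agnostic Python sketch (the Game class and its fields are invented for illustration):

```python
STEP = 1.0 / 60.0  # fixed simulation step, 60 Hz

class Game:
    def __init__(self):
        self.accumulator = 0.0
        self.ball_x = 0.0
        self.updates = 0

    def on_frame(self, elapsed):
        """Called once per rendered frame with the wall-clock time elapsed."""
        self.accumulator += elapsed
        # run zero or more fixed updates; leftover time carries over
        while self.accumulator >= STEP:
            self.fixed_update(STEP)
            self.accumulator -= STEP

    def fixed_update(self, dt):
        self.ball_x += 120.0 * dt  # pixels per second, framerate-independent
        self.updates += 1
```

A 30 fps frame then runs roughly two updates and a 60 fps frame one, so the ball covers the same distance per second either way - which is exactly the property a raw 33 ms timer doesn't give you.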
|
# ? Sep 26, 2011 09:29 |
|
I don't have access to any of my old code for it, but I know that Silverlight had a method that would let you register a callback that will be polled for updates at a regular 60fps, and my experiments with it came out silky smooth. Isn't Metro supposed to be based on the WPF/Silverlight stack?
|
# ? Sep 26, 2011 13:52 |
|
Aaaah thanks, that gave me enough info to find the Metro equivalent. Onward to smooth graphics!
|
# ? Sep 26, 2011 17:30 |
|
Awesome! For the benefit of the thread, could you show a short example of how you're doing it?
|
# ? Sep 26, 2011 17:50 |
|
Here's the short version: code:
Everything is defined in the XAML file and uses a TranslateTransform to move it to the correct spot on the screen. I didn't know how to get/change the Canvas.Left/Canvas.Top properties when I knocked this out, now I do, so that might be a better way to manipulate everything. There's still some form of slowdown that's causing it to drop a frame every second or so, but it's definitely a lot less jagged this way. Edit: Oh I still get that slowdown with a dead simple app that uses the official animation methods, so it's something to do with my computer/the underlying framework, oh well. Silver Alicorn fucked around with this message at 18:08 on Sep 26, 2011 |
# ? Sep 26, 2011 18:04 |
|
Silver Alicorn posted:Here's the short version: Is this something that has to be run like an application, or is it something that can be modified to just run as sorta a background image that plays itself?
|
# ? Sep 26, 2011 18:14 |
|
poemdexter posted:Is this something that has to be run like an application or is it something that can be modified to just run as sorta a background image that plays itself?

The source zip I linked compiles as a full Metro application for Windows 8/Developer Preview (but not a very good one).
|
# ? Sep 26, 2011 18:17 |
|
I am going to throttle Flixel. Seriously. I love you, Flixel, but why oh why do you require me to set my world bounds independently of my camera bounds? code:
code:
Psychopomps+, you better be worth it...
|
# ? Sep 27, 2011 04:12 |
|
Pfhreak posted:I am going to throttle Flixel. Seriously.
|
# ? Sep 27, 2011 11:57 |
|
All of a sudden layers aren't working as they seem they should in XNA - at the moment I'm calling SpriteBatch.Draw with a layer that I know is on top of everything else on the field, but the sprite is still being drawn behind some elements and in front of others. I've made sure this is all within one SpriteBatch.Begin/End call, that I'm calling the right sort method for the system I'm using (BackToFront), and that I'm not accidentally drawing them at the wrong depth. The only thing left I can think of is some kind of bug with alpha blending, but googling doesn't seem to turn up a solution to this particular problem. Does anyone have any idea where I might be going wrong?
|
# ? Oct 1, 2011 01:27 |
|
When I had behavior like that ages ago it was because I'd set the z-range to "between 0 and 100" - apparently the zero causes a problem. Might be something like that if you're using z-buffers.
|
# ? Oct 1, 2011 02:23 |
|
|
|
I've tried it, but that doesn't seem to work. No problem - I'm switching the mode to Deferred and it seems to be working fine, since I consolidated all the draw calls in one place and I can control draw order pretty easily. Weird workarounds are something I'm not a stranger to.
|
# ? Oct 1, 2011 08:08 |