|
I'm working on a pseudo-roguelike, and I need a map generator. There are lots of them out there, but they all seem to generate dungeons or wilderness. What I actually need is something to generate modern buildings, like a warehouse or shopping mall. Since buildings are often vaguely symmetrical, I'm thinking of taking a rectangle and splitting it down the middle. Then each "room" would have a chance of being split down the middle, recursively. That would end up with a sort of random perversion of the Fibonacci sequence. Then it would just be a matter of making sure there are enough doors that no room is isolated. It doesn't account for hallways very well, though. If anybody has any tips on better ways of going about it, I'd love to hear them.
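For what it's worth, the split-a-rectangle idea sketches out to only a few lines. This is just an illustration in Python; every name here (`split_rooms`, `min_size`) is made up, and door placement is left out:

```python
import random

def split_rooms(x, y, w, h, min_size=4, depth=0, rooms=None):
    """Recursively split a rectangle into rooms, BSP-style.
    Illustrative sketch of the split-down-the-middle idea."""
    if rooms is None:
        rooms = []
    # Stop when the room is small, or randomly past the first split,
    # so you get a mix of room sizes.
    too_small = w < 2 * min_size and h < 2 * min_size
    if too_small or (depth > 0 and random.random() < 0.25):
        rooms.append((x, y, w, h))
        return rooms
    # Split across the longer axis, near (not exactly at) the middle,
    # so the result is only vaguely symmetrical.
    if w >= h:
        cut = w // 2 + random.randint(-w // 8, w // 8)
        cut = max(min_size, min(w - min_size, cut))
        split_rooms(x, y, cut, h, min_size, depth + 1, rooms)
        split_rooms(x + cut, y, w - cut, h, min_size, depth + 1, rooms)
    else:
        cut = h // 2 + random.randint(-h // 8, h // 8)
        cut = max(min_size, min(h - min_size, cut))
        split_rooms(x, y, w, cut, min_size, depth + 1, rooms)
        split_rooms(x, y + cut, w, h - cut, min_size, depth + 1, rooms)
    return rooms
```

Running it on a 40x30 footprint gives rectangles that always tile the original exactly, which makes the "enough doors so nothing is isolated" pass easy to bolt on afterwards.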
|
# ? Aug 6, 2008 04:18 |
|
|
Suburban Krunk posted:I want to learn a programming language and learn about game development at the same time. My personal opinion is that Flash is the best for beginners. One main reason is that the Flash IDE allows you to create named graphical assets instantly. In Flash, you can literally open the IDE, draw a circle with the pen tool, right-click the circle, name it "circle_lol", then type the code: code:
Conversely, in OpenGL, Direct3D, SDL, or XNA, you will have to spend hours setting up confusing libraries and write several lines of cryptic code just to get a graphic to appear on the screen. Also, what you learn about programming and game programming in ActionScript will be transferable to C# and even C++. However, if you don't want to use Flash, XNA and SDL are the easiest of the C++/C# graphical APIs. C# is also far easier to learn than C++. (XNA uses C#)
|
# ? Aug 6, 2008 08:06 |
|
Suburban Krunk posted:I want to learn a programming language and learn about game development at the same time. I know a small amount of Java, and feel I have a basic understanding about some basic programming fundamentals. On the plus side, I have a strong background in math (Up to Vector Calculus). Can anyone recommend me a book or site that would teach me a programming language and game development at the same time? From reading around a little bit, it seems as if C# is the language to learn, and something about XNA (Not sure what this is), is this correct? I'm still going to recommend modding for first-timers, because it means there are design decisions, content, and mistakes you don't have to make in order to hit the ground running. Modding something like UT2k4 or Source comes with the advantage of a massive number of tutorials. SheriffHippo posted:Also, what you learn about programming and game programming in ActionScript will be transferable to C# and even C++.
|
# ? Aug 6, 2008 18:10 |
|
tripwire posted:Has anyone had any experience/luck combining Box2d (specifically the python bindings, pyBox2d) with pyglet? The guy who did the python bindings for box2d included a testbed that uses pygame, but since everyone keeps mentioning the various limitations with pygame I'm wondering if I should just use pyglet instead. What would I have to do differently if I wanted to package the binary for windows/osx/linux? I've not tried getting Box2d to work, but as an alternative I do use pymunk (which is a binding for the Chipmunk physics library) with pyglet. I've been able to build binaries for Windows/OS X/Linux with the pyglet+pymunk combo.
|
# ? Aug 6, 2008 19:26 |
|
I am writing a small deferred renderer using GLSL. I want to be able to use the depth buffer values to extrapolate the eye-space coordinates of each texel of the depth buffer. I intuitively understand how this is possible, but I can't for the life of me figure out how to do it inside a shader. My thinking was that: A) I get a vec3(x,y,z) where x,y are the texture coordinates and z is the depth buffer value. B) I multiply by 2 and subtract 1 to bring the values to [-1,1]. C) Then I make a vec4(x,y,z,1.0) and multiply that by the inverse of the projection matrix to bring the clip-space coordinates into eye space. And yet this does not work, and I am clueless as to why... Any ideas?
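The same math can be sanity-checked on the CPU, which is often the fastest way to find this kind of shader bug. A numpy sketch (function names are mine, not GLSL); note the extra divide by w at the end, which the shader version needs too:

```python
import numpy as np

def perspective(fovy_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix (as in gluPerspective)."""
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = (2.0 * far * near) / (near - far)
    m[3, 2] = -1.0
    return m

def unproject(tex_xy, depth, inv_proj):
    """Texture coords and depth in [0,1] -> eye-space position.
    Steps A-C from the post, plus the perspective divide."""
    ndc = np.array([tex_xy[0] * 2 - 1, tex_xy[1] * 2 - 1, depth * 2 - 1, 1.0])
    eye = inv_proj @ ndc
    return eye[:3] / eye[3]  # the divide-by-w step the shader also needs
```

If the CPU round trip works and the shader still doesn't, the usual suspects are the missing w divide or passing the projection matrix where its inverse is expected.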
|
# ? Aug 7, 2008 18:11 |
|
shodanjr_gr posted:I want to be able to use the depth buffer values to extrapolate the eye-space coordinates of each texel of the depth buffer. I'm not good enough with matrix math to give you the full answer, but you can recalculate the HPOS value's W coordinate, and that's easy enough: The HPOS Z value (a.k.a. the depth) is rescaled from nearplane-farplane to 0-1, so just denormalize that for the absolute depth value. The HPOS W is just calculated by negating that value. OneEightHundred fucked around with this message at 23:23 on Aug 7, 2008 |
# ? Aug 7, 2008 23:18 |
|
Thanks for the answer, it turns out that I actually (among many other things) wasn't dividing by w when I was trying to use the coordinates, and this was screwing up my results. code:
I've had quite a bit of progress today, getting multiple lights to work (with minimal overhead, since the rendering process is deferred ). What I am now looking into is some guide/paper/words of wisdom on how to set up my lights in such a way that my final scene does not end up looking WAY too bright once I have done all my lighting passes. Right now I am just tweaking the numbers by hand, and still can't get nice results. Any ideas? Also, is there a place I can pull some proper gl_material properties from, instead of ballparking them? (diffuse, ambient, specular, shininess factors for materials like metal/wood/etc)
|
# ? Aug 8, 2008 22:37 |
|
I was just wondering if anyone could shed some light on Unity for me. Costing aside, I was really blown away by what they were able to create inside a browser at reasonable download speeds. The engine supports multi-platform deployment, physics and all kinds of amazing stuff. That said, however, I haven't come across a single site which uses the Unity web player or even seen any indie projects which utilize the engine. I was wondering if someone could help shed some light on why no one has really adopted it yet, or is it literally because it's so darn expensive?
|
# ? Aug 8, 2008 23:56 |
|
Hanpan posted:I was just wondering if anyone could shed some light on Unity for me. Costing aside, I was really blown away by what they were able to create inside a browser at reasonable download speeds. The engine supports multi-platform deployment, physics and all kinds of amazing stuff. That said, however, I haven't come across a single site which uses the Unity web player or even seen any indie projects which utilize the engine. Can only speak from my own experience, but: a) it's closed-source b) it's expensive c) you can only do Mac and web-based deployments unless you spring for the Pro license (which is even MORE expensive) d) apparently there are some issues with source control on the objects it generates, so that you have to use their source control product or go through some hoops e) development environment is Mac-only f) networking support is there but feels incomplete for some reason That said, it's generally a pretty polished development environment and if it wasn't $1500 I'd certainly consider it for game development. (Compare with, say, Torque, which while not looking quite as polished, is only $300 and comes with complete source, or the C4 engine which is $350 and also comes with source.) This amused me in the Unity marketing page for the new version (2.1): "When Funcom, makers of Anarchy Online and Age of Conan, decide to use Unity for their upcoming browser based MMO project, you know it has to be good." Hypnobeard fucked around with this message at 00:11 on Aug 9, 2008 |
# ? Aug 9, 2008 00:04 |
|
Tolan posted:Can only speak from my own experience, but: Thanks a lot for this detailed response Tolan. It does seem that the major issue is with the price. (I am Mac based, so no problems there. Source control doesn't really faze me that much either.) It's certainly worth downloading the trial version at least I suppose, thanks again for your reply!
|
# ? Aug 9, 2008 00:12 |
|
Hanpan posted:Thanks a lot for this detailed response Tolan. It does seem that the major issue is with the price. (I am Mac based, so no problems there. Source control doesn't really faze me that much either.) Yeah, if you're not worried about stand-alone Windows deployment, it's definitely worth a look. The closed source might be a bit worrying, but the product is pretty polished for all that. Enjoy. I'm going to be reviewing it again myself, I think. Just to follow up--there are also some features missing from the indie version; be sure to read the page about the difference between the indie and the pro version carefully. If you can live without those features (realtime shadows, I think, plus a few other fairly common features) then the indie will work for you. Hypnobeard fucked around with this message at 00:55 on Aug 9, 2008 |
# ? Aug 9, 2008 00:36 |
|
I'm working on generating a PVS set for a portal engine. The calculations I'm doing are for load-time, so each sector will have a set of potentially visible sectors that can be used during run-time. My algorithm is pretty standard I think: For the particular sector we're generating a PVS for, clear out the clip plane, then recurse for each portal in that sector. In the next step we form a clipping volume made of a bunch of planes, which is created from the portal we came from and every portal in that new sector. Then once we have a clip volume, we clip that to each successive portal so the clip volume gets smaller and smaller as we recurse more. My explanation might not be very clear, but the part I'm having trouble with isn't too interconnected with it. I'm having trouble with a good general way to generate and modify that clipping volume. I have a general idea of how to generate the clipping volume, but it's not complete. Say we're generating a clip volume between A and B: code:
Basically, am I at all in the right direction? There has to be a nicer way to generate and clip a clipping volume. Alternatively, is there a more effective portal PVS generation algorithm that I haven't found? My method is the method I've seen in a lot of places, except all of those tutorials show it in 2D, which is way easier. schnarf fucked around with this message at 05:21 on Aug 12, 2008 |
# ? Aug 12, 2008 05:12 |
|
schnarf posted:I'm working on generating a PVS set for a portal engine. The calculations I'm doing are for load-time, so each sector will have a set of potentially visible sectors that can be used during run-time. My algorithm is pretty standard I think: For the particular sector we're generating a PVS for, clear out the clip plane, then recurse for each portal in that sector. In the next step we form a clipping volume made of a bunch of planes, which is created from the portal we came from and every portal in that new sector. Then once we have a clip volume, we clip that to each successive portal so the clip volume gets smaller and smaller as we recurse more. I can tell you how Quake's vis tool solves sector-to-sector, which is first solving portal-to-portal visibility, and then just making a visibility bitfield. A sector is considered visible from another sector if any portal in one can see any portal in another. Portal-to-portal I'm a bit hazy on, but the gist is to find a plane such that both portals are on the plane, and any occluding portals are either behind one of the portals, or are completely on either side of that plane. The candidate planes come from essentially every edge-point combination in the list of offending portals and the portals themselves. Needless to say, this is SLOOOOW. If you can find such a plane, then the two portals can see each other. (Note that a "portal" in this sense isn't just from open areas into open areas, but also from open areas into solids, i.e. walls, which are considered occluding portals) q3map uses three systems: the old portal system, a new "passage" system that I know nothing about, and a third method that uses both. You can check out visflow.c in the Quake 3 source code for details, but it really doesn't explain the implementation at all so it's extremely hard to read. OneEightHundred fucked around with this message at 06:57 on Aug 12, 2008 |
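In sketch form, the sector-to-sector step on top of portal-to-portal visibility is just a union. Python for illustration, with all names invented, and assuming one owning sector per portal (a simplification of real portal data):

```python
def sector_pvs(sector_portals, portal_sees):
    """sector_portals: {sector: [portal ids]}.
    portal_sees: {portal id: set of visible portal ids}.
    A sector sees another if any of its portals sees any of the
    other's portals. Returns {sector: set of visible sectors}."""
    # Invert: which sector owns each portal.
    owner = {p: s for s, ps in sector_portals.items() for p in ps}
    pvs = {}
    for s, ps in sector_portals.items():
        vis = {s}  # a sector can always see itself
        for p in ps:
            for q in portal_sees.get(p, ()):
                vis.add(owner[q])
        pvs[s] = vis
    return pvs
```

The per-sector sets are what you'd pack into the visibility bitfield at the end.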
# ? Aug 12, 2008 06:37 |
|
OneEightHundred posted:Portal engines normally work by casting the view frustum through portals, clipping them when they go through, and they do this at runtime. My approach is pretty similar to clipping the view frustum, though: instead of starting out with a view frustum I start with an empty or infinite frustum. So your portal-to-portal method should work, correct? When you say a plane such that both portals are on it, do you mean it meets a vertex or two of theirs incidentally?
|
# ? Aug 12, 2008 07:31 |
|
Okay, the way vis works is a bit different than I thought (and I'm still trying to figure it out...), but here's my current understanding: Portals build a list of other visible portals by starting in one sector, and trying to see through a chain of other portals to other sectors. The separating plane is used to determine how much of the portal is visible, if any. Say you had a chain of sectors/portals as follows: A -> pAB -> B -> pBC -> C -> pCD -> D -> pDE -> E ... and you're trying to determine what portals are visible from pAB. pAB can always see pBC because they share a sector. To determine how much of pCD is visible, it tries creating separator planes using edges from pAB, and points from pBC. A separator plane is valid if all points on pAB are on one side, and all points of pBC are on the other. pCD is then clipped using that plane, so that everything remaining is on the same side as pBC. If there is nothing remaining, then that portal can't be seen. To determine how much of pDE is visible, the same process is done, except the separating planes are calculated using whatever's left of pCD instead of pBC. Hope that makes sense. OneEightHundred fucked around with this message at 18:46 on Aug 12, 2008 |
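The "pCD is then clipped using that plane" step is ordinary polygon-vs-plane clipping. A minimal Sutherland-Hodgman-style sketch (Python, names invented; it only dots and lerps, so the same code works for 2D or 3D points):

```python
def clip_polygon(points, plane_n, plane_d, eps=1e-6):
    """Keep the part of a convex polygon on the front side of the
    plane dot(n, p) + d >= 0. Returns [] if nothing remains."""
    out = []
    n = len(points)
    for i in range(n):
        a, b = points[i], points[(i + 1) % n]
        da = sum(x * y for x, y in zip(plane_n, a)) + plane_d
        db = sum(x * y for x, y in zip(plane_n, b)) + plane_d
        if da >= -eps:
            out.append(a)  # vertex on the kept side (or on the plane)
        # Edge crosses the plane: emit the intersection point.
        if (da > eps and db < -eps) or (da < -eps and db > eps):
            t = da / (da - db)
            out.append(tuple(ax + t * (bx - ax) for ax, bx in zip(a, b)))
    return out
```

An empty result is the "that portal can't be seen" case in the description above.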
# ? Aug 12, 2008 15:24 |
|
Trying to make a third-person camera in OpenGL that can be attached to objects and will stay m_zoomLevel distance behind it, no matter the orientation. I have it set up so that using the cursor keys will turnleft/right/pitchup/down the player object. The camera object is attached to the player object and should just follow it around, but I'm running into problems related to my dumbness and OpenGL's matrices. Some very rough pseudocode explaining how I have it; (note that the class variable layout is not actually how I have it, but is indicative of the variables available to the classes) code:
Is this the right way to do it and I'm missing something in the player's transformation step, or am I retarded for doing it like this? Murodese fucked around with this message at 19:08 on Aug 12, 2008 |
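One way to sidestep the matrix wrangling entirely: keep the player's position and yaw/pitch yourself, derive the camera eye point behind the player each frame, and hand the result to gluLookAt. A hypothetical sketch (the axis convention here, -Z forward at yaw 0, is an assumption; adjust to match your setup):

```python
import math

def camera_behind(player_pos, yaw_deg, pitch_deg, zoom):
    """Return (eye, target) for a chase camera sitting `zoom` units
    behind the player along its facing direction. Names hypothetical."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    # Forward vector from yaw/pitch (right-handed, -Z forward at yaw=0).
    forward = (
        math.sin(yaw) * math.cos(pitch),
        -math.sin(pitch),
        -math.cos(yaw) * math.cos(pitch),
    )
    # Eye sits zoom units behind the player, looking at the player.
    eye = tuple(p - zoom * f for p, f in zip(player_pos, forward))
    return eye, player_pos
```

The point of doing it this way is that the camera never accumulates its own transform state, so it can't drift out of sync with the object it follows.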
# ? Aug 12, 2008 18:55 |
|
OneEightHundred posted:Okay, the way vis works is a bit different than I thought (and I'm still trying to figure it out...), but here's my current understanding:
|
# ? Aug 12, 2008 19:50 |
|
Murodese posted:Is this the right way to do it and I'm missing something in the player's transformation step, or am I retarded for doing it like this? code:
OneEightHundred fucked around with this message at 20:11 on Aug 12, 2008 |
# ? Aug 12, 2008 20:05 |
|
Is there a place where I can download ALL the official tutorial material for XNA game development (D3D stuff, HLSL stuff etc)? I am going on a vacation at a barbaric place with no high-speed internet, and I'd like to have all the necessary material with me... In other news, the OpenGL 3.0 spec is out, and it looks crappy...
|
# ? Aug 12, 2008 22:16 |
|
shodanjr_gr posted:In other news, the OpenGL 3.0 spec is out, and it looks crappy... Might as well just call it OpenGL 2.2.
|
# ? Aug 12, 2008 22:25 |
|
Weren't they planning at some point to switch to a more object-oriented approach, instead of this whole freaking "bind identifiers to stuff then switch states using those identifiers" mentality?
|
# ? Aug 12, 2008 22:42 |
|
shodanjr_gr posted:Weren't they planing at some point to switch to a more object oriented approach, instead of this whole freaking "bind identifiers to stuff then switch states using those identifiers" mentallity? http://www.opengl.org/registry/specs/EXT/direct_state_access.txt ... which is core in OpenGL 3. As far as the evolutionary fixes, what they did was introduce a deprecation model. A context can be made as "full" or "forward compatible", with deprecated features not being available in "forward compatible" contexts. A lot of the legacy cruft is gone in "forward compatible," but the immutability and asynchronous object stuff isn't there yet. OneEightHundred fucked around with this message at 23:50 on Aug 12, 2008 |
# ? Aug 12, 2008 23:41 |
|
SheriffHippo posted:(I used a C# class I found on CodeProject.com which uses a system hook to trap all mouse input) Just a warning, but I've noticed some [over-]zealous Anti-Virus applications will freak out at system wide mouse/keyboard hooking because it thinks they might be keyloggers (even if only the mouse is being caught.)
|
# ? Aug 13, 2008 00:12 |
|
OneEightHundred posted:They added that as an extension: Sure, it saves you switching active Texture Units/Matrices etc, but you still have to screw around with uint identifiers if you can't be bothered to write wrapper classes/methods for everything... I'm hoping for something like: Texture my_texture; my_texture.setParameter(GL_TEXTURE.GL_MIN_FILTER, GL_TEXTURE.GL_LINEAR); I don't know... maybe Java has spoiled me...
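In sketch form, the kind of thin wrapper being wished for is only a few lines. Python used purely for illustration, with the GL module injected as a parameter so the sketch runs without a real context; every name except the gl* entry points is made up:

```python
class Texture:
    """Thin OO wrapper over a GL texture id. `gl` is any object
    exposing glGenTextures/glBindTexture/glTexParameteri
    (e.g. the PyOpenGL module, or a fake for testing)."""

    def __init__(self, gl, target):
        self.gl = gl
        self.target = target
        self.handle = gl.glGenTextures(1)

    def set_parameter(self, pname, value):
        # Classic bind-then-modify; with EXT_direct_state_access you
        # could skip the bind and use glTextureParameteriEXT instead.
        self.gl.glBindTexture(self.target, self.handle)
        self.gl.glTexParameteri(self.target, pname, value)
```

The wrapper also gives you one place to swap in direct-state-access calls later without touching call sites.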
|
# ? Aug 13, 2008 01:19 |
|
shodanjr_gr posted:I hoping for something like: It's not like there are really that many object types in OpenGL either, and you'd probably want to write a wrapper anyway if you ever wanted to port to D3D. Aside from that, it's not really much more work to do this: code:
code:
code:
What's ironic is that they STILL don't let you update textures directly, but they did find a way to work in stuff like deprecating passing anything other than zero to the border parameter of glTexImage2D. Yeah, good work Khronos, way to design that technology. OneEightHundred fucked around with this message at 02:25 on Aug 13, 2008 |
# ? Aug 13, 2008 02:20 |
|
welp I guess long live direct3d
|
# ? Aug 13, 2008 02:47 |
|
OneEightHundred posted:Generally you want to do zoom in the projection matrix by changing the FOV. This isn't actually zoom, it's just indicative of the number of units the camera sits behind the attached object in third person mode.
|
# ? Aug 13, 2008 04:05 |
|
Shazzner posted:welp I guess long live direct3d Direct3D 11 looks like it's set to break new ground in underwhelming, so there's room to maneuver. Regardless, D3D is going to stay on top in Windows development just because of inertia. OpenGL had time to capitalize on D3D's overpriced draw calls long before D3D10 fixed it, they didn't do it then, they're not going to beat D3D to the punch tomorrow, so it's pretty much doomed to second place forever on Windows at this point. OneEightHundred fucked around with this message at 05:16 on Aug 13, 2008 |
# ? Aug 13, 2008 05:05 |
|
Do directx 10 and 11 still continue the COM-style? If so then
|
# ? Aug 13, 2008 06:57 |
|
captain_g posted:Do directx 10 and 11 still continue the COM-style? 11's features are basically: - Multithreaded rendering. Who gives a poo poo, sending things to D3D is not expensive any more, you don't need to parallelize your D3D calls. - Tessellation. Because developers were gladly willing to surrender control for the sake of renderer speed in 2001, I'm sure they'll do it this time around, it's not like relief mapping actually works or anything. - Compute shaders, which OpenGL has mechanisms for already. - Order-independent transparency. Yay, state-trashing. OneEightHundred fucked around with this message at 07:34 on Aug 13, 2008 |
# ? Aug 13, 2008 07:28 |
|
OneEightHundred posted:11's features are basically:
|
# ? Aug 13, 2008 11:47 |
|
D3D11's multithreading features have a lot more to do with thread safety than they do with performance. Everything I've heard says that render state manipulation still involves a critical section, but device access and resource creation do not. What this really means is that your content management thread won't need to be so chatty with your rendering thread, which is a pretty huge thing unless you have an inexplicable hard-on for loading screens and stuttering. Also, the last I heard the order-independent transparency was being billed as a hardware feature, which makes me quite skeptical that they're using an algorithm that will trample state or murder performance like depth peeling. It's a lot more likely that they're using an accumulation buffer with a modest number of samples - something in the area of 8-16 fragments per pixel. Order-independent transparency is interesting for having fluid and aerosol simulations on the GPU, not because of a problem a BSP already solves. TSDK posted:Submitting batches is actually pretty drat expensive CPU wise (i.e. framerate limiting). Being able to thread off calls is going to be handy, especially as more engines start threading more stuff off in order to take advantage of multicore systems.
|
# ? Aug 13, 2008 16:12 |
|
Null Pointer posted:Order-independent transparency is interesting for having fluid and aerosol simulations on the GPU, not because of a problem a BSP already solves.
|
# ? Aug 13, 2008 16:44 |
|
TSDK posted:Submitting batches is actually pretty drat expensive CPU wise (i.e. framerate limiting). Being able to thread off calls is going to be handy, especially as more engines start threading more stuff off in order to take advantage of multicore systems. quote:It's a lot more likely that they're using an accumulation buffer with a modest number of samples - something in the area of 8-16 fragments per pixel. quote:What this really means is that your content management thread won't need to be so chatty with your rendering thread Edit: StickGuy posted:It will also be helpful for rendering intersecting transparent geometry without having to resort to tesselation to get the right result. OneEightHundred fucked around with this message at 16:53 on Aug 13, 2008 |
# ? Aug 13, 2008 16:45 |
|
http://www.devmaster.net/news/index.php?storyid=2062 Truespace 7.6 is now free. This is good news for those of us who can't wrap our heads around blender.
|
# ? Aug 15, 2008 06:12 |
|
Can anyone recommend a good book or online resource for game engine design and best practices? Thanks goons.
|
# ? Aug 17, 2008 20:01 |
|
I'm trying to implement Quake 3's .shader scripts into a HLSL shader and I'm not quite sure how I should do it. Here's an example of one of the scripts. http://pasteall.org/2104 The first line is the name of the shader and each block within the shader is a "stage" that describes how and in what order a particular texture is used in the shader. What I want to know is how I should make a generic shader that can have data filled in by these scripts. This particular shader has 5 stages, and I'm not sure what the limit is. I'd rather not even use a limit, but don't I have to make a different shader for every possible number of stages? Or is there some way I can make a shader that uses an arbitrary number of passes and I just set the texture and the other stage variables every pass? Big Woman posted:Can anyone recommend a good book or online resource for game engine design and best practices? Thanks goons. Not sure what exactly you're looking for, but if you have no experience with 3D engines, Frank Luna's Introduction to 3D Game Programming with DirectX 9.0c: A Shader Approach is the best I've ever read.
|
# ? Aug 19, 2008 06:10 |
|
MasterSlowPoke posted:I'm trying to implement Quake 3's .shader scripts into a HLSL shader and I'm not quite sure how I should do it. - If a stage is using rgbgen or alphagen with a non-static value, then you'll need to allocate a texcoord to the color. If it's static, you can bake it into the pixel shader. - If a stage is using a tcgen, then you need to allocate a texcoord to passing the generated texcoords. - Doing tcmod transforms in the pixel shader may allow you to save texcoords, but is slower and should be a last resort. All tcmods except turb can be combined as a 2x3 matrix, so it's best to just do that in the front-end and pass it as a uniform. tcmod turb fucks everything up and you should probably just ignore it. Realistically, the way Quake 3 does shaders is a bad template for a material system. You shouldn't have to tell the engine exactly how to render a lightmapped wall, for example; it ruins your scalability. The best way I've heard it described is that ideally, your material system should not define how to render a specific material, it should define what it looks like. Or at least, you shouldn't be defining both in the same spot. Scalable engines have multiple ways to render a simple wall, so all you should really need to do is give it a list of assets and parameters to render a "wall" (i.e. the albedo, bump, reflectivity, and glow textures), tell it that it's a "wall," and have it figure out the rest. Build complex materials by compositing those simple types together. The system I'm using is a data-driven version of this and has three parts for a simple diffuse surface: The default material template, which tries loading assets by name and defines how to import them, and references "Diffuse.mrp" as the rendering profile: http://svn.icculus.org/teu/trunk/tertius/release/base/default.mpt?revision=177&view=markup Diffuse.mrp, which is a rendering profile that defines how to render a simple diffuse-lit surface. 
This is the branch-off for world surfaces: http://svn.icculus.org/teu/trunk/tertius/release/base/materials/profiles/DiffuseWorldSurfaceARBFP.mrp?revision=152&view=markup The shader and permutation projects: http://svn.icculus.org/teu/trunk/tertius/release/base/materials/cg/world_base_lightmap.cg?revision=185&view=markup http://svn.icculus.org/teu/trunk/tertius/release/base/materials/fp/p_world_base_lightmap.svl?revision=183&view=markup http://svn.icculus.org/teu/trunk/tertius/release/base/materials/vp/v_world_base_lightmap.svl?revision=177&view=markup There are simpler ways to do this, of course. Half-Life 2 just defines the surface type for every material (which does wonders for its load times!) and those surface types have various ways of rendering defined for different hardware levels and configurations. OneEightHundred fucked around with this message at 09:04 on Aug 19, 2008 |
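To make the tcmod point above concrete: each tcmod (scroll, scale, rotate) is an affine transform on (s,t), so a stage's whole chain collapses to one 2x3 matrix you compute per frame on the CPU and upload as a uniform. Illustrative Python, all names invented:

```python
import math

def mat_mul(a, b):
    """Compose 2x3 affine texcoord transforms: result applies b, then a."""
    return [[a[i][0] * b[0][j] + a[i][1] * b[1][j] +
             (a[i][2] if j == 2 else 0.0) for j in range(3)] for i in range(2)]

def tcmod_scale(sx, sy):
    return [[sx, 0.0, 0.0], [0.0, sy, 0.0]]

def tcmod_scroll(ds, dt, time):
    return [[1.0, 0.0, ds * time], [0.0, 1.0, dt * time]]

def tcmod_rotate(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0.0], [s, c, 0.0]]

def apply(m, s, t):
    """What the vertex shader would do with the uploaded matrix."""
    return (m[0][0] * s + m[0][1] * t + m[0][2],
            m[1][0] * s + m[1][1] * t + m[1][2])
```

tcmod turb is the odd one out because it is a per-texel sine warp, not an affine transform, which is why it can't be folded into this matrix.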
# ? Aug 19, 2008 08:38 |
|
OneEightHundred posted:I'm not quite sure I asked my question right. Best case scenario, I'd like to just have one HLSL .fx for all possible Quake 3 .shaders. Having to hand-make a different .fx for every .shader I want to use is way too annoying to do. Also, I could be understanding your answer wrong. edit: I'm writing an XNA library to load and use Quake 3 assets so I'm kind of stuck with having to use .shaders as I don't really feel like making my own fork of gtkradiant. Really, even if .shaders describe how a material looks and how to render it, can't I just pick out the look part and toss the "how to render" part? MasterSlowPoke fucked around with this message at 09:19 on Aug 19, 2008 |
# ? Aug 19, 2008 09:07 |
|
|
|
You can't even guarantee that you'd be able to single-pass it because of specific combinations. For example, suppose you had an additive layer followed by an alpha blend layer (these exist in the game!!), you CAN'T single-pass that because whatever your result is can still only be drawn to the framebuffer with one blend function and there's no such thing as a "blend shader" yet. Even with just one, you'd have to permute the poo poo out of it so you get results that use the proper tcgen (on the vertex shader) and blendfunc (on the pixel shader). My suggestion isn't to hand-code them, but rather, to generate them at runtime and compile them then. The alternative is to do all possible permutations in advance, which takes a LONG time, but will speed up load times a lot. quote:Really, even if .shaders describe how a material looks and how to render it, can't I just pick out the look part and toss the "how to render" part? OneEightHundred fucked around with this message at 09:27 on Aug 19, 2008 |
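A sketch of the generate-at-runtime-and-cache idea, keyed by the stage features so each distinct combination is only built once. Python for illustration; the stage fields and the emitted "shader" text are invented placeholders, not real HLSL generation:

```python
def build_shader_source(stages):
    """Emit a toy pixel-shader body for a list of stage descriptors.
    A real generator would pick texcoords, tcgens, and blendfuncs here."""
    lines = ["float4 main(PSIn i) : COLOR {", "  float4 c = 0;"]
    for n, stage in enumerate(stages):
        op = "+" if stage["blend"] == "add" else "*"
        lines.append(f"  c {op}= tex2D(s{n}, i.uv{n});")
    lines.append("  return c;")
    lines.append("}")
    return "\n".join(lines)

_cache = {}

def get_shader(stages):
    """Compile-on-demand: permutations are built lazily and reused."""
    key = tuple(s["blend"] for s in stages)
    if key not in _cache:
        _cache[key] = build_shader_source(stages)
    return _cache[key]
```

The cache is what makes lazy generation viable: load time only pays for the permutations the level actually uses, which is the trade-off against precompiling every possible combination.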
# ? Aug 19, 2008 09:21 |