|
UraniumAnchor posted:Lovely helpfulness Oh god, I'm such a twat. For some reason I thought I couldn't #include in a header file. That solves my problem! Thanks dude.
|
# ? Apr 9, 2010 08:02 |
|
CaptainMcArrr posted:Oh god, I'm such a twat. For some reason I thought I couldn't #include in a header file. That solves my problem! Thanks dude. I think the preprocessor does multiple passes on the code, repeating until there's nothing left to process. #include pretty much just means "put the contents of that file here". So after it processes that and the other directives in the C file, it should go over everything again to see if any more directives are left to be processed. If there were directives inside the included file, they should be processed during the subsequent passes. Is that how it works? Edit: I only use C compilers, I don't design them BattleMaster fucked around with this message at 16:46 on Apr 9, 2010
# ? Apr 9, 2010 16:37 |
|
BattleMaster posted:I think the preprocessor does multiple passes on the code, repeating until there's nothing left to process. #include pretty much just means "put the contents of that file here". So after it processes that and the other directives in the C file, it should go over everything again to see if any more directives are left to be processed. If there were directives inside the included file, they should be processed during the subsequent passes. Is that how it works? No. The C and C++ preprocessors are single-pass. The effect you describe still holds, though: everything in the included file is processed during that same pass.
|
# ? Apr 9, 2010 18:18 |
|
I'm working on a research project that is basically trying to rewrite the TCP/IP stack to handle hundreds of thousands of connections. Currently I'm working on improving the efficiency and have been doing some benchmarking for various parts of the code. I'm using queues to pass packets between threads and synchronizing access with semaphores and mutexes. WaitForSingleObject, ReleaseMutex, and ReleaseSemaphore are limited to about one million calls per second each on our server. Mutexes were already replaced with CRITICAL_SECTIONs, which seem to be about 15x faster. But I'm having trouble with replacing the semaphores... Currently I use WaitForSingleObject() to acquire a semaphore and subsequently dequeue the packet. Would a "semi"-busy wait implementation be more efficient? My plan is to continuously test the queue size and extract a packet if it's not empty. If it is empty, it will sleep for a few milliseconds and try again. Is this a bad way to design a program?
|
# ? Apr 9, 2010 19:21 |
|
How much faster does C/C++ code actually run than C#? I notice that I've ported some of my algorithms that I made for Project Euler originally in C# into C++ and they execute at almost exactly the same speed?
|
# ? Apr 10, 2010 01:56 |
|
wwqd1123 posted:How much faster does C/C++ code actually run than C#? I notice that I've ported some of my algorithms that I made for Project Euler originally in C# into C++ and they execute at almost exactly the same speed? C++ isn't really that much faster than C# unless you really know what you're doing, and even then it might be a wash.
|
# ? Apr 10, 2010 02:23 |
|
When C# first came out the number I heard was 90% as fast as C++, so I wouldn't be surprised if it's close to parity right now. However if that's the case why aren't we seeing more games for Windows using C#?
|
# ? Apr 10, 2010 02:50 |
|
Shumagorath posted:When C# first came out the number I heard was 90% as fast as C++, so I wouldn't be surprised if it's close to parity right now. However if that's the case why aren't we seeing more games for Windows using C#? Garbage collection is pretty much a no-go for AAA games*, since it generally relies on the assumption that it will defer cleanup of resources to a point when not much is happening. That never actually occurs during games, so either cleanup is delayed inordinately, or batches of objects get cleaned up at once, causing framerate stutter. * Unless the GC is tuned for game environments, as is (or may be) the case with UE3's scripting engine, but then the GCed nature of UnrealScript is one of the big things I hear people bitch about regarding UE3.
|
# ? Apr 10, 2010 03:04 |
|
Avenging Dentist posted:Garbage collection is pretty much a no-go for AAA games*, since it generally relies on the assumption that it will defer cleanup of resources to a point when not much is happening. That never actually occurs during games, so either cleanup is delayed inordinately, or batches of objects get cleaned up at once, causing framerate stutter. I've sorta been expecting it to catch on more over the past few years as parallel, concurrent GC algorithms have gotten to be production ready.
|
# ? Apr 10, 2010 03:12 |
|
Can't you adjust the level of objects for garbage collection? i.e. higher-level objects won't be collected before lower-level objects? I think Microsoft has poured millions into its GC, so it wouldn't surprise me if you could customize it for games.
|
# ? Apr 10, 2010 03:33 |
|
How do I get around the problem of using functions that take arguments by reference when I pass some intermediate result to them? Something like... code:
|
# ? Apr 10, 2010 03:42 |
|
wwqd1123 posted:Can't you adjust the level of objects for garbage collection? i.e. higher-level objects won't be collected before lower-level objects? I think Microsoft has poured millions into its GC, so it wouldn't surprise me if you could customize it for games. What does "level" mean? Also, just because someone invested a lot of money in a problem doesn't make it suitable for every application. OpenMP has a lot of funding, but that doesn't make it the best parallelism library in all cases. Besides that, the bottom line is that most garbage collectors force a layer of indirection between code and execution, making it even harder to predict what the CPU is going to be doing at any given time. When you are writing a system that demands high performance essentially 100% of the time, this is actually a significant problem.
|
# ? Apr 10, 2010 04:28 |
|
LockeNess Monster posted:How do i get around the problem of using functions that take arguments by reference when I pass some intermediate result to them? It will work if foo takes a reference to a const (e.g. const string&). (And g++ is surprisingly helpful with its error message on your example: code:
|
# ? Apr 10, 2010 04:50 |
|
Avenging Dentist posted:What does "level" mean? Also, just because someone invested a lot of money in a problem doesn't make it suitable for every application. OpenMP has a lot of funding, but that doesn't make it the best parallelism library in all cases. My bad. According to Microsoft's C# documentation objects are divided into 3 generations. It looks like if objects survive a garbage collection they are promoted to generation 1, and if they survive another they are promoted to generation 2 which I suppose is Microsoft's way of trying to optimize garbage collection. I guess that still doesn't help with game development though
|
# ? Apr 10, 2010 05:01 |
|
wwqd1123 posted:My bad. According to Microsoft's C# documentation objects are divided into 3 generations. It looks like if objects survive a garbage collection they are promoted to generation 1, and if they survive another they are promoted to generation 2 which I suppose is Microsoft's way of trying to optimize garbage collection. Generational garbage collection is pretty common. Java and Python both use it as well. wwqd1123 posted:I guess that still doesn't help with game development though Honestly, one of the really useful things about C++ is that it doesn't mandate a memory management scheme. You're perfectly free to write your own garbage collector (or use someone else's) and use it via a smart pointer class, giving you the benefits of GC where you want it without the costs where you don't want it (by using regular local variables or other kinds of pointers).
|
# ? Apr 10, 2010 05:18 |
|
Avenging Dentist posted:Generational garbage collection is pretty common. Java and Python both use it as well. Uhh what Python? Certainly not CPython. PyPy has a generational collector, and IronPython and Jython use the host GC (which is generational), but the "common" Python isn't. Except for the cycle collector, which is pseudo-generational, but the primary GC is refcounting.
|
# ? Apr 10, 2010 05:23 |
|
king_kilr posted:Except for the cycle collector, which is pseudo-generational, but the primary GC is refcounting. That's what I was talking about. I don't really consider reference-counting to be "garbage collection". If I did, I'd probably have to consider C++'s destruction of local variables as "garbage collection" too.
|
# ? Apr 10, 2010 05:29 |
|
wwqd1123 posted:Can't you adjust the level of objects for garbage collections? ie higher level objects won't be collected before lower level objects? I think Microsoft has poured millions into it's GC so it wouldn't surprise me if you couldn't customize it for games. In addition to what's already been said, very few people develop games just for Windows -- and C# is not cross-platform (even on the 360 only the XNA games are allowed to use C#, and XNA is a *lot* slower than direct access to the graphics card).
|
# ? Apr 10, 2010 07:29 |
|
cronio posted:XNA is a *lot* slower than direct access to the graphics card citation required. Seriously, of all the things for XNA to be slow at on the 360, I really can't see why it would be the rendering. Do you have evidence for this? (Same applies for on Windows tbh)
|
# ? Apr 10, 2010 10:27 |
|
digibawb posted:citation required. I guess that depends on what you mean by direct access to the "graphics card". We used to write stuff directly to the command buffers for the 360 graphics card where I used to work, but I doubt this is the common case. Comparing XNA to C++ it might turn out to be slower, but graphics performance is probably just the same as C++, given the same API access of course.
|
# ? Apr 10, 2010 12:21 |
|
Yeah, unless you are writing your own command buffers, which you don't have access to in XNA, then it's going to be pretty much identical. No VMX support seems like a much bigger deal, to me anyway. EDIT: And the limited control of memory... digibawb fucked around with this message at 12:48 on Apr 10, 2010 |
# ? Apr 10, 2010 12:40 |
|
I have a C for microcontrollers question. What I'm trying to do is write to an unsigned char variable using real-time input. Using one function it works, but when I add the rest of the functions I need, it doesn't work. Code follows: code:
If all the else if statements are causing trouble, I'm wondering if a case statement will fix it. If not, then it's probably hardware and I should check the wiring of my selector switch. System is an Atmel ATmega16L using the CodeVision libraries. e: Update: looks like it's most definitely hardware. Full Collapse fucked around with this message at 05:24 on Apr 11, 2010
# ? Apr 10, 2010 22:46 |
|
So does clang parse conversion functions to references to arrays yet? Like, for gcc struct A { (&operator char())[4] { return data; } char data[4]; }; No?
|
# ? Apr 10, 2010 23:20 |
|
Vanadium posted:So does clang parse conversion functions to references to arrays yet? Like, for gcc struct A { (&operator char())[4] { return data; } char data[4]; }; No? Yes.
|
# ? Apr 10, 2010 23:26 |
|
wwqd1123 posted:How much faster does C/C++ code actually run than C#? I notice that I've ported some of my algorithms that I made for project euler originally in C# into C++ and they executive at almost exactly the same speed? txrandom posted:Currently I use WaitForSingleObject() to acquire a semaphore and subsequently dequeue the packet. Would a "semi"-busy wait implementation me more efficient? My plan is to continuously test the queue size and extract a packet if it's not empty. If it is empty, it will sleep for a few milliseconds and try again. Is this a bad way to design a program?
|
# ? Apr 10, 2010 23:49 |
|
Vanadium posted:So does clang parse conversion functions to references to arrays yet? Like, for gcc struct A { (&operator char())[4] { return data; } char data[4]; }; No? I take back everything I've said about loving C++ in those template metaprogramming posts I've been making. That syntax is just raunchy.
|
# ? Apr 11, 2010 00:46 |
|
I was told that syntax is a gcc extension, anyway.
|
# ? Apr 11, 2010 01:11 |
|
Vanadium posted:So does clang parse conversion functions to references to arrays yet? Like, for gcc struct A { (&operator char())[4] { return data; } char data[4]; }; No? What's the usefulness of this?
|
# ? Apr 11, 2010 01:17 |
|
Contero posted:What's the usefulness of this? It returns an array by reference, that is what is useful. Sometimes you do not want to deal with array-to-pointer decay. Example: You can overload sprintf in C++ to work like snprintf when you pass in an array instead of a pointer.
|
# ? Apr 11, 2010 01:19 |
|
This is definitely a syntax extension, and no, clang doesn't parse it. The standards-compliant way to write conversions to types with complex declarators is to use a typedef.
|
# ? Apr 11, 2010 01:43 |
|
rjmccall posted:This is definitely a syntax extension, and no, clang doesn't parse it. The standards-compliant way to write conversions to types with complex declarators is to use a typedef. No, it's not a syntax extension, that's just standard code, and yes clang does parse it. Edit: Parse != works That Turkey Story fucked around with this message at 02:22 on Apr 11, 2010 |
# ? Apr 11, 2010 02:05 |
|
digibawb posted:citation required. Looks like I was partially wrong -- the Direct3D implementation on the 360 is much lower-level than on Windows. In the native API you do get direct access to the hardware though, i.e. you can bypass the Direct3D API or even write directly to the command buffer (and many do). I can't cite anything but this is directly from people who work on the 360. Direct3D on the 360 is at least a lot better than OpenGL on the PS3 though, which virtually no one uses (that's direct from experience). The same does not apply on Windows because you don't have the same level of hardware access.
|
# ? Apr 11, 2010 04:28 |
|
digibawb posted:Yeah, unless you are writing your own command buffers, which you don't have access to in XNA, then it's going to be pretty much identical. Yeah, I was assuming the difference was more along the lines of OpenGL vs. libgcm on the PS3. Since that's not the case the differences for the average coder are less, but game developers still do a lot of asm-level optimization (even if they're not writing assembly, analyzing the assembly generated by the compiler). Also, game developers are dinosaurs, and don't like to change. They also don't like anything unexpected, which managed languages sometimes provide. Oh, and they already have millions of lines of asm/C/C++ code. I wouldn't be surprised if a lot of things go the way of Unity (C# as a scripting language -- a lot of developers use Lua now), with the engine still in C/C++.
|
# ? Apr 11, 2010 04:39 |
|
cronio posted:Oh, and they already have millions of lines of asm/C/C++ code. I wouldn't be surprised if a lot of things go the way of Unity (C# as a scripting language -- a lot of developers use Lua now), with the engine still in C/C++. It's usually less about legacy code and garbage collection, and more about speed and portability. Interpreted languages tend to rely on JITting for their speed but that doesn't work on consoles (all executable code must be signed), and statically compiling them (like Unity for the iPhone does) is pretty variably supported (Unity's solution is very customized and specialized). And few people are gonna write an engine that confines you to desktop PCs. Until there pops up a language that all the console manufacturers support out of the box, C++ is gonna remain as the only viable language for game engines. But yeah, the high-level behavior has been done in scripting languages since, well, forever. Usually Lua these days (Grim Fandango pioneered that as early as '98).
|
# ? Apr 11, 2010 05:09 |
|
cronio posted:Looks like I was partially wrong -- the Direct3D implementation on the 360 is much lower-level than on Windows. In the native API you do get direct access to the hardware though, i.e. you can bypass the Direct3D API or even write directly to the command buffer (and many do). I can't cite anything but this is directly from people who work on the 360. I work on both the 360 and PS3! I was making the assumption that most 360 developers don't actually end up bypassing the API and rolling their own command buffers, perhaps this is a faulty assumption however. I can certainly see where you are coming from on the PS3, though. The Windows comment was based on this assumption, in that the managed wrapper around D3D is unlikely to be a bottleneck. cronio posted:Yeah, I was assuming the difference was more along the lines of OpenGL vs. libgcm on the PS3. Since that's not the case the differences for the average coder are less, but game developers still do a lot of asm-level optimization (even if they're not writing assembly, analyzing the assembly generated by the compiler). I'd like to think I'm quite open to change, thank you very much :P I'd certainly like to see C# in use for game code, though I don't think it's quite practical yet (for us anyway). I muddle around with XNA at home, and not being able to see what it's doing under the hood will probably become a pain for me at some point, though I haven't hit that yet. Knowing that the JIT will never generate VMX output annoys me, more so than the fact that I just can't write it in intrinsics in the first place. The instruction set for SSE is rather different though, so I can see why this isn't practical either. Vinterstum posted:Interpreted languages tend to rely on JITting for their speed but that doesn't work on consoles (all executable code must be signed), and statically compiling them (like Unity for the iPhone does) is pretty variably supported (Unity's solution is very customized and specialized).
I'm pretty sure XNA is JIT'ed even on the 360; fairly certain I saw a post by Shawn Hargreaves saying so only a couple of days ago, in fact. We might be getting a little off topic here though
|
# ? Apr 11, 2010 14:48 |
|
digibawb posted:I work on both the 360 and PS3! I believe AIWar http://arcengames.com/ was written entirely in C#.
|
# ? Apr 11, 2010 15:36 |
|
That Turkey Story posted:No, it's not a syntax extension, that's just standard code Do you have some sort of argument for this? Because [class.conv.fct] pretty clearly (1) precisely limits the form of a conversion-type-id, (2) describes the type of the conversion function as conversion-type-id(), and (3) forbids the specification of a return type. This really isn't consistent with the idea that the conversion-type-id magically picks up extra type information from the rest of the declarator. That Turkey Story posted:and yes clang does parse it. Sure, because we don't special-case the declarator parser just to break this. But AFAIK it's just an oversight that we don't diagnose the spurious bits of declarator.
|
# ? Apr 11, 2010 22:51 |
|
Hopefully a simple question: In my class declaration, I have the variable int NumPeople and I want an array with enough space to store a number for each person. Right now I am trying to accomplish this with: code:
|
# ? Apr 12, 2010 02:09 |
|
Jose Cuervo posted:Hopefully a simple question: Yes, in that you can use a std::vector to hold your data, and any decent implementation will have the same performance. No, in that what you have cannot possibly be made to work. The closest thing is an int pointer that allocates NumPeople ints.
|
# ? Apr 12, 2010 02:11 |
|
Jose Cuervo posted:Hopefully a simple question: What you want to do here is this: code:
Read here for more detail.
|
# ? Apr 12, 2010 02:29 |