CaptainMcArrr
Oct 4, 2003
PACWANKER HAS HORSE COCK

UraniumAnchor posted:

Lovely helpfulness

Oh god, I'm such a twat. For some reason I thought I couldn't #include in a header file. That solves my problem! Thanks dude.


BattleMaster
Aug 14, 2000

CaptainMcArrr posted:

Oh god, I'm such a twat. For some reason I thought I couldn't #include in a header file. That solves my problem! Thanks dude.

I think the preprocessor does multiple passes on the code, repeating until there's nothing left to process. #include pretty much just means "put the contents of that file here". So after it processes that and the other directives in the C file, it should go over everything again to see if any more directives are left to be processed. If there were directives inside the included file, they should be processed during the subsequent passes. Is that how it works?

Edit: I only use C compilers, I don't design them :(

BattleMaster fucked around with this message at 16:46 on Apr 9, 2010

That Turkey Story
Mar 30, 2003

BattleMaster posted:

I think the preprocessor does multiple passes on the code, repeating until there's nothing left to process. #include pretty much just means "put the contents of that file here". So after it processes that and the other directives in the C file, it should go over everything again to see if any more directives are left to be processed. If there were directives inside the included file, they should be processed during the subsequent passes. Is that how it works?

Edit: I only use C compilers, I don't design them :(

No. The C and C++ preprocessors are single-pass. The effect you describe here happens to be correct, though: everything in the included file is processed during that same single pass, at the point of inclusion.
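
To make that concrete, here's a minimal sketch (the file names are made up for illustration). The directives inside each included header are handled right where the #include appears, during that one pass:

code:
// types.h
#ifndef TYPES_H
#define TYPES_H
typedef int my_int;
#endif

// util.h -- itself includes another header; the guard and typedef in
// types.h are processed at this point, in the same pass
#ifndef UTIL_H
#define UTIL_H
#include "types.h"
inline void frob(my_int) {}
#endif

// main.cpp -- after preprocessing, this translation unit contains the
// guard-filtered text of util.h and types.h inline, in order
#include "util.h"
int main() { frob(42); }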

txrandom
Aug 3, 2007
I'm working on a research project that is basically trying to rewrite the TCP/IP stack to handle hundreds of thousands of connections. Right now I'm focused on improving efficiency and have been benchmarking various parts of the code.

I'm using queues to pass packets between threads and synchronizing access with semaphores and mutexes. WaitForSingleObject, ReleaseMutex, and ReleaseSemaphore are limited to about one million calls per second each on our server. The mutexes were already replaced with CRITICAL_SECTIONs, which seem to be about 15x faster, but I'm having trouble replacing the semaphores...

Currently I use WaitForSingleObject() to acquire a semaphore and subsequently dequeue the packet. Would a "semi"-busy wait implementation be more efficient? My plan is to continuously test the queue size and extract a packet if it's not empty. If it is empty, it will sleep for a few milliseconds and try again. Is this a bad way to design a program?
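
Roughly what I have in mind, as a sketch (Packet, g_queue, and g_lock are placeholders for the real types; assume the CRITICAL_SECTION was initialized at startup):

code:
#include <windows.h>
#include <queue>

struct Packet { /* ... */ };
std::queue<Packet*> g_queue;
CRITICAL_SECTION g_lock;  // InitializeCriticalSection() called elsewhere

Packet* dequeue_semi_busy()
{
    for (;;)
    {
        EnterCriticalSection(&g_lock);
        if (!g_queue.empty())
        {
            Packet* p = g_queue.front();
            g_queue.pop();
            LeaveCriticalSection(&g_lock);
            return p;  // got a packet without touching a semaphore
        }
        LeaveCriticalSection(&g_lock);
        Sleep(1);  // queue empty: back off briefly, then retry
    }
}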

wwqd1123
Mar 3, 2003
How much faster does C/C++ code actually run than C#? I've ported some of the algorithms I originally made in C# for Project Euler into C++, and they execute at almost exactly the same speed.

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh

wwqd1123 posted:

How much faster does C/C++ code actually run than C#? I've ported some of the algorithms I originally made in C# for Project Euler into C++, and they execute at almost exactly the same speed.

C++ isn't really that much faster than C# unless you really know what you're doing, and even then it might be a wash.

Shumagorath
Jun 6, 2001
When C# first came out, the number I heard was 90% as fast as C++, so I wouldn't be surprised if it's close to parity right now. If that's the case, though, why aren't we seeing more games for Windows using C#?

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh

Shumagorath posted:

When C# first came out, the number I heard was 90% as fast as C++, so I wouldn't be surprised if it's close to parity right now. If that's the case, though, why aren't we seeing more games for Windows using C#?

Garbage collection is pretty much a no-go for AAA games*, since it generally relies on the assumption that it can defer cleanup of resources to a point when not much is happening. That never actually occurs during games, so either cleanup is delayed inordinately, and/or batches of objects get cleaned up at once, causing framerate stutter.

* Unless the GC is tuned for game environments, as is (or may be) the case with UE3's scripting engine, but then the GCed nature of UnrealScript is one of the big things I hear people bitch about regarding UE3.

king_kilr
May 25, 2007

Avenging Dentist posted:

Garbage collection is pretty much a no-go for AAA games*, since it generally relies on the assumption that it can defer cleanup of resources to a point when not much is happening. That never actually occurs during games, so either cleanup is delayed inordinately, and/or batches of objects get cleaned up at once, causing framerate stutter.

* Unless the GC is tuned for game environments, as is (or may be) the case with UE3's scripting engine, but then the GCed nature of UnrealScript is one of the big things I hear people bitch about regarding UE3.

I've sorta been expecting it to catch on more over the past few years as parallel, concurrent GC algorithms have gotten to be production-ready.

wwqd1123
Mar 3, 2003
Can't you adjust the level of objects for garbage collection? I.e. higher-level objects won't be collected before lower-level objects? I think Microsoft has poured millions into its GC, so it wouldn't surprise me if you could customize it for games.

Crazy RRRussian
Mar 5, 2010

by Fistgrrl
How do I get around the problem of using functions that take arguments by reference when I pass some intermediate result to them?

something like...

code:

void foo(string & a)
{...}

....
string z = "whateva";
foo( z + "lolz" );
In the form I gave there, it won't compile.

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh

wwqd1123 posted:

Can't you adjust the level of objects for garbage collection? I.e. higher-level objects won't be collected before lower-level objects? I think Microsoft has poured millions into its GC, so it wouldn't surprise me if you could customize it for games.

What does "level" mean? Also, just because someone invested a lot of money in a problem doesn't make it suitable for every application. OpenMP has a lot of funding, but that doesn't make it the best parallelism library in all cases.

Besides that, the bottom line is that most garbage collectors force a layer of indirection between code and execution, making it even harder to predict what the CPU is going to be doing at any given time. When you are writing a system that demands high performance essentially 100% of the time, this is actually a significant problem.

OddObserver
Apr 3, 2009

LockeNess Monster posted:

How do I get around the problem of using functions that take arguments by reference when I pass some intermediate result to them?

something like...

code:

void foo(string & a)
{...}

....
string z = "whateva";
foo( z + "lolz" );
In the form I gave there, it won't compile.

It will work if foo takes a reference to a const (e.g. const string&).

And g++ is surprisingly helpful with its error message on your example:
code:
/tmp/tc.cpp:9: error: invalid initialization of non-const reference of type ‘std::string&’ from a temporary of type ‘std::basic_string<char, std::char_traits<char>, std::allocator<char> >’
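
For reference, a minimal fixed-up sketch (same idea as your foo, just taking a reference-to-const):

code:
#include <iostream>
#include <string>

using std::string;

void foo(const string& a)  // const reference: temporaries can bind to it
{
    std::cout << a << "\n";
}

int main()
{
    string z = "whateva";
    foo(z + "lolz");  // OK now: the temporary binds to const string&
}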

wwqd1123
Mar 3, 2003

Avenging Dentist posted:

What does "level" mean? Also, just because someone invested a lot of money in a problem doesn't make it suitable for every application. OpenMP has a lot of funding, but that doesn't make it the best parallelism library in all cases.

Besides that, the bottom line is that most garbage collectors force a layer of indirection between code and execution, making it even harder to predict what the CPU is going to be doing at any given time. When you are writing a system that demands high performance essentially 100% of the time, this is actually a significant problem.

My bad. According to Microsoft's C# documentation, objects are divided into three generations. It looks like objects that survive a garbage collection are promoted to generation 1, and if they survive another they are promoted to generation 2, which I suppose is Microsoft's way of trying to optimize garbage collection.

I guess that still doesn't help with game development though :(

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh

wwqd1123 posted:

My bad. According to Microsoft's C# documentation, objects are divided into three generations. It looks like objects that survive a garbage collection are promoted to generation 1, and if they survive another they are promoted to generation 2, which I suppose is Microsoft's way of trying to optimize garbage collection.

Generational garbage collection is pretty common. Java and Python both use it as well.

wwqd1123 posted:

I guess that still doesn't help with game development though :(

Honestly, one of the really useful things about C++ is that it doesn't mandate a memory management scheme. You're perfectly free to write your own garbage collector (or use someone else's) and use it via a smart pointer class, giving you the benefits of GC where you want it without the costs where you don't want it (by using regular local variables or other kinds of pointers).
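
As a sketch of that last point, using boost::shared_ptr (plain C++03; any reference-counted smart pointer works the same way):

code:
#include <boost/shared_ptr.hpp>

struct Texture { /* expensive resource */ };

void frame()
{
    boost::shared_ptr<Texture> tex(new Texture);
    // ... use tex, hand copies to anything that needs shared ownership ...
}   // last owner gone: Texture is destroyed right here, deterministically,
    // with no collector pause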

king_kilr
May 25, 2007

Avenging Dentist posted:

Generational garbage collection is pretty common. Java and Python both use it as well.


Honestly, one of the really useful things about C++ is that it doesn't mandate a memory management scheme. You're perfectly free to write your own garbage collector (or use someone else's) and use it via a smart pointer class, giving you the benefits of GC where you want it without the costs where you don't want it (by using regular local variables or other kinds of pointers).

Uhh, what Python? Certainly not CPython. PyPy has a generational collector, and IronPython and Jython use the host GC (which is generational), but the "common" Python isn't. Except for the cycle collector, which is pseudo-generational, but the primary GC is refcounting.

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh

king_kilr posted:

Except for the cycle collector, which is pseudo-generational, but the primary GC is refcounting.

That's what I was talking about. I don't really consider reference-counting to be "garbage collection". If I did, I'd probably have to consider C++'s destruction of local variables as "garbage collection" too.

cronio
Feb 15, 2002
Drifter

wwqd1123 posted:

Can't you adjust the level of objects for garbage collection? I.e. higher-level objects won't be collected before lower-level objects? I think Microsoft has poured millions into its GC, so it wouldn't surprise me if you could customize it for games.

In addition to what's already been said, very few people develop games just for Windows -- and C# is not cross-platform (even on the 360 only the XNA games are allowed to use C#, and XNA is a *lot* slower than direct access to the graphics card).

digibawb
Dec 15, 2004
got moo?

cronio posted:

XNA is a *lot* slower than direct access to the graphics card

citation required.

Seriously, of all the things for XNA to be slow at on the 360, I really can't see why it would be the rendering. Do you have evidence for this? (Same applies on Windows tbh)

Zerf
Dec 17, 2004

I miss you, sandman

digibawb posted:

citation required.

Seriously, of all the things for XNA to be slow at on the 360, I really can't see why it would be the rendering. Do you have evidence for this? (Same applies on Windows tbh)

I guess that depends on what you mean by direct access to the "graphics card". We used to write stuff directly to the command buffers for the 360 graphics card where I used to work, but I doubt this is the common case.

Comparing XNA to C++, it might turn out to be slower overall, but graphics performance is probably just the same, given the same API access of course.

digibawb
Dec 15, 2004
got moo?
Yeah, unless you're writing your own command buffers (which you don't have access to in XNA), it's going to be pretty much identical.

No VMX support seems like a much bigger deal, to me anyway.

EDIT: And the limited control of memory...

digibawb fucked around with this message at 12:48 on Apr 10, 2010

Full Collapse
Dec 4, 2002

I have a C for microcontrollers question.

What I'm trying to do is write to an unsigned char variable using real-time input. With one function it works, but when I add the rest of the functions I need, it doesn't. Code follows:

code:
interrupt [EXT_INT0] void ext_int0_isr(void){  // First pot.  pin A (PORTD.2) [1]
        
    if(PINB.0 == 1){ // See footnote [2]
        if(PINC.1 != 0)
            scroll_up_od_vol();
        else if(PINC.1 != 1)
            scroll_down_od_vol();
    }
//     else if(PINB.1 == 1){ // See footnote [2]
//         if(PINC.1 != 0)
//             scroll_up_tr_rate();
//         else if(PINC.1 != 1)
//             scroll_down_tr_rate();
//     }
//     else if(PINB.2 == 1){ // See footnote [2]
//         if(PINC.1 != 0)
//             scroll_up_cm_att();
//         else if(PINC.1 != 1)
//             scroll_down_cm_att();
//     }
//     else if(PINB.3 == 1){ // See footnote [2]
//         if(PINC.1 != 0)
//             scroll_up_ph_rate();
//         else if(PINC.1 != 1)
//             scroll_down_ph_rate();
//     }

}
This code works since three of the four cases are commented out. The PINB array reads a +5 Vdc signal from that hardware pin depending on the position of the selector switch (+5 Vdc = 1, 0 Vdc = 0, etc...). There is a second interrupt for the other pot, but the code is identical.

If all the else if statements are causing trouble, I'm wondering if a switch statement will fix it. If not, then it's probably hardware and I should check the wiring of my selector switch.

System is an Atmel ATmega16L using the CodeVision libraries.

e: Update: looks like it's most definitely hardware.

Full Collapse fucked around with this message at 05:24 on Apr 11, 2010

Vanadium
Jan 8, 2005

So does clang parse conversion functions to references to arrays yet? Like, for gcc struct A { (&operator char())[4] { return data; } char data[4]; }; No? :smug:

That Turkey Story
Mar 30, 2003

Vanadium posted:

So does clang parse conversion functions to references to arrays yet? Like, for gcc struct A { (&operator char())[4] { return data; } char data[4]; }; No? :smug:

Yes.

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

wwqd1123 posted:

How much faster does C/C++ code actually run than C#? I've ported some of the algorithms I originally made in C# for Project Euler into C++, and they execute at almost exactly the same speed.

It's worth pointing out that whether the C# port runs slower, about the same, or faster is entirely dependent on your particular program. Don't forget that by virtue of being JITted, the managed environment can do neat things like inlining oft-called functions, which can result in a performance increase relative to the native version.

txrandom posted:

Currently I use WaitForSingleObject() to acquire a semaphore and subsequently dequeue the packet. Would a "semi"-busy wait implementation be more efficient? My plan is to continuously test the queue size and extract a packet if it's not empty. If it is empty, it will sleep for a few milliseconds and try again. Is this a bad way to design a program?

You're not in the "design" phase so much as the "savage optimization at the cycle level" phase, so while I would never recommend anyone do what you're talking about without first having seen the results of a profiler, it's entirely possible that spinning for a few milliseconds will give you a big performance win over the hundreds of thousands of cycles you'll "waste" by updating your semaphores.

Flobbster
Feb 17, 2005

"Cadet Kirk, after the way you cheated on the Kobayashi Maru test I oughta punch you in tha face!"

Vanadium posted:

So does clang parse conversion functions to references to arrays yet? Like, for gcc struct A { (&operator char())[4] { return data; } char data[4]; }; No? :smug:

:psypop:

I take back everything I've said about loving C++ in those template metaprogramming posts I've been making. That syntax is just raunchy.

Vanadium
Jan 8, 2005

I was told that syntax is a gcc extension, anyway.

Contero
Mar 28, 2004

Vanadium posted:

So does clang parse conversion functions to references to arrays yet? Like, for gcc struct A { (&operator char())[4] { return data; } char data[4]; }; No? :smug:

What's the usefulness of this?

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh

Contero posted:

What's the usefulness of this?

It returns an array by reference; that's what's useful. Sometimes you don't want to deal with array-to-pointer decay. Example: you can overload sprintf in C++ to work like snprintf when you pass in an array instead of a pointer.
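
Something along these lines, as a sketch (vsnprintf is C99 rather than C++03, but every compiler that matters ships it):

code:
#include <cstdarg>
#include <cstddef>
#include <cstdio>

// Overload resolution picks this template when the argument is a real
// array, so the size N is deduced instead of decaying to a pointer.
template <std::size_t N>
int sprintf(char (&buf)[N], const char* fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    int result = vsnprintf(buf, N, fmt, args);  // bounded, like snprintf
    va_end(args);
    return result;
}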

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
This is definitely a syntax extension, and no, clang doesn't parse it. The standards-compliant way to write conversions to types with complex declarators is to use a typedef.
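
That is, something like this (the typedef keeps the conversion-type-id simple, so it's portable):

code:
struct A
{
    typedef char char4[4];  // name the array type once...

    operator char4&()       // ...then convert to a reference to it
    {
        return data;
    }

    char data[4];
};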

That Turkey Story
Mar 30, 2003

rjmccall posted:

This is definitely a syntax extension, and no, clang doesn't parse it. The standards-compliant way to write conversions to types with complex declarators is to use a typedef.

No, it's not a syntax extension, that's just standard code, and yes clang does parse it.

Edit: Parse != works

That Turkey Story fucked around with this message at 02:22 on Apr 11, 2010

cronio
Feb 15, 2002
Drifter

digibawb posted:

citation required.

Seriously, of all the things for XNA to be slow at on the 360, I really can't see why it would be the rendering. Do you have evidence for this? (Same applies for on Windows tbh)

Looks like I was partially wrong -- the Direct3D implementation on the 360 is much lower-level than on Windows. In the native API you do get direct access to the hardware though, i.e. you can bypass the Direct3D API or even write directly to the command buffer (and many do). I can't cite anything but this is directly from people who work on the 360.

Direct3D on the 360 is at least a lot better than OpenGL on the PS3 though, which virtually no one uses (that's direct from experience).

The same does not apply on Windows because you don't have the same level of hardware access.

cronio
Feb 15, 2002
Drifter

digibawb posted:

Yeah, unless you're writing your own command buffers (which you don't have access to in XNA), it's going to be pretty much identical.

No VMX support seems like a much bigger deal, to me anyway.

EDIT: And the limited control of memory...

Yeah, I was assuming the difference was more along the lines of OpenGL vs. libgcm on the PS3. Since that's not the case, the differences for the average coder are smaller, but game developers still do a lot of asm-level optimization (even if they're not writing assembly, they're analyzing the assembly generated by the compiler).

Also, game developers are dinosaurs, and don't like to change :). They also don't like anything unexpected, which managed languages sometimes provide.

Oh, and they already have millions of lines of asm/C/C++ code. I wouldn't be surprised if a lot of things go the way of Unity (C# as a scripting language -- a lot of developers use Lua now), with the engine still in C/C++.

Vinterstum
Jul 30, 2003

cronio posted:

Oh, and they already have millions of lines of asm/C/C++ code. I wouldn't be surprised if a lot of things go the way of Unity (C# as a scripting language -- a lot of developers use Lua now), with the engine still in C/C++.

It's usually less about legacy code and garbage collection, and more about speed and portability. Interpreted languages tend to rely on JITting for their speed but that doesn't work on consoles (all executable code must be signed), and statically compiling them (like Unity for the iPhone does) is pretty variably supported (Unity's solution is very customized and specialized).

And few people are gonna write an engine that confines you to desktop PCs :). Until there pops up a language that all the console manufacturers support out of the box, C++ is gonna remain the only viable language for game engines.

But yeah, the high-level behavior has been done in scripting languages since, well, forever. Usually Lua these days (Grim Fandango pioneered that as early as '98).

digibawb
Dec 15, 2004
got moo?

cronio posted:

Looks like I was partially wrong -- the Direct3D implementation on the 360 is much lower-level than on Windows. In the native API you do get direct access to the hardware though, i.e. you can bypass the Direct3D API or even write directly to the command buffer (and many do). I can't cite anything but this is directly from people who work on the 360.

Direct3D on the 360 is at least a lot better than OpenGL on the PS3 though, which virtually no one uses (that's direct from experience).

The same does not apply on Windows because you don't have the same level of hardware access.

I work on both the 360 and PS3!

I was making the assumption that most 360 developers don't actually end up bypassing the API and rolling their own command buffers; perhaps this is a faulty assumption, however. I can certainly see where you are coming from on the PS3 though :) The Windows comment was based on this assumption, in that the managed wrapper around D3D is unlikely to be a bottleneck.

cronio posted:

Yeah, I was assuming the difference was more along the lines of OpenGL vs. libgcm on the PS3. Since that's not the case, the differences for the average coder are smaller, but game developers still do a lot of asm-level optimization (even if they're not writing assembly, they're analyzing the assembly generated by the compiler).

Also, game developers are dinosaurs, and don't like to change :). They also don't like anything unexpected, which managed languages sometimes provide.

Oh, and they already have millions of lines of asm/C/C++ code. I wouldn't be surprised if a lot of things go the way of Unity (C# as a scripting language -- a lot of developers use Lua now), with the engine still in C/C++.

I'd like to think I'm quite open to change, thank you very much :P

I'd certainly like to see C# in use for game code, though I don't think it's quite practical yet (for us anyway).

I muddle around with XNA at home, and not being able to see what it's doing under the hood will probably become a pain for me at some point, though I haven't hit that yet. Knowing that the JIT will never generate VMX output annoys me, more so than the fact that I just can't write it in intrinsics in the first place. The instruction set for SSE is rather different though, so I can see why this isn't practical either.

Vinterstum posted:

Interpreted languages tend to rely on JITting for their speed but that doesn't work on consoles (all executable code must be signed), and statically compiling them (like Unity for the iPhone does) is pretty variably supported (Unity's solution is very customized and specialized).

I'm pretty sure XNA is JIT'ed even on the 360; fairly certain I saw a post by Shawn Hargreaves stating as much only a couple of days ago, in fact.

We might be getting a little off topic here though :D

UberJumper
May 20, 2007
woop

digibawb posted:

I work on both the 360 and PS3!

I was making the assumption that most 360 developers don't actually end up bypassing the API and rolling their own command buffers; perhaps this is a faulty assumption, however. I can certainly see where you are coming from on the PS3 though :) The Windows comment was based on this assumption, in that the managed wrapper around D3D is unlikely to be a bottleneck.


I'd like to think I'm quite open to change, thank you very much :P

I'd certainly like to see C# in use for game code, though I don't think it's quite practical yet (for us anyway).

I muddle around with XNA at home, and not being able to see what it's doing under the hood will probably become a pain for me at some point, though I haven't hit that yet. Knowing that the JIT will never generate VMX output annoys me, more so than the fact that I just can't write it in intrinsics in the first place. The instruction set for SSE is rather different though, so I can see why this isn't practical either.


I'm pretty sure XNA is JIT'ed even on the 360; fairly certain I saw a post by Shawn Hargreaves stating as much only a couple of days ago, in fact.

We might be getting a little off topic here though :D

I believe AIWar http://arcengames.com/ was written entirely in C#.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

That Turkey Story posted:

No, it's not a syntax extension, that's just standard code

Do you have some sort of argument for this? Because [class.conv.fct] pretty clearly (1) precisely limits the form of a conversion-type-id, (2) describes the type of the conversion function as conversion-type-id(), and (3) forbids the specification of a return type. This really isn't consistent with the idea that the conversion-type-id magically picks up extra type information from the rest of the declarator.

That Turkey Story posted:

and yes clang does parse it.

Sure, because we don't special-case the declarator parser just to break this. But AFAIK it's just an oversight that we don't diagnose the spurious bits of the declarator.

Jose Cuervo
Aug 25, 2004
Hopefully a simple question:
In my class declaration, I have the variable int NumPeople and I want an array that has enough space to store the numeric IDs of all the people. Right now I am trying to accomplish this with:
code:
class MyClass
{
public:
    int NumPeople;
    int PeopleID[NumPeople];
};
but this will not compile. Is it even possible to accomplish what I want?

UraniumAnchor
May 21, 2006

Not a walrus.

Jose Cuervo posted:

Hopefully a simple question:
In my class declaration, I have the variable int NumPeople and I want an array that has enough space to store the numeric IDs of all the people. Right now I am trying to accomplish this with:
code:
class MyClass
{
public:
    int NumPeople;
    int PeopleID[NumPeople];
};
but this will not compile. Is it even possible to accomplish what I want?

Yes, in that you can use a std::vector to hold your data, and any decent implementation will give you the same performance. No, in that what you have cannot possibly be made to work. The closest thing is an int pointer through which you allocate NumPeople ints.
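
For example, a sketch using your names:

code:
#include <vector>

class MyClass
{
public:
    explicit MyClass(int numPeople)
        : PeopleID(numPeople)  // vector sized at construction
    {
    }

    std::vector<int> PeopleID;  // grows later if needed, frees itself
};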


jonjonaug
Mar 26, 2010

by Lowtax

Jose Cuervo posted:

Hopefully a simple question:
In my class declaration, I have the variable int NumPeople and I want an array that has enough space to store the numeric IDs of all the people. Right now I am trying to accomplish this with:
code:
class MyClass
{
public:
    int NumPeople;
    int PeopleID[NumPeople];
};
but this will not compile. Is it even possible to accomplish what I want?

What you want to do here is this:

code:
class MyClass
{
public:
    MyClass(int num)
    {
        numPeople = num;
        peopleID = new int[numPeople];
    }

    ~MyClass()
    {
        delete[] peopleID;  // arrays allocated with new[] need delete[]
    }

private:
    int numPeople;
    int* peopleID;
};
When you do it like this, you need to make sure you use delete[] to free the memory when you're done with the array; the destructor above takes care of that.

Read here for more detail.
