Le0
Mar 18, 2009

Rotten investigator!

feedmegin posted:

That seems like it would be...slow.

The old-school way would be to use pointers of the relevant type and offset (uint32_t's or whatever) into a buffer. Watch for network versus host byte order! (because unless you're on a SPARC box like it's 1995 or something else old-school, those will be different).
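A minimal sketch of that buffer-and-offset approach, assuming buf points at a received packet (memcpy stands in for the raw pointer cast, which keeps it alignment- and aliasing-safe):

code:
#include <cstdint>
#include <cstring>
#include <arpa/inet.h>  // ntohs; use winsock2.h on Windows

// Pull a 16-bit big-endian field out of a packet buffer at a protocol-defined offset.
std::uint16_t read_u16(const unsigned char* buf, std::size_t offset) {
    std::uint16_t v;
    std::memcpy(&v, buf + offset, sizeof v);
    return ntohs(v);  // network (big-endian) to host byte order
}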

eth0.n posted:

I assume you're doing this as a personal challenge? Otherwise, you should be using higher level libraries for network communication, or a library like libpcap if you want to deal with low level packet details.

With that assumption, sure, std::bitset is a fine way to handle the flag fields, but it would be silly to use it for the numerical fields.

@feedmegin: shouldn't be so bad. It's an extra copy vs manipulating in place, but bitsets themselves are no slower than manual bit-fiddling, assuming the bitset fits in a primitive integral type.

eth0.n posted:

But the bit representation of bit fields is non-portable. This means that if you have, for example, a set of 32 flags packed into a 32-bit integer, then with a bitset you can get portable (assuming endianness is handled) bitwise access via a single integral copy. With bit fields, the only portable way to do it is 32 individual bit extractions and writes to a bit field, because the compiler can put the bit fields in any arbitrary order.

Bit fields are OK for data structures that live solely in the memory of a given computer, but for network programming, they're largely a trap. The traditional "correct" way to do it is with manual masking and shifting; std::bitset is a nice C++ abstraction for doing that.
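As a sketch of that single-integral-copy idea (flags_offset and the bit positions here are made up; they'd come from the protocol spec):

code:
#include <bitset>
#include <cstdint>
#include <cstring>
#include <arpa/inet.h>  // ntohl; use winsock2.h on Windows

// One integral copy out of the buffer, then portable per-flag access via std::bitset.
std::bitset<32> read_flags(const unsigned char* buf, std::size_t flags_offset) {
    std::uint32_t raw;
    std::memcpy(&raw, buf + flags_offset, sizeof raw);
    return std::bitset<32>(ntohl(raw));  // endianness handled before the bitset sees it
}
// Usage: bool urgent = read_flags(buf, 12).test(5);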

Thanks for the suggestions, guys.
Yes, this is a personal challenge. I'd like to use Qt to make a GUI for it afterwards and just dick around. The idea is to write the packet formats in an XML file, then dynamically generate the GUI so that I can get/set each field value easily. It's also something I could use at work for various types of packets.
I've already done this in embedded C the traditional way, with bitwise operations etc., and I wanted to play around in C++ and figured there would be a higher-level way to do it.


Chuu
Sep 11, 2004

Grimey Drawer

feedmegin posted:

The old-school way would be to use pointers of the relevant type and offset (uint32_t's or whatever) into a buffer. Watch for network versus host byte order! (because unless you're on a SPARC box like it's 1995 or something else old-school, those will be different).

Not trying to be snarky at all -- is there a better way? I've never worked with a binary protocol where the authors didn't go this route for the implementation.

Chuu
Sep 11, 2004

Grimey Drawer

vileness fats posted:

It's perfectly readable even if you don't write much asm.

What are the best resources to learn asm these days, at least to get to the point of basic proficiency on a modern Intel processor? I'm having a surprising amount of trouble finding a good learning reference.

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Chuu posted:

Not trying to be snarky at all -- is there a better way? I've never worked with a binary protocol where the authors didn't go this route for the implementation.
New code these days usually uses something that automatically generates accessor code from a schema definition, so you never have to worry about fiddling with memory by hand. My favorite is Cap'n Proto, which is efficient and has very good support for maintaining forwards/backwards compatibility across protocol revisions.
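For a taste of what that looks like in C++ (a sketch, assuming a Cap'n Proto schema declaring something like struct Packet { seq @0 :UInt32; } has been run through the capnp compiler):

code:
#include <capnp/message.h>
#include <capnp/serialize.h>
#include "packet.capnp.h"  // hypothetical header generated from your schema

void build_packet() {
    // Generated, typed accessors: no manual offsets, masks, or byte-order fiddling.
    capnp::MallocMessageBuilder message;
    Packet::Builder packet = message.initRoot<Packet>();
    packet.setSeq(42);

    // Flat word array, ready to put on the wire.
    kj::Array<capnp::word> wire = capnp::messageToFlatArray(message);
}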

feedmegin
Jul 30, 2008

Ralith posted:

New code these days usually uses something that automatically generates accessor code from a schema definition, so you never have to worry about fiddling with memory by hand. My favorite is Cap'n Proto, which is efficient and has very good support for maintaining forwards/backwards compatibility across protocol revisions.

This guy isn't trying to serialise, he's trying to read TCP packet headers and such that are in a predefined binary format.

Edit: I was talking about Le0, yeah, sorry. Serialising stuff de novo when you control the wire format and having to read an existing protocol are two different things.

feedmegin fucked around with this message at 09:17 on Apr 8, 2016

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

feedmegin posted:

This guy isn't trying to serialise, he's trying to read TCP packet headers and such that are in a predefined binary format.
Which guy? I was replying to Chuu, who seemed to be talking about binary protocols in general.

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)

Chuu posted:

What are the best resources to learn asm these days, at least to get to the point of basic proficiency on a modern Intel processor? I'm having a surprising amount of trouble finding a good learning reference.

Assembly Language for x86 Processors is a very accessible book, but I have an old edition so I'm not sure about the modern content. This book uses Visual Studio and MASM, so you can get started pretty quickly if you already have experience with VS and don't want to use the GAS syntax.

A lot of assembly tutorials and books either assume you're familiar with how computers are organized, or assume nothing and teach mostly organization. If you're totally unfamiliar, I think Computer Organization and Design is the standard; it has exercises based on a simpler kind of processor, so you don't get bogged down in x86 details. Write Great Code, Vol. 1 is also an easy read and more x86-centric, but it's a little old, and has no assembly programming ...

If you're comfortable with that, there are the Intel Developer's Manuals and the AMD Developer's Guides. If you're only doing application programming, you can skip the systems volume. Both manuals have slightly different content, so it's worth going over both. But, the font in the Intel manual is loving awful and AMD has much clearer diagrams, so if you only want to read over one I'd recommend the AMD version.

Also, I learned a bit from the OSDev wiki. There are a lot of examples of every kind on there, doubly so if you're writing an operating system.

I forgot: the Wikibooks x86 asm book has a lot of great little snippets that show how to use instructions in context. You can always just compile C code and pick it apart too.

dougdrums fucked around with this message at 14:28 on Apr 9, 2016

nielsm
Jun 1, 2009



Win32:

I have written a program and a plugin for a 3rd-party application, and the two need to communicate. I have set up a scheme of posting WM_APP messages between the two, by having message windows with known window-class names in each process.

On Windows 7 this works perfectly.

However, on Windows 10 it breaks. Messages sent from the plugin to my program arrive and get processed, but nothing sent from my program to the plugin seems to arrive.
I have checked the user interface isolation levels, and both processes are running at Medium, so that should not be the issue.

My debugging options are somewhat limited right now: I only use the Visual Studio remote debugging facility, since I'd rather not install a complete VS on my Win10 testing machine. I'm having trouble getting symbols to work on my development machine, so I haven't been able to set breakpoints or step; really, I only have debug logging to look at right now.

Does anyone have suggestions on how to proceed debugging this?
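(For reference, the scheme boils down to something like this; the class name is illustrative, and message-only windows would need FindWindowEx with HWND_MESSAGE instead:)

code:
#include <windows.h>

// Find the other process's window by its known class name and post an
// app-private message to it; WM_APP through 0xBFFF is the private range.
void notify_plugin() {
    HWND target = FindWindowW(L"PluginMsgWindowClass", nullptr);
    if (target != nullptr) {
        PostMessageW(target, WM_APP + 1, 0, 0);  // wParam/lParam carry the payload
    }
}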


E: Never mind, somehow it works now. Quite sure I didn't change anything...

nielsm fucked around with this message at 14:21 on Apr 13, 2016

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


Is fopen not supposed to fail if you try to open for writing a file that's already open for writing by another process?

I ran two copies of the code below, the second while the first sat paused, and they both printed success: (FAKE EDIT welp gently caress off CloudFlare, have a simplified version instead)

code:
#include <cstdio>

int main() {
    std::FILE* fh = std::fopen("test_file.txt", "a");  // open for append-write
    std::puts(fh ? "success" : "fail");
    std::puts("press enter to exit");
    std::getchar();  // pause here so the second copy can run meanwhile
}
The reason I ask is that I'm trying to put together a really lovely quick-and-dirty way to stop processes clobbering each other's data, and I'm hoping like hell to avoid using lock files 'cos the software in question is very crash-happy at this time (and therefore likely won't remove the lock file).

(edit) It appears fcntl() is what I need to go look up and learn how to use, especially advisory locks
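A minimal sketch of the fcntl() route, assuming POSIX (error handling elided):

code:
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

// Try to take an exclusive advisory lock on the whole file. The lock dies with
// the process, so a crash can't leave it stuck the way a lock file would.
bool try_lock_file(const char* path) {
    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    if (fd == -1) return false;
    struct flock fl;
    std::memset(&fl, 0, sizeof fl);
    fl.l_type = F_WRLCK;    // exclusive write lock
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;           // 0 = through end of file, i.e. the whole thing
    // Note: keep fd open; closing any descriptor for the file drops the lock.
    return fcntl(fd, F_SETLK, &fl) == 0;  // F_SETLKW would block instead
}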

Ciaphas fucked around with this message at 19:06 on Apr 13, 2016

hackbunny
Jul 22, 2007

I haven't been on SA for years but the person who gave me my previous av as a joke felt guilty for doing so and decided to get me a non-shitty av

Ciaphas posted:

Is fopen not supposed to fail if you try to open for writing a file that's already open for writing by another process?

It's OS-dependent. It will fail on Windows and succeed pretty much everywhere else

Ciaphas posted:

(edit) It appears fcntl() is what I need to go look up and learn how to use, especially advisory locks

... which, on the other hand, Windows doesn't implement. But I guess it's not an issue for you

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


Nope, long as it works on Solaris and Linux (it did) I'm good.

Handy that posix locks seem to work over NFS, too.

hackbunny
Jul 22, 2007

I haven't been on SA for years but the person who gave me my previous av as a joke felt guilty for doing so and decided to get me a non-shitty av

Ciaphas posted:

Nope, long as it works on Solaris and Linux (it did) I'm good.

Handy that posix locks seem to work over NFS, too.

Word of warning: advisory locks don't work too well on NFS. I don't know the details, but it's one of the well-known weak points of the protocol.

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


In this case it's more a bonus I don't plan to count on, but thanks for the warning :v:

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Ciaphas posted:

Is fopen not supposed to fail if you try to open for writing a file that's already open for writing by another process?

I ran two copies of the code below, the second while the first sat paused, and they both printed success: (FAKE EDIT welp gently caress off CloudFlare, have a simplified version instead)

code:
#include <cstdio>

int main() {
    std::FILE* fh = std::fopen("test_file.txt", "a");  // open for append-write
    std::puts(fh ? "success" : "fail");
    std::puts("press enter to exit");
    std::getchar();  // pause here so the second copy can run meanwhile
}
The reason I ask is that I'm trying to put together a really lovely quick-and-dirty way to stop processes clobbering each other's data, and I'm hoping like hell to avoid using lock files 'cos the software in question is very crash-happy at this time (and therefore likely won't remove the lock file).

(edit) It appears fcntl() is what I need to go look up and learn how to use, especially advisory locks

Another approach to this problem is to write your PID to the lockfile when it's created; if the file already exists and the PID stored in it does not correspond to any live process, you know you can safely delete it and retry. On its own this will still require manual intervention if the PID has been reused by a live process, but that should be rare.

Of course, you'll still have to guarantee that your process doesn't crash before writing its PID but after creating the file.
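A sketch of that check, assuming POSIX (the ESRCH test is what distinguishes a dead owner from one you merely can't signal):

code:
#include <fcntl.h>
#include <unistd.h>
#include <signal.h>
#include <cerrno>
#include <cstdio>

bool try_claim_lock(const char* path) {
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0644);
    if (fd != -1) {                        // we created it: record our PID, lock is ours
        dprintf(fd, "%ld\n", (long)getpid());
        close(fd);
        return true;
    }
    long pid = 0;                          // lockfile exists: is its owner still alive?
    FILE* f = std::fopen(path, "r");
    bool stale = f && std::fscanf(f, "%ld", &pid) == 1
                   && kill((pid_t)pid, 0) == -1 && errno == ESRCH;
    if (f) std::fclose(f);
    if (stale) {                           // owner is gone: delete and retry
        unlink(path);
        return try_claim_lock(path);
    }
    return false;                          // live owner (or reused PID): leave it alone
}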

Ralith fucked around with this message at 01:21 on Apr 14, 2016

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Ralith posted:

Of course, you'll still have to guarantee that your process doesn't crash before writing its PID but after creating the file.

This should be avoidable these days if you write the PID into a temp file and then rename it into the lockfile if it doesn't already exist.

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Jabor posted:

This should be avoidable these days if you write the PID into a temp file and then rename it into the lockfile if it doesn't already exist.
I don't think data writes are serialized with respect to metadata updates by default on most filesystems.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Ralith posted:

I don't think data writes are serialized with respect to metadata updates by default on most filesystems.

Correct. There was a big kerfuffle* about EXT4 (and 3?) essentially deleting both the old and new copies after a crash when you do it that way. You have to fsync() the file before you rename it, then fsync() the directory afterwards. For lockfile purposes, though, nobody cares - an empty file signifies a crash, so go ahead and claim the lock yourself.

*this and this are good reading. ZFS/btrfs order everything the way you would expect, but you can't count on running on them due to the performance hit.
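Spelled out, that dance looks something like this (POSIX, error handling elided, file names illustrative):

code:
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

void write_lockfile_durably() {
    int fd = open("lockfile.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    dprintf(fd, "%ld\n", (long)getpid());
    fsync(fd);                           // data on disk before it becomes visible
    close(fd);
    rename("lockfile.tmp", "lockfile");  // atomic replace
    int dirfd = open(".", O_RDONLY);
    fsync(dirfd);                        // persist the directory entry itself
    close(dirfd);
}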

22 Eargesplitten
Oct 10, 2010



I'm starting work on a big mod for a game. The game takes about an hour to compile each time. Is there any sort of testing framework that could work with a game engine, so I can minimize the amount of time I'm compiling? It's the XRay engine, so there aren't any tools built specifically for it like there probably are for Unreal engine.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe

22 Eargesplitten posted:

I'm starting work on a big mod for a game. The game takes about an hour to compile each time. Is there any sort of testing framework that could work with a game engine, so I can minimize the amount of time I'm compiling? It's the XRay engine, so there aren't any tools built specifically for it like there probably are for Unreal engine.

What kind of build system are you using? It sounds like you're compiling the entire engine every single time, which is completely unnecessary. I know Make will check modification dates on source files and only compile those it doesn't have up-to-date .o files for (if you don't delete intermediate files after compiling). I use CMake and a GNU toolchain via CLion (using MinGW on Windows), and it only compiles the source files that have been modified since the last compilation (unless I modify compiler/linker settings), so maybe give that a shot.

If that's not an option, I guess you can compile the engine to a dynamically linked library and link your own executable against that?

Joda fucked around with this message at 07:16 on May 8, 2016

b0lt
Apr 29, 2005

22 Eargesplitten posted:

I'm starting work on a big mod for a game. The game takes about an hour to compile each time. Is there any sort of testing framework that could work with a game engine, so I can minimize the amount of time I'm compiling? It's the XRay engine, so there aren't any tools built specifically for it like there probably are for Unreal engine.

Are you using LTO?

22 Eargesplitten
Oct 10, 2010



I've just been compiling everything in Visual Studio, and I'm pretty sure the designer who was also doing all the programming before I came on was doing the same thing. We're not using any sort of LTO. All three people doing programming are amateurs. I've got an AS, and that's the most formal education anyone on the team has. I'll look into CMake. If I can make that work it seems like that should help a lot. I'm pretty sure at this point it checks to see if projects are up to date, but if anything has changed in the project, it compiles the whole thing. I actually misspoke, it's the biggest project that takes an hour to compile by itself.

Would CMake have to compile the whole changed project, or can it pull out the changed files and compile them into the otherwise compiled project by themselves? If it does, that would cut compile time down to a small fraction of what it currently is.

b0lt
Apr 29, 2005

22 Eargesplitten posted:

I've just been compiling everything in Visual Studio, and I'm pretty sure the designer who was also doing all the programming before I came on was doing the same thing. We're not using any sort of LTO. All three people doing programming are amateurs. I've got an AS, and that's the most formal education anyone on the team has. I'll look into CMake. If I can make that work it seems like that should help a lot. I'm pretty sure at this point it checks to see if projects are up to date, but if anything has changed in the project, it compiles the whole thing. I actually misspoke, it's the biggest project that takes an hour to compile by itself.

Would CMake have to compile the whole changed project, or can it pull out the changed files and compile them into the otherwise compiled project by themselves? If it does, that would cut compile time down to a small fraction of what it currently is.

CMake likely won't help you at all; msbuild should already have this functionality.

Is it the compiling step or the linking step that's slow? If compiling itself is slow, you can make compiling faster, and compile less stuff. Look at what gets recompiled and see if you have unnecessary header dependencies. If they're not unnecessary, you can use PImpl for classes that change often, but this comes at a significant cost (more pointer chasing, and an extra memory allocation/deallocation on construction/destruction). You could also try PCH if the headers are used everywhere (although, I don't know how much this actually buys you nowadays).
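For anyone unfamiliar, PImpl in a nutshell (a sketch; the win is that widget.h stops dragging implementation headers into every includer, so editing the internals only recompiles one .cpp):

code:
// widget.h: stable interface, no implementation headers
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();             // defined in the .cpp, where Impl is complete
    void frob();
private:
    struct Impl;           // forward declaration only; layout stays hidden
    std::unique_ptr<Impl> pimpl;
};

// widget.cpp: the only file that recompiles when the internals change
struct Widget::Impl { int state = 0; /* heavy #includes can live here */ };
Widget::Widget() : pimpl(new Impl) {}
Widget::~Widget() = default;
void Widget::frob() { ++pimpl->state; }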

If linking is slow, you're pretty screwed, but you can try to make things a little better by finding classes that don't need generated functions (default/copy/move constructors, copy/move assignment operator), and sprinkling = delete all over them. There might be MSVC specific things you can do, but I don't really know anything about it, so dehumanize yourself and face to C++.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe

22 Eargesplitten posted:

I've just been compiling everything in Visual Studio, and I'm pretty sure the designer who was also doing all the programming before I came on was doing the same thing. We're not using any sort of LTO. All three people doing programming are amateurs. I've got an AS, and that's the most formal education anyone on the team has. I'll look into CMake. If I can make that work it seems like that should help a lot. I'm pretty sure at this point it checks to see if projects are up to date, but if anything has changed in the project, it compiles the whole thing. I actually misspoke, it's the biggest project that takes an hour to compile by itself.

Would CMake have to compile the whole changed project, or can it pull out the changed files and compile them into the otherwise compiled project by themselves? If it does, that would cut compile time down to a small fraction of what it currently is.

To clarify: CMake's purpose is to be a cross-platform system for creating build files for a variety of different toolchains. Build files, in turn, are what tell your build system (VS, Make, whatever) what steps to take to compile your project. The part of my toolchain that ensures only newly modified files are compiled is Make, which is part of the standard GNU toolchain (that is to say, you need MinGW/Cygwin on Windows).

I'm not that familiar with VS, but I'd be surprised if it doesn't offer a similar feature to Make in that regard?

MrMoo
Sep 14, 2000

There are replacements for msbuild, such as Ninja, which integrate with CMake. Usually just an SSD and disabling AV are sufficient for a speedy build, but the state of the art comes into play at 1 GB+ link-time objects.

Building something chunky like Boost on Windows is easier with a RAID array. I think a lot of it is I/O, since I build three sets of compilers in parallel and it barely changes the build time of one run.

MrMoo fucked around with this message at 20:08 on May 8, 2016

22 Eargesplitten
Oct 10, 2010



b0lt posted:

Is it the compiling step or the linking step that's slow? If compiling itself is slow, you can make compiling faster, and compile less stuff. Look at what gets recompiled and see if you have unnecessary header dependencies. If they're not unnecessary, you can use PImpl for classes that change often, but this comes at a significant cost (more pointer chasing, and an extra memory allocation/deallocation on construction/destruction). You could also try PCH if the headers are used everywhere (although, I don't know how much this actually buys you nowadays).

If linking is slow, you're pretty screwed, but you can try to make things a little better by finding classes that don't need generated functions (default/copy/move constructors, copy/move assignment operator), and sprinkling = delete all over them. There might be MSVC specific things you can do, but I don't really know anything about it, so dehumanize yourself and face to C++.

I'm not sure whether it's compiling or linking that's slow. How would I tell from the console output? The folder for the project that takes longest to compile contains 2,943 files, between .cpp files, .h files, and assorted other stuff.

I searched the main folder for the whole mod for .pch files and .h files. There are 27 .pch files and roughly 24-27,000 .h files, so I think we might be able to improve that.

Visual Studio does check what's already been compiled and doesn't recompile it, I think it takes maybe 5-10 seconds to recognize that, but I haven't timed it. There's I think 24 or 25 projects in the solution.

Does disabling antivirus make a big difference? I haven't done that.

MrMoo
Sep 14, 2000

22 Eargesplitten posted:

Does disabling antivirus make a big difference? I haven't done that.

Gigantic difference, unless you have a tonne of memory and an SSD already; compiling with intermediate files is like the perfect worst-case workload for AV. Every file the compile touches has to be rehashed, and if you pull in a lot of files you will overflow any file cache in the scanner. Every object gets serialized to disk and re-scanned on write and read, multiple times in a single build.

I see people in my office with tiny projects taking 30-minutes on builds due to AV, nothing like using it together with a 5,400 rpm spinny disk.

MrMoo fucked around with this message at 20:21 on May 8, 2016

22 Eargesplitten
Oct 10, 2010



Okay. I've got 8 GB and an SSD, so I'll disable AV and tell the other programmers about it too.

Xerophyte
Mar 17, 2008

This space intentionally left blank

22 Eargesplitten posted:

I'm not sure whether it's compiling or linking that's slow. How would I tell from the console output? The folder for the project that takes longest to compile contains 2,943 files, between .cpp files, .h files, and assorted other stuff.

I searched the main folder for the whole mod for .pch files and .h files. There are 27 .pch files and roughly 24-27,000 .h files, so I think we might be able to improve that.

Visual Studio does check what's already been compiled and doesn't recompile it, I think it takes maybe 5-10 seconds to recognize that, but I haven't timed it. There's I think 24 or 25 projects in the solution.

Does disabling antivirus make a big difference? I haven't done that.

First, multi-hour compilation times the first time you build a very large piece of software aren't uncommon. Some of our older and cruftier products take me most of a day to check out and compile on my 20-core at work. This sounds less bad, but a few thousand files for a single project out of several is still pretty large, so a few hours for the initial compile on a single-socket machine doesn't really surprise me. Welcome to C++!

Recompilation should, of course, not have that problem; that it does here indicates something is poorly set up. Compiler and linker settings you should look at include:

Compiler (Configuration Properties -> C/C++):
Multi-processor compilation: /MP enables multi-threaded compilation which is huge on modern machines. Always have this on.
Whole Program Optimization: /GL produces (slightly) better code by doing optimization passes at link-time. Makes linking exceptionally slow since it forces /LTCG and disables incremental linking. Leave off for all but your überoptimized release builds.
Enable Minimal Rebuild: /Gm tries to only recompile a file that includes an altered header if the header changes actually impact the including file. Can speed up compilation if you've got overly complex multi-inclusion. Can also bug out horribly in some specific situations. Not compatible with precompiled headers.
Debug Information Format: Not including debug info is faster, with the obvious drawback that you can't debug. No clue on performance differences between the debug info types.
Runtime Library: Using the DLL versions of the common runtime (/MD or /MDd for debug) builds faster than using the non-DLLs.

Linker (Configuration Properties -> Linker):
Enable Incremental Linking: Only link in changes from changed objects. A lot faster, but pads the binary so don't enable for final release builds. Implied by /DEBUG.
Generate Debug Info: Again, no debugging info is faster.
Link-time code generation: /LTCG runs an optimization pass at link time. Required if any dependency is compiled with /GL. Disables incremental linking*. Like /GL, should be off for all but überoptimized release builds.

A pretty common error seems to be someone enabling /GL for one little static library somewhere, which then forces /LTCG and disables incremental linking for every binary that links it.

Of course, this being C++, it's also quite possible to shoot yourself in the foot and set up enough header interdependencies and static libraries that compiling always takes forever, in which case you might want to look into refactoring the code to use pimpl and more dynamic libraries. Just pluck the low-hanging fruit first.

* Apparently VS2015 does have a "/LTCG:Incremental" setting for incremental link time code generation which I haven't used but might try out tomorrow since that sounds quite nice.

22 Eargesplitten
Oct 10, 2010



Xerophyte posted:

First, multi-hour compilation times the first time you build a very large piece of software aren't uncommon. Some of our older and cruftier products take me most of a day to check out and compile on my 20-core at work. This sounds less bad, but a few thousand files for a single project out of several is still pretty large, so a few hours for the initial compile on a single-socket machine doesn't really surprise me. Welcome to C++!

Recompilation should, of course, not have that problem; that it does here indicates something is poorly set up. Compiler and linker settings you should look at include:

Compiler (Configuration Properties -> C/C++):
Multi-processor compilation: /MP enables multi-threaded compilation which is huge on modern machines. Always have this on.
Whole Program Optimization: /GL produces (slightly) better code by doing optimization passes at link-time. Makes linking exceptionally slow since it forces /LTCG and disables incremental linking. Leave off for all but your überoptimized release builds.
Enable Minimal Rebuild: /Gm tries to only recompile a file that includes an altered header if the header changes actually impact the including file. Can speed up compilation if you've got overly complex multi-inclusion. Can also bug out horribly in some specific situations. Not compatible with precompiled headers.
Debug Information Format: Not including debug info is faster, with the obvious drawback that you can't debug. No clue on performance differences between the debug info types.
Runtime Library: Using the DLL versions of the common runtime (/MD or /MDd for debug) builds faster than using the non-DLLs.

Linker (Configuration Properties -> Linker):
Enable Incremental Linking: Only link in changes from changed objects. A lot faster, but pads the binary so don't enable for final release builds. Implied by /DEBUG.
Generate Debug Info: Again, no debugging info is faster.
Link-time code generation: /LTCG runs an optimization pass at link time. Required if any dependency is compiled with /GL. Disables incremental linking*. Like /GL, should be off for all but überoptimized release builds.

A pretty common error seems to be someone enabling /GL for one little static library somewhere, which then forces /LTCG and disables incremental linking for every binary that links it.

Of course, this being C++, it's also quite possible to shoot yourself in the foot and set up enough header interdependencies and static libraries that compiling always takes forever, in which case you might want to look into refactoring the code to use pimpl and more dynamic libraries. Just pluck the low-hanging fruit first.

* Apparently VS2015 does have a "/LTCG:Incremental" setting for incremental link time code generation which I haven't used but might try out tomorrow since that sounds quite nice.

I'm just getting back to this and I've set most of these options, I think. When I try to build, I'm getting an error saying error : Element <RuntimeLibrary> has an invalid value of "MD". I've also tried (/MD) and /MD, same error every time. What am I supposed to put in there?

I'm also considering generating and destroying a thread every time I need to load some assets while in-game. How resource-intensive is that? I'm just thinking that way it won't disrupt the normal function of everything else, because the way it's designed it brings everything to a complete standstill until it loads.

I decided to try building without the runtime library change. It was running at 100% utilization for a while, but now it's dropped to under 50%, which I assume means it went back to using one core. Is that normal, or a problem with how I set it up?

The lead on this project just built with the /MP setting on. The project that was taking an hour to build took just 18 minutes on his 2500K.

22 Eargesplitten fucked around with this message at 20:00 on May 10, 2016

nielsm
Jun 1, 2009



During a build, it should print to the Output window the names of all the files and projects it compiles.
Compare the number of files recompiled to how many you actually changed.
If many more files get rebuilt than were modified, figure out why those files depend on what you modified, and whether that should really be necessary.

And as for the (hopefully) obvious: don't "Rebuild" or "Clean" all the time; make sure your project can work with a partial rebuild using just the regular "Build".

nielsm fucked around with this message at 07:04 on May 11, 2016

Xerophyte
Mar 17, 2008

This space intentionally left blank

22 Eargesplitten posted:

I'm just getting back to this and I've set most of these options, I think. When I try to build, I'm getting an error saying error : Element <RuntimeLibrary> has an invalid value of "MD". I've also tried (/MD) and /MD, same error every time. What am I supposed to put in there?

I'm also considering generating and destroying a thread every time I need to load some assets while in-game. How resource-intensive is that? I'm just thinking that way it won't disrupt the normal function of everything else, because the way it's designed it brings everything to a complete standstill until it loads.

I decided to try building without the runtime library change. It was running at 100% utilization for a while, but now it's dropped to under 50%, which I assume means it went back to using one core. Is that normal, or a problem with how I set it up?

The lead on this project just built with the /MP setting on. The project that was taking an hour to build took just 18 minutes on his 2500K.

/MD is the command-line flag if you're entering those manually (you probably aren't if you're just opening a .sln directly instead of using CMake or the like). In Visual Studio there's an entry box for it in the properties for each project, under Configuration Properties -> C/C++ -> Code Generation -> Runtime Library.

The CPU overhead of creating and destroying a thread is almost certainly insignificant versus the overhead of reading even the smallest file from disk or the network. Not entirely sure what you mean by resource-intensive; additional memory usage depends on how much temporary memory is needed for the load, execution time overhead is negligible. Actual parallel file loading (as in running N threads loading N files simultaneously) can be useful in some cases -- it depends on the drive, the data access pattern and on how much processing is done per chunk.

The CPU drop depends on the project structure but in general you can't parallelize linking that well. You need to wait for all the object files before the link can start and you might need to wait for the link to end to build the rest of the code. Expect that the CPUs are pegged during compilation and that usage drops during linking.

18 minutes for a clean rebuild of a tens-of-thousand-files project on a single 4-core is good in my book. 18 minutes for an incremental build after changing one line would drive me to murder.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
Creating a new kernel-scheduled thread is actually a really expensive operation on almost every established operating system, but IIRC that's especially true on Windows. This is why people tend to recommend offloading work onto a thread pool instead of creating a new thread. There are definitely languages/environments with really cheap threads, but they use M-to-N threading, and they overwhelmingly tend to not be C environments.

In contrast, reading a small file that's in cache is basically just the cost of copying the bytes.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
VC++'s implementation of std::async uses a thread pool, so just use that rather than explicitly creating a thread.
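Something like this, say (a sketch; AssetBundle and LoadNpcAssets are stand-ins for whatever the engine actually has):

code:
#include <chrono>
#include <future>
#include <string>

struct AssetBundle { /* ... */ };                                // stand-in asset type
AssetBundle LoadNpcAssets(const std::string&) { return {}; }     // stand-in loader

void frame_update() {
    // Kick the load off in the background; VC++ services this from a thread pool.
    static std::future<AssetBundle> pending =
        std::async(std::launch::async, LoadNpcAssets, std::string("npc_17"));

    // Poll without blocking each frame; the game keeps running meanwhile.
    if (pending.valid() &&
        pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
        AssetBundle assets = pending.get();  // hand the result to the main thread
        (void)assets;                        // ...use it
    }
}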

22 Eargesplitten
Oct 10, 2010



rjmccall posted:

Creating a new kernel-scheduled thread is actually a really expensive operation on almost every established operating system, but IIRC that's especially true on Windows. This is why people tend to recommend offloading work onto a thread pool instead of creating a new thread. There are definitely languages/environments with really cheap threads, but they use M-to-N threading, and they overwhelmingly tend to not be C environments.

In contrast, reading a small file that's in cache is basically just the cost of copying the bytes.

Does opening up a new thread, giving it orders directly, and then closing it count as a kernel-scheduled thread? I can look into std::async. I'm probably thinking of the worst ways to do this stuff. I guess I'm just not sure because Xerophyte said creating and destroying a thread would be insignificant and you said it would be really expensive, so I'm not sure if there's a distinction between how the two of you are saying to do it.

E: It just occurred to me I might as well say what I'm doing and see what the best way to do it is. In this game, the way they load new NPCs is stopping everything else until it loads. What I want to do is put it on one thread and let everything else run on the other thread while it's loading. For best performance, should I be making a new thread, throwing all of the loading on that, and then destroying the thread afterward, or should I put it on one of the two existing threads and then telling everything else to route to the other thread? We could try adding a third thread, but when the lead did that it caused everything to run super fast, so we're putting that on the back burner unless we really need it. I guess if I did make the new one we would need to handle that now so it wouldn't go in fast forward while loading for a second or two.

22 Eargesplitten fucked around with this message at 19:31 on May 11, 2016

Xerophyte
Mar 17, 2008

This space intentionally left blank

22 Eargesplitten posted:

Does opening up a new thread, giving it orders directly, and then closing it count as a kernel-scheduled thread? I can look into std::async. I'm probably thinking of the worst ways to do this stuff. I guess I'm just not sure because Xerophyte said creating and destroying a thread would be insignificant and you said it would be really expensive, so I'm not sure if there's a distinction between how the two of you are saying to do it.

It's probably safe to assume that RJ knows what he's talking about and I'm full of poo poo here (and in general). Thread creation on Windows is apparently on the order of several ms, so no, don't create one for very short tasks.

22 Eargesplitten
Oct 10, 2010



Thanks. I'll try putting it on an existing thread. I've gotten mixed messages elsewhere: what do I need to do to tell the program to load on one thread and run everything else on the other? One thing on cplusplus.com said that distribution across cores is handled by the scheduler, but someone else said that I need to write code to tell it where to run stuff. The multi-core implementation by itself hasn't reduced the stuttering at all, either. Do I need to specify that all of the loading should only run on one thread, and let it automatically arrange everything else?

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
If you're loading a whole bunch of stuff, a thread is probably fine. Just don't do it casually for everything. But it's even better to use a thread pool mechanism like std::async if one exists; threads are a global resource that you don't necessarily want every last subsystem to be making its own private decisions about.

I would guess that resource loading for games is something that there's a base of good established knowledge about, but I'm not the person with that knowledge, sorry.

rjmccall fucked around with this message at 06:09 on May 12, 2016

Doc Block
Apr 15, 2003
Fun Shoe
It's probably pretty common for open world games and such to keep one or more background threads spun up that just handle loading data and streaming it to the GPU or whatever.

22 Eargesplitten
Oct 10, 2010



Okay. Looking at a presentation and document about how they made Dungeon Siege, that looks to be what they did, although back when that came out they only had two threads to work with. I've been looking for more, but right now that's all I've found. My current thought is to have two threads in a pool and one thread independent that's dedicated to loading and removing entities. This mod is for Stalker, which isn't technically an open world game, but has zones that take several minutes to run across at a full sprint. If I used std::async, would I still have to do a lot of locking to keep the processes being run from the pool from data racing? I'm thinking I would keep all of the stuff loading and being removed locked, but I'm not sure if the async function does that for you.

I hope it's okay to keep posting questions about stuff that will be used in a game here rather than the game development thread. I feel like questions like these are more specifically about the workings of C++, while most of the stuff in the gamedev thread is about how the game editors work.


leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

22 Eargesplitten posted:

Okay. Looking at a presentation and document about how they made Dungeon Siege, that looks to be what they did, although back when that came out they only had two threads to work with. I've been looking for more, but right now that's all I've found. My current thought is to have two threads in a pool and one thread independent that's dedicated to loading and removing entities. This mod is for Stalker, which isn't technically an open world game, but has zones that take several minutes to run across at a full sprint. If I used std::async, would I still have to do a lot of locking to keep the processes being run from the pool from data racing? I'm thinking I would keep all of the stuff loading and being removed locked, but I'm not sure if the async function does that for you.

I hope it's okay to keep posting questions about stuff that will be used in a game here rather than the game development thread. I feel like questions like these are more specifically about the workings of C++, while most of the stuff in the gamedev thread is about how the game editors work.

You should lock or otherwise prevent data races, yes. You have no guarantees that the scheduler won't do exactly what you don't want it to do. From a practical standpoint though, if you know there will be a few minutes between when writing the data should end and reading it back should begin, the likelihood of running into an error is low. But when the solution is so simple, it's hard to see why you wouldn't want to be sure.
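A minimal sketch of that locking, with hypothetical names for the shared state:

code:
#include <cstddef>
#include <mutex>
#include <string>
#include <vector>

std::vector<std::string> g_loaded_entities;  // shared by loader task and game loop
std::mutex g_entities_mutex;                 // guards every access to the vector

// Called from the background loading task:
void add_entity(const std::string& name) {
    std::lock_guard<std::mutex> lock(g_entities_mutex);
    g_loaded_entities.push_back(name);
}

// Called from the game loop:
std::size_t entity_count() {
    std::lock_guard<std::mutex> lock(g_entities_mutex);
    return g_loaded_entities.size();
}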
