|
Windows does the same with .lib files; enable LTO in the linker, just as with MSVC, to clean things up afterwards.
|
# ? Nov 12, 2015 02:11 |
|
This is for an assignment in C, so please don't just give me the answer; I have just exhausted my patience for the evening trying to figure this out. What I'm finding is that my array and its contents are formatted correctly ('asdf', '\00000'), but I get a segfault in my qsort call that traces back to the cmp function. All I can surmise is that I am somehow passing qsort a null pointer, or that my strings are formatted incorrectly, which doesn't seem to be the case to me. The wordlist struct is used to hold an array of code:
code:
playground tough fucked around with this message at 08:52 on Nov 12, 2015 |
# ? Nov 12, 2015 08:50 |
|
OneEightHundred posted:Speaking of build stuff, why is it that the Unix-like solution to static libraries is still to throw all of the .o files into an archive instead of doing something like running it through the linker to generate a new ELF file and nuke all of the ODR redundancies? The fundamental assumption of a static library is that you want to bring in exactly those object files that you actually need. There's no such thing as a redundancy if it's unknown what symbols, and hence object files, you'll actually want. At best you could drop entire object files as completely redundant. The linker has no concept of the ODR. Most linkers provide some way to do an "incremental" link, i.e. combining a bunch of object files into a bigger object file. This is much more analogous to what happens with a shared library.
|
# ? Nov 12, 2015 09:10 |
|
rjmccall posted:There's no such thing as a redundancy if it's unknown what symbols, and hence object files, you'll actually want.
|
# ? Nov 12, 2015 09:46 |
|
Is there any (preferably sane) way to have Windows and Linux computers work together using OpenMPI? I am using MSMPI on Windows.
|
# ? Nov 12, 2015 16:02 |
|
OneEightHundred posted:There's redundancy from COMDATs and strings. I guess a better question would be, what's the advantage to shared libs being archives instead of ld -r output or similar? One obvious one is that it's the status quo.
|
# ? Nov 12, 2015 16:42 |
|
OneEightHundred posted:There's redundancy from COMDATs and strings. It's not redundancy because you're not necessarily linking in all the object files. If you are, I agree that you would be better off using incremental linking, and then just dropping that file inside an archive. But the whole point of static libraries is that there are significant code-size and build-time advantages to structuring your library so that statically linking against it automatically omits functionality you're not using. OneEightHundred posted:I guess a better question would be, what's the advantage to shared libs being archives instead of ld -r output or similar? Shared libraries are much closer to ld -r output than archives, since inherently you bring in the entire library whenever you use it.
|
# ? Nov 12, 2015 18:36 |
|
rjmccall posted:Shared libraries are much closer to ld -r output than archives, since inherently you bring in the entire library whenever you use it.
|
# ? Nov 13, 2015 01:50 |
|
OneEightHundred posted:I fat-fingered that, what I meant to ask was what the advantage was in static libraries being archives rather than ld -r output. rjmccall posted:It's not redundancy because you're not necessarily linking in all the object files. If you are, I agree that you would be better off using incremental linking, and then just dropping that file inside an archive. But the whole point of static libraries is that there are significant code-size and build-time advantages to structuring your library so that statically linking against it automatically omits functionality you're not using.
|
# ? Nov 13, 2015 01:57 |
|
I don't really follow why that's the case though. The individual object files are going to contain unused functionality too. What's the difference between omitting unused parts of object files that are linked and omitting entire files? Why do you get faster links or smaller code from an archive of object files than a single object that contains the same information but without the COMDAT/string duplications?
|
# ? Nov 13, 2015 02:18 |
|
For starters, the unused parts of object files aren't omitted.
|
# ? Nov 13, 2015 03:35 |
|
OneEightHundred posted:I don't really follow why that's the case though. The individual object files are going to contain unused functionality too. What's the difference between omitting unused parts of object files that are linked and omitting entire files?
|
# ? Nov 13, 2015 04:11 |
|
In addition to all that, while some linkers perform dead-stripping of the final linked image, that does have its limits. For example, once they're included in the image, static constructors and destructors cannot be removed and, hence, act as roots that pin anything they use. Often, such constructors are only needed to set up some specific feature or make it more efficient. In a properly-designed static library, the constructor will be in an object file that's only linked in if the feature is used.
|
# ? Nov 13, 2015 05:17 |
|
So I have a problem with building a CUDA project. I'm trying to parallelize a physics engine without having to massively rewrite my code, so I want to add directives to define my functions as #define CUDA_CALLABLE_MEMBER __host__ __device__ in order to reduce the amount of code duplication. Basically I want my .cu files to call methods from my .h headers. For example: test.h code:
code:
code:
code:
quote:Error 42 error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\bin\nvcc.exe" -gencode=arch=compute_20,code=\"sm_20,compute_20\" --use-local-env --cl-version 2013 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\x86_amd64" -I\Dependencies\ -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\include" -G --keep-dir x64\Debug -maxrregcount=0 --machine 64 --compile -cudart static -g -DWIN32 -DWIN64 -D_DEBUG -D_CONSOLE -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /Zi /RTC1 /MDd " -o x64\Debug\test.cu.obj "H:\Projects\Concordia\COMP 426 - Multicore Programming\bPhysicsEngine2D\bPhysicsEngine2D\bPhysicsEngine2D\test.cu"" exited with code 255. C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\BuildCustomizations\CUDA 7.5.targets 604 9 bPhysicsEngine2D Googling shows my problem seems to be different from all others. If I comment out that line it compiles fine. If I rewrite it to not bother with .h/.cpp and just use .cuh/.cu then it also compiles and works. But I would very much like to use regular C++ h/cpp files for time saving reasons. Any help would be appreciated.
|
# ? Nov 16, 2015 05:48 |
|
Raenir Salazar posted:So I have a problem with building a CUDA project. You should post to https://devtalk.nvidia.com/ if you haven't already.
|
# ? Nov 16, 2015 12:30 |
|
I crossposted to reddit, but I have learned something new: if boo() is defined inline in my .h file as CUDA_CALLABLE_MEMBER void boo() { }; then it compiles. But defining it 'properly' as CUDA_CALLABLE_MEMBER void helloWorld::boo() { } gives me the same error. e: Posted to the nvidia forums now too. Raenir Salazar fucked around with this message at 14:00 on Nov 16, 2015 |
# ? Nov 16, 2015 13:45 |
|
Raenir Salazar posted:I crossposted to reddit, but I have learned something new though: Oh OK, I think I see what's going on. My CUDA is rusty so I may be wrong on this, but I think the issue here may be that CUDA won't compile/link the CPP objects (which makes sense, since CL should be handling the CPP files, and the CUDA compiler builds the CU ones). When you inline in the header like that, it's getting pulled into both, so you're good. I'm not sure if there's a convenient way around that, but I think if you want to be calling those functions from device code, they need to have implementations on the CUDA side of things as well.
|
# ? Nov 16, 2015 15:01 |
|
According to reddit:quote:OK, so here's your problem. There's a compiler switch "-x cu" that tells the compiler to treat everything as a CUDA file. If you don't use that switch, nvcc uses the file extension to determine whether to compile as CUDA or standard c/c++. Hopefully this works! I'll try it out when I'm home after work.
|
# ? Nov 16, 2015 15:29 |
|
Raenir Salazar posted:According to reddit: Ahh there you go -- I thought there was some mechanism in place for doing that sort of thing (since cross-compiling existing codebases is a big thing in CUDA). What Reddit did you post that to, btw? I'm kind of uninformed about a lot of the GPGPU communities out there.
|
# ? Nov 16, 2015 15:42 |
|
To r/CUDA/, but now, strangely, the individual just posted that he got it to work, though in a way that doesn't seem to match up with what I wanted:quote://test.cu Here he has boo() implemented in the .cu file, when I'd really like to keep it in a .cpp file, as I don't want my project to be permanently a CUDA project but just temporarily CUDA until I have to do it in OpenCL (as it's a school multicore programming project). I hope he meant "working" without having to use -x cu.
|
# ? Nov 16, 2015 17:14 |
|
Raenir Salazar posted:To r/CUDA/ but now strangely the individual just posted that he got it to work but in a way that doesn't seem to match up with what I wanted: Worse comes to worst, you could always make a dummy .CU file and #include your CPP code there
|
# ? Nov 17, 2015 19:23 |
|
Hubis posted:Worse comes to worst, you could always make a dummy .CU file and #include your CPP code there I wish I thought of this sooner.
|
# ? Nov 17, 2015 20:01 |
|
And sadly I ran out of time and need to shift over to working on the next assignment (implementing the engine in OpenCL). Problems I encountered while trying to convert to CUDA: 1. I had this nested for loop for testing for collisions; I had thought of a couple of ways of potentially untangling it to implement in a kernel, such as taking an array of all possible collisions and then returning an array of only the valid collisions, but returning an array of an unknown size isn't something I had experimented with yet. 2. A *lot* of my code used std::vector, which is host-only code and can't be called from device code; I didn't discover this until it was too late to write my own thread-safe push_back function for an array. 3. I use a weird jump-table trick to call the appropriate collision function, but it has to be static, which CUDA doesn't like, so I needed to rewrite that too. It's really frustrating working full time and trying to study at the same time. With one more "day" of work I probably could have gotten it. Hopefully OpenCL presents me with a few less headaches.
|
# ? Nov 18, 2015 17:34 |
|
Coming back to C++ after many years working in C#, and trying to get back on my feet. I really like C#'s try idiom, i.e.:code:
code:
Chuu fucked around with this message at 02:35 on Nov 20, 2015 |
# ? Nov 20, 2015 02:31 |
|
Instead of returning a bool and taking a reference argument, return a std::optional<int> or whatever.
|
# ? Nov 20, 2015 02:37 |
|
If you like C# idioms then you're going to want to strangle someone when you find out what happens when you try to read a value from an STL map by indexing it with a non-existent key. Anyway the idiomatic way of doing "try" equivalents depends on what's doing it. For containers, the idiomatic way is generally to have a container.find(whatever) that returns an iterator which compares equal to container.end() if the element doesn't exist, and is otherwise safe to dereference. For streams, the stream sets badbit or failbit on the stream object if an IO or formatting operation fails. Of course, if you're doing your own implementation, then nothing is stopping you from just implementing methods using the "try" idiom. OneEightHundred fucked around with this message at 03:09 on Nov 20, 2015 |
# ? Nov 20, 2015 02:56 |
|
Ralith posted:Instead of returning a bool and taking a reference argument, return a std::optional<int> or whatever. Thanks, that's exactly what I was looking for.
|
# ? Nov 20, 2015 02:57 |
|
OneEightHundred posted:If you like C# idioms then you're going to want to strangle someone when you find out what happens when you try to read a value from an STL map by indexing it with a non-existent key. Good rule of thumb for standard C++ and major libraries in general: If something seems obviously missing, it's probably there under another name, and you should go check the documentation to find out what that name is.
|
# ? Nov 20, 2015 02:58 |
|
OneEightHundred posted:If you like C# idioms then you're going to want to strangle someone when you find out what happens when you try to read a value from an STL map by indexing it with a non-existent key. Funny enough, in C# I always use the TryGet(...) accessors because I don't want to have to worry about whether I'm going to get back null or an exception.
|
# ? Nov 20, 2015 03:00 |
|
Ralith posted:You want the 'at' method. Its functionality should have been implemented as another "insert" overload, if at all.
|
# ? Nov 20, 2015 04:06 |
|
Chuu posted:The obvious solution would be: A better solution is code:
(I don't know what you mean by public, why would that be a method at all?) Edit: Also, even for reasonably complex types, the default initialization is usually quite fine. Especially if you're willing to copy or move it by returning a std::optional. sarehu fucked around with this message at 06:04 on Nov 20, 2015 |
# ? Nov 20, 2015 05:42 |
|
nielsm posted:Your function prototypes for the thread main functions are wrong. A pthreads thread function also takes a parameter, that is pointer-sized, and contains data passed to the pthread_create call. You should really use the correct function prototype, avoid having to typecast the function pointer, and be able to have just one single "pirate" thread. That is, pass each thread as parameter how many pearls it can take, and what its name is. Vanadium posted:Do you really need both a mutex and a condition variable to wait on the "occupied" value? Doesn't the mutex being locked already model the condition that only one pirate (thread) can be in the cave (critical section)? I asked for help in here with an OS assignment involving threading. I don't think I ever said thanks for your help. I ended up slimming down my code to two simple functions, one for 15% and the other 10%. Anyways, I had someone ask me for help today printing out a singly linked list alphabetically without actually sorting the nodes in the list. His code had overloaded operators for comparing strings between the nodes. I couldn't for the life of me come up with an efficient way of doing this (IE - not using a temporary array or a bunch of pointers). Would anyone mind giving me a guiding hand here? I feel like this should be a fairly easy problem for me to solve, but I can't, and it's killing me. I couldn't come up with an implementation that made sense though. Here's roughly the idea I was going for in pseudocode. My biggest issue was figuring out how to delete the minimum node in the list. Another method I thought of was just changing the minimum value in the list to garbage (ie - "ZZZZ") so it's not found on another cycle. But changing data in the list would require creating a temporary list, which doesn't sound very efficient. code:
Diametunim fucked around with this message at 07:01 on Nov 20, 2015 |
# ? Nov 20, 2015 06:56 |
|
sarehu posted:Don't use non-const references. (Except where you're supposed to.) sarehu posted:Edit: Also, even for reasonable complex types, the default initialization is usually quite fine. Especially if you're willing to copy or move it by returning a std::optional.
|
# ? Nov 20, 2015 07:08 |
|
Ralith posted:There's nothing wrong with mutable references in appropriate, unambiguous places. Yeah, i.e. on the left side of a += operator. And for functions like a range-based for_each, where the mutability is passed through to a user-supplied lambda. And some other enumerated set of use cases. Not for general purpose use. Ralith posted:Default initialization can be very expensive while moving remains cheap How? I can imagine it being true for a contrived type, like a not-null shared pointer, but realistically, how? Edit: Also, the point is that if you're willing to copy or move it, it probably is fine to default construct anyway. It doesn't matter whether that particular piece of code is actually copying or moving. sarehu fucked around with this message at 07:49 on Nov 20, 2015 |
# ? Nov 20, 2015 07:42 |
|
sarehu posted:Not for general purpose use. sarehu posted:How? I can imagine it being true for a contrived type, like a not-null shared pointer, but realistically, how?
|
# ? Nov 20, 2015 07:55 |
|
The point is, your movable types are probably not allocating significant memory (or any) in the default constructor. The copyable ones probably aren't either. Look at a bunch of classes that are copyable and see if they allocate memory. Some might, and they are the exception -- usually such types should be noncopyable (and nonmovable).Ralith posted:Clearly defined output parameters are perfectly legitimate, although output parameters in general are best avoided if possible of course. Using a pointer is obviously better, because you have to make it look like an output parameter at the callsite. And why should output parameters be avoided? There is no reason to say that. Maybe you think that because you're used to output parameters that use non-const references.
|
# ? Nov 20, 2015 08:05 |
|
Diametunim posted://stuff Since I can't sleep, I came up with this inefficient O(n*n) abomination. It seems to work though: code:
code:
If min hasn't been assigned yet and the current value is greater than the current running minimum, OR min is greater than the current value and the current value is greater than the current running minimum, set min to the current value. Essentially it looks for the smallest value that's greater than the current running minimum (which is the greatest minimum found so far). But it's not efficient speed-wise. Simpler to just sort the drat thing using one of the many available sorting algorithms.
|
# ? Nov 20, 2015 08:13 |
|
Sorry, why is a pointer better than a non-const reference for things that can't be NULL?
|
# ? Nov 20, 2015 08:16 |
|
Hubis posted:Sorry, why is a pointer better than a non-const reference for things that can't be NULL? The argument is that being explicit about out parameters is beneficial for code readability. Code like if (function(&foo, bar, baz)) is a clear idiom for an operation that can fail, returning its result via a pointer passed as the first parameter. It's an ideological debate like whether vim is better than emacs. That is to say, there is one correct answer: vim, and using pointers for out parameters.
|
# ? Nov 20, 2015 08:29 |
|
Hubis posted:Sorry, why is a pointer better than a non-const reference for things that can't be NULL? For variables, the fact that they're unmodifiable.
|
# ? Nov 20, 2015 08:43 |