|
Paniolo posted:If I'm understanding the documentation correctly, creating a multithreaded device adds critical sections around nearly everything though, not just the resource creation functions. I'm not sure how much overhead this translates to in practice. Memory-mapped files are very nice for accessing really large files with a random access pattern. If you are reading the file sequentially, then a memory-mapped file probably will not yield significant gains. Pretty much what happens is that you allocate a huge array and map it to the file. The file is physically read from disk only when you access elements of that array, and only small portions of it are read. Thus mapping the file in the first place takes almost no time, and when your program wants to read a small chunk from your huge file that also takes little time. What is the access pattern in your program for your big files?
|
# ? Feb 26, 2010 18:51 |
|
|
Paniolo posted:In any case we're kind of getting away from my original question, which is whether or not there's fundamentally a performance difference between ReadFile, ReadFileEx, or memory mapped files. That's not really a question that has a specific answer (beyond just "yes"). They perform qualitatively differently, not (just) quantitatively differently.
|
# ? Feb 26, 2010 18:54 |
|
Bakkon posted:I'm in a parallel programming course (we use the MPI API if anyone's curious) and for our final project, we get to pick a topic of our choice and implement it using multiple cores. I'm just looking for suggestions on a potential algorithm that would be neat/fun/interesting to look into, preferably something processor-intensive that would show a noticeable difference when run in parallel.
|
# ? Feb 26, 2010 19:20 |
|
Has anyone spent any time using boost::asio? I've been trying to use it to add networking to an existing program. I was wading through it fine until I found that async_read(...) would block at io_service.run(). Are my choices to a) spin the blocking io_service.run() off into a thread, or b) trash my program's main loop and register a timeout every 5ms or so with the io_service? I would much prefer a version of io_service.run() that only blocked while it was actually calling my completion callbacks, and returned if a read/write was still in progress.
|
# ? Feb 26, 2010 22:23 |
|
Is that not what io_service's poll does?
|
# ? Feb 27, 2010 00:23 |
|
Can someone demystify the jmp instruction for me? Writing jmp 00400278 in the debugger becomes E9 73 02 5E C2. E9 is the jump, but I don't get how it comes up with the other bytes. What trickery am I missing here?
|
# ? Feb 27, 2010 02:05 |
|
slovach posted:Can someone demystify the jmp instruction for me? E9 is some sort of relative jump, so presumably that's the difference between the current location of the program counter and the address you specified. Does that help?
|
# ? Feb 27, 2010 02:13 |
|
ShoulderDaemon posted:E9 is some sort of relative jump, so presumably that's the difference between the current location of the program counter and the address you specified. Does that help? Yes, thanks
|
# ? Feb 27, 2010 02:29 |
|
I'm writing a "pin tool" (think plugin) for Pin for a homework assignment. It tracks malloc/free of the program being instrumented. So when I call malloc asking for say 40 bytes, the next call to malloc will be 40 + 8 bytes away from the first. I know that extra 8 bytes of space is used to keep track of the space allocated and some other stuff so free can work. What I don't know is how portable the 8 bytes are. I'm currently testing on 64 bit Linux, which is using 8 bytes, but I need to turn this program in on a machine running 32 bit Linux. Clearly the 8 bytes is not 100% portable, but I only care about 32/64 bit Linux with glibc. Does anyone know if I can get that 8 byte number by some means (preferably compile-time if possible)? edit: I'm partially retarded and confused. I should have read the glibc docs closer. It rounds to an 8 or 16 byte boundary (32/64 resp), but that doesn't answer how much extra space malloc uses to keep track of the allocated block. edit2: After some experimentation it seems to be nothing more than an extra 32/64 bits on 32/64 bit systems resp (when using glibc at least) 6174 fucked around with this message at 20:25 on Feb 27, 2010 |
# ? Feb 27, 2010 18:26 |
|
This may be a really basic question, but what is the best way to create a linked-list or similar data structure that contains elements of possibly differing types? For example, say I have three classes, Vehicle, and derived classes Car, and Airplane. Then I want to create a list of all objects of those types that can be iterated through, sorted, etc. A standard linked list doesn't seem to be able to work since pointers to other elements have to be the right type--attempting to make "Vehicle* firstVehicle" point to element "Car sedan" will just piss off the compiler. Is there any alternative workaround to this problem? Sorry if this is an idiotic question--I'm new to the concept.
|
# ? Feb 28, 2010 00:23 |
|
Genpei Turtle posted:Vehicle, and derived classes Car[...]attempting to make "Vehicle* firstVehicle" point to element "Car sedan" will just piss off the compiler. Sounds like you are doing it wrong, can you post some code?
|
# ? Feb 28, 2010 00:48 |
|
Genpei Turtle posted:attempting to make "Vehicle* firstVehicle" point to element "Car sedan" will just piss off the compiler. If Car is derived from Vehicle you can have a pointer to a Vehicle point to a Car object. It shouldn't piss the compiler off at all. Can you give us more details? What types are you expecting in this list? Do you really mean to be able to point to all types? Is there something Car can do that Vehicle can't that you'd need to do?
|
# ? Feb 28, 2010 00:48 |
|
Genpei Turtle posted:This may be a really basic question, but what is the best way to create a linked-list or similar data structure that contains elements of possibly differing types? The easiest way is to allocate objects separately and make your data structures work over pointers to objects. There are other solutions, but they start at "somewhat constrained and complicated" and end up at "extremely complicated".
|
# ? Feb 28, 2010 00:52 |
|
rjmccall posted:The easiest way is to allocate objects separately and make your data structures work over pointers to objects. There are other solutions, but they start at "somewhat constrained and complicated" and end up at "extremely complicated". Really they end up at "use C++0x" (std::unique_ptr)
|
# ? Feb 28, 2010 00:54 |
|
Avenging Dentist posted:Really they end up at "use C++0x" (std::unique_ptr) Can somebody explain this + the other solutions rjmccall mentioned?
|
# ? Feb 28, 2010 01:11 |
|
RussianManiac posted:Can somebody explain this + the other solutions rjmccall mentioned? std::unique_ptr is a smart pointer, similar to std::auto_ptr but better in all ways, partly due to a better design but mostly due to new language features. If your objects are actually owned by a single container and you want to use standard data structures and separate allocation is acceptable, they're a good solution. This is basically still just a twist on "make your data structure store pointers", though. Trying to eliminate separate allocation of linked-list nodes introduces a substantial amount of complexity if you're not willing to modify the classes you're storing.
|
# ? Feb 28, 2010 01:22 |
|
DeciusMagnus posted:If Car is derived from Vehicle you can have a pointer to a Vehicle point to a Car object. It shouldn't piss the compiler off at all. Can you give us more details? What types are you expecting in this list? Do you really mean to be able to point to all types? Is there something Car can do that Vehicle can't that you'd need to do? OK, well here's the basic idea, cutting out a lot of the fat; I start out by declaring pointers for the list: Vehicle* startVehicle = 0; Vehicle* endVehicle = 0; Then to create a new item on the list, something like: code:
I can totally understand if I can't throw everything but the kitchen sink into a linked list, but I was hoping if I wanted to create a list of different objects all derived from the same parent class, it might work.
|
# ? Feb 28, 2010 01:29 |
|
rjmccall posted:std::unique_ptr is a smart pointer, similar to std::auto_ptr but better in all ways, partly due to a better design but mostly due to new language features. Actually, unique_ptr and shared_ptr are pretty drat close to the original C++ spec (scoped_ptr is pretty similar to unique_ptr, but is unsuitable for containers): Boost posted:Summer, 1994. Greg Colvin proposed to the C++ Standards Committee classes named auto_ptr and counted_ptr which were very similar to what we now call scoped_ptr and shared_ptr. In one of the very few cases where the Library Working Group's recommendations were not followed by the full committee, counted_ptr was rejected and surprising transfer-of-ownership semantics were added to auto_ptr. Avenging Dentist fucked around with this message at 01:31 on Feb 28, 2010 |
# ? Feb 28, 2010 01:29 |
|
Genpei Turtle posted:This by itself works fine. The problem occurs when I introduce Car *NewCar(). It doesn't like "startVehicle = new Car()," saying it can't do the conversion. Did you declare Car as class Car : Vehicle? Because private inheritance won't let you see the base class.
|
# ? Feb 28, 2010 01:30 |
|
Avenging Dentist posted:Did you declare Car as class Car : Vehicle? Because private inheritance won't let you see the base class. Car : public Vehicle actually. Should I have nixed the public keyword? Genpei Turtle fucked around with this message at 01:38 on Feb 28, 2010 |
# ? Feb 28, 2010 01:35 |
|
Genpei Turtle posted:Car : public Vehicle actually. Should I have nixed the public keyword? No. Can you give us the declaration to Vehicle and Car? If you have the following declared: code:
code:
code:
|
# ? Feb 28, 2010 03:07 |
|
DeciusMagnus posted:No. Can you give us the declaration to Vehicle and Car? If you have the following declared: Car, at the moment, is totally empty. It's literally: code:
code:
|
# ? Feb 28, 2010 03:42 |
|
Are you trying to do something like this?code:
code:
|
# ? Feb 28, 2010 03:50 |
|
DeciusMagnus posted:Are you trying to do something like this? Ah! That was exactly what I was doing wrong. Works like a charm now. Thanks!
|
# ? Feb 28, 2010 04:03 |
|
Erk, why not just "return new Car;" ?
|
# ? Feb 28, 2010 05:40 |
|
OddObserver posted:Erk, why not just "return new Car;" ? Because if you paid attention that's not what's happening.
|
# ? Feb 28, 2010 06:51 |
|
nm
Insurrectum fucked around with this message at 22:11 on Mar 3, 2010 |
# ? Mar 3, 2010 22:03 |
|
In a catch block, is there any way to see what my call stack looked like just before the throw and subsequent unwind short of saving off the stack in the thrown object in the first place? I highly suspect the answer is "no" but I just want to make sure. (edit) On another note, does anyone know of something equivalent to valgrind's memcheck tool that works for solaris SPARC systems?
|
# ? Mar 5, 2010 02:05 |
|
Ledneh posted:(edit) On another note, does anyone know of something equivalent to valgrind's memcheck tool that works for solaris SPARC systems? there's the debug heap library you can swap in, "watchmalloc"
|
# ? Mar 5, 2010 02:12 |
|
Ledneh posted:(edit) On another note, does anyone know of something equivalent to valgrind's memcheck tool that works for solaris SPARC systems? We use libumem. Seems to be a long way from what valgrind does, but I guess it depends on what you need.
|
# ? Mar 5, 2010 02:40 |
|
It would be Purify; commercial platforms mean commercial tools
|
# ? Mar 5, 2010 03:18 |
|
I have been doing some timings of my simple algorithm (it involves a matrix and two nested loops) and it seems that a problem of fixed size takes less time to solve if my implementing function has had a chance to run a couple of times beforehand. What I do is measure the time it took to execute a problem of some size and compute the average of those times over many iterations. During each iteration I generate a random input for the problem but keep its size fixed. I use gettimeofday to measure the execution time. What I notice is that if I execute the program once, I get timings that are about 10-20% slower than when I execute the algorithm for random problems, say, 1000 times. Is this because of the instruction cache as well as branch prediction and other speculative execution? I am pretty confident that I am doing my timings correctly, as I only measure the time it takes for a function call to the problem solver to complete. EDIT: Here is the processor I am using: code:
Crazy RRRussian fucked around with this message at 04:48 on Mar 5, 2010 |
# ? Mar 5, 2010 04:21 |
|
MrMoo posted:It would be Purify, commercial platforms means commercial tools Ugh. Well hell we have ClearQuest and ClearCase laying around, maybe we have that too. Annoyingly, dbx (the debugger) has a memory use/access checker built in, but it is broken because of some Oracle libraries we need, oddly enough. Plus being godawful slow. And libumem didn't pick up errors I'm trying to detect (primarily mismatched new/new[]/delete/delete[]--one of our old programmers missed the memo). Never heard of watchmalloc though, I'll look into that too. Thanks for the suggestions
|
# ? Mar 5, 2010 14:26 |
|
What's a good (simple) parser generator? The last time I did anything like this was with yacc in college a few years ago. I know I can google "parser generator c" but I'm looking for recommendations.
|
# ? Mar 5, 2010 21:35 |
|
Everyone just uses yacc (often in its bison incarnation) anyways
|
# ? Mar 5, 2010 21:37 |
|
I hear nice things about Lemon (used in SQLite)
|
# ? Mar 5, 2010 21:39 |
|
Ledneh posted:In a catch block, is there any way to see what my call stack looked like just before the throw and subsequent unwind short of saving off the stack in the thrown object in the first place? Using a debugger usually gets you this.
|
# ? Mar 5, 2010 21:50 |
|
GrumpyDoctor posted:What's a good (simple) parser generator? The last time I did anything like this was with yacc in college a few years ago. I know I can google "parser generator c" but I'm looking for recommendations. Boost.Spiri--- oh wait you said simple
|
# ? Mar 5, 2010 22:22 |
|
He also said C
|
# ? Mar 5, 2010 22:33 |
|
|
Otto Skorzeny posted:He also said C Actually he didn't say he required C anywhere??? (Preemptively: Google searches don't count as "requirements")
|
# ? Mar 5, 2010 22:57 |