6174
Dec 4, 2004

JoeNotCharles posted:

You're insane. Autotools are ridiculously terrible to work with.

I have to agree. The reason for Autotools in the first place was to deal with shortcomings of the various implementations of Make and of different compilers. Since those shortcomings are relatively rare today, there isn't a lot of reason to start a new project with Autotools. Even just using Make can get tedious on large projects. Modern, usable build control systems exist, such as CMake. In fact, KDE semi-recently switched over to CMake.


6174
Dec 4, 2004

Peanutmonger posted:

6174:
cmake looks very promising. It's a little annoying that they chose the name cmake (since we have make, gmake, bmake), especially when it doesn't replace make, but that's okay. It also makes sense that KDE would choose it, since it loses all of the dependencies that autotools had, and they're heading for super cross-platformness. I don't have it installed currently (other than KDE4, the list of packages using it seems small), but maybe I'll test it out on one of my toy projects. Generating a VS project would be very nice...

If cmake isn't to your liking there are other options too, such as SCons and Boost.Jam. If you're using a language other than C or C++ the options also change, but I won't mention those since this is the C/C++ thread.

Basically it is pretty easy to find alternatives because Autotools isn't a good solution today. The main problem it used to solve has largely gone away, and it has some screwy features, like the use of M4. M4 is just a pain in the rear end, and asking a developer to become even somewhat familiar with it to build a program is just wrong.

6174
Dec 4, 2004

ColdPie posted:

Not to mention traversing directories and making makefiles for each directory. Eclipse does a pretty good job of making makefiles, in my opinion.

Recursive Make Considered Harmful. In my opinion the paper goes a little far, as there are sometimes reasons to use make recursively, but more often than not it is abused, which slows down the build process unnecessarily.

6174
Dec 4, 2004

Llama Patrol posted:

It would only cause a problem if there is another header file, which he's already said there is not. Unless he's doing something weird like including C files.

No, only one header file is needed for the problem JoeNotCharles is diagnosing. If the file Captain Frigate mentioned is called header.h and is included by both file1.cpp and file2.cpp, it is enough to cause a problem. This problem is fixed exactly as JoeNotCharles said, with ifndef guards.
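
For reference, the guard idiom is just this in every header (the macro name is whatever you pick, as long as it is unique per header):
code:
// header.h -- the hypothetical header from the example above
#ifndef HEADER_H
#define HEADER_H

// declarations shared by file1.cpp and file2.cpp go here

#endif // HEADER_H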

6174
Dec 4, 2004

Llama Patrol posted:

You're definitely wrong in C. I'm a C programmer primarily and don't do much in C++, but I'm pretty certain it's the same. Two files, file1.c and file2.c, can include the same header file without a problem, because both C files get compiled independently of each other. They have no idea about each other at compile time, so there's no conflict.

When it's time to link, again there's no problem, because the contents of the header file are "hidden" in file1.o and file2.o.

I just checked it and you're right: you need two headers, with one header including the other, to get that problem. I know this kind of poo poo happens in Fortran, since I spent several hours last week fixing exactly this problem there, so I must have gotten confused.

6174
Dec 4, 2004
I've got a template function in which I'm doing a comparison like:

code:
if (number < 0) {
    number *= -1;
}
number is of type T. When T is an unsigned type the compiler gives me a warning:

../shared/number_stuff.h:102: warning: comparison of unsigned expression < 0 is always false

Is there any way I can reasonably make these warnings go away? I tried enclosing it in an if (numeric_limits<T>::is_signed) block, but the compiler still complains.
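
The workaround I'm now considering (untested sketch, helper names made up) is to push the signedness check to compile time, so the "number < 0" comparison is never even instantiated for unsigned types:
code:
#include <limits>

template<class T, bool IsSigned = std::numeric_limits<T>::is_signed>
struct MakePositive {
    static void apply(T &number) { if (number < 0) number *= -1; }
};

// partial specialization for unsigned T: no comparison, so no warning
template<class T>
struct MakePositive<T, false> {
    static void apply(T &) {}
};

template<class T>
void make_positive(T &number)
{
    MakePositive<T>::apply(number);
}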

6174
Dec 4, 2004
I've got a C program that I have to insert some sorting operations into. I've got several arrays, say a, b, and c. I need to sort a into non-decreasing order; however, the order of b and c must change in the same manner as a does. This is easy enough to do by swapping the elements of b and c in the same manner as a while sorting a. The problem comes in when, later, I've got a similar situation with arrays x and y. I don't want to code up 4-5 slight variants of the same sorting algorithm (and the number of "dependent" arrays varies from 1 to 4). How can I reasonably make only one sorting algorithm deal with this situation? Would a function pointer to a swap routine be reasonable? If needed I can restructure things to make this easier.

6174
Dec 4, 2004

HB posted:

This would be quite reasonable. You'd have the indices as its parameters and just have it perform the same swap on any of an arbitrary set of arrays.

That is what I figured, I just wanted to make sure I wasn't missing something stupid.
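
Something along these lines is probably what I'll end up with (untested C99 sketch, names and element types made up):
code:
#include <stddef.h>

/* swap callback: mirror the exchange of positions i and j in every dependent array */
typedef void (*swap_fn)(size_t i, size_t j, void *ctx);

/* Sort key[0..n-1] into non-decreasing order (insertion sort for the sketch);
   every time two positions in key are exchanged, the callback keeps the
   dependent arrays in step. */
void sort_with_deps(double *key, size_t n, swap_fn swap, void *ctx)
{
    for (size_t i = 1; i < n; ++i) {
        for (size_t j = i; j > 0 && key[j - 1] > key[j]; --j) {
            double tmp = key[j - 1];
            key[j - 1] = key[j];
            key[j]     = tmp;
            swap(j - 1, j, ctx);
        }
    }
}

/* example callback for the a/b/c case */
struct abc_deps { double *b; int *c; };

void swap_bc(size_t i, size_t j, void *p)
{
    struct abc_deps *d = p;
    double tb = d->b[i]; d->b[i] = d->b[j]; d->b[j] = tb;
    int    tc = d->c[i]; d->c[i] = d->c[j]; d->c[j] = tc;
}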

6174
Dec 4, 2004

Kessel posted:

I'm trying to read in the tag size for an ID3v2.3 tag. In the file, it's specified as four bytes with the first bit of each byte set to zero and ignored; that is,

0xxxxxxx 0xxxxxxx 0xxxxxxx 0xxxxxxx

So 257 would be

00000000 00000000 00000002 00000001

I can read the four bytes byte by byte, but what should I do to turn them into an integer value that I can use? The ignoring of the leading bit is tripping me up.

That 2 in your example should be a 1, I think.

Assuming I counted correctly, the number would be:
byte1 * 2^22 + byte2 * 2^15 + byte3 * 2^8 + byte4

6174
Dec 4, 2004

Kessel posted:

The last two bytes are O0000002 O0000001, where I've used O to indicate ignored bits. So put them together and you actually get 20000001, which is 2*128 + 1 = 257.

It appears you've got this sorted out from later posts, but if you are going to specify a byte by putting zeros to pad out to eight positions, then having a "2" doesn't logically make sense since binary has only zeros and ones. That is what my comment about the 2 was previously. Also apparently I can't count, but JoeNotCharles can.
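
For completeness, with the shift amounts actually counted correctly (7 usable bits per byte, so 21/14/7/0 rather than the exponents I posted before), the conversion is just:
code:
#include <stdint.h>

/* bytes[0..3] are the four size bytes as read from the file, most significant
   first; the top bit of each is ignored per the ID3v2 spec */
uint32_t id3_tag_size(const uint8_t bytes[4])
{
    return ((uint32_t)(bytes[0] & 0x7F) << 21) |
           ((uint32_t)(bytes[1] & 0x7F) << 14) |
           ((uint32_t)(bytes[2] & 0x7F) <<  7) |
            (uint32_t)(bytes[3] & 0x7F);
}
For the example above that gives (2 << 7) | 1 = 257, matching Kessel's arithmetic.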

6174
Dec 4, 2004

Citizen Erased posted:

Does anyone know of a faster alternative to the standard library vector container? A year or so ago I made a 3D application which relied very heavily on the stl vector, and since then a friend has told me how slow vectors are. I'd like to re-work some of the old code and replace the vectors with something similar but more efficient if it means I can eke a few more frames per second out of it. Is there anything faster and more suited for real time 3D applications?

You really should listen to TSDK first as he knows plenty more about C++ than I do, but Boost.Array may be of interest. However, before you go nuts and start replacing things, have you profiled your code to determine that the vectors are really a problem?

6174
Dec 4, 2004
I've got a template question. I have two functions:
code:
template<class T>
vector<T> arbitrary_partition(T number, vector<T> partite_sizes);

template<class T>
T arbitrary_partition(T number, vector<T> partite_sizes);
The first function calculates the number of ways to split the number into partitions with partite sizes coming from the vector partite_sizes. The return is a vector of size number + 1 where at position i is the number of ways that i can be partitioned.

What I want is the second function to basically be
code:
template<class T>
T arbitrary_partition(T number, vector<T> partite_sizes)
{
     return arbitrary_partition(number, partite_sizes)[number];
}
(This function may be wrong as I haven't gotten it to compile)

However when I have both functions, and invoke the first as:
code:
const unsigned long num = 100;

vector<unsigned long> partite (num,0);

for (unsigned long i = 0; i < num; ++i) {
      partite[i] = i+1;
}

vector<unsigned long> count = arbitrary_partition(num, partite);
I get the error:
code:
/home/6174/Projects/projecteuler/src/0071-0080/problem76.cpp: In static member function ‘static void Problem76::solve()’:
/home/6174/Projects/projecteuler/src/0071-0080/problem76.cpp:36: error: call of overloaded 
     ‘arbitrary_partition(const long unsigned int&, std::vector<long unsigned int, std::allocator<long unsigned int> >&)’ is ambiguous
/home/6174/Projects/projecteuler/src/0071-0080/../shared/partition.h:48: note: candidates are: std::vector<T, std::allocator<_CharT> >
     arbitrary_partition(T, std::vector<T, std::allocator<_CharT> >) [with T = long unsigned int]
/home/6174/Projects/projecteuler/src/0071-0080/../shared/partition.h:65: note:                 
     T arbitrary_partition(T, std::vector<T, std::allocator<_CharT> >) [with T = long unsigned int]
make[2]: *** [CMakeFiles/projecteuler.dir/src/0071-0080/problem76.o] Error 1
make[1]: *** [CMakeFiles/projecteuler.dir/all] Error 2
make: *** [all] Error 2
My problem is I don't see what is ambiguous. It would seem to me that since count is a vector, it should be obvious that the function to use is the one returning a vector. What am I doing wrong, or how can I make this non-ambiguous?

If it makes a difference this is with g++ (GCC) 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)

edit:

For reference:
code:
unsigned long count = arbitrary_partition(num, partite);
Fails in the same way, where I would have expected it to use the second function returning type T.

6174 fucked around with this message at 00:48 on May 5, 2008

6174
Dec 4, 2004

ColdPie posted:

Your trouble is right at the start. These two functions have the same signature: arbitrary_partition(T, vector<T>). There's no way for the compiler to distinguish between them. You'll either need to give the functions different names, or change the parameters they accept.

That would explain it. I thought the return type was considered when deciding between overloaded functions. I guess not, and 7.4.1 of Stroustrup explains why.
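
For my own future reference, since the return type isn't part of the signature, the simplest fix is just to give the two functions different names, e.g. (sketch, new name made up):
code:
#include <vector>
using std::vector;

// counts for every value 0..number (definition elsewhere)
template<class T>
vector<T> arbitrary_partition_counts(T number, vector<T> partite_sizes);

// just the count for "number", in terms of the function above
template<class T>
T arbitrary_partition(T number, vector<T> partite_sizes)
{
    return arbitrary_partition_counts(number, partite_sizes)[number];
}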

6174
Dec 4, 2004

Nitis posted:

I've uploaded the file here. (BTW, if someone knows a good online file uploader, I'd love to hear about it)

As Avenging Dentist said, Pastebin works pretty well for code. I've put your file here http://pastebin.com/m70aea19c

edit: To clarify Avenging Dentist's answer, look at lines 48, 81, and 162.

6174 fucked around with this message at 17:11 on May 12, 2008

6174
Dec 4, 2004
I've got a problem I'd like to solve that is easily transformed into an instance of shortest path on a digraph. Being lazy, I'd prefer to use a library to save me some time. So I've been looking at the Boost Graph Library and have a question. The problem is that the natural transformation of my problem into a graph puts the weights onto the vertices instead of the edges. The BGL seems to only solve the problem when the weights are on edges. Of course the graph can be transformed to put the weights on the edges (by making the out-edges carry the weight of the vertex), but since the digraph I'm going to apply this to is roughly 3-regular and has about 64000 vertices, that adds about 128000 additional integers to be stored that don't need to be. Is there some simple way to adapt the BGL to this situation, or am I just better off writing a more specialized implementation to solve this problem?
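
For reference, the more specialized implementation I have in mind would be roughly the following (untested sketch; I'm treating the cost of a path as the sum of the weights of the vertices on it, including the start):
code:
#include <cstddef>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// adj[v] lists the out-neighbours of v; weight[v] is the cost of visiting v.
// Returns the cheapest total vertex-weight needed to reach each vertex from source.
std::vector<long>
vertex_weighted_dijkstra(const std::vector<std::vector<int> > &adj,
                         const std::vector<long> &weight,
                         int source)
{
    const long INF = std::numeric_limits<long>::max();
    std::vector<long> dist(adj.size(), INF);

    typedef std::pair<long, int> Item;            // (distance so far, vertex)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item> > pq;

    dist[source] = weight[source];                // pay for the starting vertex too
    pq.push(Item(dist[source], source));

    while (!pq.empty()) {
        long d = pq.top().first;
        int  v = pq.top().second;
        pq.pop();
        if (d > dist[v]) continue;                // stale queue entry
        for (std::size_t i = 0; i < adj[v].size(); ++i) {
            int u = adj[v][i];
            long nd = d + weight[u];              // the "edge cost" is the target's weight
            if (nd < dist[u]) {
                dist[u] = nd;
                pq.push(Item(nd, u));
            }
        }
    }
    return dist;
}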

edit: I'm retarded and it is only about 6400 vertices, so it isn't as crazy to put weights on all the edges, but I'd still prefer to use the weights on vertices instead.

edit2: I'm not fixated upon BGL and would consider other libraries.

6174 fucked around with this message at 18:47 on May 12, 2008

6174
Dec 4, 2004

Nitis posted:

Looking at this, I'm still not sure where I went wrong.

The code that defines the function says that the function getBetAmount should take a parameter (line 162). The function prototype does not mention any parameters (line 48). You call the function with a parameter (line 81). These three things must match.
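
In other words, the three places have to agree on the parameter list. A trimmed-down example of what that agreement looks like (the int parameter is just a guess at what you intend):
code:
// prototype -- declares the parameter (compare your line 48)
int getBetAmount(int playerFunds);

int main()
{
    int funds = 100;
    // call site -- passes a matching argument (compare your line 81)
    int bet = getBetAmount(funds);
    return bet;
}

// definition -- same parameter list as the prototype (compare your line 162)
int getBetAmount(int playerFunds)
{
    return playerFunds / 10;   // placeholder body
}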

6174
Dec 4, 2004

cliffy posted:

I'm not very familiar with BGL, but after glancing at the documentation it appears that they have a notion of 'properties' which are basically data members that can be part of your edge or vertex object. However, instead of being POD-types they appear to be more like associative containers, such as a map of strings to data members; in fact they call this concept a 'property map'. Using the 'put' and 'get' property-map functions you can associate a weight with a vertex like:
...

I'm not very familiar with the library either, which I suppose is part of my problem, but from what you've posted there might be a way to convince BGL to do what I want. I guess I need to start reading the docs from the beginning to wrap my head around how to do what I want.

6174
Dec 4, 2004
I've got a C program I'm fixing/updating and there are couple constants I can't identify that hopefully someone recognizes.

The program is basically a wrapper that processes a plain text file before sending it off to a printer. It does things like add page numbers, line numbers, a header saying what file is being printed, etc. The program used to talk straight to an HP LJ 4m+. The network changed a bit and now that printer is accessed by going through an LPD print server (hence why I'm updating the program). It has a whole bunch of undocumented usage of PCL 5e (HP's Printer Control Language) that I've been deciphering. However, the constants in question don't quite look like PCL commands, which is a lot of the reason I'm confused.

The constants are:
code:
#define PRT_ON "\33[5i"
#define PRT_OFF "\33[4i"
They are used only if a variable "do_termprt" is set, which is in turn set by the command line flag -T.

The minimal documentation about -T is "do print through terminal to laserjet".

The other weird thing is PRT_ON is printed before job control commands (ie simplex/duplex and orientation PCL commands) and PRT_OFF is printed immediately following the job control commands and before the contents of the file that is being processed.

I don't think these are PCL commands because PCL commands start with ESC (hex 0x1B, octal 033, decimal 27). However, other PCL commands in the program were declared using \033, so I can't rule out that these were typos and should have had an extra 0 to make them octal constants. But I haven't run across any PCL commands that use a [, and PCL commands terminate with a capital letter, which "i" is not. (Keep in mind I hardly knew what PCL was a few hours ago, so I can't say authoritatively that they are not PCL commands.)

Does anyone recognize what those constants do?

6174
Dec 4, 2004

Vanadium posted:

They look like those ANSI escapes used to get colored text on terminals. :shobon:

I think you've got it. According to here those commands do:

quote:

Stop Print Log <ESC>[4i
* Disable log.

Start Print Log <ESC>[5i
* Start log; all received text is echoed to a printer.

Now this does mean that the program didn't correctly function earlier since the constant would need to be \033, not \33, but it clearly is almost the intended behavior of the original programmer. (The order of PRT_ON and PRT_OFF also need to be switched in the code to suppress the job control commands, which I believe is the intended behavior)

This also explains why it was hard to find these commands in PCL references.

6174
Dec 4, 2004

TSDK posted:

Nope, the '33' is taken to be octal. You can only specify numeric escape sequences as either octal or hexadecimal.

I've started looking at '\0' in a whole new light since I found out it's a second-class octal zero and not a good and proper decimal zero.

Interesting. I guess I was too quick to assume the old guy made a mistake. I'm just too used to weird bugs or weird practices in his code. Like that C program I was working on yesterday: it was in K&R C, despite being updated every couple of years (though it was started in 1990, so being K&R C isn't too odd).
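
A quick check that convinced me both spellings really are the same character:
code:
#include <assert.h>

int main(void)
{
    assert('\33' == 27);       /* \33 is octal, i.e. ESC */
    assert('\33' == '\033');   /* the leading zero changes nothing */
    return 0;
}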

6174
Dec 4, 2004

Creepy Doll posted:

I think your behaviour is pretty reminiscent of a grammar nazi correcting a typo (also known as nitpicking).

C/C++ is not a language. Your post implied that it was. Just because there are similarities between them does not mean that lumping them together is reasonable in all contexts. In particular, your post was addressed at someone who is clearly very new to C++, and saying "C/C++" there is needlessly confusing. This "nitpicking", as you deride it, is not only appropriate here, it should be encouraged. Moreover, implying that MarsMattel's comment is reminiscent of a grammar nazi is fatuous. If you have a point, make it; making a personal attack on someone else because you messed up reflects very poorly on you.

6174
Dec 4, 2004
I've got a problem with a C program and I'm not sure how to proceed.

Basically the program is a relatively messy, fragile program that I didn't write but get to maintain. Recently, the program was downloaded by a user, who reported problems. The initial set of log files I received with the problem report weren't enough to determine what was happening. So I sent the guy a shell script that recompiles the program using some debug settings, runs some test cases, and puts everything into an archive to send to me. The result of this was that everything worked.

The systems I test with have gcc 3.4.6 (32 bit) on an old RHEL4, and gcc 4.1.3 (64 bit) on Ubuntu Gutsy. The program works on both of these systems with the debug settings both on and off.

The system with the problems is Fedora 7 with gcc 4.1.2, which I believe is 32-bit. uname -a indicates i686, but it is an Intel Core 2 Quad system (Q6600), so 64-bit is possible here if I'm misreading things.

The entire time the optimization flags to the compiler remained constant (specifically not set, so the default). The debug flags I had the guy recompile with were of the form "-D DEBUG". This turns on various #ifdef statements in the program which only print out variable quantities.

The problem now is how can I track down the problem? What sort of things should I be looking for?


edit: I've verified that all the conditionally compiled lines that were added were either simple printf/fprintf statements, or loops through an array with either printf/fprintf for each array entry.

At this point all I can think of is getting the object files from this guy compiled both with the flags, and without, and start looking at disassembled code. But I'm hoping to avoid doing this since it would inevitably be a lot of tedious work.

6174 fucked around with this message at 22:10 on Oct 29, 2008

6174
Dec 4, 2004

Cross_ posted:

Please define "problems".

The basic purpose of this program is to read in several data and configuration files, call program A (written in FORTRAN 77) with a setup based on these, read and process the results, and then call program B (written in Fortran 95) with a particular setup. This program is used by atmospheric physicists to analyze the mixing ratios of various gases in our atmosphere.

The set of all three programs is distributed with a half dozen example cases that are intended to let the user test and verify their setup, and to provide examples of how to set up the various computations. The user in question was having trouble running these example cases.

One of the data files used by program B is a spectrum in a proprietary format. The C program reads in the configuration file for program B which tells it what interval to analyze. The C program then verifies that the spectrum contains the interval and converts that portion of the spectrum to yet another format for program B to work on.

The problem the user was having was that in all but one of the example cases, the C program would report that the desired interval was not contained in the spectrum. The spectra do actually contain the desired intervals. Based upon the standard output the program logs to various files I could tell that the program was looking for the correct interval, and was correctly reading the spectrum file.

The debug flags I asked the user to recompile with print out a lot of logging statements during the reading in of the relevant configuration file for program B, logging statements during the reading of the spectrum, and general logging statements of the part of the program that locates the data and configuration files.

Cross_ posted:

My standard approach for hard to find problems is littering the code with log statements and checks for memory integrity. Then narrow it down to specific functions or contexts from there. That your test cases passed on his system might mean that either his build was incorrect or that your test cases are too lenient; you seem to be ignoring the second option.

Adding the ton of logging statements makes the program work correctly. I also don't have access to the system with the problems, so it is not feasible to start adding logging statements one at a time until things work. To further complicate matters, I can't rely on this user to do much work since communicating with him is hard because his English is minimal and I don't understand a bit of Mandarin.

6174 fucked around with this message at 23:56 on Oct 29, 2008

6174
Dec 4, 2004

6174 posted:

C debugging problems

Using valgrind I found several locations that it complains about conditional jumps or moves depending on uninitialized data. The program is in worse shape than I thought, but at least now I've got some stuff to work from.
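
For anyone wondering how piling in printf statements could make the program "work": the kind of bug valgrind was flagging looks like the toy example below, and whether the garbage that gets read happens to be harmless depends on whatever the surrounding code last left on the stack, so adding or removing logging code can change the symptom.
code:
#include <stdio.h>

static int interval_found(double lo, double hi, double x)
{
    int found;                     /* only assigned on one path */
    if (lo <= x && x <= hi)
        found = 1;
    /* missing: else found = 0; -- the "not found" result is stack garbage */
    return found;
}

int main(void)
{
    /* 5.0 is outside [1.0, 2.0], but the result may be anything */
    printf("%d\n", interval_found(1.0, 2.0, 5.0));
    return 0;
}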

6174
Dec 4, 2004
I just spent several hours documenting some old binary files. I've got two variants of an old C program that should read them (old as in written in K&R C, last updated in 1992). Neither works, because they assume widths for ints, shorts, and so forth that are not valid on my machine. I am updating the program to C99 so I can use the fixed-width types from stdint.h to avoid this issue, but that only works for integral types.

The problem is I've also got floats and doubles in this binary file (in IEEE format). I can't seem to find an equivalent to the fixed-width types for floating-point types, or even a way to specify IEEE format. Since I don't want to have to mess with this program ever again, is there some way that I can read the data and put it into whatever underlying floating point format the particular compiler is using, with a reasonable chance of it just working in the future with at most a recompile? Endianness shouldn't be a problem since I can detect it based on this particular file format (except for some pathological counterexamples, but realistically those won't occur).

6174 fucked around with this message at 07:05 on Aug 14, 2009

6174
Dec 4, 2004

Avenging Dentist posted:

That's because the underlying format for floating point types is not defined by the ISO standard.

I realize that. It also doesn't specify the underlying format for the five standard signed integer types, but there are also the intN_t / uintN_t types where it is specified. My problem is I can't find something corresponding to intN_t for floating point values (which may be because they don't exist).

6174
Dec 4, 2004

Avenging Dentist posted:

See also Annex F of ISO/IEC 9899:1999.

That is what I get for not reading the Annexes. This should be plenty good enough for what I need. Though crazily enough there is yet another variant of this same program that does convert the IEEE values in the file to Cray floating point.

Avenging Dentist posted:

Also keep in mind that intN_t and uintN_t are not mandated by the standard.

The main thing I'd like is for it to just work with at most a recompile, but if that can't happen, failing in a way that makes it obvious that the build environment is the problem should be good enough.
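
What I'm planning on, concretely, is just refusing to build on an environment that breaks the assumptions (sketch):
code:
#include <stdint.h>

/* refuse to compile on a non-IEEE floating point environment (C99 Annex F) */
#ifndef __STDC_IEC_559__
#error "This program assumes IEC 60559 (IEEE 754) float and double."
#endif

/* uintN_t/intN_t are optional in C99, but any code using them simply will not
   compile where they are missing, which is at least an obvious failure */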

6174
Dec 4, 2004
C99 Question:

I've got the following union:
code:
union
{
    uint8_t u8buff[512];
    uint16_t u16buff[256];
    uint32_t u32buff[128];
    uint64_t u64buff[64];
    float fbuff[128];
    double dbuff[64];
} record_t;
It is being used to read a binary file that is made up of many records that are all 512 bytes long. When I currently compile my program, sizeof() of the union is 512 as I expect it to be. I know the ISO spec specifies it to be implementation specific, but is there a reasonable implementation that would not make that union exactly 512 bytes? The reason I ask is because my program verifies it to be 512, and gives an error if it is not. I just want to be sure that is a reasonable expectation on my part.

edit: For reference, I am using the __STDC_IEC_559__ macro that AD mentioned on the previous page to verify that the float and double aren't crazy.

edit2: I know it sounds like I'm being paranoid, so let me give a few more details to explain why I'm being so cautious. The data files this program reads are scientific instrument recordings spanning about 4-5 years in the early 90s. I'm in the process of packaging that data up to put into a long term storage computer. In addition this program I'm writing will be sitting next to the data. What will most likely occur is no one will use this program for a decade or more (if ever). I'm trying to make it as likely as I can that this program will still function then, or give enough information so that a researcher, who will almost certainly have minimal programming experience, can still access the data or hand the program off to someone to update it so they can access the data.

6174 fucked around with this message at 19:33 on Aug 19, 2009

6174
Dec 4, 2004

Dijkstracula posted:

In general, it's not a reasonable expectation, but in your case, it should be okay.

Yeah, certainly alignment in general can't be assumed, but for the types in this particular union it seemed like a reasonable expectation.

Dijkstracula posted:

If you know something about the compiler that will be used, you can use a directive like __attribute__ ((__packed__)) on a union, but I've never actually needed this, so I can't tell you how the behaviour would play on different architectures.

Unfortunately I've got no clue what compilers will be common in the future. For now I'm using gcc, and trying to keep from using any odd extensions and attempting to make explicit the various assumptions I'm making.

Dijkstracula posted:

Also scientific code :rolleye:

This will be at least the 3rd or 4th generation of this program (the others not written by me, and they don't completely work with modern compilers, if they ever worked), but it will be the first to have any comments or error checking.

6174
Dec 4, 2004

floWenoL posted:

Are you checking at compile-time or at run-time? If you're compiling it now to be used 10 years from now, then if you do a compile-time check you'll know that whenever you run it it'll still work.

It is currently run-time because sizeof won't work as part of a pre-processor command, and I don't know of an alternative means to check the size of the union at compile-time. Also, it won't be the binary that is distributed, but the source.

Vanadium posted:

Okay I never wrote anything where alignment or padding or whatever mattered but would it not be safer and more intuitive to just stuff everything into a char[512] instead of hoping that your unions are not going crazy on you

For most of the reading of the block I am simply using the uint8_t type, so I guess in retrospect just reading into an array and dealing with it from there should be the simplest way to proceed. I am already basically dealing with it byte by byte because of endianness issues.

6174 fucked around with this message at 01:35 on Aug 20, 2009

6174
Dec 4, 2004

floWenoL posted:

You should probably do it anyway, even if you're distributing the source, so that if it does break, you'll know it early.

I haven't seen static asserts like that in C before. I'll definitely be adding those checks.
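
For anyone else who hadn't seen the trick, it's the negative-array-size idiom (pre-C11, so no _Static_assert); applied to my record union it would look something like:
code:
#include <stdint.h>

typedef union {
    uint8_t  u8buff[512];
    uint16_t u16buff[256];
    uint32_t u32buff[128];
    uint64_t u64buff[64];
    float    fbuff[128];
    double   dbuff[64];
} record_t;

/* if the union is ever not exactly 512 bytes, the array size becomes -1
   and the translation unit fails to compile */
typedef char record_t_must_be_512_bytes[(sizeof(record_t) == 512) ? 1 : -1];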

6174
Dec 4, 2004

fritz posted:

A simple thing to do is to include some kind of reference file with a readme describing exactly what your code's supposed to do on it; that way future scientists can tell if they need to have someone do more thinking.

That is already done. However, you assume too much. In my experience such additional files are completely ignored.

6174
Dec 4, 2004

fritz posted:

Does Numerical Recipes still do that thing where they originally wrote the code in fortran with 1-up indexing and when it came time to write the C/C++ version with 0-up they decided the best thing to do was just add a dummy 0th element onto the front of all their arrays?

bobbles posted:

Yes they unfortunately still do.

Maybe it is not consistent through the book, but I can't find an instance of this in NR 3rd ed. I've never actually used their code (I just look at the formulas when I need them), so I only just flipped to some random pages.

6174
Dec 4, 2004

rjmccall posted:

Is this right after it denies the fundamental theorem of algebra, but before it gets into its discussion of the continuum hypothesis?

For what it is worth, the relation GrumpyDoctor described (two floating point numbers are related if and only if they are within some epsilon of each other) is not an equivalence relation because that relation is not transitive.

GrumpyDoctor, I'd have to know more about what you are doing to suggest a solution, but it is possible (likely, for an appropriate epsilon) that your data set won't violate transitivity. I'd start by writing a script that checks for such violations (sort your data, then check). Only if you found violations would I worry about it.
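
A concrete example of the non-transitivity, with epsilon = 0.1:
code:
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double eps = 0.1;
    const double a = 0.00, b = 0.06, c = 0.12;

    /* a ~ b and b ~ c, but a is not ~ c, so "~" cannot be an equivalence relation */
    printf("a~b: %d\n", fabs(a - b) < eps);   /* prints 1 */
    printf("b~c: %d\n", fabs(b - c) < eps);   /* prints 1 */
    printf("a~c: %d\n", fabs(a - c) < eps);   /* prints 0 */
    return 0;
}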

6174
Dec 4, 2004
I've got a program that needs to use the equivalent of ntohl, but for 64 bit. I only have a little-endian machine to test on at the moment, so can someone verify for me this works correctly on a big-endian machine?

code:
    union {
        uint64_t ui64;
        uint32_t ui32[2];
    } val;
    if (htons(1) == 1)
    {
        // Network order / Big-endian
        val.ui32[0] = ntohl(*((uint32_t *) storage));
        val.ui32[1] = ntohl(*((uint32_t *) (storage + 4)));
    }
    else {
        // Little-endian
        val.ui32[1] = ntohl(*((uint32_t *) storage));
        val.ui32[0] = ntohl(*((uint32_t *) (storage + 4)));
    }
storage is a uint8_t pointer pointing to the first byte of a uint64_t in network order. I know about be64toh in endian.h, but this program (for school) will be tested on a machine that almost certainly doesn't have it (and I can't check).
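
If nobody can check it, the fallback I have in mind is to sidestep both the endianness test and the uint32_t casts (which are technically dicey alignment/aliasing-wise) by assembling the value a byte at a time; it produces host order no matter what the machine is (untested, name made up):
code:
#include <stdint.h>

/* storage points at 8 bytes holding a uint64_t in network (big-endian) order */
uint64_t ntoh64(const uint8_t *storage)
{
    uint64_t val = 0;
    for (int i = 0; i < 8; ++i)
        val = (val << 8) | storage[i];
    return val;
}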

6174
Dec 4, 2004
I'm writing a "pin tool" (think plugin) for Pin for a homework assignment. It tracks malloc/free of the program being instrumented. So when I call malloc asking for say 40 bytes, the next call to malloc will be 40 + 8 bytes away from the first. I know that extra 8 bytes of space is used to keep track of the space allocated and some other stuff so free can work.

What I don't know is how portable the 8 bytes are. I'm currently testing on 64 bit Linux, which is using 8 bytes, but I need to turn this program in on a machine running 32 bit Linux. Clearly the 8 bytes is not 100% portable, but I only care about 32/64 bit Linux with glibc. Does anyone know if I can get that 8 byte number by some means (preferably compile-time if possible)?

edit: I'm partially retarded and confused. I should have read the glibc docs closer. It rounds to an 8 or 16 byte boundary (32/64 resp), but that doesn't answer how much extra space malloc uses to keep track of the allocated block.

edit2: After some experimentation it seems to be nothing more than an extra 32/64 bits on 32/64 bit systems resp (when using glibc at least)
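
For reference, the "experimentation" was basically just allocating several equal-sized blocks and looking at the gaps between the returned pointers. (Strictly speaking, subtracting pointers from different allocations isn't well defined, so this is only a rough glibc-specific probe, not something to rely on.)
code:
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    enum { N = 8, SZ = 40 };
    char *blocks[N];

    for (int i = 0; i < N; ++i)
        blocks[i] = malloc(SZ);

    /* consecutive same-sized allocations usually land adjacent in the heap, so
       gap - requested size approximates the per-block bookkeeping overhead
       (plus any rounding up to the 8/16 byte alignment boundary) */
    for (int i = 1; i < N; ++i) {
        ptrdiff_t gap = blocks[i] - blocks[i - 1];
        printf("overhead estimate %d: %td bytes\n", i, gap - SZ);
    }

    for (int i = 0; i < N; ++i)
        free(blocks[i]);
    return 0;
}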

6174 fucked around with this message at 20:25 on Feb 27, 2010

6174
Dec 4, 2004
I'm writing a C++ program and I've come to a design aspect that I don't know how to nicely solve.

My program plays a board game. I will be writing several decision algorithms on how to play (basically variations of one another). Each of these algorithms will use a heuristic to evaluate how good a board position is at various points. How the evaluation heuristic is calculated is totally separate from the specific decision algorithm used.

I would like to be able to easily mix and match decision algorithms with heuristics.

One obvious solution is multiple inheritance with separate abstract base classes for the decision algorithms and the heuristics. Then a full strategy is formed by inheriting a decision algorithm and a heuristic. I'm not particularly enthusiastic about this option because the number of classes I'd have to write is the product of the number of decision algorithms and evaluation heuristics which grows quickly when trying variations of heuristics, even if the classes are essentially trivial.

Is there a better way to do this than multiple inheritance?

6174
Dec 4, 2004

TheBoogeyMan posted:

I like templates a lot with this type of thing.

That sounds like a nice solution, but I'm not sure how to make templates do this. Do you have any examples or search terms which would reveal some examples?
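
After some digging it looks like the search term I wanted is "policy-based design". Roughly what I'm guessing is meant, with made-up names (the real Board/Move types and evaluation logic would come from my program):
code:
// stand-ins for the real game types
struct Board { /* ... */ };
struct Move  { /* ... */ };

// a heuristic is any type providing: static double evaluate(const Board &)
struct MaterialHeuristic {
    static double evaluate(const Board &) { return 0.0; }   // placeholder
};
struct MobilityHeuristic {
    static double evaluate(const Board &) { return 0.0; }   // placeholder
};

// a decision algorithm is a class template taking the heuristic as a parameter
template<class Heuristic>
class GreedySearch {
public:
    Move choose(const Board &b) const {
        double score = Heuristic::evaluate(b);   // plugged-in evaluation
        (void)score;                             // real selection logic goes here
        return Move();
    }
};

template<class Heuristic>
class MinimaxSearch {
public:
    Move choose(const Board &b) const {
        double score = Heuristic::evaluate(b);
        (void)score;
        return Move();
    }
};

// mixing and matching is then a one-line typedef, not a new class per combination
typedef MinimaxSearch<MaterialHeuristic> StrategyA;
typedef GreedySearch<MobilityHeuristic>  StrategyB;
A full strategy is then picked at compile time, and adding a new heuristic or a new search doesn't multiply the number of classes I have to write.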

6174
Dec 4, 2004
Does anyone know of a good way to make something like unit tests that track performance regressions?

Last week I accidentally introduced a significant performance regression into a personal research project. It took me almost a week to find because all of my unit tests were still passing. The difference between 50ms and 500ms wasn't something I noticed while running the unit tests. However, my main program takes hours to run as is and needs to churn through tens of thousands of cases, so that time difference is quite significant.

Ideally I would like something where I can write tests similar to gtest (which is what I'm using for my unit tests). Then I could run a baseline test which stores results for later comparison. Successive tests would compare against the baseline and print warnings/errors if the timing deviates more than some percent/amount.

Looking around I found two projects: hayai and Celero. But both appear to just print out numbers and don't really identify regressions. Also, they both seem to have only a handful of users at best, so I suspect there are still crazy bugs lurking around that I'd prefer not to wrangle with.

I'm currently using C++11 (I only care about Clang and gcc) with CMake and Google Test if there is something that integrates well with that combination.
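
Roughly what I'm imagining, in case it jogs anyone's memory of an existing tool (untested sketch; the file format, tolerance, and names are all made up):
code:
#include <chrono>
#include <fstream>
#include <iostream>
#include <map>
#include <ratio>
#include <string>

// Time one case, compare against a stored baseline, and complain if it is more
// than "tolerance" times slower. Baselines live in a text file of "name ms" pairs.
template<class Fn>
void check_perf(const std::string &name, Fn fn,
                double tolerance = 1.5,
                const std::string &baseline_file = "perf_baseline.txt")
{
    using namespace std::chrono;

    auto start = steady_clock::now();
    fn();
    double ms = duration_cast<duration<double, std::milli>>(
                    steady_clock::now() - start).count();

    // load existing baselines
    std::map<std::string, double> baseline;
    {
        std::ifstream in(baseline_file);
        std::string n;
        double t;
        while (in >> n >> t) baseline[n] = t;
    }

    auto it = baseline.find(name);
    if (it == baseline.end()) {
        baseline[name] = ms;                      // first run: record as the baseline
    } else if (ms > it->second * tolerance) {
        std::cerr << "PERF REGRESSION: " << name << " took " << ms
                  << " ms (baseline " << it->second << " ms)\n";
    }

    // rewrite the baseline file, keeping any newly recorded entries
    std::ofstream out(baseline_file);
    for (const auto &kv : baseline)
        out << kv.first << ' ' << kv.second << '\n';
}
Each case would then be a one-liner along the lines of check_perf("solver_case_17", [&]{ run_case(17); });, with run_case standing in for whatever the test actually exercises.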


6174
Dec 4, 2004
Unless I'm misunderstanding you, that is going to be extremely error-prone. I run this code on multiple machines, so hard-coding any times is just going to break, since specific numbers are not going to be precise enough across multiple machines. Even on my laptop, whether I'm plugged in or on battery makes a significant difference in runtime. Obviously I could write a tool that creates a profile for each reasonable machine configuration, then use that as part of the unit tests. But then that basically describes one way a tool I'm looking for could work, and I would prefer not to divert my attention away from my main work to write such a thing.
