|
JoeNotCharles posted:You're insane. Autotools are ridiculously terrible to work with. I have to agree. The reason for Autotools in the first place was to deal with shortcomings of the various implementations of Make and of different compilers. Since those shortcomings are relatively rare today, there isn't a lot of reason to start a new project with Autotools. Even just using Make can get tedious on large projects. Modern, usable build control systems exist, such as CMake. In fact, KDE semi-recently switched over to CMake.
|
# ¿ Mar 1, 2008 06:12 |
|
Peanutmonger posted:6174: If cmake isn't to your liking there are other options too such as SCons, and Boost.Jam. If you're using a language other than C or C++ the options also change, but I won't mention those since this is the C/C++ thread. Basically, it is pretty easy to find alternatives because Autotools isn't a good solution today. The main problem it used to solve has largely disappeared, and it has some screwy features, like the use of M4. M4 is just a pain in the rear end, and asking a developer to become even somewhat familiar with it just to build a program is wrong.
|
# ¿ Mar 1, 2008 16:12 |
|
ColdPie posted:Not to mention traversing directories and making makefiles for each directory. Eclipse does a pretty good job of making makefiles, in my opinion. Recursive Make Considered Harmful. In my opinion, the paper goes a little far, as there are sometimes reasons to use make recursively, but more often than not it is abused, which slows down the build process unnecessarily.
|
# ¿ Mar 1, 2008 20:43 |
|
Llama Patrol posted:It would only cause a problem if there is another header file, which he's already said there is not. Unless he's doing something weird like including C files. No, only one header file is needed for the problem JoeNotCharles is diagnosing. If the file Captain Frigate mentioned is called header.h and is included by both file1.cpp and file2.cpp, it is enough to cause a problem. This problem is fixed exactly as JoeNotCharles said, with ifndef guards.
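A minimal sketch of the ifndef guard being discussed; the header name and the Point type are made up. Pasting the guarded region twice into one translation unit simulates header.h being pulled in through two different include paths:

```cpp
#include <cassert>

// "First inclusion" of the guarded header contents.
#ifndef HEADER_H
#define HEADER_H
struct Point { int x; int y; };
#endif

// "Second inclusion": HEADER_H is already defined, so the duplicate
// definition of Point is skipped and the program still compiles.
#ifndef HEADER_H
#define HEADER_H
struct Point { int x; int y; };
#endif
```

Without the guard, the second copy would be a redefinition error, which is exactly the multiple-inclusion problem JoeNotCharles diagnosed.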
|
# ¿ Apr 7, 2008 16:11 |
|
Llama Patrol posted:You're definitely wrong in C. I'm a C programmer primarily and don't do much in C++, but I'm pretty certain it's the same. Two files, file1.c and file2.c, can include the same header file without a problem, because both C files get compiled independently of each other. They have no idea about each other at compile time, so there's no conflict. I just checked it and you're right: you need two headers, with one header including the other, before there is a problem. I know this kind of poo poo happens in Fortran; I spent several hours last week fixing exactly this problem there, so I must have gotten confused.
|
# ¿ Apr 7, 2008 17:31 |
|
I've got a template function in which I'm doing a comparison like:code:
../shared/number_stuff.h:102: warning: comparison of unsigned expression < 0 is always false Is there any way I can reasonably make these warnings go away? I tried enclosing it in an if (numeric_limits<T>::is_signed) block, but the compiler still complains.
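One way to silence the warning is to move the signed-only comparison into an overload that is never instantiated for unsigned T, so the "expr < 0" never appears in the unsigned code path. This is a sketch; the function names are made up, and C++11 <type_traits> tags are used for brevity (in 2008 one would hand-roll a small Bool2Type struct instead):

```cpp
#include <cassert>
#include <limits>
#include <type_traits>

// Overload selected at compile time for signed T: the comparison is legal here.
template <typename T>
bool is_negative_impl(T value, std::true_type /*is signed*/) {
    return value < 0;
}

// Overload for unsigned T: no "< 0" comparison is ever compiled.
template <typename T>
bool is_negative_impl(T /*value*/, std::false_type /*is unsigned*/) {
    return false;  // an unsigned value is never negative
}

template <typename T>
bool is_negative(T value) {
    // integral_constant<bool, true> is std::true_type, so this dispatches
    // to the right overload based on the signedness of T.
    return is_negative_impl(
        value, std::integral_constant<bool, std::numeric_limits<T>::is_signed>());
}
```

The runtime if (numeric_limits<T>::is_signed) doesn't help because the compiler still type-checks (and warns about) the dead branch; overload dispatch removes the branch entirely.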
|
# ¿ Apr 11, 2008 19:45 |
|
I've got a C program that I've got to insert some sorting operations into. I've got several arrays, say a, b, and c. I need to sort a into non-decreasing order; however, the order of b and c must change in the same manner as a does. This is easy enough to do by swapping the elements of b and c in the same manner as a when sorting a. The problem comes in later when I've got a similar situation with arrays x and y. I don't want to have to code up 4-5 slight variants of the same sorting algorithm (and the number of "dependent" arrays varies from 1 to 4). How can I reasonably make only one sorting algorithm deal with this situation? Would a function pointer to a swap routine be reasonable? If needed I can restructure things to make this easier.
|
# ¿ Apr 14, 2008 22:11 |
|
HB posted:This would be quite reasonable. You'd have the indices as its parameters and just have it perform the same swap on any of an arbitrary set of arrays. That is what I figured, I just wanted to make sure I wasn't missing something stupid.
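A C-style sketch of the callback approach HB confirmed; the array names and the insertion sort are placeholders for whatever algorithm is actually in use. The sort orders the key array and calls a user-supplied function to mirror every swap onto the dependent arrays:

```cpp
#include <cassert>
#include <cstddef>

// Mirror callback: gets the two indices that were just swapped in the key
// array, plus an opaque context pointer for the dependent arrays.
typedef void (*swap_fn)(size_t i, size_t j, void* ctx);

// One sorting routine for every situation: sorts `a` non-decreasing and
// reports each swap so b, c, x, y, ... stay in step.
void sort_with_deps(int* a, size_t n, swap_fn mirror, void* ctx) {
    for (size_t i = 1; i < n; ++i)
        for (size_t j = i; j > 0 && a[j - 1] > a[j]; --j) {
            int tmp = a[j]; a[j] = a[j - 1]; a[j - 1] = tmp;
            if (mirror) mirror(j, j - 1, ctx);
        }
}

// Example with two dependent arrays (names assumed from the post).
struct Deps { int* b; int* c; };

static void mirror_bc(size_t i, size_t j, void* ctx) {
    Deps* d = (Deps*)ctx;
    int t = d->b[i]; d->b[i] = d->b[j]; d->b[j] = t;
    t = d->c[i]; d->c[i] = d->c[j]; d->c[j] = t;
}
```

The case with arrays x and y just needs a different callback and context struct; the sort itself is written once.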
|
# ¿ Apr 14, 2008 22:24 |
|
Kessel posted:I'm trying to read in the tag size for an ID3v2.3 tag. In the file, it's specified as four bytes with the first bit of each byte set to zero and ignored; that is, That 2 in your example should be a 1, I think. Assuming I counted correctly, the number would be: byte1 * 2^22 + byte2 * 2^15 + byte3 * 2^8 + byte4
|
# ¿ Apr 24, 2008 00:22 |
|
Kessel posted:The last two bytes are O0000002 O0000001, where I've used O to indicate ignored bits. So put them together and you actually get 20000001, which is 2*128 + 1 = 257. It appears you've got this sorted out from later posts, but if you are going to specify a byte by putting zeros to pad out to eight positions, then having a "2" doesn't logically make sense since binary has only zeros and ones. That is what my comment about the 2 was previously. Also apparently I can't count, but JoeNotCharles can.
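Pulling the thread's corrections together, a sketch of the decode: an ID3v2 "synchsafe" size is four bytes with the top bit of each ignored, so each byte contributes 7 payload bits and the weights are 2^21, 2^14, 2^7, and 2^0 (not 2^22/2^15/2^8 as first miscounted above):

```cpp
#include <cassert>
#include <cstdint>

// Decode an ID3v2.3 synchsafe tag size: mask off the ignored high bit of
// each byte, then pack the remaining 7-bit groups.
uint32_t synchsafe_to_u32(uint8_t b1, uint8_t b2, uint8_t b3, uint8_t b4) {
    return ((uint32_t)(b1 & 0x7F) << 21) |
           ((uint32_t)(b2 & 0x7F) << 14) |
           ((uint32_t)(b3 & 0x7F) << 7)  |
            (uint32_t)(b4 & 0x7F);
}
```

With the last two bytes 2 and 1 from Kessel's example, this gives 2*128 + 1 = 257, matching the value worked out above.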
|
# ¿ Apr 24, 2008 02:58 |
|
Citizen Erased posted:Does anyone know of a faster alternative to the standard library vector container? A year or so ago I made a 3D application which relied very heavily on the stl vector and since, a friend has told me how slow vectors are. I'd like to re-work some of the old code and replace the vectors with something similar but more efficient if it means I can eke a few more frames per second out of it. Is there anything faster and more suited for real time 3D applications? You really should listen to TSDK first, as he knows plenty more about C++ than I do, but Boost.Array may be of interest. However, before you go nuts and start replacing things, have you profiled your code to determine that the vectors are really a problem?
|
# ¿ Apr 24, 2008 22:32 |
|
I've got a template question. I have two functions:code:
What I want is the second function to basically be code:
However when I have both functions, and invoke the first as: code:
code:
If it makes a difference this is with g++ (GCC) 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2) edit: For reference: code:
6174 fucked around with this message at 00:48 on May 5, 2008 |
# ¿ May 5, 2008 00:39 |
|
ColdPie posted:Your trouble is right at the start. These two functions have the same signature: arbitrary_partition(T, vector<T>). There's no way for the compiler to distinguish between them. You'll either need to give the functions different names, or change the parameters they accept. That would explain it. I thought the return value was considered for deciding what to do with overloaded functions. I guess not, and 7.4.1 of Stroustrup explains why.
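A tiny illustration of the rule ColdPie cites (the function name f and its body are made up): the return type is not part of a function's signature, so two overloads may not differ only by it. One common workaround is to carry the desired return type as an explicit template parameter, which is part of the signature:

```cpp
#include <cassert>

// These two would be rejected, since they differ only in return type:
//   int    f(int);
//   double f(int);   // error: cannot overload on return type alone
//
// Making the result type an explicit template parameter disambiguates the
// two at the call site instead.
template <typename Result>
Result f(int x) {
    return static_cast<Result>(x) / 2;
}
```

The caller then writes f<int>(5) or f<double>(5), and the compiler has enough information without ever inspecting return types during overload resolution.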
|
# ¿ May 5, 2008 01:19 |
|
Nitis posted:I've uploaded the file here. (BTW, if someone knows a good online file uploader, I'd love to hear about it) As Avenging Dentist said, Pastebin works pretty well for code. I've put your file here http://pastebin.com/m70aea19c edit: To clarify Avenging Dentist's answer, look at lines 48, 81, and 162. 6174 fucked around with this message at 17:11 on May 12, 2008 |
# ¿ May 12, 2008 17:09 |
|
I've got a problem I'd like to solve that is easily transformed into an instance of a shortest path problem on a digraph. Being lazy, I'd prefer to use a library to save me some time. So I've been looking at the Boost Graph Library and have a question. The problem is that the natural transformation of my problem into a graph puts the weights onto the vertices instead of the edges. The BGL seems to only solve the problem when the weights are on edges. Of course the graph can be transformed to put the weights on the edges (by making the out-edges carry the weight of the vertex), but since the digraph I'm going to apply this to is roughly 3-regular and has about 64000 vertices, that adds about 128000 additional integers to be stored that don't need to be. Is there some simple way to adapt the BGL to this situation, or am I just better off writing a more specialized implementation to solve this problem? edit: I'm retarded and it is only about 6400 vertices, so it isn't as crazy to put weights on all the edges, but I'd still prefer to use the weights on vertices instead. edit2: I'm not fixated upon BGL and would consider other libraries. 6174 fucked around with this message at 18:47 on May 12, 2008 |
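For comparison with the "more specialized implementation" option, a sketch of vertex-weighted Dijkstra without BGL. It applies the out-edge transformation implicitly: the cost of relaxing an edge is the weight of the target vertex, so no per-edge weights are ever stored. The graph representation here is a plain adjacency list, not anything from the post:

```cpp
#include <cassert>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

typedef std::vector<std::vector<int> > Adj;  // adjacency list: g[u] = out-neighbors

// Shortest path where entering a vertex costs that vertex's weight w[v].
// Returns the distance from `source` to every vertex (including paying
// the source's own weight once).
std::vector<long> vertex_weight_dijkstra(const Adj& g,
                                         const std::vector<int>& w,
                                         int source) {
    const long INF = std::numeric_limits<long>::max();
    std::vector<long> dist(g.size(), INF);
    typedef std::pair<long, int> Item;  // (distance, vertex), min-first
    std::priority_queue<Item, std::vector<Item>, std::greater<Item> > pq;
    dist[source] = w[source];
    pq.push(Item(dist[source], source));
    while (!pq.empty()) {
        Item top = pq.top(); pq.pop();
        int u = top.second;
        if (top.first > dist[u]) continue;  // stale queue entry
        for (size_t k = 0; k < g[u].size(); ++k) {
            int v = g[u][k];
            long nd = dist[u] + w[v];  // edge cost = weight of target vertex
            if (nd < dist[v]) { dist[v] = nd; pq.push(Item(nd, v)); }
        }
    }
    return dist;
}
```

At ~6400 vertices and degree ~3 this is well within what a straightforward implementation handles instantly, which may make the BGL adaptation effort moot.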
# ¿ May 12, 2008 18:11 |
|
Nitis posted:Looking at this, I'm still not sure where I went wrong. The code that defines the function says that the function getBetAmount should take a parameter (line 162). The function prototype does not mention any parameters (line 48). You call the function with a parameter (line 81). These three things must match.
|
# ¿ May 12, 2008 19:10 |
|
cliffy posted:I'm not very familiar with BGL, but after glancing at the documentation it appears that they have a notion of 'properties' which are basically data members that can be part of your edge or vertex object. However, instead of being POD-types they appear to be more like associative containers such as a map of strings to data members, in fact they call this concept a 'property map'. Using the 'put' and 'get' property-map functions you can associate a weight with a vertex like: I'm not very familiar with the library either, which I suppose is part of my problem, but from what you've posted there might be a way to convince BGL to do what I want. I guess I need to start reading the docs from the beginning to wrap my head around how to do what I want.
|
# ¿ May 12, 2008 22:43 |
|
I've got a C program I'm fixing/updating and there are a couple of constants I can't identify that hopefully someone recognizes. The program basically is a wrapper around a plain text file before sending it off to a printer. It does things like add page numbers, line numbers, a header saying what file is being printed, etc. The program used to talk straight to an HP LJ 4m+. The network changed a bit and now that printer is accessed by going through an LPD print server (and hence why I'm updating the program). It has a whole bunch of undocumented usage of PCL 5e (HP's Printer Control Language) that I've been deciphering. However, the constants in question don't quite look like PCL commands, which is a lot of the reason I'm confused. The constants are: code:
The minimal documentation about -T is "do print through terminal to laserjet". The other weird thing is PRT_ON is printed before job control commands (i.e. simplex/duplex and orientation PCL commands) and PRT_OFF is printed immediately following the job control commands and before the contents of the file that is being processed. I don't think these are PCL commands because PCL commands start with ESC (hex 0x1B, octal 033, decimal 27). However, other PCL commands were declared using \033, so I can't rule out that these were typos and should have had an extra 0 to make them octal constants. But I haven't run across any PCL commands that use a [, and PCL commands terminate with a capital letter, which "i" is not. (Keep in mind I hardly knew what PCL was a few hours ago, so I can't say authoritatively that they are not PCL commands.) Does anyone recognize what those constants do?
|
# ¿ May 27, 2008 21:39 |
|
Vanadium posted:They look like those ANSI escapes used to get colored text on terminals. I think you've got it. According to here those commands do: quote:Stop Print Log <ESC>[4i Now this does mean that the program didn't correctly function earlier since the constant would need to be \033, not \33, but it clearly is almost the intended behavior of the original programmer. (The order of PRT_ON and PRT_OFF also need to be switched in the code to suppress the job control commands, which I believe is the intended behavior) This also explains why it was hard to find these commands in PCL references.
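A reconstruction of the constants as the ANSI "media copy" sequences Vanadium identified (which sequence maps to which constant name is an assumption from the surrounding description): <ESC>[5i routes terminal output through to the attached printer and <ESC>[4i stops doing so. Note that "\33" and "\033" are the same octal escape for ESC (decimal 27), as TSDK points out in the next post:

```cpp
#include <cassert>
#include <cstring>

// ANSI media copy escapes, not PCL. Names follow the program's constants;
// the pairing is inferred, not taken from the original source.
static const char PRT_ON[]  = "\033[5i";  // start passthrough to printer
static const char PRT_OFF[] = "\033[4i";  // stop passthrough
```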
|
# ¿ May 27, 2008 22:02 |
|
TSDK posted:Nope, the '33' is taken to be octal. You can only specify numeric escape sequences as either octal or hexadecimal. Interesting. I guess I was too quick to assume the old guy made a mistake. I'm just too used to weird bugs or weird practices in his code. Like that C program I was working on yesterday: it was in K&R C, despite being updated every couple of years (though it was started in 1990, so being K&R C isn't too odd).
|
# ¿ May 28, 2008 14:31 |
|
Creepy Doll posted:I think your behaviour is pretty reminiscent to that of a grammar nazi correcting a typo(also known as nitpicking). C/C++ is not a language. Your post implied that it was. Just because there are similarities between them does not mean that lumping them together is reasonable in all contexts. In particular, your post was addressed at someone who is clearly very new to C++, and using "C/C++" there is needlessly confusing. This "nitpicking", as you deride it, is not only appropriate here, it should be encouraged. Moreover, suggesting that MarsMattel's comment is reminiscent of a grammar nazi is fatuous. If you have a point, make it; attacking someone personally because you messed up reflects very poorly on you.
|
# ¿ Jun 4, 2008 16:29 |
|
I've got a problem with a C program and I'm not sure how to proceed. Basically the program is a relatively messy, fragile program that I didn't write but get to maintain. Recently, the program was downloaded by a user, who reported problems. The initial set of log files I received with the problem weren't enough to determine what was happening. So I sent the guy a shell script that recompiles the program using some debug settings, runs some test cases, and puts everything into an archive to send to me. The result of this is everything worked. The systems I test with have gcc 3.4.6 (32 bit) on an old RHEL4, and gcc 4.1.3 (64 bit) on Ubuntu Gutsy. The program works on both of these systems with the debug settings both on and off. The system with the problems is Fedora 7 with gcc 4.1.2, which I believe is 32-bit. uname -a indicates i686, but it is an Intel Core 2 Quad system, Q6600, so 64-bit is possible if I'm misreading things. The entire time the optimization flags to the compiler remained constant (specifically not set, so the default). The debug flags I had the guy recompile with were of the form "-D DEBUG". This turns on various #ifdef statements in the program which only print out variable quantities. The problem now is how can I track down the problem? What sort of things should I be looking for? edit: I've verified that all the conditionally compiled lines that were added were either simple printf/fprintf statements, or loops through an array with printf/fprintf for each array entry. At this point all I can think of is getting the object files from this guy compiled both with the flags and without, and start looking at disassembled code. But I'm hoping to avoid doing this since it would inevitably be a lot of tedious work. 6174 fucked around with this message at 22:10 on Oct 29, 2008 |
# ¿ Oct 29, 2008 20:45 |
|
Cross_ posted:Please define "problems". The basic purpose of this program is to read in several data and configuration files, based on these call program A (written in FORTRAN 77) with a particular setup, read and process the results, and then call program B (written in Fortran 95) with a particular setup. This program is used by atmospheric physicists to analyze the mixing ratios of various gases in our atmosphere. The set of all three programs is distributed with a half dozen example cases that are intended for the user to test and verify their setup, and provide examples of how to setup the various computations. The user with problems was having problems running these example cases. One of the data files used by program B is a spectrum in a proprietary format. The C program reads in the configuration file for program B which tells it what interval to analyze. The C program then verifies that the spectrum contains the interval and converts that portion of the spectrum to yet another format for program B to work on. The problem the user was having was that in all but one of the example cases, the C program would report that the desired interval was not contained in the spectrum. The spectra do actually contain the desired intervals. Based upon the standard output the program logs to various files I could tell that the program was looking for the correct interval, and was correctly reading the spectrum file. The debug flags I asked the user to recompile with print out a lot of logging statements during the reading in of the relevant configuration file for program B, logging statements during the reading of the spectrum, and general logging statements of the part of the program that locates the data and configuration files. Cross_ posted:My standard approach for hard to find problems is littering the code with log statements and checks for memory integrity. Then narrow it down to specific functions or contexts from there. 
That your test cases passed on his system might mean that either his build was incorrect or that your test cases are too lenient; you seem to be ignoring the second option. Adding the ton of logging statements makes the program work correctly. I also don't have access to the system with the problems, so it is not feasible to start adding logging statements one at a time until things work. To further complicate matters, I can't rely on this user to do much work since communicating with him is hard because his English is minimal and I don't understand a bit of Mandarin. 6174 fucked around with this message at 23:56 on Oct 29, 2008 |
# ¿ Oct 29, 2008 23:37 |
|
6174 posted:C debugging problems Using valgrind I found several locations that it complains about conditional jumps or moves depending on uninitialized data. The program is in worse shape than I thought, but at least now I've got some stuff to work from.
|
# ¿ Oct 30, 2008 23:41 |
|
I just spent several hours documenting some old binary files. I've got two variants of an old C program that should read them (old as in written in K&R C, last updated in 1992). Neither works, because they assume widths for ints, shorts, and so forth that are not valid on my machine. I am updating the program to C99 so I can use the fixed-width types from stdint.h to avoid this issue, but that only works for integral types. The problem is I've also got floats and doubles in this binary file (in IEEE format). I can't seem to find an equivalent to the fixed-width types for floating-point types, or even a way to specify IEEE format. Since I don't want to have to mess with this program ever again, is there some way that I can read the data and put it into whatever underlying floating point format the particular compiler is using that has a reasonable chance of just working in the future with at most a recompile? Endianness shouldn't be a problem since I can detect it based on this particular file format (except for some pathological counterexamples, but realistically those won't occur). 6174 fucked around with this message at 07:05 on Aug 14, 2009 |
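One approach, assuming (as the later Annex F discussion allows) that the host's float is IEC 60559 single precision: assemble the file's big-endian bit pattern into a fixed-width integer, then memcpy it into a float. The function name is made up; the byte order would be flipped by the format's endianness detection where needed:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Convert 4 big-endian file bytes into the host's float, assuming the host
// uses IEEE 754 single precision (checkable via the __STDC_IEC_559__ macro).
float be_bytes_to_float(const unsigned char b[4]) {
    uint32_t bits = ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
                    ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
    float f;
    memcpy(&f, &bits, sizeof f);  // well-defined, unlike pointer casting
    return f;
}
```

The memcpy sidesteps strict-aliasing problems that a `*(float*)&bits` cast would invite, and the shift-based assembly works the same on little- and big-endian hosts.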
# ¿ Aug 14, 2009 06:56 |
|
Avenging Dentist posted:That's because the underlying format for floating point types is not defined by the ISO standard. I realize that. It also doesn't specify the underlying format for the five standard signed integer types, but there are also the intN_t / uintN_t types where it is specified. My problem is I can't find something corresponding to intN_t for floating point values (which may be because they don't exist).
|
# ¿ Aug 14, 2009 07:58 |
|
Avenging Dentist posted:See also Annex F of ISO/IEC 9899:1999. That is what I get for not reading the Annexes. This should be plenty good enough for what I need. Though crazily enough there is yet another variant of this same program that does convert the IEEE values in the file to Cray floating point. Avenging Dentist posted:Also keep in mind that intN_t and uintN_t are not mandated by the standard. The main thing I'd like is for it to just work with at most a recompile, but if that can't occur failing in a way that is obvious that the build environment is crazy should be good enough.
|
# ¿ Aug 14, 2009 08:31 |
|
C99 Question: I've got the following union: code:
edit: For reference, I am using the __STDC_IEC_559__ macro that AD mentioned on the previous page to verify that the float and double aren't crazy. edit2: I know it sounds like I'm being paranoid, so let me give a few more details to explain why I'm being so cautious. The data files this program reads are scientific instrument recordings spanning about 4-5 years in the early 90s. I'm in the process of packaging that data up to put into a long term storage computer. In addition this program I'm writing will be sitting next to the data. What will most likely occur is no one will use this program for a decade or more (if ever). I'm trying to make it as likely as I can that this program will still function then, or give enough information so that a researcher, who will almost certainly have minimal programming experience, can still access the data or hand the program off to someone to update it so they can access the data. 6174 fucked around with this message at 19:33 on Aug 19, 2009 |
# ¿ Aug 19, 2009 19:02 |
|
Dijkstracula posted:In general, it's not a reasonable expectation, but in your case, it should be okay. Yeah, certainly alignment in general can't be assumed, but for the types in this particular union it seemed like a reasonable expectation. Dijkstracula posted:If you know something about the compiler that will be used, you can use a directive like __attribute__ ((__packed__)) on a union, but I've never actually needed this, so I can't tell you how the behaviour would play on different architectures. Unfortunately I've got no clue what compilers will be common in the future. For now I'm using gcc, and trying to keep from using any odd extensions and attempting to make explicit the various assumptions I'm making. Dijkstracula posted:Also scientific code This will be at least the 3rd or 4th generation of this program (others not written by me, and don't completely work with modern compilers if they ever worked), but it will be the first to have any comments or error checking.
|
# ¿ Aug 19, 2009 22:11 |
|
floWenoL posted:Are you checking at compile-time or at run-time? If you're compiling it now to be used 10 years from now, then if you do a compile-time check you'll know that whenever you run it it'll still work. Vanadium posted:Okay I never wrote anything where alignment or padding or whatever mattered but would it not be safer and more intuitive to just stuff everything into a char[512] instead of hoping that your unions are not going crazy on you 6174 fucked around with this message at 01:35 on Aug 20, 2009 |
# ¿ Aug 20, 2009 01:18 |
|
floWenoL posted:You should probably do it anyway, even if you're distributing the source, so that if it does break, you'll know it early. I haven't seen static asserts like that in C before. I'll definitely be adding those checks.
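A sketch of the kind of pre-C11 compile-time assertion being discussed (the macro name is made up): the typedef's array size goes negative when the condition is false, so a violated layout assumption breaks the build now instead of corrupting data reads a decade from now:

```cpp
#include <cassert>

// Compile-time assertion: a false condition yields a negative array size,
// which is a compile error. C11 later standardized this as _Static_assert.
#define STATIC_ASSERT(cond, name) \
    typedef char static_assert_##name[(cond) ? 1 : -1]

// Example check for the file-reading code's assumption of an 8-byte
// IEEE double (true on mainstream platforms, but worth pinning down).
STATIC_ASSERT(sizeof(double) == 8, double_is_8_bytes);
```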
|
# ¿ Aug 20, 2009 01:38 |
|
fritz posted:A simple thing to do is to include some kind of reference file with a readme describing exactly what your code's supposed to do on it, that way future scientist can tell if they need to have someone do more thinking. That is already done. However, you assume too much. In my experience such additional files are completely ignored.
|
# ¿ Aug 23, 2009 04:38 |
|
fritz posted:Does Numerical Recipes still do that thing where they originally wrote the code in fortran with 1-up indexing and when it came time to write the C/C++ version with 0-up they decided the best thing to do was just add a dummy 0th element onto the front of all their arrays? bobbles posted:Yes they unfortunately still do. Maybe it is not consistent through the book, but I can't find an instance of this in NR 3rd ed. I've never actually used their code (I just look at the formulas when I need them), so I only just flipped to some random pages.
|
# ¿ Sep 7, 2009 09:29 |
|
rjmccall posted:Is this right after it denies the fundamental theorem of algebra, but before it gets into its discussion of the continuum hypothesis? For what it is worth, the relation GrumpyDoctor described (two floating point numbers are related if and only if they are within some epsilon of each other) is not an equivalence relation because that relation is not transitive. GrumpyDoctor, I'd have to know more about what you are doing to suggest a solution, but it is possible (likely for an appropriate epsilon) that your data set won't violate transitivity. I'd start by writing a script that checks for such violations (sort your data then check). Only if you found violations would I worry about it.
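A concrete counterexample to transitivity, with made-up values: for epsilon = 0.1, a ~ b and b ~ c both hold while a ~ c fails, so "within epsilon" cannot partition values into equivalence classes:

```cpp
#include <cassert>
#include <cmath>

// The relation under discussion: two values are "equal" iff they differ
// by at most eps. Reflexive and symmetric, but not transitive.
bool approx(double x, double y, double eps) {
    return std::fabs(x - y) <= eps;
}
```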
|
# ¿ Feb 1, 2010 04:52 |
|
I've got a program that needs to use the equivalent of ntohl, but for 64 bit. I only have a little-endian machine to test on at the moment, so can someone verify for me this works correctly on a big-endian machine?code:
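One way to sidestep needing a big-endian machine to test on: instead of conditionally byte-swapping, build the host value from the big-endian byte sequence with shifts, which is correct on either endianness with no #ifdef. A sketch (the function name is made up, and this takes raw bytes rather than an already-loaded integer):

```cpp
#include <cassert>
#include <cstdint>

// Read an 8-byte big-endian (network order) value into a host uint64_t.
// Shift-based assembly is endian-agnostic, so it needs no swap detection.
uint64_t be64_to_host(const unsigned char b[8]) {
    uint64_t v = 0;
    for (int i = 0; i < 8; ++i)
        v = (v << 8) | b[i];
    return v;
}
```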
|
# ¿ Feb 8, 2010 02:47 |
|
I'm writing a "pin tool" (think plugin) for Pin for a homework assignment. It tracks malloc/free of the program being instrumented. So when I call malloc asking for say 40 bytes, the next call to malloc will be 40 + 8 bytes away from the first. I know that extra 8 bytes of space is used to keep track of the space allocated and some other stuff so free can work. What I don't know is how portable the 8 bytes are. I'm currently testing on 64 bit Linux, which is using 8 bytes, but I need to turn this program in on a machine running 32 bit Linux. Clearly the 8 bytes is not 100% portable, but I only care about 32/64 bit Linux with glibc. Does anyone know if I can get that 8 byte number by some means (preferably compile-time if possible)? edit: I'm partially retarded and confused. I should have read the glibc docs closer. It rounds to an 8 or 16 byte boundary (32/64 resp), but that doesn't answer how much extra space malloc uses to keep track of the allocated block. edit2: After some experimentation it seems to be nothing more than an extra 32/64 bits on 32/64 bit systems resp (when using glibc at least) 6174 fucked around with this message at 20:25 on Feb 27, 2010 |
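Rather than inferring the allocator's bookkeeping overhead from the gap between consecutive allocations (which, per the edits above, depends on rounding to 8- or 16-byte boundaries as well as internal policy), glibc can report a block's usable size directly. A sketch using the glibc-specific malloc_usable_size, which matches the stated 32/64-bit-Linux-with-glibc constraint:

```cpp
#include <cassert>
#include <cstdlib>
#include <malloc.h>  // glibc-specific header for malloc_usable_size

// Return how many bytes glibc actually made usable for a request of
// `request` bytes; the difference shows the rounding, though not the
// per-block header size itself.
size_t usable_of(size_t request) {
    void* p = std::malloc(request);
    size_t usable = malloc_usable_size(p);
    std::free(p);
    return usable;
}
```

This is a runtime query, not the compile-time constant asked about; as the edits note, the header size itself (one size-word in practice) is an internal detail glibc does not promise.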
# ¿ Feb 27, 2010 18:26 |
|
I'm writing a C++ program and I've come to a design aspect that I don't know how to nicely solve. My program plays a board game. I will be writing several decision algorithms on how to play (basically variations of one another). Each of these algorithms will use a heuristic to evaluate how good a board position is at various points. How the evaluation heuristic is calculated is totally separate from the specific decision algorithm used. I would like to be able to easily mix and match decision algorithms with heuristics. One obvious solution is multiple inheritance with separate abstract base classes for the decision algorithms and the heuristics. Then a full strategy is formed by inheriting a decision algorithm and a heuristic. I'm not particularly enthusiastic about this option because the number of classes I'd have to write is the product of the number of decision algorithms and evaluation heuristics which grows quickly when trying variations of heuristics, even if the classes are essentially trivial. Is there a better way to do this than multiple inheritance?
|
# ¿ Jan 26, 2011 07:08 |
|
TheBoogeyMan posted:I like templates a lot with this type of thing. That sounds like a nice solution, but I'm not sure how to make templates do this. Do you have any examples or search terms which would reveal some examples?
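A hedged sketch of the template (policy) approach TheBoogeyMan is likely describing; Board, the heuristic, and the trivial greedy "decision algorithm" are all invented for illustration. Each decision algorithm is a class template parameterized on its heuristic, so every algorithm/heuristic pairing is a template instantiation rather than a hand-written class:

```cpp
#include <cassert>

// Stand-in for the game state.
struct Board { int material; };

// One evaluation heuristic: a static evaluate() is the only interface a
// heuristic policy must provide.
struct MaterialHeuristic {
    static int evaluate(const Board& b) { return b.material; }
};

// One decision algorithm, parameterized on the heuristic policy.
template <typename Heuristic>
class GreedyPlayer {
public:
    // Pick whichever candidate position the heuristic scores higher.
    Board choose(const Board& x, const Board& y) const {
        return Heuristic::evaluate(x) >= Heuristic::evaluate(y) ? x : y;
    }
};
```

A new heuristic costs one small struct, and each of the N x M combinations is just a type like GreedyPlayer<MaterialHeuristic>, instead of N x M classes written by hand. The usual search term is "policy-based design" (Alexandrescu, Modern C++ Design).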
|
# ¿ Jan 26, 2011 07:15 |
|
Does anyone know of a good way to make something like unit tests that track performance regressions? Last week I accidentally introduced a significant performance regression into a personal research project. It took me almost a week to find because all of my unit tests were still passing. The difference between 50ms and 500ms wasn't something I noticed while running the unit tests. However my main program takes hours to run as is and needs to chunk through tens of thousands of cases so that time difference is quite significant. Ideally I would like something where I can write tests similar to gtest (which is what I'm using for my unit tests). Then I could run a baseline test which stores results for later comparison. Successive tests would compare against the baseline and print warnings/errors if the timing deviates more than some percent/amount. Looking around I found two projects: Hayai and Celero. But both appear to just print out numbers and don't really identify regressions. Also they both seem to only have a handful of users at best, so I suspect there are still crazy bugs lurking around that I'd prefer not to wrangle with. I'm currently using C++11 (I only care about Clang and gcc) with CMake and Google Test if there is something that integrates well with that combination.
|
# ¿ Apr 13, 2013 23:20 |
|
|
Unless I'm misunderstanding you that is going to be extremely error prone. I run this code on multiple machines so hard coding any times is going to just break since specific numbers are not going to be precise enough across multiple machines. Even on my laptop, if I'm plugged in or on battery makes a significant difference in runtime. Obviously I could write a tool that creates a profile for each reasonable machine configuration, then use that as part of the unit tests. But then that basically describes one way a tool I'm looking for could work and I would prefer to not divert my attention away from my main work to write such a thing.
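A minimal sketch of the ratio-based alternative (all names made up, and this is a harness fragment, not a gtest integration): measure a baseline and a candidate in the same run on the same machine and flag a regression only when their ratio exceeds some factor. Comparing the two relative to each other avoids hard-coding absolute times that differ per machine or power state:

```cpp
#include <algorithm>
#include <cassert>
#include <chrono>

// Best-of-N wall-clock timing in milliseconds; best-of damps scheduler noise.
template <typename F>
double best_of_n_ms(F f, int reps = 5) {
    using namespace std::chrono;
    double best = 1e300;
    for (int i = 0; i < reps; ++i) {
        steady_clock::time_point t0 = steady_clock::now();
        f();
        steady_clock::time_point t1 = steady_clock::now();
        best = std::min(best, duration<double, std::milli>(t1 - t0).count());
    }
    return best;
}

// Flag the candidate only if it is more than `factor` times slower than the
// baseline measured in the same process.
template <typename Baseline, typename Candidate>
bool regressed(Baseline base, Candidate cand, double factor = 3.0) {
    return best_of_n_ms(cand) > factor * best_of_n_ms(base);
}
```

In practice the "baseline" would be a pinned reference implementation (or a stored known-good build) of the 50ms routine, so the check catches a 10x slide without any per-machine calibration files.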
|
# ¿ Apr 14, 2013 00:50 |