crazypenguin
Mar 9, 2005
nothing witty here, move along

Thug Bonnet posted:

that contain variables of mixed types

Do you know the general format? fscanf might be quite useful.
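Something like this, maybe (a sketch — I'm guessing at a line format of int, float, word, so adjust the format string to whatever the file actually contains):

code:
#include <cstdio>

int main() {
    int id;
    double value;
    char name[64];
    FILE* f = fopen("data.txt", "r");   // made-up filename
    if (!f) return 1;
    // fscanf returns the number of fields it converted;
    // %63s keeps the string read inside the buffer.
    while (fscanf(f, "%d %lf %63s", &id, &value, name) == 3)
        printf("id=%d value=%f name=%s\n", id, value, name);
    fclose(f);
    return 0;
}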


crazypenguin
Mar 9, 2005
nothing witty here, move along
If you're just drawing lines, one possibility is to write out an SVG file, then open it in gimp to convert it to whatever format you need.

I *think* gimp handles large files pretty well...
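Since SVG is just XML text, writing it is a few fprintfs (a sketch; the coordinates are placeholders):

code:
#include <cstdio>

int main() {
    FILE* f = fopen("out.svg", "w");
    if (!f) return 1;
    fprintf(f, "<svg xmlns=\"http://www.w3.org/2000/svg\" "
               "width=\"20000\" height=\"20000\">\n");
    // emit one <line> element per segment; loop over your real data here
    fprintf(f, "<line x1=\"0\" y1=\"0\" x2=\"20000\" y2=\"20000\" "
               "stroke=\"black\"/>\n");
    fprintf(f, "</svg>\n");
    fclose(f);
    return 0;
}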

crazypenguin
Mar 9, 2005
nothing witty here, move along

Scaevolus posted:

You mean grayscale?

And is it impossible for this to be done in sections because parts of the image reference other parts? (Which fractal are you drawing?)


No, it doesn't. Try creating a 20,000x20,000 image if you don't believe me.
Touché. Does Photoshop? I guess this is somewhat deviating from C/C++.

crazypenguin
Mar 9, 2005
nothing witty here, move along
strcmp returns 0 when the strings are the same.

so basically, stick a ! before those strcmps
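e.g.:

code:
#include <cstdio>
#include <cstring>

int main() {
    const char* a = "foo";
    const char* b = "foo";
    if (!strcmp(a, b))   // strcmp returns 0 when equal, so ! means "same"
        puts("same");
    return 0;
}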

crazypenguin
Mar 9, 2005
nothing witty here, move along
Zorba provides a slightly contrived situation but a very good example of the kind of subtle and unintuitive bugs unsignedness can produce, and all you can do is rave about how the situation is slightly contrived?

The point is you can do perfectly normal things with signed integers because it's pretty easy to avoid +/- 2 billion (all you have to consider is whether 32 bits is big enough). You pretty much always have to start thinking about modular arithmetic with unsigned integers because it's drat hard to avoid 0.

crazypenguin
Mar 9, 2005
nothing witty here, move along

That Turkey Story posted:

You need to reread this conversation if that is what you pulled out. The problem is the signed version won't work for all strings whereas the unsigned version will. You're trading writing a proper algorithm for the ability to avoid an amateur mistake that could easily be picked up in testing. One is correct, the other is not.
There's a difference between library code and application code.

In library code, you need to be able to handle everything. There are a lot of great examples of how hard this is to do; the most famous is Bentley's paper on writing a binary search algorithm, which amusingly had an error (an overflow in the midpoint calculation) that nobody noticed for two decades. Unsignedness is important for correctness there because you need to work correctly with essentially unknown inputs.

(As an aside, I wrote a compiler recently and ran into a really fun example of this kind of complication. The language had simple constructs like "for i := a..b" and it turns out you have to contort the translation of for loops in unexpected ways to accommodate b = MAX_INT. Exercise left for the reader!)
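(Okay, fine, spoilers — here's a sketch of both the trap and one way out, assuming a <= b; a real translation also needs an empty-range check:)

code:
#include <climits>
#include <cstdio>

int main() {
    int a = INT_MAX - 2, b = INT_MAX;

    // Naive translation: when b == INT_MAX, "i <= b" is always true,
    // and ++i overflows, which is undefined behavior for signed int.
    // for (int i = a; i <= b; ++i) { ... }

    // Contorted translation: test for the last iteration *before*
    // the increment can overflow.
    for (int i = a; ; ++i) {
        printf("%d\n", i);
        if (i == b) break;
    }
    return 0;
}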

In application code, you know your inputs a lot better. You know you're never going to be working with data structures 2GB in size on a 32 bit machine. You're not that loving insane. You also need to do applicationy things, like iterate over all elements of a list except the last one because that's a special case. Well, okay let's go to size()-1! Perfectly reasonable thing to do.
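(And size()-1 is exactly where unsignedness bites, by the way — on an empty container it wraps around. A sketch:)

code:
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v;   // empty

    // Unsigned: v.size() - 1 wraps to a huge value when v is empty,
    // so this loop runs billions of times and indexes out of bounds:
    // for (size_t i = 0; i < v.size() - 1; ++i) ...

    // Signed: 0 - 1 is just -1, so the loop body never runs.
    for (int i = 0; i < (int)v.size() - 1; ++i)
        printf("%d\n", v[i]);
    return 0;
}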

You should simply NOT write application code like you would library code or you will never finish the project. Not as long as we're still using languages like our current ones, anyway. If the above blog post isn't a good enough example of how every piece of library code needs to be written in weird ways to handle odd corner cases, I'm not sure what is.

It's really quite simple, for signed integers, all you have to do is ask "will the magnitude of this value ever exceed 2 billion" and you know whether the code is correct. For unsigned values, you have to ask "will the magnitude of this value ever exceed 4 billion, oh and for every single calculation I make with it, could it possibly underflow?"

KISS principle. Don't over-engineer things. That's all I'm saying, anyway.

crazypenguin
Mar 9, 2005
nothing witty here, move along

That Turkey Story posted:

I find it hard to believe that using the proper datatype is over-engineering.

The preceding paragraph explains the conceptual complexity, the preceding examples show some of the practical complexity, and the fact that this discussion started with Google's recommended coding guidelines suggests that they've probably run into exactly these same problems internally.

crazypenguin
Mar 9, 2005
nothing witty here, move along

Nuke Mexico posted:

what the crap happened in this thread? Using "integral indices" instead of iterators...

crazypenguin posted:

Zorba provides a slightly contrived situation but a very good example of the kind of subtle and unintuitive bugs unsignedness can produce, and all you can do is rave about how the situation is slightly contrived?
That's what happened. :colbert: I wonder where this discussion might have gone if the example hadn't been indexing into a data structure at all. Still not a real-world example, but at least we can stop arguing about iterators! You can't have negative beers on the wall, right?!
code:
for(unsigned i = 99; i >= 0; --i)   // i >= 0 is always true for an unsigned: infinite loop
   cout << i << " bottles of beer on the wall.." << endl;
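(The signed version, for contrast, does exactly what it looks like:)

code:
#include <iostream>
using namespace std;

int main() {
    for (int i = 99; i >= 0; --i)   // terminates once i hits -1
        cout << i << " bottles of beer on the wall.." << endl;
}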

crazypenguin
Mar 9, 2005
nothing witty here, move along

Avenging Dentist posted:

The argument is that you're substituting something that fails on a known edge case (or in this example, fails 100% of the time) and is easy to diagnose if you actually, you know, run the code, in order to save yourself from the minimal effort to write code that doesn't fail for 50% of the valid use cases.

It is intended to be obvious that it fails; it would be perfectly reasonable code if you had just not insisted on unsigned integers for a variable that "can't be negative."

I guess I'll give one more example a shot. In the linux kernel, where it IS perfectly reasonable to use unsigned integers, there have been many security vulnerabilities (and this isn't just confined to linux) related to underflowing unsigned integers. The vast majority of these could have been avoided by using a larger signed integer type, though obviously that's not always an option for something like a kernel. If high-profile, security-sensitive things like kernels can make subtle mistakes involving underflow of unsigned integers, why would you recommend using unsigned more than is absolutely necessary, inflicting this entire class of bugs on application code?
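The usual shape of those bugs is something like this (hypothetical and simplified, with made-up names, but it's the pattern behind plenty of real ones):

code:
#include <cstring>

const unsigned HDR_SIZE = 16;

// "Do we have enough bytes after the header for the payload?"
bool read_payload(const unsigned char* pkt, unsigned pkt_len,
                  unsigned payload_len, unsigned char* out) {
    // If pkt_len < HDR_SIZE, the subtraction wraps to ~4 billion,
    // the check passes, and memcpy reads far past the packet.
    // With signed lengths the difference would just be negative
    // and the check would fail like it should.
    if (pkt_len - HDR_SIZE >= payload_len) {
        memcpy(out, pkt + HDR_SIZE, payload_len);
        return true;
    }
    return false;
}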

Really, if you want to convince me I'm wrong, instead of focusing on the illustrative example, find something horribly wrong with this:

crazypenguin posted:

It's really quite simple, for signed integers, all you have to do is ask "will the magnitude of this value ever exceed 2 billion" and you know whether the code is correct. For unsigned values, you have to ask "will this value ever exceed 4 billion, oh and for every single calculation I make with it, could it possibly underflow?"

crazypenguin
Mar 9, 2005
nothing witty here, move along

ValhallaSmith posted:

Also, what magic is it that lets MS compilers get a 10-20% edge on GCC/LLVM? Does MS have some crazy superior optimization in the compiler? Or is it because they have a superior library?

The biggest chunk is probably patents (software patents are retarded).

The next biggest chunk for GCC is probably that the software architecture doesn't really lend itself to being changed very easily because god forbid somebody write a nonfree plugin for GCC! Quick! Chop our own arms off!

There's a small amount that comes from the fact that Microsoft has lots of PhDs paid good money to work on exactly this sort of thing, so it's simply better.

And the last little bit probably comes from the fact that Microsoft mostly has just x86 to worry about, while GCC is more general and supports a ton of different architectures. So MS may have the opportunity to push things like vector optimizations up to a higher level, rather than cramming them down at the bottom in the instruction selection phase.

Also, I kinda doubt that there's a 20% edge there. Compiler optimization just isn't that good unless it's some pathological special case.

crazypenguin
Mar 9, 2005
nothing witty here, move along

TheSleeper posted:

p.s. I'm an idiot.

Getting .erase() and .remove() mixed up is hardly idiotic. It's just the downside to not using Javaesque names like .removeByIterator() and .removeByValue()
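For vectors the two even have to be combined, which is its own fun (the erase-remove idiom):

code:
#include <algorithm>
#include <vector>

int main() {
    std::vector<int> v = {1, 2, 3, 2, 4};

    v.erase(v.begin());   // .erase() removes by position (iterator)

    // std::remove shifts the non-matching values forward and returns
    // the new logical end; erase then actually shrinks the vector.
    v.erase(std::remove(v.begin(), v.end(), 2), v.end());
    return 0;
}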

crazypenguin
Mar 9, 2005
nothing witty here, move along

Anunnaki posted:

I need a way to use something like getline to search the input, and every time there's a space, grab the preceding characters and put them in their own position in a string array

code:
string s;
cin >> s;

It stops when it reaches whitespace. I have no idea why people are suggesting anything more complicated.
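The whole thing, more or less (a sketch, using a vector instead of a raw array):

code:
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> words;
    std::string s;
    while (std::cin >> s)     // operator>> skips whitespace itself
        words.push_back(s);
    return 0;
}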

crazypenguin
Mar 9, 2005
nothing witty here, move along

Vanadium posted:

No, you would want to ask the type system to instantiate your class, with, uh, g_object_new.

To elaborate on this, see line 691: http://svn.gnome.org/viewvc/gtk%2B/trunk/gtk/gtkbutton.c?revision=21123&view=markup

crazypenguin
Mar 9, 2005
nothing witty here, move along
Are you sure optimizations are off? That kind of output is exactly what I'd expect if there's even a bit of dead code elimination or constant propagation going on. In that case the debugger can sometimes show "statements" being executed that don't exist in the compiled program (and so of course they have no effect).

Try looking at the assembly?

crazypenguin
Mar 9, 2005
nothing witty here, move along

Ender.uNF posted:

I know this is probably a dumb question but perhaps we have a compiler writer or C history expert here who can answer.

Why does C require every declaration of a struct or enum be qualified with the struct/enum keyword? Obviously we all just typedef that away, so why did K&R bother and why hasn't any proposed version of the C spec changed it?

Before typedefs, C was LALR(1) and quite trivial to parse.

The introduction of typedefs created an ambiguity in the grammar, and now everything that parses C-derived languages has to use "the lexer hack," which is to build the symbol table while parsing, and using that information to disambiguate.

The problem is simple code like "a * b;" which will parse differently if "a" was a declared as a typedef earlier in the file or not.
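To make it concrete:

code:
typedef int a;   // With this line, "a * b;" below is a declaration.
                 // Without it (and with variables a and b in scope),
                 // the exact same tokens parse as a multiplication
                 // expression. The parser can't tell which without
                 // the symbol table.

int main() {
    a * b;       // here: declares b as "pointer to int"
    return 0;
}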

This, plus the preprocessor, was pretty much C's "original sin" that led to all C-derived languages having pretty poor tooling today. (C++ added all kinds of new mistakes along the same lines. Not just needing symbol resolution, but having to do type checking and template expansion during parsing, too.)

crazypenguin
Mar 9, 2005
nothing witty here, move along
Try the suggestion in this question: http://stackoverflow.com/questions/9371238/why-is-reading-lines-from-stdin-much-slower-in-c-than-python

crazypenguin
Mar 9, 2005
nothing witty here, move along
I don't think there's any need for an array at all?

If you've got 15 simplification rules and like 5 subtypes of expression, that's an average of 3 things to go look for in each method of the visitor. I have no idea why you think this is ugly or too much for a method?

Are you forgetting that in each visitor method you know what the root of that subtree is? Like, you don't need to check the OR rules in "visitAndExpr".

crazypenguin
Mar 9, 2005
nothing witty here, move along
The hierarchy should almost certainly be flat. And there's no way constants as a subclass of variables makes sense even if it wasn't flat.

And you'll probably want helper visitors. Consider writing a "constantValue" visitor that returns an enum of NONCONST, CONST_TRUE, CONST_FALSE, or something like that. (Or perhaps two, "isConstant" and "constantValue", that each return a boolean.) Once you have this, there's probably no reason you'd ever need to ask specifically what class a child tree is rooted with.

edit: also, I'd say your "coding issues" are stemming from some conceptual problems here, so asking the TA is probably appropriate.

crazypenguin
Mar 9, 2005
nothing witty here, move along

hooah posted:

What do you mean by a flat hierarchy? We're required to use the Composite Pattern for the expression stuff.

What are helper visitors? I also don't know about enums :/

Flat hierarchy means everything should derive from Exp. Nothing deriving from other things that derive from Exp.

I'm not sure what the composite pattern is, so perhaps they're making you do something that makes things hard for no reason.

Helper visitor is just another visitor, like the ones I suggested. If you haven't done enums, then writing the two I suggested that just return booleans works too.

crazypenguin
Mar 9, 2005
nothing witty here, move along

quote:

When you mentioned constants as a subclass of variables not making sense, then how would you deal with the literals ("true" and "false")?
Depends on how you want to do it. I would have a True class and a False class each deriving from Exp. An alternative would be to have a Constant class with a boolean member. However you like, just prefer flat over subclasses.

quote:

I can sort of see where you're going with this, but let me see if I actually am: isConstant would check if something is a literal, and constantValue would return the value for that literal?
Yeah, and perhaps recursively. An AndExpr could be const if both its operands are const, for example.
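A sketch of what that could look like — class names are made up, and I've folded the logic into a virtual method for brevity; in the real assignment it would live in a helper visitor's visit methods:

code:
enum ConstValue { NONCONST, CONST_TRUE, CONST_FALSE };

// Made-up flat Exp hierarchy: everything derives directly from Exp.
struct Exp {
    virtual ~Exp() {}
    virtual ConstValue constValue() const = 0;
};
struct TrueExp  : Exp { ConstValue constValue() const { return CONST_TRUE; } };
struct FalseExp : Exp { ConstValue constValue() const { return CONST_FALSE; } };
struct VarExp   : Exp { ConstValue constValue() const { return NONCONST; } };

struct AndExp : Exp {
    Exp *left, *right;
    AndExp(Exp* l, Exp* r) : left(l), right(r) {}
    ConstValue constValue() const {
        ConstValue l = left->constValue(), r = right->constValue();
        if (l == CONST_FALSE || r == CONST_FALSE) return CONST_FALSE;
        if (l == CONST_TRUE && r == CONST_TRUE)   return CONST_TRUE;
        return NONCONST;   // at least one side is non-constant
    }
};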

crazypenguin
Mar 9, 2005
nothing witty here, move along

tractor fanatic posted:

The idea is to have a restricted set of operations on the data in the struct that the compiler can enforce, but I don't need any virtuals because each interface is specific to this struct itself, kind of like how const partitions the set of operations into mutating and non mutating ones.

Sounds like a good time to pick composition over inheritance. Just have a bunch of classes with the struct as a private member.
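i.e. something like (hypothetical names):

code:
struct Data { int x; };   // the plain struct

// Each wrapper exposes only the subset of operations it's allowed.
class Reader {
    const Data& d;
public:
    explicit Reader(const Data& d) : d(d) {}
    int get() const { return d.x; }     // read-only view
};

class Mutator {
    Data& d;
public:
    explicit Mutator(Data& d) : d(d) {}
    void bump() { ++d.x; }              // the one mutation allowed
};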

crazypenguin
Mar 9, 2005
nothing witty here, move along
Import performance for sqlite should be pretty good. Have you googled around for tricks on how to do it faster?

For instance (my recall is very rusty, so verify with google before believing the specifics), there are a bunch of things turned on by default, like I think journaling and locking and maybe a default index, that can be disabled while you're importing. And I seem to recall that importing the data, THEN creating your indexes is faster than the other way around.
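From memory (so again, check the sqlite docs before trusting the specifics), the shape is roughly:

code:
#include <sqlite3.h>

void bulk_import(sqlite3* db) {
    // Trade durability for import speed; these pragmas exist, but
    // check the docs for their exact semantics.
    sqlite3_exec(db, "PRAGMA synchronous = OFF;",  0, 0, 0);
    sqlite3_exec(db, "PRAGMA journal_mode = OFF;", 0, 0, 0);

    sqlite3_exec(db, "BEGIN;", 0, 0, 0);   // one big transaction,
                                           // not one per INSERT
    // ... prepared-statement INSERT loop goes here ...
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);

    // Build indexes after the data is loaded rather than maintaining
    // them row by row during the import (table/index names made up).
    sqlite3_exec(db, "CREATE INDEX idx_foo ON foo(bar);", 0, 0, 0);
}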

crazypenguin
Mar 9, 2005
nothing witty here, move along
You have statements at file scope. Wrap them in main or something.

crazypenguin
Mar 9, 2005
nothing witty here, move along

Joda posted:

What baffles me the most is that I always get the right result with f2,

Memory model. That function (f2, not the other one) will almost certainly (even without optimizations) read the values of m, n, x once at the beginning of the function and write them out once at the end. With optimizations, it gets even better since it won't even loop. The other function always reads and stores for every access and update, and the loop always stays.

There should be an interleaving which causes f2 to give not-0, but it might actually be very hard to trigger. There could even be weird cache-(in?)coherency behavior on that part of your processor which could make it almost impossible to trigger.

The only sure-fire way to see it happen would be for one thread, on one core, to be interrupted in the middle of reading the two values of m and n, and to switch to another thread (on the same core) which updates them. This is a window of 1 cycle (because m and n are likely on the same cache line: as soon as the processor loads the value for one, it has the other, too). These threads are so simple and short to run, they're probably never interrupted at all. Each one just runs to completion when it gets CPU time. So you probably can't induce a bad interleaving without making them more complex.

crazypenguin
Mar 9, 2005
nothing witty here, move along

I think you should put your post back. It's more concrete than my high-level waving around. And it at least corrected my claim that it'd do loads and stores only once in the whole function without optimizations. Evidently (if I recall correctly; you removed the assembly dump, so I can't double check!), it was doing the loads at the start of the loop and the store at the end of the loop (without optimizations, right?)

crazypenguin
Mar 9, 2005
nothing witty here, move along

ullerrm posted:

Sadly, Linux is the main offender here. Userland apps don't have a way to force commit.

I don't know if it's documented behavior, but one thing I know the JVM does is use mmap() to allocate memory, then immediately re-mmap the allocated memory to the same place, this time using MAP_FIXED. This seems to cause the (re)allocation to fail if there is insufficient available memory for the whole thing.

However, this doesn't actually reserve the memory for the app; it just seems to force the system to be okay with returning an allocation failure right then, regardless of overcommit settings.
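Roughly this, if I'm reading the trick right (POSIX; a sketch of my understanding, not verified against the JVM source):

code:
#include <cstddef>
#include <sys/mman.h>

void* reserve(std::size_t len) {
    // First mapping just picks an address range.
    void* p = mmap(0, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 0;

    // Re-map the same range with MAP_FIXED; the observation above is
    // that this fails right away if the memory can't be backed.
    void* q = mmap(p, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (q == MAP_FAILED) { munmap(p, len); return 0; }
    return q;
}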


crazypenguin
Mar 9, 2005
nothing witty here, move along

Blotto Skorzany posted:

What is the title of that C-and-C++ course, and what is its ultimate purpose?

Pretty much this. Every course design should start with learning objectives.

Talking about the specific things they should learn about C/C++ is just a (bad) proxy for talking about what they should be prepared for after taking the course. Go examine what courses list it as a prerequisite, and talk to those instructors about what they need students to know from the course. Then design from there.
