That Turkey Story
Mar 30, 2003

tef posted:

welp a game designer writes terrible code

I didn't know you were a game designer!


That Turkey Story
Mar 30, 2003

You know, I bet your code does increase in quality by a ton when you have that many people watching you code. You're both probably way more self-conscious and people will be catching your bugs all the time for you entirely for free as soon as you write them.

That Turkey Story
Mar 30, 2003

Toady posted:

I don't think it's accurate to pin negative opinions on nerd jealousy or increased scrutiny due to success. Minecraft's code is objectively bad. It doesn't even combine identical adjacent quads.

So what, how often would that actually happen in a game like minecraft?

Oh yeah.

Edit: Although, to be honest, I highly doubt that would ever be a bottleneck anyway.

That Turkey Story
Mar 30, 2003

Toady posted:

Quite often since most buildings and other user creations, plains-like biomes, oceans/lava, etc. have flat surfaces.

Yeah, that would be the joke.

That Turkey Story
Mar 30, 2003

Factor Mystic posted:

From reddit

Ok, javascript post. More laffeaux javascript nonsense?


Nope, just floating point. And this is pointed out:


But then,


Uhhh...

This post makes me so depressed.

That Turkey Story
Mar 30, 2003

Suspicious Dish posted:

My favorite is the guy who thinks that your memory has the capacity to repeat to infinity.

Ummm, his name is base2op. I think he knows what he's talking about....

That Turkey Story
Mar 30, 2003

Zombywuf posted:

idgi

What does the fact that the Derived has a Base as a private member have to do with operator overloading?

Agreedo.

That Turkey Story
Mar 30, 2003

SavageMessiah posted:

How about coding standards horrors? I'm "improving" some code I wrote for our parent company by switching over to their "standards".

I can't call functions in if conditions. I can't use booleans implicitly in if conditions (true == blah ahoy!). I have to use single point of return. Hungarian notation.

<snip>


... this is it, folks, this is how we write good code!

That Turkey Story
Mar 30, 2003

It's so hotta here.

That Turkey Story
Mar 30, 2003

quote:

PHP is a language with many high-level functions and while they're not always implemented as consistently as we'd like (mostly to blame on its underlying C parts)

Err... what??? How can you possibly push the blame onto C?

That Turkey Story
Mar 30, 2003


Maybe I'm missing something, but isn't your example just equivalent to a switch nested inside of a while loop:
code:
{
  int s = some_var;
  while (true)
  {
    switch (s)
    {
      // your cases here as written, with `continue` re-entering the loop;
      // "break" out of the state machine via `goto a_goto_label;`
    }
  }
}
a_goto_label: ;

That Turkey Story
Mar 30, 2003

SupSuper posted:

It might be horrible, but the readable comments and variable names put it way above a lot of code I've seen.

I dunno, I think I'd rather have a nice, clear 5-line function than a 1000-line mess that does the same thing, coupled with several paragraphs of comments.

Edit: Even if they used 1-letter variable names I think it'd be better at 5 lines.

That Turkey Story
Mar 30, 2003

At a place I worked they did something similar regarding linked lists. I can relate to a lot of that stuff. The game industry really needs some fixing.

That Turkey Story
Mar 30, 2003

Some day, people will more widely embrace generic programming and we won't have to deal with retarded, inflexible hierarchies of poo poo.

That Turkey Story
Mar 30, 2003

Pilsner posted:

Just curious, if I ever were to code some C++ again, is Boost like a good wrapper for all the horrible tedious stuff in basic C++, like having to write 10 lines of code to read a file and such? What about networking, image handling, databases, etc?

Reading files was already mentioned, but as for networking there's Boost.Asio, for image handling there's Boost.GIL.

E:

ToxicFrog posted:

Don't forget to delete[] (not delete) buf when you're done with it!

If you're manually using new and delete for dynamic memory allocation in modern C++, you're probably missing a better alternative in the standard library and/or boost.

That Turkey Story fucked around with this message at 19:35 on Sep 10, 2012

That Turkey Story
Mar 30, 2003

I'm only saying "probably" to be a pedant. unique_ptr is good for most cases where you'd use dynamic memory allocation, even when building higher-level data structures, and shared_ptr and weak_ptr are good enough in those situations where one wants shared ownership. It's pretty difficult to come up with situations where you'd manually use new and delete for dynamic memory allocation in place of tools like these.

That Turkey Story
Mar 30, 2003

shrughes posted:

Boost is more like what happens when you take a bunch of sperglords who feel the need to make poo poo complicated in order to feel smart about themselves.

This is what programming plebes actually believe.

That Turkey Story
Mar 30, 2003

Boost isn't perfect, but it's generally as complicated as it needs to be and no more. Libraries go through peer review before being accepted. If you think something specific can be simplified without sacrificing functionality then post to the mailing list and someone will either listen to you and make changes or they will kindly explain existing rationale. If you have patches, you'll be all the more welcome.

That Turkey Story
Mar 30, 2003

I realize you're likely trolling but I'm going to respond anyway because that's what I do.

shrughes posted:

No, this is what is actually true. Or rather, it is the theory that most accurately fits the facts. Pick any boost library. For example... *picks at random* the boost uuid library. It defines a struct that contains a 16-byte char array and has a few functions for creating UUIDs. It somehow uses or includes something that uses MPL.

It's almost as if MPL is a useful, generic library that lots of other libraries depend on. Imagine that! It must be terrible! I can't believe that one Boost library would depend on another one from Boost!

shrughes posted:

Let's pick another ... *picks at random* the boost spirit library, probably the archetypal example. We implemented a bunch of spirit parsers and it's clearly overengineered garbage. Infinitely slow to compile, not fast or optimizable to run. It's always simpler, more debuggable, and more editable by coworkers, to pound out your own recursive descent parser, or to use some other method of building a parser.

If you really believe that then go right ahead and do so, though I highly doubt your claim that rolling your own recursive descent parser is easier for anything even remotely non-trivial. Of course, using Spirit would require actually learning Spirit without just whining like a baby and deciding that it'd be better to write everything from scratch. Clearly it's so much better to reinvent the wheel than it is to work with a well-designed, well-tested, and heavily used library. Plus, if you used Spirit, people would be able to look at the Spirit documentation as a means to understand your parser, rather than having to study your code to figure out some ad-hoc, likely buggy implementation. And we certainly can't have that.

Boost.Spirit exists for people who want an EDSL for parsing and output generation without the tedium and chance to introduce bugs that comes about from hand-rolling parsers from scratch. You are obviously not one of those people, so don't use it. That's fine, but if you really don't see the advantages of it then you're just completely oblivious.

shrughes posted:

Let's pick another ... *picks something "safe"* scoped_ptr. It has an implicit conversion operator.

That's called the convertible-to-bool idiom, and it's both good practice in C++03 and not specific to Boost. It's explicitly implemented in a manner that gets around the drawbacks of implicit conversion operators. The idea is you want to be able to do if( my_pointer ) just like you can with pointers but without having implicit conversion to bool (you don't want implicit conversion to bool because bools can be promoted to integers due to integer promotion rules). Instead, you convert to a type that can be used in a boolean context but that's not a bool itself, and so it doesn't suffer from the integral promotion problem.

The real point here is that probably 90% of programmers don't know this and don't have to know this. Thousands of people use the library without ever having to think about it or even look at the implementation because there is no reason for them to, as is the case with most well-designed, well-tested code.

But of course, it's much easier to just criticize code you don't understand than actually ask someone about it or look into the rationale yourself. Certainly you can do it better, so just make your own -- oh wait, don't tell me, you've done so already. Have fun maintaining that while everyone else uses a more reliable library that the community in general is already familiar with.

shrughes posted:

Let's pick... shared_ptr: it is all mixed up with weak_ptr, to make it easier to make your code unnecessarily complicated.

What do you even mean here by calling them "mixed up?" weak_ptr and shared_ptr are two sides of the same coin. You use weak_ptr to have a non-owning reference to the object that a shared_ptr references. weak_ptr's implementation depends heavily on that of shared_ptr. Clearly those crazy Boosters were onto something, since shared_ptr and weak_ptr are standard in C++11. Man, what were they thinking!

shrughes posted:

Let's pick... boost serialization. "Oh, you want to serialize stuff? Sorry, you're using some slow library that wants to keep track of pointer graphs [and iirc, uses RTTI]."

Yeah, if you want to "serialize" pointers, then that's the automatic way to do it. If you don't want that functionality then you don't have to use it. God forbid a library has a feature that you personally don't use -- that must mean it's over-engineered nonsense! It must have just been put in there because the guy wanted to look smart, not because he, you know, had a use-case for it (believe it or not, a lot of people use that feature, though I personally do not).

shrughes posted:

The best example of making poo poo complicated for complete vanity is this binary conversion utility. It has a billion preprocessor expansions when you can just use a hexadecimal or octal constant, or convert at run time, or glue different field width sections together with a macro, or run an octal constant through some bitshifting...

Har har, yeah, I made that, which I'm sure you already knew, and actually it owns. I know this may surprise you, but converting at run-time is often not an option, for obvious reasons, and the whole point in writing a value directly in binary is because you are often trying to represent something that is most clearly expressed... in binary (I.E. bitmasks). Writing your constant in hex or octal often does not show your intent, whereas writing it in binary does. If you don't see the advantage of being able to write a literal in binary, then do you at least see the advantage of writing literals in octal or hex, or are those useless to you as well? After all, why not just write everything in decimal all of the time? Those people who use hex and octal must just be trying to look smart!

Anyway, if you really think it's for complete vanity, it's not. The reason that the macro exists is because other people wanted it -- I wasn't even the original one to propose it. In fact, I didn't even write it until a review for someone else's implementation was already underway (which first requires agreement that such a utility is useful to begin with). People show desire for a facility, there is a request for consideration of an implementation, it goes through peer review, and then it is accepted or denied (and a lot does get denied). It's not just a bunch of know-it-all programmers throwing in whatever they want.

shrughes posted:

It's very easy to create a "rationale" for any feature: you need it to do X. The argument against making things complicated is less tangible -- it is a sense learned by experience and not derivable through software philosophy.

What you refer to as "complexity" is what other people call "features." The functionality that exists in Boost is there because it is what people want or need. If you can implement that same functionality but in a simpler manner, then go right ahead and do so, and post to the mailing list about it. People will gladly listen and possibly use your code. If, on the other hand, all you do is complain about a library having a feature that you personally don't use or about techniques that you don't understand, then you're going to be rightfully laughed at -- not because those people are trying to be smart, but because you are being stupid.

That Turkey Story
Mar 30, 2003

Shinku ABOOKEN posted:

Inexperienced opinion here:
The thing I hate the most about C++ is the polluted namespace. Can't include anything without having gazillion macros in the namespace.
Why can't macros be namespace or file local? :argh:

Macro scopes were proposed for C++11 but ended up being postponed.

That Turkey Story
Mar 30, 2003

shrughes posted:

- No, it's not reasonable for UUID to use MPL.

Why? Because you say so? What if it included the C++11 <type_traits> header instead of MPL and used std::conditional instead of boost::mpl::if_? You seem to have this unnatural fear of MPL. People aren't going to arbitrarily avoid using MPL if one of its facilities is useful to them. It's not like including one little fart in MPL pulls in the entire thing. Maybe you don't want to pull all of Boost into your company's project. Fine, although that's rarely a worthwhile concern, especially given that the vast majority of boost is header only and you likely aren't going to be frequently altering it once it's in your project.

shrughes posted:

The fact that you think it's reasonable is an example of why Boost is such a horrible set of libraries: you don't care about quality, and you don't understand the benefits of keeping code simple with a constrained set of dependencies.

So because I don't care that UUID indirectly uses a part of MPL, I all of a sudden don't care about quality or having a constrained set of dependencies? Explain to me how you've come to this conclusion and how your life is so horribly impacted because UUID indirectly relies on MPL. Did MPL kill your parents or something? MPL is a toolset of very general and very useful compile-time utilities. Programmers don't just use it to be cool or to pull in dependencies.

shrughes posted:

- You think I don't understand how Spirit works. I have used Spirit for parsing and used other people's uses of Spirit for parsing big and little things. We're no longer using it.

Your personal opinion of an individual library doesn't greatly affect mine, or other people's, because we have our own experience to rely on. Yeah, parsers generated by Spirit often take a long time to compile, knowing Spirit is a hurdle to understanding Spirit code, and you generally need to be comfortable with expression templates to be able to work with it. You've weighed your options and decided that for your project you do not want to use it. Fine, but using that as a reason to say that the library is of poor design is foolish and ignorant. Different people have different projects with different requirements, and people working on code often have different skill sets. Spirit is not the end-all be-all of everything related to parsing that everyone will use for every project. That doesn't imply that it is poorly designed, and it doesn't imply that everyone should hand-roll their recursive descent parsers from scratch as you have personally decided to do. Is there some specific implementation flaw that you'd like to criticize, or do you just think parser generators in general are useless and a waste of time?

shrughes posted:

Y'know, when somebody says that they used to think X but then experience made them change their mind to Y, while you still hold the opinion X, it means you should reconsider X.

Believe it or not, I too form my own opinions based on experience and rational thought, not because shrughes says so. If you have something worthwhile and objective to say, I'll actually listen. If, on the other hand, you have some arbitrary philosophical criticisms about one Boost library including part of another Boost library or some personal anecdote, then I really don't care at all.

shrughes posted:

- Saying that people "want" Spirit does not mean it's a good idea. I don't know why you'd think it means it's a good idea. It's a plausible implementation of a bad idea.

I didn't say that's why it's a good idea, just that it's not some arbitrary library forced down the public's throat. It exists as one very powerful option for filling a common need. You have yet to explain why Spirit is objectively a bad idea. I understand that in shrughes's personal opinion it's garbage and you've decided not to use it for your project, but you've failed to explain to me why it's not good design or why it's not good at what it does.

Do you think that Spirit is useless for everyone or just you? If just you, then how does that imply overall poor design? If everyone, you'd better make a pretty strong case and an alternative approach that doesn't involve writing everything from scratch all of the time, which you haven't.

shrughes posted:

- I know about the convertible-to-bool idiom, I've written it myself, and then removed it, and I just included reliable evidence that I looked at the code in my previous post, so why are you acting like I don't know how it works?

Because you talk about it as if you don't know how it works. Yeah, it's a weird idiom, but it is effective... so what exactly is your point? That if some random person sees the code they might be confused for all of a few seconds? If you're really that uncomfortable with it, write a little comment rather than deciding to sacrifice the functionality it provides.

A lot of smart pointers do this and people don't whine about it because there's nothing there to whine about. Like I said, in the real world nobody cares what's going on when they do if( my_pointer ), nor should they have to. It works, it's correct, it's efficient, and perhaps best of all, they didn't have to write it. What's more, they're probably never going to look at the source, just like they'll rarely look at their standard library's source.

If I took two smart pointer types and showed their usage to a programmer, pointing out that one allows you to do if( my_pointer ) and the other makes you do something along the lines of if( my_pointer.is_valid() ) or if( my_pointer.get() ), do you honestly think that they give a poo poo that one uses an idiom that they may or may not be aware of underneath the hood, and that it will somehow negatively impact their ability to write code or use the library? Further, the interface difference is unbelievably superficial. Would I really care if I had to do if( my_pointer.is_valid() ) all of the time? No, but if there's a more concise way of writing it, I might as well take advantage of it.

shrughes posted:

Here's a conclusive example where even you should recognize that you were wrong about something. Given this new self-knowledge, you should probably trust your opinion about other matters less than you do. The convertible-to-bool idiom is another example of the X -> Y movement where you're still at X and I've moved on to Y.

:what:

shrughes posted:

- Writing OCTAL_BINARY(01101011100) (or heh, just using hex) does indeed show your intent.

First off, your macro doesn't work for large binary values, which, believe it or not, is exactly the primary use-case for writing in binary (again, bitmasks). It's also very easy to misuse by simply forgetting the leading 0, in which case you won't get a compile error, you will just get the wrong result and a very subtle bug. There are also other features of the macro as it exists in Boost that yours does not have, such as suffixes and groupings, but I'm sure those are just entirely pointless to you and so therefore must be poor design. Finally, that macro doesn't do a lot of preprocessor metaprogramming, but it's not exactly easy to figure out what's going on there or that it even works for the full range of values (again, it doesn't).

Also, just using hex generally does not directly show your intent when working with bitmasks. First off, when writing a bitmask you have a bit pattern in mind -- that's why it's a bitmask. If you have to write the value in hex or octal, you're going to have to appropriately, and manually, convert it. Similarly, for a reader, the first thing you're going to do to figure out what the bitmask corresponds to is convert the hex or octal digits to binary in your head. Writing the value, you know, actually in binary directly, removes both of these unnecessary steps, and in doing so removes some of the chance for human error. I understand that you have the correspondence between every nibble and every hex digit memorized, but plenty of people don't, nor should they have to to quickly and easily write out something as trivial as a bitmask.

shrughes posted:

- Yes, I have written smart pointer classes that did use the convertible-to-bool idiom and now don't. And guess what: they're reliable and don't require "maintenance." (Why the hell would you expect a smart pointer type to be hard to maintain? I know, maybe it's because you like to introduce unneeded dependencies to other libraries.)

All libraries require maintenance, shrughes. You're telling me you literally poo poo out gold that was completely bug-free from the start, manifested by God, complete with tests that have been run on dozens of setups, all in the time it takes to write #include <some_library_header>? Forgive me, I wasn't aware that I was in the presence of the programming messiah.

Go ahead, write all your trivial bits of code from scratch because you have some arbitrary philosophical gripe with already written, heavily-tested code that does exactly the same thing and that thousands of other programmers are already familiar with. I personally love working on projects where some supposedly hot-poo poo programmer likes to write everything on his own, including trivial smart pointers, rather than use existing solutions to problems solved a decade ago. I know, it must be fun to write or learn yet another smart pointer for a given project that has no actual benefit over something in the C++ standard or in Boost. That's really a great approach to programming. There is no possible way that that time would have been better spent writing code specific to whatever domain you are actually dealing with.

Plorkyeran posted:

A dependency the size of MPL is not something to throw in to simplify some already simple code unless your primary goal is to ensure that the only way to use boost is to depend on all of it, rather than to create a useful library.

You make it sound as if by using a part of MPL you are using all of MPL. Many MPL headers are very tiny and don't include a lot of other stuff. The library is also header-only. It's not like when you #include <boost/mpl/bool.hpp> you are pulling in <windows.h> or something. The headers are properly fine-grained and the only things that generally make them larger are workarounds for compilers you may not need to personally support. Things like boost::mpl::if_ are extremely useful for many libraries, which is why the equivalent made its way into the C++ standard. It's no wonder that a lot of libraries use it. Of course, I wouldn't imagine you'd throw a hissy fit if someone wrote #include <type_traits> to get std::conditional, but in actuality that pulls in more unused code than #include <boost/mpl/if.hpp> does just to get the equivalent functionality of boost::mpl::if_c.

Plorkyeran posted:

I think I would be less annoyed by boost if it admitted it was a monolithic blob with some optional components rather than pretending that it's actually modular. I don't really get why bcp even exists, as I've never seen it output something reasonable to distribute along with the application source.

I wasn't aware that Boost pretended to be anything but monolithic. It is intended for a user to download all of boost and drop it into a project without hassle. BCP is a tool for very specific cases and is not intended for general use.

That Turkey Story
Mar 30, 2003

Vanadium posted:

Also D lets you assign non-static member functions to function pointers without a cast or anything, and the function pointers will segfault when called while this doesn't happen to have the right type in the lexical context of the call. :stare:

Wait, what? What was even the rationale for this?

That Turkey Story
Mar 30, 2003

Senso posted:

Same guy trashed linked lists in his next post. I'm a newbie regarding that but what's the general opinion of this thread, is he right? Linking directly in the object (intrusive lists) is preferred to using linked lists?

Intrusive containers are great, but they are certainly not universally better than non-intrusive ones, despite his claims. Rather than go through everything here, the Boost documentation has an actual sensible, not-one-sided list of advantages and disadvantages (and with an implementation that is much more usable than Patrick Wyatt's and actually gives them standard-compliant iterators, meaning that you may use them with generic algorithms... of course, the header file is longer than his so it's clearly inferior :rolleyes:).

Like anything else, you weigh your options. Lots of types (containers, smart pointers, etc.) may be implemented intrusively, with varying benefits. If you are dealing with value types, a std::list is often a better choice and is easier to reason about. Throwing down your fist and saying that users should always prefer intrusive over non-intrusive containers is ridiculous and at the very least forces people to intrusively alter their types simply because of the containers they may be stored in when it may not even be worthwhile, and it forces you to change your types if you simply change where/how they are stored.

Also, a fair amount of what he says is not entirely true:

Patrick Wyatt posted:

Here is code that safely removes a person record from the linked list:

code:
// NOTE: O(N) algorithm
// -- requires list scanning
// -- requires memory deallocation
void erase_person (person *ptr) {
    std::list <person*>::iterator it;
    for (it = people.begin(); it != people.end(); ++it) {
        if (*it != ptr)
            continue;
        people.erase(it);
        break; // assume person is only linked once
    }
}
Tragically, this unlinking code is awful: for a list with N entries, on average it is necessary to scan N/2 entries to find the element we’re interested in removing, which is why a linked list is a bad choice for storing records that need to be accessed in random order.

More importantly, it's necessary to write list-removal functions like the code above, which takes programmer dev-time, slows compilation, and can have bugs. With a better linked-list library, this code is entirely unnecessary.

A few things here. First, if you want to remove something from the list in constant time, you usually hold on to an iterator instead of a pointer, which you get when you insert the element into the list. That allows you to erase it in constant time with no searching required. If your pointer originally came from somewhere else, then yeah, you will have to search, but again, that is only a subset of cases.

Second, his claim that it forces people to write "list-removal" functions such as the one he wrote above is completely untrue. Despite deciding that std::list is universally bad, he apparently doesn't even know the STL well enough to realize that std::list has a member function for removal (imagine that) and it's called, unsurprisingly, remove. You just write your_list.remove( your_person );. You don't have to manually write a search as he explicitly claims and cites as a reason why his intrusive list is "better" than std::list.

Third, but admittedly on somewhat of a tangent: in cases like these, people very often mistakenly choose a std::list over something like a std::vector simply because they want constant-time removal from the container. There are a couple of issues here. First, you can remove items from a vector in constant time; it just alters the stored order and potentially invalidates an iterator -- you do so by swapping the element you want to remove with the one at the back and then calling pop_back. Second, if you are dealing with something like a container of pointers (or really anything that is trivially movable), as in the example, erasing something from the middle can generally just be a memmove on the chunk of data that appears after the removed element anyway, and will even preserve storage order. It's also important to note that neither of these operations requires a call to your allocator's deallocation function, unlike with a std::list, where a node has to be deleted. For small lists, it's unlikely the difference between the constant-time erasure and the erasure from the middle of a vector of trivially-movable types, such as pointers, is significant at all, and depending on a number of variables, including the size of the container, how allocation/deallocation is performed, and where your list nodes happen to be in memory in relation to one another, the vector erase may end up being faster, even if you don't use the swap trick. Finally, linked lists provide constant-time removal, but iteration over them is potentially much slower and not cache-friendly. If you are iterating over the list frequently (I.E. every frame, or almost every frame), but are removing things from the list only on occasion (I.E. when an object moves from one sector to another, or when an item is removed from inventory or a selection list, etc.), you should probably be valuing fast iteration a lot more than fast removal (though in those cases, neither would likely be a bottleneck anyway).

Anyway, the point is, you always need to weigh your options. The differences between std::list, intrusive lists (whatever implementation you choose), std::vectors, and any container are trade-offs. One is not universally better than another.

Senso posted:

EDIT: Eh, he's also trashing Boost, since we're talking about it.

Not uncommon for game programmers and people who always roll their own datastructures because it's easy :rolleyes:. "This header file is big, but I can write it in fewer lines so therefore my version must be better! Whoa, a metafunction, what is this voodoo nonsense!?!" The boost containers provide compliant iterators so that you can actually use them with generic code; they are well tested, documented, and programmers know them. Even if they don't use boost, anyone who knows the language's standard library already knows 90% of how to work with a boost container, and if you are using some hand-rolled solution that doesn't do this, there is zero chance that anyone else is going to be familiar with your code before being brought onto the team. In addition, since a solution such as boost's provides compliant iterators, you don't have to write your own algorithms for everything. Of course, since he doesn't even know about std::list::remove, I doubt he's ever touched <algorithm>.

Edit:

shrughes posted:

Actually it runs around more like 80 lines. It could potentially balloon to 90 if I ever need to iterate the thing without consuming it.

Mine is designed to make nodes inherit from a base class though, so it's a bit different, you'd need some ugly inheritance to get an object into two lists at once. I'll have to write a non-base-class version though to see how that works out.

Or, you could just use a peer-reviewed, efficient, tested, open source solution that people already know instead of making your own intrusive container, apparently for at least the second time, programming messiah. *cough*

That Turkey Story fucked around with this message at 22:58 on Sep 12, 2012

That Turkey Story
Mar 30, 2003

Vanadium posted:

I'd like to suggest that the D motto is "Those who cannot learn from history are doomed to repeat it" but I can't really argue against the combined Walter Bright and Andrei Alexandrescu C++ learnin'.
I find this to be the case with most languages. It seems like for every language that introduces something useful, they get rid of something else that is useful because it's "complicated" (who cares if it's useful and correct -- it's scary!).

That Turkey Story
Mar 30, 2003

HORATIO HORNBLOWER posted:

What a nonsense post. Part one was intriguing, but this is just a mess. My three favorite things about it: a) purports to explain how to avoid game crashes while concerning itself exclusively with performance; b) goes out of its way to point out that the author did not invent the concept of a linked list without separate container objects, like, no poo poo, sherlock; and c) advocates hand-rolled code as less susceptible to bugs than standard libraries.

C) is what really gets me more than anything else, and it happens with tons of programmers. How big of an ego do you have to have to think that something you write ad hoc for a project is going to have fewer bugs and be better designed than a standard or open-source solution put together by one or more people who have devoted their time explicitly to it, usually with plenty of tests and other people using it, improving it, and finding bugs? This is true even, if not especially, for something trivial -- I say especially for something trivial only because there are probably fewer potential trade-offs that could actually make a difference for a project.

Anyway, if the code really is so trivial that you could write it in an hour or two and be relatively convinced of its correctness, why would you spend any time writing it at all, particularly with even the slightest chance that you may mess something subtle up? For something so trivial, there are probably plenty of already-written alternatives that other people know. What are you gaining by reinventing the wheel? Even if it just takes an hour, devote that hour to whatever specific task you are trying to accomplish, not some mundane little algorithm or data structure.

That Turkey Story
Mar 30, 2003

Ithaqua posted:

Some individuals / organizations have an insane sense of pride that all of their software uses no third-party code. I interviewed at a place like that; they weren't amused when I suggested that they implement their own operating systems, web servers, and database servers to be truly 100% in-house.
I wish I had the balls to do that, but I tend to feign admiration instead.

That Turkey Story
Mar 30, 2003

Contero posted:

I'm curious what your list of retained features would be to make a successful successor to C++.
Any list I try to quickly make here is bound to be incomplete, but of the things that many other modern, mainstream, statically-typed languages don't have, such a language would need: something analogous to templates that allows for generic programming with the efficiency and genericity that templates provide (note that "generics" are not it); value semantics (in other words copy constructors, assignment operators, and preferably move operations, etc.); destructors that are automatically called when a type leaves scope; function overloading (and preferably operator overloading, though not strictly necessary); some way to do compile-time metaprogramming; something akin to either concept maps or ADL, but preferably just concept maps, ditching ADL; and both signed and unsigned integers, for god's sake. I'm probably missing obvious stuff, but these are the ones that jump to mind as being really important and that some modern languages don't have.

Contero posted:

My biggest issue is that the first thought when replacing C++ always seems to be "So this language is going to be like C++, except with garbage collection! Then we can get rid of those nasty, confusing destructors."
Yes! I hate this and I see it all the time. That so many programmers don't understand the flaw in this is especially upsetting.

GrumpyDoctor posted:

Garbage collection isn't nice because destructors are hard, garbage collection is nice because keeping track of object lifetimes is cognitive overhead and if you can get rid of it then why the hell not?
There's nothing wrong with garbage collection, but its uses are far more limited than many programmers understand and it is frequently misused. The biggest issue comes from the fact that it's not a replacement for destructors, and since memory allocation and deallocation often involve types that need to manage some kind of resources other than memory, you still need to be explicit about disposal of resources when in the presence of garbage collection, otherwise you get non-deterministic disposal or no disposal at all (it's why languages have things like C#'s using statement, which is unfortunately not anywhere close to a replacement for destructors).

Realistically, garbage collection is fine in cases where 1) non-deterministic memory management is acceptable, 2) you actually want shared ownership of a given object (garbage collection is pointless for clearly scoped objects), and 3) where disposal of the type is trivial and predictably will be trivial in the future (i.e. in C++ terms, any C++ type with a trivial destructor or that only directly or indirectly dynamically allocates types that have trivial destructors).

In terms of #1, most people are okay with it, and that's fine. I'm including this for completeness and because it isn't always acceptable. If you really are trying to make a general-purpose language, this is definitely important to understand. Optional garbage collection is one thing, but if you force it for all dynamically allocated objects then you're ruling out a whole class of users for no reason.

In terms of #2, you should already be striving to minimize or eliminate shared ownership to begin with -- in C++, you do that with value semantics. Take a look at all of the C++ standard libraries and all of the boost libraries, for example. Specifically, which of their components would be significantly, if at all, impacted by the presence of garbage collection, and how would it make their implementation better, easier to understand, or even easier to write for that matter? Since they use value semantics, pretty much none of them (you could probably stretch and say something like shared_ptr could be metaprogrammed to have a different implementation in cases where types have trivial destructors).

Further, as for #3, even if you're not dealing with value semantics, if your types are nontrivially disposable, or if your code (especially generic code) may contain anything that is potentially not trivially disposable, then you want or need deterministic disposal anyway, which garbage collection itself cannot provide. Because of this, you need some other way to keep track of when disposal can take place that is actually deterministic and timely -- and if you have to keep track of when to dispose anyway, the benefits of garbage collection in that scenario go out the window (if you know when to dispose of the object, you know when its memory is ready to be reclaimed).

Garbage collection is fine for dynamic memory allocation of trivial types, but it falls flat on its face with respect to deterministic resource management. In something like C++ with garbage collection, there's nothing detrimental about the facility being there and it's very welcome, but good practice will still always be to use value semantics unless you have a solid reason not to, use unique_ptr in places where you have simple lifetimes of dynamic objects, and use shared_ptr in places where you have dynamic objects with shared lifetimes (though again, that's something you should strive to avoid if possible anyway).

That Turkey Story fucked around with this message at 21:28 on Sep 13, 2012

That Turkey Story
Mar 30, 2003

Zombywuf posted:

Have you seen Clay. It's in early stages but it seems to be going in a nice direction.

No, but from the listed features and design philosophy I'm already interested.

Edit: Skimming the language reference, I like how they handle a lot of things, often better than C++: overloading, discriminated unions (though I'm not entirely sure they should be a language feature as opposed to a library feature), and function return type deduction.

Edit2: In IRC someone is claiming that development on the language isn't really active anymore, even though it looks like it was updated relatively recently. :/

Edit3: I really like this language a lot.

That Turkey Story fucked around with this message at 01:55 on Sep 14, 2012

That Turkey Story
Mar 30, 2003

shrughes posted:

4) You want memory safety and don't want every piece of broken code to be a security flaw.

If by that you mean you don't want somebody to be able to accidentally write to something that's been deleted and/or destroyed, I agree, but unless you're disallowing holding references to non-dynamically-allocated objects, or you are always implicitly dynamically allocating and garbage collecting all objects, you're still going to have that potential problem anyway. That's a gigantic trade-off.

You could sort of have it both ways, though -- keep track of traceable memory references a la GC, and if other pieces of code are still holding onto them at the time delete is called or the object is destroyed, produce some kind of error (probably something like terminate without stack unwinding, allowing some kind of hook). The thing is, even if you avoid trampling over memory, your program is still in some erroneous state unaccounted for by the programmer. Something still should be done other than silently continuing.

That Turkey Story
Mar 30, 2003

GrumpyDoctor posted:

It looks like it's how they do their OO:
Right, well, it's more like the alternative to OOP. It's like a boost::variant only with direct language support instead of as a library feature (other languages have it as a language feature as well). I sort of see that they want language support for them so that they can be open, though I'm not convinced you always want them to be open. I exclusively use boost::variant for any runtime polymorphism in all my personal C++ code anyway (I don't use virtual functions), so this seems like it's exactly the type of thing I would want. Edit: Well, I use virtual functions for type erasure too, but really I'd like a more direct way to do type erasure without having to hack it through oop facilities.

A lot of the ideas look like they're directly influenced by Stepanov's Elements of Programming. The only thing is, it doesn't look as though it's yet at the stage where it has full concept and concept map support, or if they're necessarily planned in the sense that they are for C++.

That Turkey Story fucked around with this message at 02:44 on Sep 14, 2012

That Turkey Story
Mar 30, 2003

Vanadium posted:

Clay looks like it shares some DNA with Haskell, so I'm all for it :v:

Yeah, it really got a ton of poo poo right already, and a lot of the important stuff is the basis for the language and not an afterthought. It's like the designer actually understood what makes a good language before he went out and designed one, and knows C++ well enough to actually understand what is required to make a "better C++." Usually I have some nasty criticism about a language, but I don't really with Clay yet.

I don't think I really like the fact that overload selection is partially based on ordering, though. IMO, if something is ambiguous, it should probably be a compile error and shouldn't just pick the last one. If anything, if you want the unambiguous behavior and you want your newly-written implementation to be used, I think you should have to be explicit about it (i.e. somehow reference the previous overload that would be a worse match in the declaration of the new overload, to notate that this one takes precedence). Anyway, I do a lot of generic programming and I just don't see the rationale for this feature at all, but perhaps I'm missing something?

Edit: errrr actually, I guess I misunderstood and it's worse than that? It uses the first match it finds in reverse order, so something that'd be a worse pick in C++ would be picked over a better one if it were written later. I don't see why anyone would want this behavior.

That Turkey Story fucked around with this message at 17:44 on Sep 14, 2012

That Turkey Story
Mar 30, 2003

McGlockenshire posted:

This is the straight-jacket of inheritance. If you want behavior shared between classes, it's going to be beneficial to share the code as well instead of doing c&p.

Languages that support composition as an alternative to inheritance (see: traits / mixins) can be a win here, as you can gain shared code without contaminating the class tree.

Of course, most of these languages tend to be scripting languages, so you wouldn't be caught dead writing an RTS in one anyway.

Nothing in C++ required this approach. You don't need a scripting language to avoid misusing inheritance.

That Turkey Story
Mar 30, 2003

Otto Skorzeny posted:

One of the reasons it bogs down so much late in long running games :v:

Civ V isn't much better in that respect -- they ditched python for lua. Maybe I'm crazy, but I really don't see what the big advantage is with using either of them over something that's statically typed for this. I know the games pretty well and while there's a lot of stuff going on it's pretty straightforward. There has to be some simple, statically-typed language that works with LLVM that you could use as a scripting language instead.

That Turkey Story
Mar 30, 2003

ToxicFrog posted:

In the case of Lua, the advantage is that it's a very small, simple language designed from the start for being compact, fast, and easy to embed - it's a configuration and scripting language first and foremost. I don't know if it was around when CiV was written, but it also has two JIT interpreters - luaJIT and llvm-lua - which are API compatible with the reference implementation and pretty fast.

There are statically typed languages they could use, and some of them might even be faster, but I can't think of any offhand that are designed to be easy to embed and use as a scripting/configuration language.

(Also, there are enough high-performing games that use Lua heavily that, even without luaJIT, I don't think you can blame CiV's performance woes on the choice of language.)

Speaking as someone who has worked on multiple professional games that have used lua for gameplay scripting, performance problems do get traced back to lua through profiling, and it's really not specific to lua either. The problem for us was mainly due to creating lots of objects in lua, which all end up being dynamically allocated and garbage collected. If you're spawning an object in lua for every bullet or projectile, and all of your objects are dynamically typed and all of your function calls are on types in a dynamic language, it does cripple performance. The worst part is, there's no simple way to parallelize the logic code that is typically implemented in a scripting language, so you end up really hurting without many options.

What's dumb is that most of these things don't need to be dynamically allocated nor does the language need to be dynamically typed.

That Turkey Story
Mar 30, 2003

ToxicFrog posted:

My point is that these issues can be - and are in many games - solved by "not allocating shitloads of lua objects every frame" and "running independent scripts in separate threads" (which is actually quite easy to do) respectively.
If you have to bend over backwards to avoid allocating objects (which is not as trivial as it sounds), you're complicating things. If you used a language where you don't have a bunch of implied overhead when creating an object that could easily have just been on the stack, none of this would be a problem. Using a scripting language is supposed to make your life easier, not harder. If you have to do things in an unintuitive manner and avoid abstraction for performance reasons, the code only ends up difficult to write, understand, and maintain. There is nothing intrinsic about a scripting language that requires this nonsense; it's just that lua has become common for a variety of reasons -- it's a small language with good support that cooperates well with C++ and C, it's similar enough to javascript that a lot of people somewhat know it even if they've never written lua before, it's easy for modders to pick up and write little scripts, etc. Unfortunately, it does not make a great choice when you want to do the bulk of your gameplay programming in it in a complicated game.

Also, you usually cannot simply "run independent scripts in separate threads" nor would that necessarily be a good idea or even fix anything to begin with. First, we're talking about general gameplay code that cannot easily be run concurrently with other scripts (and other code does run at the same time: C++ code runs in parallel occupying the cores that it can, but the lua is actually the bottleneck). Simply running certain scripts concurrently does not scale well, either, even if it were an actual option in our case, which it is not.

ToxicFrog posted:

Yes, it will never be as fast as writing equivalent code in a language that compiles ahead of time to optimized machine code, but it can be "fast enough", and evidently a lot of development teams consider that a worthwhile tradeoff in exchange for faster development and increased moddability.
It depends on the project and what you're doing in scripts. Again, this isn't some hypothetical here, this is actual experience on a game that isn't some casual game, and where the lua isn't simply for basic scripting -- the entirety of the engine is in C++, and the entirety of the gameplay and gui, etc. is in lua, with the idea being that it's easy for modders to work with (which is very true). It's caused a lot of performance problems as the game became more complex.

ToxicFrog posted:

That said, I'm certainly interested in suggestions for languages that have better performance characteristics than Lua/LuaJIT while still being threadsafe, suitable for embedding, and capable of on-the-fly script editing and loading.
That was my question. With LLVM, this should not be as difficult as it used to be. You can even use C++ if you actually wanted to (trust me, I would have loved it, but the modding community probably would not). Lua just happens to be popular at this point because it is an existing solution and it's a simple language that people know. It's good enough for a lot of cases so there's not a huge push for anything else, but many teams do have problems with it, and the more that gameplay code is done in scripts, the more people are going to run into it as a bottleneck.

That Turkey Story
Mar 30, 2003

Plorkyeran posted:

WoW UI modders came up with a library that made this fairly easy -- by switching to manual memory management and clearing and reusing tables rather than allocating new ones. The whole thing was hilariously slow compared to the amortized cost of lua's garbage collector, but triggering the GC during combat could lock up the UI for multiple seconds.

Yeah, when you have to manually manage memory in a dynamically typed language with GC, I really start to question the benefits of using that language as a scripting language.

That Turkey Story
Mar 30, 2003

hobbesmaster posted:

What would you expect it to do?

Yeah, I'm confused as to what the problem is here. If it didn't do this, how would you define non-function-style macros that start with parentheses? Would you make a new syntax? I think this is the most concise way to do it.

That Turkey Story
Mar 30, 2003

Suspicious Dish posted:

The preprocessor really should always insert parens for you automatically. I can't think of a single reason for it not to.

You mean have macros always expand to something parenthesized? That wouldn't work -- there are lots of times when you don't want the expansion to result in something parenthesized, e.g. almost anything that doesn't result in an expression.

That Turkey Story
Mar 30, 2003

Suspicious Dish posted:

I don't think so. It already takes comments and strings into account, and certainly parses a lot of C's existing structure. Determining whether the macro expansion will result in an expression isn't that hard, I don't think.

Those are all very trivial. The preprocessor basically has no knowledge of C++ or C for that matter. It doesn't even know what an expression is let alone how to differentiate one from other code. This is actually very complicated in C++. For a simple example:

int( foo )

What is that? Is that an expression or is it a type? It depends on what foo is. If foo is a type, then int( foo ) is also a type. If foo is not a type, then that's an expression constructing an int from foo. Also, what about macros that are partial or potentially partial:

#define foo -a

What is that? Is that -a on its own, or is it the second part of a subtraction? There's a ton more stuff to consider beyond what I've shown that makes it impossible to determine whether or not the user actually wants parentheses, even in the case of expressions.

Suspicious Dish posted:

I do think that C needs something that enables metaprogramming a bit more than its current processor, but I'm not sure what that should entail.

I'm with you there. Right now the C preprocessor is technically Turing complete, but it's still a bitch to do complicated stuff with.


That Turkey Story
Mar 30, 2003

Suspicious Dish posted:

But you can't keep state when looping, which I thought was necessary.

Hmm? You pass along the state as macro arguments. You can emulate fold with the C preprocessor -- Boost.Preprocessor has an implementation. In fact, you can emulate all sorts of constructs up to a given limit. For instance, you can implement recursion in a more general sense than just fold; you can do while, for each, you can even emulate mutability, and more (Chaos even has lambda functions!).

Of course, most of these library-emulated constructs have limits and you "recurse" up to a given depth. You could probably argue that because of this it is not really Turing complete, but that ends up being somewhat unimportant, especially since all compilers effectively have internal limits anyway. The only difference is that the limits with the C preprocessor for recursion are library-dependent as opposed to compiler-dependent. In practice, that difference doesn't actually matter.

That Turkey Story fucked around with this message at 07:15 on Sep 19, 2012
