|
Drastic Actions posted:Hopefully now more people can get articles in ZDNet with the advanced hacking techniques I’ve provided. Look out Tavis, there's a new sheriff in town.
|
# ? Mar 16, 2019 22:53 |
|
SupSuper posted:They still suck for compile time though. Yeah, the compiler still has to parse the code, instantiate it, maybe inline it, etc. Trivial functions like std::move and std::forward that don't actually do anything (they're essentially just wrappers around a static_cast) have a massive impact on compilation time: on every build, the compiler has to redo all the instantiation and optimization work just to reach the same conclusion (they return their argument and are effectively no-ops at runtime). I wonder if there are any compilers that precompile templates in headers to actual executable code hackbunny fucked around with this message at 02:28 on Nov 28, 2021 |
# ? Mar 16, 2019 23:29 |
|
Mooey Cow posted:Not really. Maybe 20 years ago, but if you need code to work over multiple types, they generally result in identical code to handwritten specializations. Sometimes smaller, since unused template functions and methods are not even emitted (although unused handwritten code could also be removed at linktime). It's also usually possible to separate out code that doesn't depend on the type to avoid unnecessary code duplication within functions, but you would also have to do that if you wrote your functions by hand and wanted to minimize size. This has not been my recent experience. (template-heavy code --> boss asks to get working on embedded platform --> oh god)
|
# ? Mar 17, 2019 00:06 |
|
I tried to use the PEGTL library (a template-based PEG grammar parser) for a parser recently, and the grammar file took upwards of 15 seconds to compile for what amounted to basically a stub of the full grammar. So yeah: templates for basic generic programming are one thing, but using them for more complex compile-time programming is definitely a huge trade-off.
|
# ? Mar 17, 2019 00:15 |
|
PEGTL also generates absurdly huge object files.
|
# ? Mar 17, 2019 05:37 |
|
HappyHippo posted:I don't think templates being turing complete affects the parsing. Many turing complete languages are parseable after all. The issue (I believe) is there are ambiguities that can only be resolved if you know what kinds of things the tokens involved are (such as types or variables), eg: That’s actually exactly why parsing is affected by the theoretical complexity of things like templates and constexpr. When you have a construct like A<E>::x, parsing sometimes depends on the kind of declaration named by x. If A<E> is a dependent specialization, then that’s unknowable and C++ has various rules for picking a presumptive kind (and overriding it when necessary) so that parsing can continue; but if it’s non-dependent, C++ says that parsing uses the actual lookup, which means you have to resolve the specialization, which means you have to resolve E down to a concrete type/value, which can require instantiating other templates or calling constexpr functions or lots of other stuff. If you’re willing to form an ambiguous parse tree then that’s quite different, but the ambiguity can accumulate pretty quickly because of things like T*x.
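A concrete sketch of the dependent case (all names made up for illustration):

```cpp
// Inside a template, the parser cannot know what A<T>::x names until T is
// known, so C++ presumes it names a value; 'typename' overrides that.
template <typename T>
struct A {
    using x = int;  // in this particular A, x is a type
};

template <typename T>
int f() {
    // A<T>::x * y;                  // would parse as a multiplication
    typename A<T>::x* y = nullptr;   // 'typename': x is a type, y is an int*
    (void)y;
    return sizeof(typename A<T>::x);
}
```

Drop the `typename` and the compiler is entitled to treat `A<T>::x` as a value and parse the `*` as an operator.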
|
# ? Mar 17, 2019 20:26 |
|
All this stuff sounds like a complete nightmare. Is writing a C++ compiler a constant series of Herculean problems or is it all more tractable than it sounds?
|
# ? Mar 18, 2019 01:44 |
|
Dr Monkeysee posted:All this stuff sounds like a complete nightmare. Is writing a C++ compiler a constant series of Herculean problems or is it all more tractable than it sounds? Vendors argue against new features because they're hard to add. So... yeah: a giant, barely tractable project if you want to support the full thing.
|
# ? Mar 18, 2019 02:38 |
|
I think I heard that for C++17 or C++20 the standards committee gave up on telling vendors what they should add and just started picking out new features that the vendors had already added.
|
# ? Mar 18, 2019 02:49 |
|
It's a huge language with a lot of complexity and corner cases even before you start considering the massive set of vendor extensions. You try to get the basic concepts right in the compiler design so that you're not fighting yourself too much, and hopefully that carries you through well enough that you only have a small (but endless) trickle of (1) things you got subtly wrong and (2) things you gave up on so you could ship and always meant to get back to.

The parsing thing really isn't a problem for the implementation, because the parser generally has a handle on something that knows how to do semantic analysis anyway. Similarly, formal decidability is not particularly important, because almost any conceivable language processor only implements a conservative subset of the formal language due to implementation limits, either static (e.g. exponential worst-case behavior in Hindley-Milner unification) or dynamic (e.g. finite memory).

What the C++ committee did was start requiring implementation experience before it would consider core language proposals, which is a good idea in any language: it's quite easy for someone who's just writing a design document to introduce even more complexities, corner cases, and odd interactions than they actually intended. This being C++, of course, the committee only learned this lesson after repeatedly screwing up, including in ways that are now baked into the ABI. rjmccall fucked around with this message at 09:05 on Mar 18, 2019 |
# ? Mar 18, 2019 08:57 |
|
What I don't get is why they add so much stuff that users could implement themselves. I was at a talk with Stroustrup himself, and he mentioned they had added support for binary literals. Then the people who wanted binary literals found out it was quite difficult (!) to read big numbers in binary, but easier with grouping, so now you can split up your ones and zeroes with separators. Why not just tell them to write their own classes to parse input and output numbers? Why should everyone else in the chain have to worry about it?
|
# ? Mar 18, 2019 09:12 |
|
Ola posted:What I don't get is why they add so much stuff that users could implement themselves. I was at a talk with Stroustrup himself, he mentioned they had added support for binary literals. Then the people who wanted the binary literals found out it was quite difficult (!) to read big numbers in binary, but easier with spacing, so you can group your ones and zeroes with spaces. Why not just tell them to write their own classes to parse input and output numbers? Why should everyone else in the chain have to worry about it? So you're saying that if someone wants conveniences like binary literals and optional digit grouping in numeric literals (both of which are found in various mainstream languages), they should have to modify their compiler to get them? That doesn't strike me as reasonable.
|
# ? Mar 18, 2019 10:02 |
|
re: the ambiguity of MyTemplate<T>::x * y meaning either a pointer declaration or a multiplication operation: is there a way you can write that to make it explicit that you mean one thing or the other? Like can I explicitly invoke operator*()? And if you want to make sure it can only be a pointer declaration, is there a way to force that with the typename keyword? After all, this ambiguity only arises because the * character happens to be used for two different purposes.
|
# ? Mar 18, 2019 10:06 |
|
Hammerite posted:So you're saying that if someone wants conveniences like binary literals and optional digit grouping in numeric literals (both of which are found in various mainstream languages), they should have to modify their compiler to get it? That doesn't strike me as reasonable. Well, if person A won't do it for person A's convenience, person B has to do it for person A's convenience. Is that more reasonable? If there's big demand for it, or performance benefits, sure. But the way Stroustrup explained it, it was a pretty niche group and they didn't have great reasons for it. And it's perhaps not the greatest example. Anyway, Stroustrup seemed like a guy with serious mass-email-flame-war PTSD.
|
# ? Mar 18, 2019 10:38 |
|
|
|
What really bothers me is that Stroustrup's book uses a non-monospaced font for code examples.
|
# ? Mar 18, 2019 15:52 |
|
Binary literals and digit separators are weird examples because they're straightforward to design and implement and have obvious benefits for certain kinds of programming. It is extremely typical of Stroustrup to wring his hands over those while passionately pushing for every complex and cross-cutting expressivity feature under the sun, because without them C++ will fade and pass into the West.
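For reference, the two features together in C++14 syntax (the separator character is an apostrophe):

```cpp
#include <cstdint>

// C++14 binary literals plus the digit separator; the apostrophes are
// ignored by the compiler and exist purely for readability.
constexpr std::uint16_t mask = 0b1010'1010'1010'1010;
static_assert(mask == 0xAAAA, "separators do not change the value");
```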
|
# ? Mar 18, 2019 16:07 |
|
I don't have the galaxy brain required to do C++, but with constexpr it should be possible to implement compile-time binary literals as a plain library without any need for compiler work, no?
|
# ? Mar 18, 2019 16:52 |
|
What would be useful is if bitfields had a guaranteed order. Maybe they do in C++, idk.
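For what it's worth, the answer is no: bit-field layout (order, padding, straddling) is implementation-defined in both C and C++, which is why portable code packs bits with explicit shifts and masks instead. A minimal sketch, with field widths made up for illustration:

```cpp
#include <cstdint>

// Bit-field layout is implementation-defined, so portable code defines
// its own layout with explicit shifts and masks instead.
constexpr std::uint8_t pack(std::uint8_t flag,   // 1 bit,  bit 0
                            std::uint8_t mode,   // 3 bits, bits 1-3
                            std::uint8_t count)  // 4 bits, bits 4-7
{
    return static_cast<std::uint8_t>( (flag  & 0x1u)
                                    | (mode  & 0x7u) << 1
                                    | (count & 0xFu) << 4);
}

static_assert(pack(1, 5, 9) == 0b1001'1011, "layout is explicit and stable");
```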
|
# ? Mar 18, 2019 19:00 |
|
NihilCredo posted:I don't have the galaxy brain required to do C++, but with constexpr it should be possible to implement compile-time binary literals as a plain library without any need for compiler work, no? It is possible (I implemented hex literals for a custom bit-array class some time ago), but it is cumbersome. Also, binary literals can be parsed almost trivially by the tokenizer, while doing it with templates is much more expensive and slows down compilation.
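The library version looks something like this raw literal operator template (the `_b` suffix name is made up); note that every distinct literal is its own instantiation, which is exactly the compile-time cost being described:

```cpp
#include <cstdint>

// Library-only "binary literals": a raw literal operator template receives
// the source characters of the number, and a constexpr fold rebuilds the
// value at compile time. (The fold expression needs C++17.)
template <char... Cs>
constexpr std::uint64_t operator""_b()
{
    std::uint64_t v = 0;
    ((v = v * 2 + (Cs - '0')), ...);  // accumulate one binary digit per char
    return v;
}

static_assert(10110_b == 22, "10110 in binary is 22");
```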
|
# ? Mar 18, 2019 19:26 |
|
Dr Monkeysee posted:Is writing a c++ compiler a constant series of Herculean problems Well, considering the language makes the Augean stables look clean...
|
# ? Mar 18, 2019 19:40 |
|
dougdrums posted:What really bothers me is that Stroustrup's book uses a non-monospaced font for code examples. Look, we all already knew he was a monster; this is just gilding the lily
|
# ? Mar 18, 2019 20:04 |
|
Some of Stroustrup's choices regarding syntax in The Design and Evolution of C++ just drove me up the wall, because it's like "What if you want to run C code through your C++ compiler?" That's just not a sensible design choice for a new language.
|
# ? Mar 18, 2019 21:35 |
|
It's almost like C++ was designed as an extension to C rather than a new language.
|
# ? Mar 18, 2019 21:39 |
|
I don't write C or C++ but isn't one of its most important features that you can drop in C code transparently?
xtal fucked around with this message at 22:30 on Mar 18, 2019 |
# ? Mar 18, 2019 22:27 |
|
People like to go "well actually C++ isn't a perfect superset of C and therefore...", but yes, being able to seamlessly mix in C and compile most C code as C++ is a very useful feature.
|
# ? Mar 18, 2019 23:07 |
|
Plorkyeran posted:People like to go "well actually C++ isn't a perfect superset of C and therefore...", but yes, being able to seamlessly mix in C and compile most C code as C++ is a very useful feature. I think you might be understating how different pre-ANSI C was, though.
|
# ? Mar 18, 2019 23:24 |
|
The biggest differences between pre-ANSI and ANSI C (function prototypes, struct member namespacing, a less ridiculous basic type system) were already things that people acknowledged as bad, so fixing them in C++ was fine. extern "C" is really the biggest remaining difference. I'm surprised sometimes that C++ didn't just make normal global functions non-overloadable, so that they could be extern "C" by default and C headers would have just worked. I suppose that would've interfered with some of the language-linkage pipe dreams.
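The upshot is the familiar header dance, sketched here (the `add` function is hypothetical, and the definition is inlined into the same file only to keep the sketch self-contained):

```cpp
// extern "C" turns off C++ name mangling, so the symbol links under its
// plain C name from either language (and from other languages' FFIs).
#ifdef __cplusplus
extern "C" {
#endif

int add(int a, int b);  /* declaration a plain C compiler can also read */

#ifdef __cplusplus
}
#endif

// Definition with C linkage; normally this would live in a .c file.
extern "C" int add(int a, int b) { return a + b; }
```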
|
# ? Mar 19, 2019 00:10 |
|
Why would anyone want binary literals, on a binary computer system? Madness I tell you
|
# ? Mar 19, 2019 06:20 |
|
Spatial posted:Why would anyone want binary literals, on a binary computer system? Madness I tell you Or a Boolean data type, come to that?
|
# ? Mar 19, 2019 10:04 |
|
Spatial posted:Why would anyone want binary literals, on a binary computer system? Madness I tell you Octal is all you'll ever need.
|
# ? Mar 19, 2019 10:12 |
|
Spatial posted:Why would anyone want binary literals, on a binary computer system? Madness I tell you Not being able to write numbers as ones and zeroes on a computer that uses ones and zeroes is why the world was unable to do anything at all with computers until the launch of C++14.
|
# ? Mar 19, 2019 10:17 |
|
nielsm posted:Octal is all you'll ever need. Folks write a shitload of octal constants. Like they might even be a plurality in some programs.
|
# ? Mar 19, 2019 16:42 |
|
xtal posted:I don't write C or C++ but isn't one of its most important features that you can drop in C code transparently? No but also yes.
|
# ? Mar 19, 2019 19:36 |
|
Falcorum posted:No but also yes. To clarify, good C isn't necessarily good C++: C++ has OOP features (abstraction, polymorphism, etc.) plus library facilities like smart pointers and containers. Dropping in C unmolested is generally not good practice; it's done, there are use cases for it, and it'll generally work, but it's gross. The C compatibility is useful from a historical standpoint to explain how C++ became commonly used: it was easy for C shops to convert and start using C++ features incrementally.
|
# ? Mar 19, 2019 20:06 |
|
C compatibility is used all the time. C is the lingua franca of cross-language bindings, even for C++. If your C header file has any inline functions in it, you better hope your C++ compiler can compile them.
|
# ? Mar 19, 2019 20:31 |
|
Bruegels Fuckbooks posted:To clarify, good C isn't necessarily good C++. C++ has OOP (abstraction/polymorphism/etc), like smart pointers and containers etc. Dropping in C unmolested is generally not good practice - I mean it's done and there are use cases for it, and it'll generally work, but it's gross. You make it sound as if "C shops" were small, reluctant mom-and-pop operations skeptical of the C++ skyscrapers over yonder. But it's good for perspective to remember that the whole world of business and science was running on C, and that C++ was made to make C better. "Dropping in C unmolested" was a matter of saving millions or billions of dollars in massive finance projects, space projects, etc. Just imagine how many times this head has been scratched:
|
# ? Mar 19, 2019 21:35 |
|
Ola posted:Just imagine how many times this head has been scratched: Enough to wear away most of his hair
|
# ? Mar 19, 2019 21:54 |
|
Hah I remember inlining ASM into Turbo Pascal. Now that was a feature!
|
# ? Mar 19, 2019 22:04 |