|
NihilCredo posted:I share your distaste for macros, but why do they pose a problem for IDEs? If they can run the compiler in the background to type check your code, I don't see why it would be any harder to run the preprocessor followed by the compiler instead. Because the IDE usually doesn't run the full compiler to produce type-checking and autocomplete information? Stuff like generating object code, linking and optimizing takes O(minutes) on even moderately-sized projects, and to make those features useful you really want something that's O(seconds).
|
# ? Oct 28, 2015 13:52 |
|
Volte posted:For the 'functions' this may well be true, but inlining constant variables is something that even baby's first compiler can usually do, This is true for static const but not for const generally (assuming C; C++'s const semantics may be different enough that the compiler can infer more without the static storage duration)
|
# ? Oct 28, 2015 14:07 |
|
Athas posted:Mapping the preprocessor-generated code back to the source text may not be trivial. The IDE doesn't need to use the compiler to do its own parsing. If PyCharm can find the original definition of a field through a bunch of inherited classes in a language without static typing, it can probably find where a macro was defined.
|
# ? Oct 28, 2015 14:10 |
|
KernelSlanders posted:The IDE doesn't need to use the compiler to do its own parsing. If PyCharm can find the original definition of a field through a bunch of inherited classes in a language without static typing, it can probably find where a macro was defined. Parsing and analyzing Python is vastly, vastly easier than doing similar resolution work on C++.
|
# ? Oct 28, 2015 15:00 |
|
Blotto Skorzany posted:This is true for static const but not for const generally (assuming C, C++'s const semantics may be different enough that the compiler can infer more without the static storage duration)
|
# ? Oct 28, 2015 16:17 |
|
Volte posted:Anywhere a #define will work is either local to the translation unit, or a static const in a header file, both of which would take a fairly braindead compiler to miss as an optimization opportunity When Fefe tested this across four compilers for Linux Kongress a few years ago, the static const was reliably inlined (as you and I both seem to agree ought to be the case), but the non-static const was not, even when it was obviously local to the translation unit. I assume that's because, at the stage where the optimization was done, the toolchain couldn't prove that something elsewhere didn't reference it as extern.
|
# ? Oct 28, 2015 16:40 |
|
Blotto Skorzany posted:When Fefe tested this across four compilers for Linux Kongress a few years ago, the static const was reliably inlined (as you and I both seem to agree ought to be the case) and the non-static const was not even when it was obviously local to the translation unit, I assume because at the stage the optimization was done the toolchain couldn't prove that something elsewhere didn't reference it as extern.
|
# ? Oct 28, 2015 18:41 |
|
Non-extern const global definitions are implicitly static in C++. C++ also says that syntactically-immediate loads from a statically-initializable constant are not "ODR-uses", which has a precise technical meaning but in practice does mean that the value will be "inlined" as an immediate. I do not know why a C compiler would not reliably load from an extern const definition. Clang certainly does, even at -O0. Maybe the embedded world should stop wasting millions of dollars maintaining lovely ports of ancient compilers instead of just paying somebody to do it properly in a modern compiler framework?
|
# ? Oct 28, 2015 19:40 |
|
Is there a way to just, like, turn off all of LLVM's undefined behavior-based optimizations? Is there even a list of them? That would be one reason for the embeddlords to stick with their own compilers.
|
# ? Oct 28, 2015 19:57 |
|
sarehu posted:Is there a way to just, like, turn off all of LLVM's undefined behavior-based optimizations? Is there even a list of them? That would be one reason for the embeddlords to stick with their own compilers. You think that the ancient ports of GCC for the embedded world allow it?
|
# ? Oct 28, 2015 20:01 |
|
sarehu posted:Is there a way to just, like, turn off all of LLVM's undefined behavior-based optimizations? Is there even a list of them? That would be one reason for the embeddlords to stick with their own compilers. Well, C has an appendix listing undefined behavior. We basically just take that as a challenge. You can certainly pay a compiler developer to disable optimizations if you find writing valid C code to be a deeply unreasonable requirement. It is not fundamentally more expensive than paying a compiler developer to fix bugs in GCC 3.3. In fact, it is probably much cheaper!
|
# ? Oct 28, 2015 20:04 |
|
rjmccall posted:You can certainly pay a compiler developer to disable optimizations if you find writing valid C code to be a deeply unreasonable requirement. It's true, if Hi-Tech C suddenly started enforcing strict aliasing or whatever probably every device with a PIC in it (that is to say, every device in the world) would spontaneously combust ;_;
|
# ? Oct 28, 2015 20:08 |
|
Volte posted:Anywhere a #define will work is either local to the translation unit, or a static const in a header file, both of which would take a fairly braindead compiler to miss as an optimization opportunity. My compiler is complaining about non-static init trying to use static const uint32_t instead of #define on a global init like this: static uint32_t statusRegList[1] = {STATUS_ADDR}; But I'm sure I'm the horror. I'm just not sure if it's because I wanted a #def'd pretty register name or how I'm declaring this list.
|
# ? Oct 28, 2015 20:16 |
|
How is STATUS_ADDR defined?
|
# ? Oct 28, 2015 20:22 |
|
rjmccall posted:How is STATUS_ADDR defined? code:
|
# ? Oct 28, 2015 20:57 |
|
Nice, gcc rejects that and clang accepts it. rjmccall at it with embrace, extend, extinguish
|
# ? Oct 28, 2015 21:53 |
|
Oh, yes, that's an extension in C — a pretty common one, but an extension nonetheless. You're stuck with macros then. EDIT: GCC doesn't accept that? Are you compiling in strict standards mode?
|
# ? Oct 28, 2015 21:53 |
|
No, just babby's first gcc invocation: code:
gcc foo.c
I was generally under the impression that const in C isn't all that meaningful and the whole thing where it turns variables into compile-time constants is only a C++ thing, but I haven't really looked closely at C in a while. Also I just remembered that silly workaround: code:
enum { STATUS_ADDR_REDEF = 0x1f };
static uint32_t statusReg[1] = {STATUS_ADDR_REDEF};
|
# ? Oct 28, 2015 21:57 |
|
At the very least, a const char * string will get written into rodata (on my machine) with a simple "gcc foo.c"; you'll get a segfault if you try modifying the buffer. So it has some meaning at least some of the time.
|
# ? Oct 28, 2015 21:59 |
|
sarehu posted:Is there a way to just, like, turn off all of LLVM's undefined behavior-based optimizations? Is there even a list of them? That would be one reason for the embeddlords to stick with their own compilers. That's not really how it works. Compilers don't go looking for undefined behaviour, and they don't have a list or anything of the sort. Usually it just crops up as an assumption made to permit a (nice) optimisation. For example, removing NULL-checks once a pointer has already been dereferenced. Few people write this kind of code directly, but it often shows up once you combine several functions that are then inlined. Another typical case (and one I ran into while developing my own optimising compiler for my research) is array bounds check elimination. Here, it is very useful to say that n <= n+i, but this is only the case if n+i does not overflow. Now, nobody in their right mind would write code that depends on overflowing array indices (or else they'd end up in this thread), but a compiler has to be conservative. It's very nice that C specifies that signed overflow is undefined, as this means that the compiler can assume that n+1 is always greater than n if n is signed. It's not that the compiler goes out of its way to do something destructive in the case of overflow; it's just that it uses its knowledge that overflow cannot (or should not) happen to perform an optimisation.
|
# ? Oct 28, 2015 22:19 |
|
Hammerite posted:I still don't understand why you'd use #define rather than using const variables for the constants and functions for the macros-with-arguments. One reason for using macros in preference to functions is that when you use __FILE__ and __LINE__ inside a macro, they refer to the location where the macro is used. This is really handy for tracking down bugs using a log of HW register reads & writes. To do something similar with a function, you'd have to pass __FILE__ & __LINE__ to every invocation.
|
# ? Oct 28, 2015 22:32 |
|
Athas posted:Now, nobody in their right mind would write code that depends on overflowing array indices (or else they'd end up in this thread) That reminds me of the story of Mel: some brilliant dumbass wrote a blackjack program with a loop that didn't have a termination test, instead relying on the loop index overflowing and breaking out of the loop. quote:The RPC-4000 computer had a really modern facility called an index register. It allowed the programmer to write a program loop that used an indexed instruction inside; each time through, the number in the index register was added to the address of that instruction, so it would refer to the next datum in a series. He had only to increment the index register each time through. Mel never used it.
|
# ? Oct 28, 2015 22:40 |
|
Athas posted:That's not really how it works. Compilers don't go looking for undefined behaviour, and they don't have a list or anything of the sort. Usually it just crops up as an assumption made to permit a (nice) optimisation. For example, removing NULL-checks once a pointer has been dereferenced once already. Few people write this kind of code directly, but it often shows up once you combine several functions that are then inlined. Removing NULL-checks (in particular, inferring that a dereference implies non-NULLness) is one good example of a bad undefined-behavior optimization, and one that could plausibly be wrong in the embedded space. It can and did break existing code. Another would be treating a pointer passed to memcpy as an excuse to cancel a NULL check. Another is treating signed overflow as impossible; given that people use unsigned types for array accesses all the time anyway, it's not an important optimization. And believe it or not, no developer will get offended if their if (x != NULL) code gets compiled into having a branch; or if they would, it's only where the optimization is obviously safe anyway, such as where x = &variable; or x = NULL is visible.
|
# ? Oct 28, 2015 23:27 |
|
rjmccall posted:Well, C has an appendix listing undefined behavior. We basically just take that as a challenge. Or buy a copy of Intel's compiler for like $500.
|
# ? Oct 29, 2015 04:05 |
|
KernelSlanders posted:Or buy a copy Intel's compiler for like $500. I'm sorta doubting Intel's compiler is going to play nice with my Atmel and TI parts. It's the same odds as a thread of conversation maintaining the bare minimum context for half a page.
|
# ? Oct 29, 2015 04:11 |
|
JawnV6 posted:I'm sorta doubting Intel's compiler is going to play nice with my Atmel and TI parts. It's the same odds as a thread of conversation maintaining the bare minimum context for half a page. Just use const.
|
# ? Oct 29, 2015 04:15 |
sarehu posted:Removing NULL-checks, in particular, inferring that a dereference implies non-NULLness is one good example of a bad undefined behavior optimization, and one that could plausibly be wrong in the embedded space. Could you give an example of when assuming that dereferencing implies non-NULL would be wrong?
|
# ? Oct 29, 2015 04:33 |
|
Someone working on the Linux kernel had a bad idea, and it eventually (completely predictably) turned into a security bug. Linus blames a GCC optimization instead of his own poor judgment.
|
# ? Oct 29, 2015 04:44 |
|
It wasn't a bad idea, it was a simple bug that (iirc) didn't cause any kind of compiler diagnostic and the damage was magnified from possible crash to security hole by the compiler optimization.
|
# ? Oct 29, 2015 05:09 |
|
pseudorandom name posted:It wasn't a bad idea, it was a simple bug that (iirc) didn't cause any kind of compiler diagnostic and the damage was magnified from possible crash to security hole by the compiler optimization. Do you have a link? Even restricted to LWN my searches have been fruitless.
|
# ? Oct 29, 2015 05:13 |
|
Subjunctive posted:Do you have a link? Even restricted to LWN my searches have been fruitless. https://lwn.net/Articles/342330/
|
# ? Oct 29, 2015 05:24 |
|
Thank you! That's quite the combinatorial failure.
|
# ? Oct 29, 2015 05:28 |
|
VikingofRock posted:Could you give an example of when assuming that dereferencing implies non-NULL would be wrong? A null pointer is generally a pointer with value 0, and 0 is a valid address in many embedded systems that run without protected memory. On the 68000, for example, address 0 is where the interrupt vectors live, so it's entirely possible to want to modify memory at address 0 to set up an interrupt handler.
|
# ? Oct 29, 2015 11:44 |
|
Zopotantor posted:One reason for using macros in preference to functions is that when you use __FILE__ and __LINE__ inside a macro, they refer to the location where the macro is used. This is really handy for tracking down bugs using a log of HW register reads & writes. Oh, that's neat, and now that you mention it, I believe my company's C++ code (we use a mix of C++ and .NET, most new stuff being .NET) uses macros for debugging purposes for this reason (perhaps among others).
|
# ? Oct 29, 2015 11:50 |
|
C# code:
|
# ? Oct 29, 2015 14:53 |
|
idgi
|
# ? Oct 29, 2015 15:29 |
|
Someone just added a service for (dependency) injecting constants. Seems overboard to me.
|
# ? Oct 29, 2015 15:46 |
|
sarehu posted:Removing NULL-checks, in particular, inferring that a dereference implies non-NULLness is one good example of a bad undefined behavior optimization, and one that could plausibly be wrong in the embedded space. It can and did break existing code. It seems to me that if you want to exploit platform-dependent quirks, which is fine (I do it all the time), don't use a general-purpose compiler. Write in assembly, or use a specialised (or properly configured) C compiler that knows about the dialect of C that you want to code in.

sarehu posted:And believe it or not, no developer will get offended if their if (x != NULL) code gets compiled into having a branch, or if they did it's only where the optimization is safe anyway, where x = &variable; or x = NULL. Again, that's not really how compilers work. Removing NULL-checks and branches is an enabling optimisation: it usually does not contribute a great deal by itself, but it can simplify control flow, resulting in fewer basic blocks (or whatever), which can enable very important optimisations like vectorisation and all sorts of other things. When a compiler is performing these simple enabling optimisations, it has no idea whether each specific simplification is going to enable something important down the road; it's just too complex a problem space. Hence, compilers simplify as much as they are able, just to give later optimisations code that is easier to analyse. Compilers also have no idea whether some sequence of basic blocks is an important inner loop, or some bit-twiddling that runs very rarely but uses undefined behaviour intentionally. If you don't want optimisations, just turn them off. Treat -O as an optional deal where you give the compiler code that follows some (strict) rules, and in return it gives you faster code. I think most extant compilers will then let you get away with benign undefined behaviour (as I said, compiler writers are not actually out to get you).
|
# ? Oct 29, 2015 15:49 |
|
Munkeymon posted:Someone just added a service for (dependency) injecting constants. Seems overboard to me.
|
# ? Oct 29, 2015 16:27 |
|
|
# ? Jun 7, 2024 19:26 |
|
Munkeymon posted:Someone just added a service for (dependency) injecting constants. Seems overboard to me. It could be. It depends on where the constants are and where they come from. If the constant is "list of US states", then it's probably dumb. Volte posted:Depends on what the constants are for. If they are version-dependent and in practice loaded from a JSON file or even from the web (e.g., the loot tables in Bloodborne), then it makes sense to abstract them, especially for unit tests. If it just provides the value of pi and e, then it's stupid. Get out of my head.
|
# ? Oct 29, 2015 16:28 |