|
wanna mark this thread __attribute__((noreturn))
|
# ? Mar 5, 2018 20:59 |
|
|
# ? May 30, 2024 12:16 |
|
Does this function have any missing returns? code:
|
# ? Mar 5, 2018 21:00 |
|
C# will complain if a path through a non-void method exists that reaches the end without returning/throwing/etc. Sometimes it gets things wrong - for example, this doesn't compile: code:
|
# ? Mar 5, 2018 21:00 |
|
HappyHippo posted:It's not really a halting problem issue. Which is why other languages can easily handle this and enforce the rule that all code paths that return actually return a value. vOv is right that the problem does reduce from the halting problem (as does the problem of finding unused code paths more generally), and so getting around it requires accepting some false positives, which is what "the last statement must be a return" does. It comes down to how much value you put on allowing code like the example Xarn mentions. I personally can't see any problem with automatically inserting an error statement at the end of any (non-main) function without a return, but maybe it causes some problem that's not occurring to me right now?
|
# ? Mar 5, 2018 21:01 |
|
It is a philosophy thing, just like why std::vector::operator[] is not range checked. C generally errs wayyy too much on the side of trusting the programmer, including in this, so it is assumed that if there is no return statement, then the programmer knows we cannot hit this path through the function. Other languages (definitely at least Java and C#) err on the side of not trusting the programmer, so if the compiler cannot absolutely, 100% guaranteed, prove that the path with no return statement is not taken, then it tells the programmer to gently caress off and provide a return statement (or a call to a noreturn-declared function).
|
# ? Mar 5, 2018 21:19 |
|
Hammerite posted:c# will complain if a path through a non-void method exists that reaches the end without returning/throwing/etc. Sometimes it gets things wrong - for example, this doesn't compile: code:
|
# ? Mar 5, 2018 21:23 |
|
Xarn posted:It is a philosophy thing, just like why std::vector::operator[] is not range checked. but refusing to do range checks is about performance. what would be the issue with calling abort on reaching the end of a function without a return, instead of returning garbage? it doesn't add any extra work for the programmer, and in the case where the programmer is right would make absolutely no difference (besides a tiny increase in executable size) Jeb Bush 2012 fucked around with this message at 21:26 on Mar 5, 2018 |
# ? Mar 5, 2018 21:24 |
|
I could see a 'nice' compiler doing it at -O0 just as a convenience thing since it's undefined behavior.
|
# ? Mar 5, 2018 21:24 |
|
Xarn posted:It is a philosophy thing, just like why std::vector::operator[] is not range checked. As a programmer, it is my earnest belief that I should not be trusted
|
# ? Mar 5, 2018 21:34 |
|
Bongo Bill posted:As a programmer, it is my earnest belief that I should not be trusted that's why all of my C++ code uses all the warnings and enables -Werror
|
# ? Mar 5, 2018 21:44 |
|
Jeb Bush 2012 posted:but refusing to do range checks is about performance. what would be the issue with calling abort on reaching the end of a function without a return, instead of returning garbage? it doesn't add any extra work for the programmer, and in the case where the programmer is right would make absolutely no difference (besides a tiny increase in executable size)
|
# ? Mar 5, 2018 21:58 |
|
Spatial posted:Code size affects performance. no poo poo. the question is, would the resulting increase in code size represent a non-negligible performance issue? we're talking about inserting one function call into a fairly small subset of functions
|
# ? Mar 5, 2018 22:06 |
|
c/c++ is all about having undefined behavior that trips everyone up and causes thousands of bugs so that the compiler writers can eke out tiny performance increases.
|
# ? Mar 5, 2018 22:09 |
|
That's the reason you'll be given for why it's not done. In reality this isn't an issue; it's something nerds like to argue about on the internet. The de facto standard is to warn when not all control flow paths return - even the crappy embedded compilers do it. This is the ideal situation for braindamaged C lovers: no overhead at all, while still maximising opportunities to snidely talk down to others about their unawareness of undefined behaviour.
|
# ? Mar 5, 2018 22:12 |
|
Just tried it in whatever version of gcc is on my work machine and got no warning at all.
|
# ? Mar 5, 2018 22:20 |
|
HappyHippo posted:Just tried it in whatever version of gcc is on my work machine and got no warning at all.
|
# ? Mar 5, 2018 22:27 |
|
HappyHippo posted:Just tried it in whatever version of gcc is on my work machine and got no warning at all.

Yeah, this is the second really hosed up part of gcc: it has essentially no warnings on by default, and even -Wall -Wextra is a small subset of what it can tell you. What I am saying is, screw gcc, use clang.

Jeb Bush 2012 posted:no poo poo. the question is, would the resulting increase in code size represent a non-negligible performance issue? we're talking about inserting one function call into a fairly small subset of functions

If you want an honest answer, then it doesn't until it does. Essentially, as long as your hot loops fit into icache, their size being different tends to have only a marginal effect*. But once you no longer fit, the perf hit is pretty drastic.

* If you are doing e.g. high-end matrix multiplication, you can tell the difference from every single superfluous instruction. But that is the sort of code that will not have trouble with missing return values.
|
# ? Mar 5, 2018 22:28 |
|
HappyHippo posted:Just tried it in whatever version of gcc is on my work machine and got no warning at all. Spatial posted:even the crappy embedded compilers do it.
|
# ? Mar 5, 2018 22:29 |
|
If you compile with -Wreturn-type it will bitch at you. Really you should be compiling with -Wall -Werror anyways, but yeah, gcc should bitch by default.
|
# ? Mar 5, 2018 22:34 |
|
A couple of days ago I came across this line which made me double-take:C++ code:
|
# ? Mar 5, 2018 22:38 |
|
Spatial posted:A couple of days ago I came across this line which made me double-take: Today I had to fix a bug that turned out to be something reading a register via a hardcoded address like that, but using the address that would have been correct in a previous generation of the product.
|
# ? Mar 5, 2018 23:19 |
|
Spatial posted:A couple of days ago I came across this line which made me double-take: Oh look, a legitimate use of volatile. Never thought I'd see one
|
# ? Mar 5, 2018 23:25 |
|
I read the datasheet for the microcontroller, the memory controller, and the memory module and I still have no idea where the hell that "<<13" came from or why it does anything.
|
# ? Mar 6, 2018 00:10 |
|
JawnV6 posted:wanna mark this thread __attribute__((noreturn))
|
# ? Mar 6, 2018 01:07 |
|
Spatial posted:A couple of days ago I came across this line which made me double-take:

It works out to 0x28064000...is that address aliased to a region that maps to the SDRAM with those specific parameters?

edit:

Jabor posted:Are you sure it's 3 and 4 that are the parameters there? I would have thought that it's the 3 and 2, and the shift by 4 is to put the 3 in the right bits.

I would think that too but I also wouldn't think that an MCU would do what I suggested above anyway. The parts I've worked with have discrete modes for setting the timing parameters prior to reading and writing.

csammis fucked around with this message at 03:06 on Mar 6, 2018 |
# ? Mar 6, 2018 02:44 |
hackbunny posted:Oh look, a legitimate use of volatile. Never thought I'd see one volatile is for hardware registers that might have their value changed by something other than your code, right?
|
|
# ? Mar 6, 2018 02:44 |
|
csammis posted:It works out to 0x28064000...is that address aliased to a region that maps to the SDRAM with those specific parameters? Yeah, peripherals in embedded systems (or really in pretty much any system, it's just hidden from you) are usually mapped somewhere in physical memory. It's either that or dedicated I/O instructions.
|
# ? Mar 6, 2018 02:50 |
|
Are you sure it's 3 and 4 that are the parameters there? I would have thought that it's the 3 and 2, and the shift by 4 is to put the 3 in the right bits.
|
# ? Mar 6, 2018 02:52 |
|
JawnV6 posted:wanna mark this thread __attribute__((noreturn)) You'll be glad to know that this attribute is standard in C++11 as [[noreturn]].
|
# ? Mar 6, 2018 03:21 |
|
VikingofRock posted:volatile is for hardware registers that might have their value changed by something other than your code, right? Yes, it prevents certain compiler optimizations like loading the value from memory, storing it in a register and never checking the actual memory address again when the compiler deduces that your code does not change the variable. The problem was people trying to use it for multithreading, which doesn't work except on, like...IA64? There's no guarantee of atomicity provided by volatile in C and C++.
|
# ? Mar 6, 2018 04:06 |
|
csammis posted:It works out to 0x28064000...is that address aliased to a region that maps to the SDRAM with those specific parameters?

Before that read happens, the memory controller is instructed to put the SDRAM into the "load mode register" state so it can be configured. This register must be written via the address pins of the SDRAM chip. The only way to do that is to make the memory controller issue a read so that it puts the appropriate values on the address pins. Hence the crazy code.

Jabor posted:Are you sure it's 3 and 4 that are the parameters there? I would have thought that it's the 3 and 2, and the shift by 4 is to put the 3 in the right bits.
|
# ? Mar 6, 2018 04:12 |
|
Man I guess Marvel movies have gotten really weird
|
# ? Mar 6, 2018 05:26 |
|
The Phlegmatist posted:The problem was people trying to use it for multithreading, which doesn't work except on, like...IA64? There's no guarantee of atomicity provided by volatile in C and C++.

It's complicated! volatile forces the compiler to emit the exact access requested in the exact order requested. You are right that there technically aren't any requirements about how the access is performed, but in practice there's a handshake agreement that volatile accesses will be implemented in an "obvious" way, i.e. with an appropriate single load/store instruction (or addressing mode where applicable).

Most architectures do guarantee that aligned loads and stores will be atomic, in the basic sense that such loads will never observe partial results from such stores. So in practice you can get a certain level of atomicity from volatile accesses. And the compiler is required to assume that the access might be weird in unspecifiable ways; e.g. if there are two immediately-consecutive volatile loads from the same variable, the compiler not only has to perform both loads, but it has to assume that they might have read different values; that's also somewhat useful.

What you do not get is even the slightest statement about memory ordering. Volatile accesses do not synchronize with other threads on any level. On processors with weak memory ordering (like ARM or PowerPC), reading a value from a volatile variable is not sufficient at a hardware level to ensure that subsequent loads will see the result of stores that occurred before the store that put that value there. Even on processors with stronger ordering at the hardware level (like x86 and x86-64), nothing stops the compiler from reordering non-volatile accesses around volatile ones.

It is almost always the case that such memory ordering is necessary for correctness, and even if it isn't, there's no real guarantee that the compiler will honor that handshake agreement about emitting your access in an obvious way.
So people rightly tell you to never use volatile for threads.
|
# ? Mar 6, 2018 06:11 |
|
rjmccall posted:...in practice there's a handshake agreement that volatile accesses will be implemented in an "obvious" way... Is this one of those rare scenarios where C/C++ compiler writers don't call their language lawyers and go for maxxximum performance by stretching the specification nearly to breaking point?
|
# ? Mar 6, 2018 10:05 |
|
Even a malevolently-correct volatile implementation would prevent most of the optimizations which break incorrect code, since the whole point is to force the compiler to emit code which it can "prove" is unnecessary.
|
# ? Mar 6, 2018 18:53 |
|
VikingofRock posted:volatile is for hardware registers that might have their value changed by something other than your code, right? Hardware registers do more than that: you are not expected to "read back" the values you "wrote". Each address in a range of memory mapped registers is, essentially, a function declaration, and each read or write a function call. volatile pretty much turns memory accesses into function calls. Kinda
|
# ? Mar 7, 2018 00:29 |
|
A few months ago I solved an issue by throwing the volatile keyword at some Java code in idiotic fashion. I remembered it as "the thing that really makes the variable read the proper value when it isn't" from some C I wrote for a microprocessors class in college. Multiple threads and a status flag. The strange thing is the code would work in a debugger if you set a breakpoint in the right spot. The volatile keyword in Java tells threads to hit main memory instead of checking the local thread cache, a thing that I wasn't even aware of. So the status flag would get set and the other thread would just never see the update because it was checking against its cache.
|
# ? Mar 7, 2018 06:09 |
|
I don't know about C, but in Java it's simple. If you've got any sort of variable that's being accessed by multiple threads, slap volatile on it. E: Or use the synchronized keyword on the accessing methods in a sensible manner.

On the other hand, if you NEED the volatile keyword at all, it might be time to take a step back and rethink your code's architecture. Ideally you use multithreading to split up the code in such a way that the parts never need to interact again, except perhaps at the end where you synchronously wait for everything to finish. I think such a design should be possible in any program that doesn't need very high performance.

Anyway, I came here to post this: https://everythingfunctional.wordpress.com/2018/03/07/haskelly-fizzbuzz/ Gotta love the way functional programmers think.

Carbon dioxide fucked around with this message at 07:56 on Mar 7, 2018 |
# ? Mar 7, 2018 07:46 |
|
I just found this line of C# to check that two ints (noDetails, noPictures) are equal. It took me a second to even parse its purpose. code:
|
# ? Mar 7, 2018 09:22 |
|
|
|
chippy posted:I just found this line of C# to check that two ints (noDetails, noPictures) are equal. it took me a second to even parse it's purpose. This is amazing
|
# ? Mar 7, 2018 09:41 |