Linear Zoetrope
Nov 28, 2011

A hero must cook
What if I don't want to recompile? Could we maybe fetch 2 from a web server somewhere?

Absurd Alhazred
Mar 27, 2010

by Athanatos

Jsor posted:

What if I don't want to recompile? Could we maybe fetch 2 from a web server somewhere?

That's way too specific. You need an XML and a protocol-agnostic API library which itself links to a generic io library which loads it. Otherwise, what if you're not prepared for Web 3.9 or IPv24?! :gonk:

xzzy
Mar 5, 2009

What we need is for CIPM to develop a universal way for us to calculate 2 based on some universal constant.

Linear Zoetrope
Nov 28, 2011

A hero must cook

xzzy posted:

What we need is for CIPM to develop a universal way for us to calculate 2 based on some universal constant.

Maybe we could calculate 2 from the number two? No wait that's stupid sorry.

Absurd Alhazred
Mar 27, 2010

by Athanatos

xzzy posted:

What we need is for CIPM to develop a universal way for us to calculate 2 based on some universal constant.

Oh, that's easy: divide the circumference of a circle of unit radius by its area.

Jsor posted:

Maybe we could calculate 2 from the number two? No wait that's stupid sorry.

Well, you could always write "1+1". That's always 2.

Except when it's 0. poo poo. poo poo. drat.

Klades
Sep 8, 2011

You guys are overthinking this again

code:
int two;
std::cout << "Input a number representing 'two': " << std::flush;
std::cin >> two;

int zero;
std::cout << "Input a number representing 'zero': " << std::flush;
std::cin >> zero;

if (foo % two == zero)
Oh poo poo, I forgot to check for division by zero

code:
if (two == zero) throw std::runtime_error("Can't math");
else if (foo % two == zero)
Perfect.

Linear Zoetrope
Nov 28, 2011

A hero must cook

Klades posted:

You guys are overthinking this again

code:
int two;
std::cout << "Input a number representing 'two': " << std::flush;
std::cin >> two;

int zero;
std::cout << "Input a number representing 'zero': " << std::flush;
std::cin >> zero;

if (foo % two == zero)
Oh poo poo, I forgot to check for division by zero

code:
if (two == zero) throw std::runtime_error("Can't math");
else if (foo % two == zero)
Perfect.

That has a race condition. What if the numbers two or zero change between the user's input and the test?

Spatial
Nov 15, 2007

Documentation horror.

When I read the hardware manual for the power management unit 3 months ago:

quote:

...blah blah this *ALWAYS* happens regardless of whether you write blah blah before entering the low power state blah blah...
Emphasis unchanged. Anyway, silicon came back and the behaviour is not as expected. Welp. I talk to the guy who designed that hardware block, who wrote the manual, and I show him what we did in the firmware ROM. He says:

:grin: posted:

No no... it doesn't really work that way. It doesn't always do that, actually it's really a debug thing you shouldn't ever use... hmm... yeah... [long pause] It actually works like this *draws diagram that explains everything with perfect clarity. it's not in the manual*
gently caress youuuuuuuuu

Absurd Alhazred
Mar 27, 2010

by Athanatos
Ah, hardware manuals.

Reminds me of when I had this really obscure C code, written by the manufacturer's engineers, from which I was trying to understand how to implement I/O communication with their card. I saw this loop and figured it was supposed to be a delay, perhaps written on the assumption that CPU clock speeds as they were in the 1990s (or whenever this came out) would give the hardware enough time to respond to a signal. There was also a read from another port that I didn't quite understand.

Well, after playing around with the various tools for real-time timing in Windows XP (hint: neither very good nor reliable), and trying to extend the loop by however much I figured a modern CPU would be faster than an old one (wasteful, and it also did not work), I ran into an old Linux hardware HOWTO which explained what that loop really was: completely irrelevant. Instead, if you read from the parallel port (regardless of whether anything is plugged in), you know you're going to get a 1ms delay. BAM! Now I can write my interfacing code.
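Something like this is the idiom, I think - a rough sketch rather than the vendor's code, assuming Linux on x86, root privileges, and the traditional parallel port status register at 0x379:

code:
#include <stdio.h>
#include <sys/io.h>

#define LPT_STATUS 0x379  /* traditional parallel port status register (assumed) */

int main(void) {
    /* Ask the kernel for permission to touch that I/O port. */
    if (ioperm(LPT_STATUS, 1, 1) != 0) {
        perror("ioperm");
        return 1;
    }

    /* Each read of a legacy ISA port stalls the CPU for a fixed bus-access
       time, whether or not anything is plugged in - that's the "free" delay. */
    for (int i = 0; i < 10; i++)
        (void)inb(LPT_STATUS);

    puts("waited");
    return 0;
}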

I get the impression that there's a lot of cargo cult in the coding world.

JawnV6
Jul 4, 2004

So hot ...

Absurd Alhazred posted:

Well, you could always write "1+1". That's always 2.

Except when it's 0. poo poo. poo poo. drat.
Now I want to start throwing this garbage at compilers and seeing who handles 1-bit overflow
code:
typedef struct onebit {
  char bit : 1;
} onebit_t;
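A little harness for that experiment might look like this (just a sketch; whether a plain char bit-field is signed is implementation-defined, which is half the fun):

code:
#include <stdio.h>

typedef struct onebit {
    char bit : 1;  /* signedness of a plain char bit-field is implementation-defined */
} onebit_t;

int main(void) {
    onebit_t x;
    x.bit = 1;              /* in a signed 1-bit field this may already store -1 */
    x.bit = x.bit + x.bit;  /* "1 + 1", truncated back into a single bit */
    printf("%d\n", (int)x.bit);  /* typically 0 either way; the warnings are the interesting part */
    return 0;
}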
e: My last big hardware manual mistake was a good chunk of registers being inaccessible from a particular interface. The protocol was obviously short 2 upper bits, but I didn't really put it all together until the first rev boards came back.

JawnV6 fucked around with this message at 02:29 on May 13, 2016

KernelSlanders
May 27, 2013

Rogue operating systems on occasion spread lies and rumors about me.

TheresaJayne posted:

I don't know how bad this actually is, I personally don't like this but it's a rule we have at work - no magic numbers - so this

return data[3] & 0xFF | (data[2] & 0xFF << 8) | (data[1] &0xFF << 16) | (data[0] & 0xFF << 24);

which takes the 4 bytes passed in and turns them into an int;

is now

return data[FOURTH_BYTE] & BYTE_MASK | (data[THIRD_BYTE] & BYTE_MASK << BYTE_SIZE_SHIFT) | (data[SECOND_BYTE] & BYTE_MASK << DOUBLE_BYTE_SIZE_SHIFT) | (data[FIRST_BYTE] & BYTE_MASK << TRIPLE_BYTE_SIZE_SHIFT);

with all the NAMES being private static final int at the top of the class.

And if the same number is used elsewhere, it's called something different.

Concern over magic numbers is solving entirely the wrong problem, because that is terrible code either way.

code:
int *p = (int*)(&data);
return *p;

vOv
Feb 8, 2014

KernelSlanders posted:

Concern over magic numbers is solving entirely the wrong problem, because that is terrible code either way.

code:
int *p = (int*)(&data);
return *p;

Only on a big-endian system, and even then I'm pretty sure this violates strict aliasing.

vOv fucked around with this message at 06:35 on May 13, 2016

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
The fourth byte is low and the first byte is high, so that there int is big-endian. So that cast is almost certain to produce something that will have to be byte-swapped.

Also, in addition to being a probable strict-aliasing violation, casting to int* is quite likely to violate alignment rules unless you're exceptionally careful about your buffering.

Also the original code was clearly Java, so it would perhaps not quite be fair to criticize it for not using C features, even if the features were being used correctly.
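For reference, the aliasing-, alignment-, and endianness-safe C version is just the same shift-and-or dance the Java code was already doing; a sketch, with read_be32 as a made-up name, and with each byte cast before it is shifted:

code:
#include <stdint.h>

/* Assemble four big-endian bytes into a 32-bit value without ever casting
   the buffer to int*: no aliasing, alignment, or host-endianness concerns. */
static uint32_t read_be32(const unsigned char *data) {
    return ((uint32_t)data[0] << 24) |
           ((uint32_t)data[1] << 16) |
           ((uint32_t)data[2] << 8)  |
            (uint32_t)data[3];
}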

eth0.n
Jun 1, 2012

vOv posted:

Only on a little-endian system, and even then I'm pretty sure this violates strict aliasing.

As long as data is a char array, there's no strict aliasing issue.

Bigger problem is the original post seemed to be about Java (static final int as constants).

Absurd Alhazred
Mar 27, 2010

by Athanatos
It also involved loading bytes individually from a hardware register.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

eth0.n posted:

As long as data is a char array, there's no strict aliasing issue.

This is one of those big open questions, because on the one hand it would obviously be ridiculous to invalidate the huge amount of existing code that writes structured stuff into char arrays, and on the other hand that is not actually what the standard says.

The C standard has a formal concept called an object, which is basically space for a value in memory. An object has an "effective type", and you're allowed to access it through an l-value of that type, or the same type with the wrong signedness, or a character type, all ignoring qualifiers. It's not always clear what constitutes an object, except that a declaration definitely creates an object whose effective type is its declared type. Note in particular that a declared variable of type char[1024] is an object with an effective type of char[1024], and the rule doesn't work in reverse: you're allowed to access an int object using a char l-value, but not a char object using an int l-value.

The C committee has been really, really bad about coming up with a sensible rule here mostly because they can't agree about what should be forbidden. They can definitely agree that dumb things should be forbidden, and that reasonable things should be allowed, but they are really not sure about how to define either of those things in a way that actually permits either optimization or the writing of code.
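A minimal illustration of that asymmetry (a sketch, not from the post; the second access is exactly the kind of thing the rule forbids, even though plenty of compilers will happen to let it through):

code:
#include <stdio.h>

int main(void) {
    int n = 42;
    unsigned char buf[sizeof(int)] = {0};

    /* Fine: accessing an int object through a character l-value. */
    unsigned char *bytes = (unsigned char *)&n;
    printf("first byte of n: %d\n", bytes[0]);

    /* Not fine: buf's effective type is unsigned char[sizeof(int)], and int
       is not an allowed access type for it, so this read is undefined. */
    int *alias = (int *)buf;
    printf("%d\n", *alias);

    return 0;
}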

KernelSlanders
May 27, 2013

Rogue operating systems on occasion spread lies and rumors about me.
There's no alignment issue. The following works just fine.

code:
#include <stdio.h>

int main(int argc, char**argv) {
  unsigned char data[5] = {3, 2, 0, 0, 1};

  int *p = (int*)(&data[1]);
  printf("%d\n", *p);

  p = (int*)(&data[0]);
  printf("%d\n", *p);

  return 0;
}
Loading bytes individually from a hardware register in Java confuses me a bit, but if you're using com.sun.unsafe then you can do a similar thing, although I don't recall the syntax. Either way, the prior code presumed they were already in an array.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
It isn't guaranteed to. The start of any particular array on the stack is likely to be aligned, but in real code you are probably reading the next four bytes, not the first four bytes, and that is not particularly likely to be aligned.

That said, most architectures are pretty forgiving about alignment, especially x86, so it is not something that will always bite you. On the other hand, the compiler knows that, too, and will generally emit an unaligned load on those architectures the same way, while actually emitting the right sequence on platforms with stronger requirements; so really, you might as well get the alignment rules right.

Edit: your example definitely only works on an architecture that doesn't enforce small alignments.
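The boring way to get those rules right (a sketch, not from the thread; load_u32 is a made-up name) is to memcpy into a properly typed local and let the compiler pick the load sequence:

code:
#include <stdint.h>
#include <string.h>

/* Reads four bytes in host order from an arbitrarily aligned pointer.
   Compilers turn the memcpy into a single load on targets where unaligned
   loads are cheap, and into the correct multi-instruction sequence elsewhere. */
static uint32_t load_u32(const unsigned char *p) {
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return v;
}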

rjmccall fucked around with this message at 06:17 on May 13, 2016

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

KernelSlanders posted:

There's no alignment issue. The following works just fine.

code:
#include <stdio.h>

int main(int argc, char**argv) {
  unsigned char data[5] = {3, 2, 0, 0, 1};

  int *p = (int*)(&data[1]);
  printf("%d\n", *p);

  p = (int*)(&data[0]);
  printf("%d\n", *p);

  return 0;
}

"Works on my machine" is not the same as "works fine". (Especially if you're testing this on your desktop machine with a processor that will happily fix up your unaligned accesses for you.)

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

vOv posted:

Only on a little-endian system, and even then I'm pretty sure this violates strict aliasing.

are there any important computers left that aren't little-endian

Kazinsal
Dec 13, 2011

Suspicious Dish posted:

are there any important computers left that aren't little-endian

IBM z/Architecture, and ARM can be switched to big-endian data mode (pretty sure instructions are fixed little-endian though).

If SPARC counts, then it can switch endianness per-instruction! :pram:

TheresaJayne
Jul 1, 2011

KernelSlanders posted:

Concern over magic numbers is solving entirely the wrong problem, because that is terrible code either way.

code:
int *p = (int*)(&data);
return *p;

Doesn't quite work, I'm afraid; the data is actually 14k long, and these are 4 bytes as a byte array, 4 bytes as an int, 4 bytes as an int, 4 bytes as an int (27 bits, 236 bits, 20 bits, 1 bit, 2 bits, 5 bits), repeated n times.

Also, that looks like C; this is Java we are talking about.


Oh, 0, 1, and 2 are not magic numbers, but 3 is :( (we know a song about that) https://www.youtube.com/watch?v=daWObuUptrQ

TheresaJayne fucked around with this message at 06:39 on May 13, 2016

vOv
Feb 8, 2014

Suspicious Dish posted:

are there any important computers left that aren't little-endian

I got it backwards, it actually only works on big-endian machines.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Kazinsal posted:

IBM z/Architecture

so no then

Soricidus
Oct 21, 2010
freedom-hating statist shill
The cast also assumes that int is 32 bits. It's bad and the original code was correct and good.

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)
Why use x % 2 == 0 instead of x & 1 == 0? Am I missing the joke or something?

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

dougdrums posted:

Am I missing the joke or something?

Um, I guess so? Posters are making a mountain out of a molehill in regards to how to "properly" code a parity check. That's the joke, such as it is.

darthbob88
Oct 13, 2011

YOSPOS

Absurd Alhazred posted:

It also involved loading bytes individually from a hardware register.

feedmegin
Jul 30, 2008

Kazinsal posted:

IBM z/Architecture, and ARM can be switched to big-endian data mode (pretty sure instructions are fixed little-endian though).

If SPARC counts, then it can switch endianness per-instruction! :pram:

IBM POWER boxes running AIX and the like are still big-endian, and so is SPARC (and if you're counting mainframes then SPARC definitely counts; it's the largest of the remaining server RISCs by market share). ARM instructions can be big-endian and originally only were, and indeed most RISCs were originally big-endian and even now can be run in either endianness - it's not like it's a hard thing to do in hardware if you have fixed-width instructions.

But yeah, in terms of sheer numbers it's either little-endian ARM or x86 by a landslide these days. Big-endian is definitely legacy at this point.

Edit: oh, and Java is big-endian (e.g. the format of constants in class files, when you write integers out over RMI, etc), too.

feedmegin fucked around with this message at 13:04 on May 13, 2016

feedmegin
Jul 30, 2008

Jabor posted:

"Works on my machine" is not the same as "works fine". (Especially if you're testing this on your desktop machine with a processor that will happily fix up your unaligned accesses for you.)

Yup, here is what happens when I run your 'works fine' code, KernelSlanders -

code:
bash-2.05$ /opt/csw/bin/gcc test.c
bash-2.05$ ./a.out
Bus Error (core dumped)
bash-2.05$ uname -m
sun4u

ExcessBLarg!
Sep 1, 2001

rjmccall posted:

Note in particular that a declared variable of type char[1024] is an object with an effective type of char[1024], and the rule doesn't work in reverse: you're allowed to access an int object using a char l-value, but not a char object using an int l-value.
We might have talked about it before, but what's your opinion on unions in C99 and type punning?

The last time I had to do something like this I used a union that was basically:
code:
union int32bytes_u {
    int32_t value;
    unsigned char bytes[sizeof(int32_t)];
};
Then wrote to bytes and read from value. It was also used in such a way that multiple write accesses didn't happen, and if multiple read accesses did happen, it didn't matter.
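The usage pattern would look roughly like this (a sketch with made-up values, relying on the reading of C99 where accessing the non-active member just reinterprets the bytes):

code:
#include <stdint.h>
#include <stdio.h>

union int32bytes_u {
    int32_t value;
    unsigned char bytes[sizeof(int32_t)];
};

int main(void) {
    union int32bytes_u u;

    /* Write the bytes individually... */
    u.bytes[0] = 0x78;
    u.bytes[1] = 0x56;
    u.bytes[2] = 0x34;
    u.bytes[3] = 0x12;

    /* ...then read them back as an int32_t. The result is whatever those
       bytes mean in host byte order (0x12345678 on a little-endian machine). */
    printf("0x%08x\n", (unsigned)u.value);
    return 0;
}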

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)

Hammerite posted:

Um, I guess so? Posters are making a mountain out of a molehill in regards to how to "properly" code a parity check. That's the joke, such as it is.

I've just seen a lot of smart people do it (irl), so I've always been a little confused. But after compiling both on Windows and disassembling them, the latter is more efficient. Modulus emits a cdq and xor and then subs from a register that should always be zero in this case, but still uses an and-immediate to get the result.

Using and directly results in and, test, jne, which is what you'd want...

I don't know why people keep doing it. It's been a bit of a pet peeve. I thought it made no difference at this point, but I guess it still does.

Sebbe
Feb 29, 2004

dougdrums posted:

I don't know why people keep doing it.

Because it's a perfectly acceptable way of determining if a number is even.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

dougdrums posted:

I've just seen a lot of smart people do it (irl), so I've always been a little confused. But after compiling both on Windows and disassembling them, the latter is more efficient. Modulus emits a cdq and xor and then subs from a register that should always be zero in this case, but still uses an and-immediate to get the result.

Using and directly results in and, test, jne, which is what you'd want...

I don't know why people keep doing it. It's been a bit of a pet peeve. I thought it made no difference at this point, but I guess it still does.

Have you tried compiling with ... any level of optimization at all?

raminasi
Jan 25, 2005

a last drink with no ice

dougdrums posted:

I've just seen a lot of smart people do it (irl), so I've always been a little confused. But after compiling both on Windows and disassembling them, the latter is more efficient. Modulus emits a cdq and xor and then subs from a register that should always be zero in this case, but still uses an and-immediate to get the result.

Using and directly results in and, test, jne, which is what you'd want...

I don't know why people keep doing it. It's been a bit of a pet peeve. I thought it made no difference at this point, but I guess it still does.

When you say it "makes a difference," do you mean one that an end-user (or even your profiler) notices while using a fully-optimized build? It sure makes a difference in code readability, and not a good difference.

ExcessBLarg!
Sep 1, 2001

dougdrums posted:

I've just seen a lot of smart people do it (irl), so I've always been a little confused. But after compiling both on windows and diassembling them, the latter is more efficient.
Even if there's a potential for microoptimization, it honestly makes no difference unless your code consists of a busy-loop of evenness checks. And if that's honestly the case, you might be better off hand-writing the assembly anyways.

Otherwise, the main reason folks use "x % 2 == 0" is because it's a pretty direct encoding of the question "is x divisible by 2?". Conversely "x & 1 == 0" is more asking "is the first (one-indexed) bit of x unset?" Yes the results are equivalent*, but it's more about viewing x as an integer rather than a bag of bits.

* equivalent in two's complement. They are not equivalent in ones' complement for negative integers.
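Purely as a footnote to the footnote - a sketch, not from any post here: if you do write the bitwise form in C, remember that == binds tighter than &, so the parentheses matter:

code:
#include <stdbool.h>

/* The "integer" phrasing: is x divisible by 2? */
static bool is_even_mod(int x) { return x % 2 == 0; }

/* The "bag of bits" phrasing: is the low bit clear? Note the parentheses;
   x & 1 == 0 would parse as x & (1 == 0), which is always 0. */
static bool is_even_bit(int x) { return (x & 1) == 0; }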

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

ExcessBLarg! posted:

We might have talked about it before, but what's your opinion on unions in C99 and type punning?

My opinion is that obvious reinterpretation through unions is an important language feature that it's important to not mis-compile.

My read of the standards here is :downswords: :suicide:.

Apparently the C++ committee is trying to improve this, but their current efforts are basically just attempting to clean up the formal objects model and improve the wording on unions; the intended language rule will still be that a union has one active member and you must take action to change it before you are allowed to read from a different member.

BigPaddy
Jun 30, 2008

That night we performed the rite and opened the gate.
Halfway through, I went to fix us both a coke float.
By the time I got back, he'd gone insane.
Plus, he'd left the gate open and there was evil everywhere.


A horror story for a day such as this...

'Twas Friday the 13th and the hands of the clock did tick down to the time of doom, a deadline that hung over the heads of the team in much the way the sword of Damocles did sit over the head of the commoner. Slowly the completion of the build drew near... 98%... 99%... 100% but wait! The indicator was red like the blood that spurts from a fatal wound to the heart from the dagger of a hated foe. Feverishly the eyes of those gathered dropped to the message and saw, horror upon horror, the truth of their dire situation.

quote:

Code Coverage Failures:
1. Average test coverage across all Apex Classes and Triggers is 74%, at least 75% test coverage is required.

Foolishly in haste test classes had not been written for many a month to cover their work to ensure that it was divine and functioned as the Masters had insisted. This problem was taken to the Masters and their response was swift.

quote:

There is always something with Salesforce, can't we throw them some cash or something to remove this stupid limit

The developers entreated that it could not be done and if they could have more time to write the missing test classes everything would be better for all. The Masters looked upon them with scorn, test classes were just a techie thing to waste time away from the important task of making new things for the Holy Warriors of Sales to use to sell as much product as they could. The developers were beaten harshly with words about timelines, profit margins and how technical debt was not a real thing. Finally the leader of the developers agreed to do what he could to make the build work and deploy on time.

He slowly trudged away, his pride fatally wounded, and instructed his team to write test classes that execute the new code but to make all asserts pass without doing any real tests. After this command he retreated to his enclave and removed a bottle filled with a dark brown liquid, pouring it into a glass and weeping softly as if he believed no one could hear him.

BUT READER THIS IS NO FICTION! That lead developer works where I do and this very interaction happened on this very day! As we speak the developers write their sinful tests to meet a deadline that has no value other than for the Masters to get a bigger bonus this quarter. Let this tale be a cautionary one: never specify time in your development plans to write test classes, and always roll it into your normal dev time to hide it from the eyes of those who wield MBAs as one would a sword at the head of common sense.

Series DD Funding
Nov 25, 2014

by exmarx

dougdrums posted:

I've just seen a lot of smart people do it (irl), so I've always been a little confused. But after compiling both on Windows and disassembling them, the latter is more efficient. Modulus emits a cdq and xor and then subs from a register that should always be zero in this case, but still uses an and-immediate to get the result.

Using and directly results in and, test, jne, which is what you'd want...

I don't know why people keep doing it. It's been a bit of a pet peeve. I thought it made no difference at this point, but I guess it still does.

i don't know what bad compiler you used but clang 3.7 emits the right thing unless I set -O0

raminasi
Jan 25, 2005

a last drink with no ice

BigPaddy posted:

A horror story for a day such as this...

'Twas Friday the 13th and the hands of the clock did tick down to the time of doom, a deadline that hung over the heads of the team in much the way the sword of Damocles did sit over the head of the commoner. Slowly the completion of the build drew near... 98%... 99%... 100% but wait! The indicator was red like the blood that spurts from a fatal wound to the heart from the dagger of a hated foe. Feverishly the eyes of those gathered dropped to the message and saw, horror upon horror, the truth of their dire situation.


Foolishly in haste test classes had not been written for many a month to cover their work to ensure that it was divine and functioned as the Masters had insisted. This problem was taken to the Masters and their response was swift.


The developers entreated that it could not be done and if they could have more time to write the missing test classes everything would be better for all. The Masters looked upon them with scorn, test classes were just a techie thing to waste time away from the important task of making new things for the Holy Warriors of Sales to use to sell as much product as they could. The developers were beaten harshly with words about timelines, profit margins and how technical debt was not a real thing. Finally the leader of the developers agreed to do what he could to make the build work and deploy on time.

He slowly trudged away, his pride fatally wounded, and instructed his team to write test classes that execute the new code but to make all asserts pass without doing any real tests. After this command he retreated to his enclave and removed a bottle filled with a dark brown liquid, pouring it into a glass and weeping softly as if he believed no one could hear him.

BUT READER THIS IS NO FICTION! That lead developer works where I do and this very interaction happened on this very day! As we speak the developers write their sinful tests to meet a deadline that has no value other than for the Masters to get a bigger bonus this quarter. Let this tale be a cautionary one: never specify time in your development plans to write test classes, and always roll it into your normal dev time to hide it from the eyes of those who wield MBAs as one would a sword at the head of common sense.

This is amazing.

I have my own testing story to share - while trying to figure out why some eight-line unit tests were taking upwards of four seconds each to run, I found out that some gibbering idiot had implemented a view model in such a way that when it was instantiated, it automatically spun up a background thread that immediately blocked and was then forgotten about forever. Forty unit tests that each create one of these view models, forty bored threads doing nothing, and the .NET task scheduler reduced to a mess of blood and tears because it has no idea what the gently caress I'm trying to do. In a twist you all saw coming, the gibbering idiot was me.
