ultrafilter
Aug 23, 2007

It's okay if you have any questions.


OneEightHundred posted:

What kind of situations does that happen in? I thought the assumption was that all unspecified digits were zero.

1 could be anything between 0.5 and 1.5, but 1.0 can only be between 0.95 and 1.05.

Zopotantor
Feb 24, 2013

...und ist er drin dann lassen wir ihn niemals wieder raus...

OneEightHundred posted:

What kind of situations does that happen in? I thought the assumption was that all unspecified digits were zero.

A trailing zero tells you that rounding occurred at that point, and not at the previous digit. I don’t remember specific details or examples, though.

e: ^this

Computer viking
May 30, 2011
Now with less breakage.

If I remember my science classes right, trailing zeroes indicate the number of significant digits. 1.00 is three significant digits, as is 1.00e-3 (0.00100). Scientific notation makes it easier, since it's otherwise impossible to know how precise e.g. 1000000 is meant to be. (The rule in those cases is to only count the nonzero digits - which may be an underestimate; it could really be 1.00e6.)

This is useful when you want to know how precise your answer is - if you estimate there to be 3e2 bugs per square meter, and you have exactly 2.000000 square meters, that does not mean you have exactly 6.000000e2 bugs. The rule is that in multiplication, you go down to the lowest number of significant digits - so it's just 6e2 (550 to 649, effectively).

And I know using 2e6 instead of 2x10^6 is a compSci thing - but it's also much easier to write on a phone keyboard.

Foxfire_
Nov 8, 2010

Computer viking posted:

This is useful when you want to know how precise your answer is - if you estimate there to be 3e2 bugs per square meter, and you have exactly 2.000000 square meters, that does not mean you have exactly 6.000000e2 bugs. The rule is that in multiplication, you go down to the lowest number of significant digits - so it's just 6e2 (550 to 649, effectively).
Significant figures math isn't a thing in actual science. It's a set of rules-of-thumb for when you're too lazy to do real uncertainty propagation.

Like if I measure the distance from thing A to thing B as 5.5cm +/- 0.5cm and from thing B to thing C as 1.2cm +/- 0.3cm, my calculated distance from A to C is 6.7cm +/- sqrt(0.5^2 + 0.3^2)cm. If I want to write that as a decimal, 6.700+/-0.583cm is a more correct answer than trying to do significant figure things. Actual measurements can have non power-of-ten uncertainties and combinations of uncertain measurements are more uncertain than any of their components.
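Spelling that arithmetic out, since it comes up again below: sqrt(0.5^2 + 0.3^2) = sqrt(0.25 + 0.09) = sqrt(0.34) ≈ 0.583, so the combined uncertainty is about 0.583cm - larger than either of the two inputs, as you'd expect when summing independent errors.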

Computer viking
May 30, 2011
Now with less breakage.

Last time I saw it done like that was for high-school level chemistry (I think - it may have been physics), so yeah. What I do for a living probably qualifies as science, and nothing we do at work involves that sort of uncertainty calculation at all. It all sort of works itself out as standard deviations and correlation scores and the like instead. Doing actual precise propagation of precision like that feels more like an engineering thing to me, though I wouldn't be surprised if the more precisely measurable sciences (particle physics and the like) find it useful.

Anyway, the simple "counting digits" version is still useful to keep yourself grounded when estimating things, if nothing else.

Computer viking fucked around with this message at 04:32 on Dec 7, 2020

fourwood
Sep 9, 2001

Damn I'll bring them to their knees.

Foxfire_ posted:

Significant figures math isn't a thing in actual science. It's a set of rules-of-thumb for when you're too lazy to do real uncertainty propagation.

Like if I measure the distance from thing A to thing B as 5.5cm +/- 0.5cm and from thing B to thing C as 1.2cm +/- 0.3cm, my calculated distance from A to C is 6.7cm +/- sqrt(0.5^2 + 0.3^2)cm. If I want to write that as a decimal, 6.700+/-0.583cm is a more correct answer than trying to do significant figure things. Actual measurements can have non power-of-ten uncertainties and combinations of uncertain measurements are more uncertain than any of their components.
That... goes against everything I was taught. If all your uncertainties are of order 10^-1 there’s no way you combine them to get an uncertainty that’s of order 10^-3. But probably a de-rail, I guess.

Absurd Alhazred
Mar 27, 2010

by Athanatos

fourwood posted:

That... goes against everything I was taught. If all your uncertainties are of order 10^-1 there’s no way you combine them to get an uncertainty that’s of order 10^-3. But probably a de-rail, I guess.

Where are you getting 10^-3? The sum's uncertainty (0.583) is larger than either of the elements' uncertainties (0.5 and 0.3), as you'd expect. They're all around 10^-1. (all in cm)

fourwood
Sep 9, 2001

Damn I'll bring them to their knees.

Absurd Alhazred posted:

Where are you getting 10^-3? The sum's uncertainty (0.583) is larger than either of the elements' uncertainties (0.5 and 0.3), as you'd expect. They're all around 10^-1. (all in cm)
Supposedly 0.583 means you know it’s not +/- 0.584 or 0.582, so you’re confident in the 10^-3 place. But that’s likely untrue since you only knew the input measurements to a tenth. (Should you have put 0.5 into your uncertainty, or was it really 0.52 if you could have just measured it better?)

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

fourwood posted:

Supposedly 0.583 means you know it’s not +/- 0.584 or 0.582, so you’re confident in the 10^-3 place. But that’s likely untrue since you only knew the input measurements to a tenth. (Should you have put 0.5 into your uncertainty, or was it really 0.52 if you could have just measured it better?)

What? An uncertainty measurement doesn't mean "I know it's exactly either 0.5 or 1.5", it's "I know it's somewhere between these values".

An uncertainty of ±0.51 is less certain than ±0.5

Zopotantor
Feb 24, 2013

...und ist er drin dann lassen wir ihn niemals wieder raus...
Here's a talk from last year showing some of the examples I couldn’t remember, and incidentally might also help with the original question.
https://www.youtube.com/watch?v=4P_kbF0EbZM

Foxfire_
Nov 8, 2010

The ± symbol means multiple things. Sometimes it means 'replace this with either a + or a -' (e.g. the quadratic formula, trig identities, ...).

It is also the symbol for tolerances and uncertainties. Like an analytical balance with an accuracy of ±0.03mg is promising that its measurements will be within 0.03mg of the true value. If you mass something and the balance reads 0.27mg, you would write that down as 0.27 ± 0.03mg. The true mass could be anywhere from 0.24mg to 0.30mg. It would never be 0.22mg unless the balance was out of calibration.

fourwood
Sep 9, 2001

Damn I'll bring them to their knees.

Jabor posted:

What? An uncertainty measurement doesn't mean "I know it's exactly either 0.5 or 1.5", it's "I know it's somewhere between these values".

An uncertainty of ±0.51 is less certain than ±0.5
Starting from the original example, if there are 2 measurements that are “5.5” and “1.2” you only have an accuracy of ~0.1 cm. And since the uncertainties were 0.5 and 0.3, we clearly don’t know the 2nd digit past the decimal in the measurements since the errors are of the order 10^-1. The sum of the two values is less certain than the values themselves (but better than straight adding of uncertainties) so there’s no way you can be sure about the sum all the way down to the 3rd decimal place. (“6.700” is an unreasonable amount of precision.) A better final number is probably 6.7 +/- 0.6, with “6.700 +/- 0.583” having more precision than is justified by the data.

That’s how it’s been ingrained in me, anyway. :shrug:

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

fourwood posted:

Starting from the original example, if there are 2 measurements that are “5.5” and “1.2” you only have an accuracy of ~0.1 cm. And since the uncertainties were 0.5 and 0.3, we clearly don’t know the 2nd digit past the decimal in the measurements since the errors are of the order 10^-1. The sum of the two values is less certain than the values themselves (but better than straight adding of uncertainties) so there’s no way you can be sure about the sum all the way down to the 3rd decimal place. (“6.700” is an unreasonable amount of precision.) A better final number is probably 6.7 +/- 0.6, with “6.700 +/- 0.583” having more precision than is justified by the data.

That’s how it’s been ingrained in me, anyway. :shrug:

±0.583 is unequivocally less precise than ±0.5. ±0.5 never ever ever means it could be +0.51 or -0.53 - if errors of that size are within your range, your uncertainty should be larger. If you know that your uncertainty is better than ±0.6, but it could be worse than ±0.5, it's totally appropriate to use a number in between as your measure of the accuracy of the actual value.

Looking at the number of digits in the uncertainty is the wrong way to judge precision - the magnitude (relative to the underlying value) is what defines the precision of the measurement.

feelix
Nov 27, 2016
THE ONLY EXERCISE I AM UNFAMILIAR WITH IS EXERCISING MY ABILITY TO MAKE A POST PEOPLE WANT TO READ
If you don't have to propagate uncertainty using derivatives are you even really doing science?

Absurd Alhazred
Mar 27, 2010

by Athanatos

fourwood posted:

Starting from the original example, if there are 2 measurements that are “5.5” and “1.2” you only have an accuracy of ~0.1 cm. And since the uncertainties were 0.5 and 0.3, we clearly don’t know the 2nd digit past the decimal in the measurements since the errors are of the order 10^-1. The sum of the two values is less certain than the values themselves (but better than straight adding of uncertainties) so there’s no way you can be sure about the sum all the way down to the 3rd decimal place. (“6.700” is an unreasonable amount of precision.) A better final number is probably 6.7 +/- 0.6, with “6.700 +/- 0.583” having more precision than is justified by the data.

That’s how it’s been ingrained in me, anyway. :shrug:

5.5cm +/- 0.5cm usually is assumed to mean "5.5cm with an uncertainty range of 0.5cm", or, more rigorously, if you want to use the fancy derivative tools to compose uncertainties, "a gaussian probability distribution peaking at 5.5cm with standard deviation 0.5cm". The extra digits in 6.7 aren't there to express higher precision; they're there to account for the fact that the uncertainty now has more digits: "6.700 +/- 0.583". You have been ingrained with a more simplistic understanding of expressing precision than is being used here. It happens.

Computer viking
May 30, 2011
Now with less breakage.

feelix posted:

If you don't have to propagate uncertainty using derivatives are you even really doing science?

The variation between our samples is so much larger than the measurement uncertainty that it doesn't really matter. Add in the "70 000 measurements per sample, 24 samples" dimensionality mismatch, and :shrug: - we sort of have bigger problems than the minor variability of the instruments.

(Genetics and cancer biology. There is also the project where someone hopes to see statistical differences between the ductal structure in mice mammaries with different cancers, based on 3D scans of a single-digit number of mice. At least the laser lightsheet microscopy is really cool.)

fourwood
Sep 9, 2001

Damn I'll bring them to their knees.

Absurd Alhazred posted:

5.5cm +/- 0.5cm usually is assumed to mean "5.5cm with an uncertainty range of 0.5cm", or, more rigorously, if you want to use the fancy derivative tools to compose uncertainties, "a gaussian probability distribution peaking at 5.5cm with standard deviation 0.5cm". The extra digits in 6.7 aren't there to express higher precision; they're there to account for the fact that the uncertainty now has more digits: "6.700 +/- 0.583". You have been ingrained with a more simplistic understanding of expressing precision than is being used here. It happens.
Nah. Don’t report 3+ sig figs on your uncertainties. Please give me reputable sources to change my mind/undo my years of “simplistic” learning.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


fourwood posted:

Nah. Don’t report 3+ sig figs on your uncertainties. Please give me reputable sources to change my mind/undo my years of “simplistic” learning.

See propagation of uncertainty and references therein.

Xerophyte
Mar 17, 2008

This space intentionally left blank

Absurd Alhazred posted:

5.5cm +/- 0.5cm usually is assumed to mean "5.5cm with an uncertainty range of 0.5cm", or, more rigorously, if you want to use the fancy derivative tools to compose uncertainties, "a gaussian probability distribution peaking at 5.5cm with standard deviation 0.5cm".

I wouldn't normally interpret ± as specifying a normal distribution either, at least in most fields. Certainly in a technical drawing 55mm ± 1mm means 54mm and 56mm are strict limits and you are giving or being given a strong guarantee that the value falls between them. If it were a normal distribution with a mean of 55 mm and a standard deviation of 1 mm, you'd be saying that only about 70% of measurements fall between 54 mm and 56 mm, which is much weaker. If you have to interpret ±1mm statistically then it's probably closer to specifying a 5σ limit, although I don't really want to say anything about the distribution from the tolerances. Possibly this is partly my blinkers and there are fields where ± means exactly σ, but definitely don't assume it in general.

I agree that in casual use you shouldn't report more significant digits than you need to, but in practice if you're in a situation where you care about measurement uncertainty then you specify the level of uncertainty in exact terms of variance, standard deviation, moments, or whatever. You definitely do this when dealing with uncertainty in code. You do not rely on loosey-goosey significant digit rules.
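As a minimal sketch of what carrying the uncertainty explicitly in code can look like, reusing the numbers from upthread (the struct and function names are made up for illustration, not from any particular library):

code:
#include <math.h>
#include <stdio.h>

/* A measurement carried as (value, standard deviation) - hypothetical helper type */
typedef struct {
    double value;
    double sigma;
} measurement;

/* Sum of independent measurements: values add, sigmas add in quadrature */
static measurement meas_add(measurement a, measurement b)
{
    measurement r;
    r.value = a.value + b.value;
    r.sigma = sqrt(a.sigma * a.sigma + b.sigma * b.sigma);
    return r;
}

int main(void)
{
    measurement ab = { 5.5, 0.5 };  /* A to B: 5.5cm +/- 0.5cm */
    measurement bc = { 1.2, 0.3 };  /* B to C: 1.2cm +/- 0.3cm */
    measurement ac = meas_add(ab, bc);
    printf("%.3f +/- %.3f cm\n", ac.value, ac.sigma);  /* prints 6.700 +/- 0.583 cm */
    return 0;
}
The point being that the exact sigma travels with the value, and you only round when you finally print something for a human.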

feelix
Nov 27, 2016
THE ONLY EXERCISE I AM UNFAMILIAR WITH IS EXERCISING MY ABILITY TO MAKE A POST PEOPLE WANT TO READ

Computer viking posted:

The variation between our samples is so much larger than the measurement uncertainty that it doesn't really matter. Add in the "70 000 measurements per sample, 24 samples" dimensionality mismatch, and :shrug: - we sort of have bigger problems than the minor variability of the instruments.

(Genetics and cancer biology. There is also the project where someone hopes to see statistical differences between the ductal structure in mice mammaries with different cancers, based on 3D scans of a single digit number of mice. At least the laser lighsheet microscopy is really cool.)

You also probably don't have neat algebraic functions to work with like you would in an undergrad homework problem

Foxfire_
Nov 8, 2010

Does it help if you think of the CENTER ± HALFWIDTH form as using infinite precision numbers to specify an interval on a number line that the true value is somewhere inside?

e: example

Suppose I measure something as 1.0±0.1in. Then I want to report in cm because :metric:
After multiplying by (2.54cm/1in), I get 2.54±0.254cm.

Sig fig rules say to render that as something like 2.5±0.3cm, but that's obviously wrong. Choosing a new unit didn't make me less certain about the length or move the center of the interval.
Especially since if I decide I want inches after all, convert back, and end up with 0.98±0.1in

Foxfire_ fucked around with this message at 02:37 on Dec 8, 2020

Absurd Alhazred
Mar 27, 2010

by Athanatos

Xerophyte posted:

I wouldn't normally interpret ± as specifying a normal distribution either, at least in most fields. Certainly in a technical drawing 55mm ± 1mm means 54mm and 56mm are strict limits and you are giving or being given a strong guarantee that the value falls between them. If it were a normal distribution with a mean of 55 mm and a standard deviation of 1 mm, you'd be saying that only about 70% of measurements fall between 54 mm and 56 mm, which is much weaker. If you have to interpret ±1mm statistically then it's probably closer to specifying a 5σ limit, although I don't really want to say anything about the distribution from the tolerances. Possibly this is partly my blinkers and there are fields where ± means exactly σ, but definitely don't assume it in general.

I was specifically stating that you should interpret it that way if you want to use the fancy derivative tools for combining uncertainties. Whether you quote 5σ or the 1σ we're talking about here, a lot of the calculations are going to be the same, although once you go beyond products it might be more difficult than just reusing the same formula (can't be bothered to look up the various equations).

For strict tolerance ranges with no further information about the error distribution you're stuck with worst-case analysis, but for measurement problems that might needlessly weaken your results for compound quantities.

fourwood
Sep 9, 2001

Damn I'll bring them to their knees.

ultrafilter posted:

See propagation of uncertainty and references therein.
I’m not seeing a section there talking about digits of precision in uncertainties, mostly just analytical formalism. Can you point me to the right spot?

Conversely, see e.g. Taylor’s “An Introduction to Error Analysis” 2e., p. 15:

quote:

2.5 Experimental uncertainties should almost always be rounded to one significant figure.
...
The rule (2.5) has only one significant exception. If the leading digit in the uncertainty dx is a 1, then keeping two significant figures in dx may be better.
A lot of other search results to this effect can be found by Googling "significant figures for uncertainty analysis". If we’re measuring something and adding errors in quadrature then we’re almost surely talking about Gaussian-distributed experimental uncertainties, so the above would apply, as far as anyone trusts this book.

I’m literally and unironically :allears: to learn more about measurement statistics so if someone has something they can let me read about why “0.583” is a reasonable error estimate for measuring “5.5 + 1.2” in, and I quote, “actual science”, I honestly wish to be educated because this stuff interests me! And I want to know more/be better!

So throw me your links which I promise to read but at this point I’ll probably peace out and end my driving of this derail.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I've worked a bit more on this unit testing problem so I can give some more information about what's going on. I'm stuck here both specifically and generally, so feel free to get into "do X instead of Y" kinds of suggestions.

The codebase wraps around a bunch of kernel stuff and the code in question is "mostly userspace" code that is mixed in with that. I'm hoping I can isolate that code enough to use a conventional, non-kernel unit testing framework against it. However, I'm not really a kernel guy and I'm struggling to even compile isolated code.

Let's say that the first order problem is even being able to isolate code for unit testing and have it compile--let alone run. Let's say I can compile the kernel in question and all the related code I want to test in its native habitat. That code isn't very kernel-intensive, but it will do a few things that necessitate including lower-level stuff in its piles of headers. My isolated build attempt will fail on those details. A Makefile I tried to create for it has to add includes for all kinds of kernel headers, and I haven't even seen what would happen with linkage. This isn't even getting into proper preprocessor definitions, which I know I'm not doing right.

I thought I'd be cute and include the Makefile that was used to compile the kernel, but it then couldn't find a bunch of paths it needed. I guess there was an issue with including it from a different path. I'm not very good at that either. I'm hoping there's some methodology I can use here to get the proper build settings for the normal kernel compile taken over to this extracted build.

The second order problem is how practical or useful this would be. I was hoping I could isolate builds of some of these files and then hit them with Catch2 tests. I assume I'd have to replace whatever lower-level calls are in there with mocks of some kind, but I've never tried any of that in C (this might as well be ANSI C). There's a kernel-level unit testing framework I found (kunit?) that would just run it in its normalish environment. However, it would mean running the tests in a special booted environment, and that turnaround is kind of gross. I'm hoping a developer of a new feature can update their unit test build after writing 20 lines and find out quickly what they hosed up. I'm hoping to use this in a faster development cycle instead of as, say, a remote commit-gating test.

Edit: It really does smell like I have to just lean into kunit for this kind of thing since that is meant for kernel stuff anyways. I'm hoping that it isn't too tedious though and that some tests can be set to run without going through the whole song and dance of running them from a special boot like I had read.

Rocko Bonaparte fucked around with this message at 08:22 on Dec 9, 2020

Xarn
Jun 26, 2015
Isolate kernel touching functions into their own TU, and then compile in mocks of those functions when testing, or real version when compiling release.
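A bare-bones sketch of what that split can look like, with invented file and function names (the real seam would wrap whatever kernel-facing calls your code actually makes):

code:
/* hw_io.h - the seam; the logic under test only ever includes this */
#ifndef HW_IO_H
#define HW_IO_H
int hw_read_reg(unsigned int addr, unsigned int *out);
#endif

/* hw_io.c - real implementation, only compiled into the production build;
 * this is the one TU that is allowed to pull in kernel headers. */

/* hw_io_mock.c - compiled into the test build *instead of* hw_io.c */
#include "hw_io.h"

static unsigned int fake_reg_value;

/* declared in a small hw_io_mock.h that only the tests include */
void mock_set_reg(unsigned int value) { fake_reg_value = value; }

int hw_read_reg(unsigned int addr, unsigned int *out)
{
    (void)addr;              /* tests usually don't care about the address */
    *out = fake_reg_value;   /* hand back whatever the test configured */
    return 0;
}
The test target links the logic objects against hw_io_mock.o, the release target links the same objects against hw_io.o; the logic code never has to know the difference.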

Phobeste
Apr 9, 2006

never, like, count out Touchdown Tom, man
When you say “part of it is kernel and part of it is user space”, is this an architecture like this:

- kernel module compiled from C source tree that does some coprocessor integration or special comms bus bullshit. Defines either a chardev and a bunch of poorly thought out ioctls or a bunch of poorly thought out sysfs paths. May have always been better off as a userspace program using e.g. spidev
- user mode program that has parts that are normal and do domain specific stuff or general purpose stuff, but also parts whose job is to talk to the kernel module(s) above and are thus tightly implicitly bound to them and also maybe tightly explicitly bound through shared headers to establish ioctl magics

Because I think the post above mine is true for the second item there; for the user mode thing, replace the interaction functions with mocks either through change-the-code dependency injection or LD_PRELOAD fuckery. For the kernel module itself though, I do think you need to use kunit, but if you're concerned about the time it takes you should look into qemu. I think you could get a qemu precompiled and a kernel precompiled, then spin up qemu and inject the compiled module as needed for kunit. I've definitely used slower unit tests.
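For the LD_PRELOAD route on the userspace side, the usual shape is a small shared object that shadows the libc call the program uses to reach the driver. Everything specific below (the request code, the canned behaviour) is made up for illustration:

code:
/* fake_ioctl.c - build: gcc -shared -fPIC -o fake_ioctl.so fake_ioctl.c -ldl
 * run the tests as: LD_PRELOAD=./fake_ioctl.so ./my_tests */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdarg.h>

#define MY_FAKE_IOCTL_REQ 0x4004AB01UL  /* hypothetical request code */

int ioctl(int fd, unsigned long request, ...)
{
    va_list ap;
    va_start(ap, request);
    void *argp = va_arg(ap, void *);
    va_end(ap);

    if (request == MY_FAKE_IOCTL_REQ) {
        /* fill *argp with canned test data instead of talking to the driver */
        (void)argp;
        return 0;
    }

    /* everything else falls through to the real ioctl */
    int (*real_ioctl)(int, unsigned long, ...);
    *(void **)(&real_ioctl) = dlsym(RTLD_NEXT, "ioctl");
    return real_ioctl(fd, request, argp);
}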

Xarn
Jun 26, 2015
Right, I got the impression that it was more of the latter.

A similar-ish example I worked on recently was with Prusa3D's controller SW, which obviously has to deal with reading stuff from HW a lot, but the really important parts to test are in the logic, so the solution was to compile-in fake HW reading functions and set them up at the start of each test.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I am sorry if this is too vague but I am talking about work stuff.

What is the general intent in using the -C flag with make? All I see is literally what it does but I can't figure out what it means to the build I am trying to do.

That flag is pointing to a source code package's path so it's hitting the Makefile in there. I am apparently supposed to run the make -C command from the directory with the Makefile I actually want to use.

The Makefile in the source code package does its own recursive making steps that I don't really understand. Is there a good method to step through what it is doing with all of this hopping around?

My main issue is the command can't find the rule I put in the Makefile in my current directory. It finds the other stuff that is a part of that Makefile. So I am trying to figure out how I get one but not the other. Actually, my main issue is that the whole process is very arcane and stubborn, but I've already spent my daily life force venting about it.

Edit: A funny thing to consider: If you actually manage to google up some stuff on usages of the -C flag with make then I'd like to see what you find. I literally only got two results at the top that gave the help text for it; I couldn't find anybody using it for anything and why they'd do it. It was pretty amazing.

Rocko Bonaparte fucked around with this message at 09:23 on Dec 16, 2020

Dren
Jan 5, 2001

Pillbug
code:
make -C $DIR
is equivalent to
code:
cd $DIR && make
you can find this out by typing man make

you can also search the internet for man make. Generally, I do not recommend finding man pages on the internet, as the results you get may be inconsistent with the version of the software on your system. In this case it's probably fine; make is pretty standard.

https://linux.die.net/man/1/make

quote:

-C dir, --directory=dir
Change to directory dir before reading the makefiles or doing anything else. If multiple -C options are specified, each is interpreted relative to the previous one: -C / -C etc is equivalent to -C /etc. This is typically used with recursive invocations of make.

Dren fucked around with this message at 11:24 on Dec 16, 2020

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
The dry explanation of the flag isn't enough for me to understand why I would need to run make -C dir1 from dir2 when I ultimately want the output from dir2's Makefile. These two paths are not directly referencing each other so I don't get how that even works. Alternatively, it doesn't work and I am getting fooled into thinking it does due to previous behavior. My problem, after all, is that it complains there is no rule to make the target I'm asking for, even though that rule is sitting in dir2's Makefile.

Phobeste
Apr 9, 2006

never, like, count out Touchdown Tom, man

Rocko Bonaparte posted:

The dry explanation of the flag isn't enough for me to understand why I would need to run make -C dir1 from dir2 when I ultimately want the output from dir2's Makefile. These two paths are not directly referencing each other so I don't get how that even works. Alternatively, it doesn't work and I am getting fooled into thinking it does due to previous behavior. My problem, after all, is that it complains there is no rule to make the target I'm asking for, even though that rule is sitting in dir2's Makefile.

Yeah you can run make -C from anywhere, it just runs in the directory you specify. So either
- Specifically being in dir2 is a shibboleth that doesn't actually do anything that was encoded in layers and layers of onboarding documents, or
- dir1's makefile is doing something very gross and wants to include dir2's Makefile and is doing this for some reason by parsing the invocation directory of the call to make rather than encoding a relative directory traverse

If it straight up does not work if you run make -C dir1 from some other random directory (including some directory outside of the tree with the absolute path to dir1) then it's probably doing the second thing. In dir1's makefile, look for include statements or submake invocations (i.e. calling make whatever if they're very dumb/confused or $MAKE whatever if they're a normal level of dumb/confused) and figure out where the hell that path is coming from.

Dren
Jan 5, 2001

Pillbug
I can’t help explain the system you’re on but in the past I’ve used make -C from within a makefile to recurse into directories that also have makefiles in them. So like
code:
dir1/
|-Makefile
|-dir2/
  |-Makefile
Where dir1’s makefile calls make -C dir2
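As a tiny sketch, dir1's Makefile in that layout might just be (target names are placeholders, and the recipe line has to start with a literal tab):

code:
# dir1/Makefile
SUBDIRS = dir2

all: $(SUBDIRS)

# recurse: run make in each subdirectory; use $(MAKE) so flags like -j propagate
$(SUBDIRS):
	$(MAKE) -C $@

.PHONY: all $(SUBDIRS)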

The situation you’re describing sounds confusing. You should be able to get make to give you the output of everything it does, look through it for calls to make -C and you should be able to follow how it’s jumping around directories. Once you figure it out, never tell anyone. You don’t want to become the make guy.

Phobeste
Apr 9, 2006

never, like, count out Touchdown Tom, man

Dren posted:

Once you figure it out, never tell anyone. You don’t want to become the make guy.

This is the single best piece of advice anybody has given on this whole saga. I cannot agree with it enough. Do whatever you must do to avoid being the make guy. Do not minimize it. Do not think "ehh it wasn't that bad this time". Do not think "how often do we really change them anyway". Do the bare minimum to fix it now (or possibly move it to cmake or something) and try and immediately forget what you learned.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
It is very easy, and very common, for makefiles to depend on being run from the directory they’re in. So make -C is just a convenient way to recursively invoke a makefile instead of having to cd yourself, which in a script is somewhat annoying.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

Dren posted:

The situation you’re describing sounds confusing. You should be able to get make to give you the output of everything it does, look through it for calls to make -C and you should be able to follow how it’s jumping around directories. Once you figure it out, never tell anyone. You don’t want to become the make guy.
Yeah it's a strange thing and I definitely risk becoming the make guy here because I think the person that built this jenga tower is long gone.

Regarding the directory confusion, the original invocation is also using the M environment variable, and it looks like that's what gives it a hook back to where I started in order to actually build my stuff. I mean, except that it doesn't recognize my new build rule somewhere in between and decides to give up instead. Well, that's at least my current theory on it. I think I'll do some more cavediving with it tomorrow.
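(For reference, M= is the standard kbuild hook for out-of-tree modules: you run the kernel's own top-level Makefile via -C and point it back at your directory with M, roughly like this - paths are the usual defaults, and your tree may well wrap this in further layers:)

code:
# typical out-of-tree module build: -C enters the kernel build dir,
# M= tells kbuild where to come back to for the module's own Makefile/Kbuild
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
make -C /lib/modules/$(uname -r)/build M=$(pwd) clean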

Beef
Jul 26, 2004
Don't move to cmake if you can get away with sticking with make. Just, don't.

Dren
Jan 5, 2001

Pillbug

Beef posted:

Don't move to cmake if you can get away with sticking with make. Just, don't.
Hard disagree. cmake is much nicer than a pile of crusty make garbage.

cmake’s problem is that kitware doesn’t publish a “this is how you are supposed to do it” book. So you have to scrape through 15 different blogs to figure out how to do anything and unless you find and read them all you’ll probably choose wrong.

After using several different make systems I’ve come to feel they’re all pretty bad and I assume it’s inherent complexity since they’re all that way. I empathize with people who don’t want to learn how to use the make system but I also hate them for not taking their medicine because they always screw up their makefiles.

edit: apparently there is a book, I just never bought it. Wish they would throw some basic snippets of what to do on their site so everyone who doesn't have the book could look at them and make something decent instead of cobbling together hot garbage.

Dren fucked around with this message at 13:45 on Dec 17, 2020

Xarn
Jun 26, 2015

Beef posted:

Don't move to cmake if you can get away with sticking with make. Just, don't.

so wrong, wow

OddObserver
Apr 3, 2009

Xarn posted:

so wrong, wow

"can get away with" is doing a lot of work there.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
I guess it is technically true. If you can get away with make you either have something very simple and it doesn't matter what you use or you have a very constrained environment where it might actually be okay.
