|
I'm gonna stand up a new embedded project at work pretty soon. I have a reasonable amount of embedded experience but one thing I've never gone deep on is static analysis. What static analysis tools do you consider essential? Haven't decided whether to use C or C++ (leaning towards C though). The only difference that "being embedded" makes to me is:

- Probably don't care about malloc/free usage checking
- Need to be able to bound what the analysis checks below and above, or at least make selective carveouts, so you can isolate all the "writing to random hardcoded addresses" and "this function doesn't look like it's ever called (because it's an ISR)" stuff
- Ideally is, or can be made to be, rtos aware with freertos or something, but this is a big ask
|
# ¿ Nov 8, 2020 16:05 |
|
|
# ¿ May 2, 2024 18:14 |
|
Sweeper posted:are you looking for free software only? obviously there is clang-tidy which does some cool stuff, tracks use-after-free in some cases, different modernizations, etc. I've found it pretty good overall. I've used sonarqube a bit and I don't get much out of it vs. clang-tidy. The code I work with most was written without it, so there are about a million warnings that we aren't going to clean up at this point, so it is just noise. Costs money, fancy (if slow) UI. Run your test suite under valgrind to see if you are borking up memory somewhere?

Open source software is better for us not just because of the cost element, but because we're an open source company, so it's nice to use open source tooling where possible. WRT clang/llvm, I had definitely defaulted to gcc (or arm-gcc or whatever their spin on it is), but apparently llvm is basically fully mature on cortex-m at this point, so hell yeah, full speed ahead - probably clang and its static analysis family just straight up answers my questions.
|
# ¿ Nov 8, 2020 17:31 |
|
A nice lil guy that does some heating and some medium complex motion control stuff (mostly in a library) and some serial-ish communication on some cortex m4, using an rtos. Nice and standard stuff. Sweeper, why do you run builds with gcc and a separate clang-tidy pass? Don't trust clang-tidy, or it's just been compiled on gcc from the beginning and you don't feel like requalifying it, or what?
|
# ¿ Nov 8, 2020 19:09 |
|
When you say "part of it is kernel and part of it is user space", is this an architecture like this:

- kernel module compiled from a C source tree that does some coprocessor integration or special comms bus bullshit. Defines either a chardev and a bunch of poorly thought out ioctls, or a bunch of poorly thought out sysfs paths. May have always been better off as a userspace program running e.g. spidev
- user mode program that has parts that are normal and do domain specific stuff or general purpose stuff, but also parts whose job is to talk to the kernel module(s) above and are thus tightly implicitly bound to them, and also maybe tightly explicitly bound through shared headers to establish ioctl magics

Because I think the post above mine is true for the second item there; for the user mode thing, replace the interaction functions with mocks, either through change-the-code dependency injection or LD_PRELOAD fuckery. For the kernel module itself though, I do think you need to use kunit, but if you're concerned about the time it takes you should look into qemu. I think you could get a precompiled qemu and a precompiled kernel, then spin up qemu and inject the compiled module as needed for kunit. I've definitely used slower unit tests.
|
# ¿ Dec 10, 2020 12:29 |
|
Rocko Bonaparte posted:The dry explanation of the flag isn't enough for me to understand why I would need to run make -C dir1 from dir2 when I ultimately want the output from dir2's Makefile. These two paths are not directly referencing each other so I don't get how that even works. Alternately, it doesn't work and I am getting fooled into thinking it does due to previous behavior. My problem after all is that it complains it has nothing to make the rule I am issuing that is sitting in dir2's Makefile.

Yeah, you can run make -C from anywhere; it just runs in the directory you specify. So either:

- specifically being in dir2 is a shibboleth that doesn't actually do anything, encoded in layers and layers of onboarding documents, or
- dir1's makefile is doing something very gross: it wants to include dir2's Makefile and is doing this, for some reason, by parsing the invocation directory of the call to make rather than encoding a relative directory traverse

If it straight up does not work when you run make -C dir1 from some other random directory (including some directory outside of the tree, using the absolute path to dir1), then it's probably doing the second thing. In dir1's makefile, look for include statements or submake invocations (i.e. calling make whatever if they're very dumb/confused, or $(MAKE) whatever if they're a normal level of dumb/confused) and figure out where the hell that path is coming from.
|
# ¿ Dec 16, 2020 17:26 |
|
Dren posted:Once you figure it out, never tell anyone. You don’t want to become the make guy. This is the single best piece of advice anybody has given on this whole saga. I cannot agree with it enough. Do whatever you must do to avoid being the make guy. Do not minimize it. Do not think "ehh it wasn't that bad this time". Do not think "how often do we really change them anyway". Do the bare minimum to fix it now (or possibly move it to cmake or something) and try and immediately forget what you learned.
|
# ¿ Dec 16, 2020 23:10 |
|
I think at that point you might as well run it in qemu or something. Probably a similar amount of work to get set up and no worrying about whether you mocked things right.
|
# ¿ May 28, 2021 00:19 |
|
Twerk from Home posted:Can somebody direct me towards best practices for C project organization? I keep just seeing a ton of .c and .h files in a flat directory, and thinking there must be a better way to do this.

Honestly just do the same poo poo as you would in any other language. Separate your functionality logically into separable modules that hold contracts with each other, separate the files along the same lines, etc. Just like you would in python or anything else. Could do worse than looking at big c projects like the linux kernel or systemd or something.
|
# ¿ Aug 27, 2021 19:38 |
|
Another c++ on baremetal rtos person here. It's really nice. There's more and more of the stl you can actually use in modern standards. For instance, std::array doesn't allocate and works with ranges and span. std::variant is great for message passing architectures. from_chars and to_chars provide non-allocating parse/emit for the kinds of things you actually parse/emit on micros, right in the standard. There's a ton of stuff you get for free.

Even with stl containers that normally allocate, the thing you need to avoid in memory constrained real-time environments is a shared heap with possibly-unbounded allocations. The shared part is where you run into trouble, with non-deterministic allocation times busting real time needs because you might need a heap scan to compact or find appropriate holes, and the unbounded part is obviously bad. But because those are the specific problems, if you're comfortable writing and using custom allocators, you can use object containers like deque, list, or map with a per-container object pool allocator of bounded size backed by a static buffer just fine.

There's some problematic stuff still - formally it's a really bad idea to turn off exceptions, since they provide the only language-safe signaling mechanism in places like constructors, and you probably aren't actually doing as good a job of making everything noexcept as you think (I know I'm not) - but it's a really nice environment, particularly if you have the sort of brain worms that like to write c++ that asymptotically approaches rust.
|
# ¿ May 11, 2022 14:48 |
|
ExcessBLarg! posted:What's the difference between a SEU that changes a vtable entry, versus one that changes a PLT entry, versus one that changes a jsr instruction for code paged into RAM? To be fair, most of the micros we're talking about here are XIP nonvolatile code storage, so that last one isn't happening. The first two are fair though.
|
# ¿ May 12, 2022 02:27 |
|
I mean it very well may be? Like anybody here who's worked in embedded for a while: how many people have you met that still use PICs, or only use assembler, or don't believe in undefined behavior, or don't believe in unit tests? Handwaving is endemic.
|
# ¿ May 12, 2022 04:17 |
|
And it's sort of hard to blame people that much for it. It's an environment where every random 3 to 6 month project usually comes with a different programming environment, completely different set of requirements and goals and compute available, different sets of hundred/thousand page pdfs to understand, different levels of abstraction, testability, amount of feedback, math and science requirements, support and documentation, and a lot of it really not actually aimed at being helpful; and in a lot of those environments once code's released it's never touched again. At a certain level it's hard to blame people for falling back on stuff they know, and know works, and don't see a point in changing.
|
# ¿ May 12, 2022 04:21 |
|
baby puzzle posted:I want to do something with dates and times but I don't know where to start. You might want to check out boost's calendar for this, I think that would work.
|
# ¿ May 30, 2022 20:32 |
|
leper khan posted:Err.. union wouldn't bit pack that way. Not sure it's well represented without macros in C C code:
You'd have to do some casting, but you could put that in helper functions. Also not sure you really get anything out of this "optimization", because either it has to take up the same space as a pointer-size pair or you always have to pointer-pass it and dynamically cast.
|
# ¿ Aug 25, 2022 01:41 |
|
In theory if you have a MyVariant = variant<Base, Derived1, …, DerivedN> you should be able to write a Base& view(MyVariant& mv) { return std::visit([](auto& elem) -> Base& { return elem; }, mv); } or something - note visit takes the visitor first, the variant has to come in by reference so the returned Base& can't dangle, and since every alternative derives from Base the upcast doesn't even need a cast.
|
# ¿ Nov 16, 2022 14:30 |
|
Twerk from Home posted:Is there any easy way that I'm missing in C++ to quickly allocate memory for a vector of vectors? I'm doing a lot of relatively small allocations and it's slow.

can you split the ingest into:

- pass 1: determine the number of datasets and the size of each dataset; heap-allocate a single chunk of memory as a std::array or using array-new that is large enough for both the data index vector and all the data vectors. placement-new the index vector into it at the beginning and reserve the required space. then go placement-new all the data vectors in the rest of the space using your own pointer math and put their addresses in the index vector, same deal with reserving space
- pass 2: fill in the data

the benefit to this is it should cost fewer sbrk syscalls/pagefaults/whatever since you're just getting a big slab of memory for everything. there's also probably better ways and libraries to do this, but really the core thing you need to do is figure out how much memory you need before you need it.
|
# ¿ Dec 6, 2022 15:33 |
|
AgentF posted:Question: how come this code compiles and works fine with gcc but not msvc? First I was gonna blame msvc until I noted that they added really good ranges support. Then I was gonna blame the docs on ranges::max saying that it doesn't participate in ADL, which msvc might be using under the hood, so I tried using std::max instead of std::ranges::max and there, while it doesn't have the problems with internal generated functions, it still can't deduce the template arguments, so I have no drat idea
|
# ¿ Dec 18, 2022 15:15 |
|
I cannot imagine trying to self study stuff like that without motivating examples or an actual teacher, honestly. Here are some motivating examples:

- what somebody said (phone can't see previous posts while typing, I forgot who it was) about storing yes/no or present/not-present results in less space
- in general, storing values that can be fully represented in lengths other than 8, 32, or 64 bits more efficiently. This is valuable for optimization like that poster said, but it's not just high end stuff you'll never do - some on-the-wire serialization formats use this, as does the low level equivalent of IPC in embedded systems: register-model interfaces that look like the next example
- controlling a system where different bits mean different things, which is common in embedded device memory-mapped io (bit N of the byte at 0x8008354 controls pin N of the gpio port)
- walking or accessing a buffer of packed values
|
# ¿ Feb 2, 2023 18:32 |
|
Turns out “this language is Turing complete” is not a sufficient standard for using it
|
# ¿ Feb 6, 2023 19:42 |
|
PDP-1 posted:I'm working on a project that would benefit from having a good function call trace system, after reading up on stuff around the internet it seems like this is a format that can do the job: well... I think more typically you do this: code:
code:
code:
|
# ¿ Nov 1, 2023 02:42 |
|
That's a much better explanation, thanks!
|
# ¿ Nov 1, 2023 03:38 |
|
ultrafilter posted:You can as others have shown, but if this is code that anyone else will ever need to read without talking to you, you'd need to think very carefully about how to document that the entry point is not what they're expecting. Doing something with the preprocessor is a lot more intelligible.

One of Arduino's uncountably many crimes is this poo poo
|
# ¿ Dec 12, 2023 18:09 |
|
PDP-1 posted:Thanks for the replies!

if you want to be that paranoid, declare a custom section the size of your fifo in each linker script that doesn't get initialized, then declare the fifo data structure by value in each place with an __attribute__((section(...))) and take the address of that instead of binding a pointer to a linker symbol. the linker will treat that memory as used, so the build fails if you forget about it and try to put something else there, because there isn't enough room.
|
# ¿ Apr 13, 2024 21:00 |
|
|
|
Twerk from Home posted:
I think that:

- if you're not controlling the armadillo source (like it's an open source project and the readme says “install armadillo-dev before building”), imo you're stuck with using a define, and probably you should do what ultrafilter said, and probably you should have a full definition set there, not just that one
- if you're controlling the armadillo source, like downloading it with external_project or a prescript, then sed it in place in the bits header, i guess

Idk, I kind of hate configuration like that, but if it's all under my control I'd do the second
|
# ¿ Apr 29, 2024 23:11 |