 
  • Locked thread
Pan Et Circenses
Nov 2, 2009

evensevenone posted:

What kind of strategies do people use for sticking things like serial numbers in a binary? I'm using gcc, which produces elfs. I had assumed that if I just made a global variable, it would be easy to find a tool that would let me edit the .elf file in some sort of scriptable way, but amazingly there doesn't seem to be anything.

The only thing I've found so far is making a .hex file and using the symbol table to figure out what offset to edit.

I also tried writing my own tool using libelf, but for some reason the elf file I get is half the size of the original and the libelf docs are pretty bad.

Just discovered this thread (I love and work in embedded systems), so sorry if this is too late to be useful, but just use a constant with a known value and search-and-replace. No need to putz around with ELF files; if you declare a "static const char[32]..." somewhere, it doesn't really matter what's actually in there when the program runs, as long as it's valid data. In other words, try the following:
code:
#include <stdio.h>

static const char serial[] = "SERIAL";

int main(int argc, const char* argv[])
{
    printf("%s\n", serial);
    return 0;
}
Prints out "SERIAL" when run, obviously. Then just use sed to replace SERIAL with whatever you actually want the value to be, padding unused space with nulls so the replacement is exactly the same length and nothing in the binary shifts:

sed -i "s/SERIAL/v1.1\x0\x0/g" mybinary

Run it again and it prints "v1.1". Use a simple bash script or whatever to customize the process to your heart's content. Just make sure your placeholder is a string that's sure to be unique in your executable file. Hopefully I got the question right?


Pan Et Circenses
Nov 2, 2009
Maybe a transient voltage spike or something like that during the power cycle is knocking the PICkit out? Do you have access to a scope you can hook the circuit-to-PICkit connections up to, so you can watch for naughty spikes or other weirdness on power-up? Or maybe add some protection diodes to those signals? Only thing I can guess off the top of my head.

Edit: Now that I think, that's the sort of behavior I usually see when I connect a USB thing's ground to not-ground (which I do embarrassingly often with my USB scope).

Pan Et Circenses fucked around with this message at 22:31 on Jul 28, 2013

Pan Et Circenses
Nov 2, 2009

Switzerland posted:

I meant more like in software, e.g. you have a list of CPUs, ports, I/O, etc., and the software figures out how to interconnect all of that. Also, I have no idea how easy or hard that would be.

No, even the really expensive EDA software out there is terrible at automatic layout. And that's not even considering that a lot of components have specific requirements on how they have to be physically laid out (e.g. power pins X, Y and Z need bypass caps within 10mm laid out just so). I've had bespoke eval boards designed for my company by some of the huge names in hardware vendors out there that were broken if you tried to do much more than turn them on because they weren't laid out just right.

Long story short, if you're using a little PIC or something running at 4MHz it's not so bad, but if you want USB/HDMI/(D)RAM/etc., that kind of high-data-rate stuff is very easy to mess up. For example, just take the DRAM: you're going to need to match trace lengths for all its I/O pins to minimize clock skew, and even that is something EDA software is terrible at.

Sorry for the bad news :)

Pan Et Circenses
Nov 2, 2009

more like dICK posted:

Thanks. I grabbed a MSP430 LaunchPad, and one of the Tiva C ones as well. Code Composer Studio has versions for Windows and Linux, is the Linux version up to snuff or should I be setting up a Windows vm?

It'll be pretty quick to find out; if you can successfully connect to the board over JTAG and see useful debugging information (local disassembly, for example), odds are it'll work just fine in Linux. The only reason I've had to use Windows with some (unreleased) TI products is that the JTAG drivers were Windows-only, and even then I was successful in a VM. That said, I haven't actually used the MSP430.

Pan Et Circenses
Nov 2, 2009

Joda posted:

I'm pretty sure I'm already doing this. My problem is more related to hardware and how I declare my variables to save on SRAM than C++ convention. (i.e. I have a superclass function that uses a constant, but the constant is different for every subclass so I can't just use a macro, which is how I've been going about using program memory so far.)

I'm not sure I entirely/precisely understand what you're trying to do, but I'll try to give an answer that might be useful even if I'm wrong about your question. Assuming you're using a GCC toolchain, the standard way to specify where specific variables go is by using linker scripts, which can arbitrarily divide your binary into sections and specify which memory those sections go into (.bss goes into SRAM, .text goes into flash, .myspecialconstants into SRAM, etc...). Once you have a section defined to reside in SRAM, you can use __attribute__((section("mysection"))) on any variable to put it in that section (and by extension into SRAM).

Is that the sort of thing you were going for? If so I can give more pointers if needed.

Pan Et Circenses
Nov 2, 2009
I got an STM32 up and running with just the ST-Link programmer, GCC, and some open-source Linux-based ST-Link flasher/gdbserver software. The only vendor tool there was the hardware programmer. That project was actually C++, and the whole setup worked fine... never understood the embedded obsession with sticking to straight C!

If you don't have a good bit of experience with embedded systems, I wouldn't really recommend it at all as a way of starting out. If you're looking to get a little deeper, it's probably a good project. Make sure you understand linker scripts and why you'll need to use one to produce your binary.

Pan Et Circenses
Nov 2, 2009
Oh, I've been doing this for over 20 years. Perhaps not as long as some, but I've used a wee little system or two. I'd wager to say that if you treat C++ as nothing more than C with encapsulation and support for true RAII semantics, you're way WAY ahead of the ANSI C game, and those features don't take any more overhead than their equivalent C implementations (essentially none). Remember, C++ generally doesn't add any overhead that you wouldn't get from implementing an equivalent system in C--it just allows you to implement much more complex systems without really realizing it.

Anyway, as for putting together your own toolchain, getting it all linking right is going to be the real meat of the work.

Pan Et Circenses
Nov 2, 2009
In my experience in the industry pretty much any software that runs on a system to which the end user does not have direct access is called "firmware." "Embedded" I would say is just the engineering of such systems, so I guess you could say in my experience embedded engineers always write firmware.

Just my observation from what I see other engineers say, and I usually do the same.

Pan Et Circenses
Nov 2, 2009

poeticoddity posted:

Alright, further clarification:

I build vision testing equipment (at the moment for lab use, but eventually I'm hoping to get into clinical equipment). I build the equipment myself (so it's not getting interfaced into existing prototypes or off-the-shelf hardware) so I have as full control over specifications as possible. Each device has something that's very specialized to that particular device (usually at least the actual generation of the visual target) but also has a lot of generalizable components attached to it (LCDs, knobs, buttons, etc.). I would like to start moving the generalizable components to their own microcontrollers instead of running everything from one overtaxed microcontroller. If it makes any difference, I'm using Arduinos at the moment, but I'm quickly outgrowing them and the code is more and more just straight AVR-C, so I'll likely be transitioning away from the Arduino platform in the next few years.

My hope is that eventually people could say "I want specialized device 1 and specialized device 2, but only want this one control panel, so I'll move it back and forth between these devices as I use them" and also be able to say, "I like using general device A in this circumstance and general device B in this circumstance, and I'd like to be able to hook up one, the other, or both to this specialized piece of equipment at my discretion without having to reconfigure a lot of stuff manually".

Assume I have complete control over the code base and the hardware design. I'm looking for best practices, design guidelines, common standards, etc. for having systems with a single master uC and one or more slave uCs (and probably some slave ICs that aren't uCs) be as modular as possible. Hot-swappable is ideal, but not necessary. I understand in abstract how to do this, but I would prefer to start with best practices rather than try to fix them later.

I'd say you're going to have to pay even more than the usual amount of attention to the total capacitance of your bus. Especially if you're doing something hot-swappable, you'll have to choose your connectors carefully and watch your trace lengths. I2C, for example, specifies a maximum bus capacitance of 400pF I believe. With good impedance matching in your connectors and proper termination, you might be able to push this a bit, but I wouldn't if you're looking to produce anything high reliability.

If price is less of a concern (which I imagine is probably the case for specialized gear like this), liberal use of bidirectional buffers can mostly solve this issue (something like this: http://www.ti.com/lit/ds/symlink/p82b96.pdf).

Obviously you can always trade bandwidth for robustness here; operate your bus at a lower clock rate and you can get away with higher capacitance and worse impedance matching.

Also, there's the basic stuff like making sure there's no bus address conflicts among the parts you choose.

EDIT: This is assuming your whole setup is all relatively tightly packed together. If you're talking about separation between your devices on the order of feet, you'll be looking into totally different kinds of connectivity.

Pan Et Circenses fucked around with this message at 21:39 on Dec 11, 2014

Pan Et Circenses
Nov 2, 2009
Just to give a comparison with oscillators, 50ppm is pretty much the bog standard in stability for decent external oscillators. Compare that to internal oscillators, which tend to be in the 10,000ppm range or more. Basically, if you're using an internal oscillator, you need to be VERY sure that you're not communicating with anything that has any kind of strict timing requirement. UART, for example, is almost certainly okay. USB of any kind and you'll probably have trouble.

Aside from that, there are application specific reasons why having an oscillator so far off would be unacceptable. I make mid-range to expensive audio equipment at my job, for example, and obviously the frequencies produced by your DAC are going to be out by however much the clock driving your DAC is out. Being off by 1% or even more is basically a non-starter.

Pan Et Circenses
Nov 2, 2009

Popete posted:

So I have a binary blob cross-compiled on my Ubuntu build machine and I run crc32 on it to get the original value. Then I boot my system (Android with a 2.6.35 kernel) and attempt to recalculate the CRC across the same binary blob. I ported U-Boot's crc32 command-line function into Linux userland for my board and it never gives me the same crc32. If I use the same crc32 at the U-Boot command line it comes out correct, but for whatever reason my port doesn't work...

I'm assuming this code is C? Does the CRC algorithm make any assumptions about the actual sizes of data types (int, short, etc.)? Does the code comply with strict aliasing rules? Have you tried running the various CRC implementations on a single word or other small datum to get a better idea of how their output diverges?

Pan Et Circenses
Nov 2, 2009

Rescue Toaster posted:

Things like: you can do a binary tree inside an array where the children of node n are nodes (n*2) and (n*2)+1, as long as you keep it balanced and establish a max depth. Queues are generally simple with a max array size and head & tail pointers. Those sorts of things.

Here's one that maybe everybody knows, but I use it somewhere in basically every embedded system I write. With queues like that, make their size a power of two, then you can wrap them with no branching and just a bitwise and:

head = head & (size - 1)

...equivalent to:

if(head >= size) head = 0;

Especially fast if size-1 is a compile time constant.

Pan Et Circenses
Nov 2, 2009

Aurium posted:

The msp430's have internal flash you can read/write to, so you don't actually need an external chip.

The MSP430FR series uses ferroelectric ram, which is nonvolatile, faster and lower power than flash, doesn't have any meaningful endurance problem, and is accessed just like regular old SRAM.

Pan Et Circenses
Nov 2, 2009
I'm looking for an MCU with some very specific capabilities, and I thought I'd put it out here to see if anybody here knows of parts that might fit the bill:

ESSENTIAL
- 12-bit+ ADC
- 10-bit+ DAC
- 2 op-amps
- SPI w/ hardware CS
- Very low power (<1mA @ 4MHz run mode)
- Low cost (<$2.00, hopefully something that could be pushed to ~$1.00 @ 100k+)

PREFERRED
- ARM architecture
- Capacitive touch sense wakeup from very low power mode

So far I have only found a single part anywhere that seems to check all the essential boxes, the EFM32 Tiny Gecko:
http://www.silabs.com/products/mcu/32-bit/efm32-tiny-gecko/pages/efm32-tiny-gecko.aspx

Anybody know of others? It's always nice to have alternatives to explore.

EDIT: Lower power ADC operating modes and lower price would be the two main things I'd like to improve on compared to the Tiny Gecko.

Pan Et Circenses fucked around with this message at 15:41 on Sep 21, 2015

Pan Et Circenses
Nov 2, 2009
The PICs are a good suggestion, but it looks like the ones that hit all the other marks clock in at around 600uA/MHz, which is very high; compare with around 150uA/MHz for the EFM32. Also, the PICs are a little expensive, at least at comparable quantities from distributors.

A little more information on the project:

- Powered by a 3V watch battery ideally, so I'm looking at 25-35mAh (think I'll be able to run it unregulated).
- Target battery life 48 hours constant use, 1 year standby.
- Probably not viable at much more than $30 retail.
- Using Sharp memory LCD as display, which eats pretty much all my costs right off the top in order to hit the price point (assuming I won't be able to get them for under $6 even in large quantity, but who knows)
http://www.sharpmemorylcd.com/

I did a little prototyping with a Kinetis L0 (which are ridiculously cheap/low power) before realizing that the SPI interface on those things is so terrible as to be almost useless and giving up on it out of disgust (even though I could probably trick it into half-working). So, now I'm looking for processors that integrate all my external components into a single package, hopefully to save a bit on cost.

It's still just a little side hobby project, but it's interesting enough to start really trying to nail down the ideal parts.

Pan Et Circenses
Nov 2, 2009

Slanderer posted:

Anyone know how I can convince the IAR compiler to load two different applications onto the same processor with a debugger? Specifically, I have a boot loader application and a main application, which I manage as separate IAR projects, and load individually. For reasons not worth going into, the debugger is the only way to load either application right now, and I would like to ensure that people don't accidentally forget to update the bootloader by forcing it to be downloaded by the debugger every time. Anyone know if this is possible?

Is this what you're looking for?

Pan Et Circenses
Nov 2, 2009

travelling wave posted:

Does anyone know why people define bit flags like this:

It's frequently useful to have easy access to the bit position as well as the mask. The former gives you both, the latter doesn't. In many systems where the designer has really been thorough, you'll get both definitions.

Just as an example, Kinetis processors have a fancy bit-banding system they call the "bit manipulation engine" that can perform a number of useful atomic operations on individual bits in peripheral registers. These bit manipulation commands use the bit number, not its mask.

Pan Et Circenses
Nov 2, 2009

Mr. Powers posted:

I'm pretty sure this is a feature of most Cortex Ms. I know ST has it in at least the F2 line.

The ones on the Kinetis are somewhat more sophisticated than usual. Normally you just get bitwise read/write, but Kinetis gives you all the basic logical operators as well as bit insert/clear and stuff like that in atomic forms (basically hardware-atomic BFI/BFC instructions). It's quite nice really. Too bad I've found the peripherals on the lower power models really lacking, because they do some things nicely.

sund posted:

In case anyone else is on ARM or suitable platform, look into the ctz builtin for decoding masks in one instruction.

Keep in mind that you're still using two instructions to get the value in memory: LDR -> CTZ, rather than just the LDR if you're loading the value directly. This kind of thing can really make a difference when you're trying to run at 30 uA average draw in 8k of flash.

Pan Et Circenses
Nov 2, 2009
I haven't been able to figure out the appeal of IAR, but I'm sure I must be missing something, and I do use it regularly at work for one of our platforms that exclusively supports it. What's the advantage of it compared to one of the GCC based tools with a much more modern IDE, like TrueSTUDIO or Crossworks? Keil I can understand not using because, oddly enough, everything I've seen indicates their compiler produces inferior code in size and efficiency.

Also, I really like the idea of PICs, but even with MPLAB X and their better debug probe, I haven't found the experience of developing on one anywhere near as clean and pleasant as ARM with one of the above-mentioned IDEs and a J-Link.

Pan Et Circenses
Nov 2, 2009
PICs are an odd thing. I can't really figure out where they belong in the modern market now that ARM is what it is, aside from extreme low-end stuff like the PIC10 (you won't find a 6-pin ARM in a SOT-23 package anywhere I've ever seen). They do have a lot of very specialized DSP options though, so there might be something there that I just never had a specific use for. Every time I look, I find an ARM that's cheaper, lower power, and more powerful than the equivalent PIC.

I will say though that I just keep a ton of PIC10s around to use as extremely simple glue logic in prototypes.

Mr. Powers posted:

STM32 Discovery boards are super cheap. If you want M4 or M0/M0+, I'd suggest the Freescale/NXP Freedom boards which are also super cheap.

Second both of these. Also I've recently been playing with the Silicon Labs EFM (used to be Energy Micro) processors and I really like them a lot, even more than ST or Kinetis parts so far, both of which I've used extensively. It's just a very well thought out processor. They have low cost dev boards too and seem to hit every mark perfectly (price, low power, functionality).

Pan Et Circenses
Nov 2, 2009
Well, they definitely had packages small enough for me to fit in a keychain I designed just fine, but not a huge selection of WLCSP or other ridiculously tiny packages that you'd need for super-high-density stuff (although they do have some). They are lower on RAM on average than something like the STM32, if you're doing something beefy that needs all that, and they don't have the high-memory-in-a-small-package options that ST does.

Most of the projects I get involved in I either need only a few KB, or I need a few megs of RAM.

What I've really been happy with though are the peripherals. They just do what I want, and have a lot of good analog stuff for sensors. That may be coming from getting burned repeatedly by the absolutely terrible peripherals on the low end Kinetis though...

Pan Et Circenses
Nov 2, 2009
What are you doing that you need actual JTAG for?

Normally for ARM I'd suggest getting a J-Link instead of whatever proprietary debug probe the company produces. It's supported by every embedded IDE out there or you can use the bundled GDB server, and works with anything that's a proper ARM core. To be frank I wouldn't even consider using OpenOCD unless you seriously can't afford the $60 for the edu version (non-commercial use).

Pan Et Circenses
Nov 2, 2009
If a non-commercial project turns into a commercial one, there's really no chance you'd get in trouble for buying the J-Link commercial version at the time you actually start commercializing it. Also, the only reason you'd really need a specialized adapter like that is if you're connecting/disconnecting the debug probe a bunch because it's a pain to reconnect pins one by one over and over.

ARM JTAG connections are totally standardized across every single ARM core in existence, so all you need to do is get some female/female jumper wires and play connect the dots (assuming that board brings the signals out to a header; otherwise it's soldering time). Connect the following pins of your generic JTAG probe to the same pins on the board and it will Just Work: TDI, TDO, TMS, TCK, V+ (reference), GND. You may also need to connect RESET (that's CPU reset, not tap reset) so the probe can hold the board in reset while connecting, allowing you to debug the boot process.


Pan Et Circenses
Nov 2, 2009

Spatial posted:

Also, I don't have an oscilloscope or logic analyser. :v:

This is any embedded systems person's first and last mistake. You need to see your signals. Depressingly many embedded systems do not even perform to spec, let alone how you'd expect them to. USB logic analyzers are so cheap these days it's just silly not to have one, unless you like banging your head against buggy silicon for 8 hours.
