Slanderer
May 6, 2007

Yeah, I knew that adding a serial number to my device would let me use the UniqueID device capability, so that my device instance could persist across USB ports. However, for our application, I don't want new devices of this type to enumerate new COM ports in the future---while it's merely annoying for engineers like me who interact with multiple devices, it's unworkable for production test station computers (which would see tons of new devices connected per day).

This would presumably be solvable with additional software that directly messed with the registry (or alternatively, I think there is something you can add to the registry to make Windows treat every instance of a device with a certain VID+PID as the same device, but that seemed like a weird hack).

It's frustrating, since lots of USB-serial drivers work the way I want, but I don't know if they were only able to do that by writing their own drivers. The MSDN documentation on all of this is straight bullshit and mostly not helpful for doing anything practical.


Hunter2 Thompson
Feb 3, 2005

Ramrod XTreme
As an added note, be aware that Windows will store information about every unique USB device plugged into it in the registry. If you're running an assembly line or a test station it'll lead to the computer becoming unusable sooner than later.

Maybe this isn't an issue anymore, but it's worth being aware of.

Slanderer
May 6, 2007

meatpotato posted:

As an added note, be aware that Windows will store information about every unique USB device plugged into it in the registry. If you're running an assembly line or a test station it'll lead to the computer becoming unusable sooner than later.

Maybe this isn't an issue anymore, but it's worth being aware of.

I hadn't even thought of that. It's hard to say how many devices it would take to cause an issue, but this might be worth making a note of. Thanks.

Pan Et Circenses
Nov 2, 2009
I'm looking for an MCU with some very specific capabilities, and I thought I'd put it out here to see if anybody here knows of parts that might fit the bill:

ESSENTIAL
- 12-bit+ ADC
- 10-bit+ DAC
- 2 op-amp
- SPI w/ hardware CS
- Very low power (<1mA @ 4MHz run mode)
- Low cost (<$2.00, hopefully something that could be pushed to ~$1.00 @ 100k+)

PREFERRED
- ARM architecture
- Capacitive touch sense wakeup from very low power mode

So far I have only found a single part anywhere that seems to check all the essential boxes, the EFM32 Tiny Gecko:
http://www.silabs.com/products/mcu/32-bit/efm32-tiny-gecko/pages/efm32-tiny-gecko.aspx

Anybody know of others? It's always nice to have alternatives to explore.

EDIT: Lower power ADC operating modes and lower price would be the two main things I'd like to improve on compared to the Tiny Gecko.

Pan Et Circenses fucked around with this message at 15:41 on Sep 21, 2015

Blotto Skorzany
Nov 7, 2008

He's a PSoC, loose and runnin'
came the whisper from each lip
And he's here to do some business with
the bad ADC on his chip
bad ADC on his chiiiiip

Maybe one of Atmel's SAM L21 variants?

Jerry Bindle
May 16, 2003

Pan Et Circenses posted:

I'm looking for an MCU with some very specific capabilities, and I thought I'd put it out here to see if anybody here knows of parts that might fit the bill:

ESSENTIAL
- 12-bit+ ADC
- 10-bit+ DAC
- 2 op-amp
- SPI w/ hardware CS
- Very low power (<1mA @ 4MHz run mode)
- Low cost (<$2.00, hopefully something that could be pushed to ~$1.00 @ 100k+)

PREFERRED
- ARM architecture
- Capacitive touch sense wakeup from very low power mode

So far I have only found a single part anywhere that seems to check all the essential boxes, the EFM32 Tiny Gecko:
http://www.silabs.com/products/mcu/32-bit/efm32-tiny-gecko/pages/efm32-tiny-gecko.aspx

Anybody know of others? It's always nice to have alternatives to explore.

EDIT: Lower power ADC operating modes and lower price would be the two main things I'd like to improve on compared to the Tiny Gecko.

The PIC24FJ128GC010 hits those reqs, except it doesn't have an ARM core, and pricing for 100k+ isn't on the web site.

feedmegin
Jul 30, 2008

Hadlock posted:

It looks like the 68000 has (introduced?) SPI support. I2C obviously didn't exist in 1979. I guess the AT328 will oscillate on its own at 1MHz. The 68000 probably needs an external oscillator, external RAM, etc?

The 68000 proper needs both of those things, sure---I've breadboarded one. It's not a bad architecture to write assembly for, though. Theoretically it's CISC, I guess? But it's nice and orthogonal and has a bunch of registers. Way nicer than 16-bit x86, for certain.

Comparing the original 1979 68000 to modern microcontrollers is a bit pointless, though; all-in-one chips like we have now just didn't exist back then.

BattleMaster
Aug 14, 2000

Pan Et Circenses posted:

I'm looking for an MCU with some very specific capabilities, and I thought I'd put it out here to see if anybody here knows of parts that might fit the bill:

ESSENTIAL
- 12-bit+ ADC
- 10-bit+ DAC
- 2 op-amp
- SPI w/ hardware CS
- Very low power (<1mA @ 4MHz run mode)
- Low cost (<$2.00, hopefully something that could be pushed to ~$1.00 @ 100k+)

PREFERRED
- ARM architecture
- Capacitive touch sense wakeup from very low power mode

So far I have only found a single part anywhere that seems to check all the essential boxes, the EFM32 Tiny Gecko:
http://www.silabs.com/products/mcu/32-bit/efm32-tiny-gecko/pages/efm32-tiny-gecko.aspx

Anybody know of others? It's always nice to have alternatives to explore.

EDIT: Lower power ADC operating modes and lower price would be the two main things I'd like to improve on compared to the Tiny Gecko.

Numerous 8-bit and 16-bit PICs will do that. Not sure if the DACs are terribly fast, though.

Sinestro
Oct 31, 2010

The perfect day needs the perfect set of wheels.

feedmegin posted:

The 68000 proper needs both of those things, sure---I've breadboarded one. It's not a bad architecture to write assembly for, though. Theoretically it's CISC, I guess? But it's nice and orthogonal and has a bunch of registers. Way nicer than 16-bit x86, for certain.

Comparing the original 1979 68000 to modern microcontrollers is a bit pointless, though; all-in-one chips like we have now just didn't exist back then.

The question is more why that architecture hasn't been used for microcontrollers more recently.

feedmegin
Jul 30, 2008

Sinestro posted:

The question is more why that architecture hasn't been used for microcontrollers more recently.

Well yes, to which the answer is 'it has' (ColdFire etc.)---but those microcontrollers obviously don't require external memory or a clock signal. The OP seemed to be half-asking about the original 68k in that regard.

Pan Et Circenses
Nov 2, 2009
The PICs are a good suggestion, but it looks like the ones that hit all the other marks clock in at around 600uA/MHz, which is very high; compare with around 150uA/MHz for the EFM32. Also, the PICs are a little expensive, at least at comparable quantities from distributors.

A little more information on the project:

- Powered by a 3V watch battery ideally, so I'm looking at 25-35mAh (I think I'll be able to run it unregulated).
- Target battery life 48 hours constant use, 1 year standby (quick math after the list).
- Probably not viable at much more than $30 retail.
- Using a Sharp memory LCD as the display, which eats pretty much all my costs right off the top in order to hit the price point (assuming I won't be able to get them for under $6 even in large quantity, but who knows)
http://www.sharpmemorylcd.com/
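
Quick math on the power budget, for what it's worth (rough, and ignoring display/regulator overhead): 30mAh over 48 hours of constant use is about 0.6mA average draw, which is why the <1mA run-mode figure from my earlier list matters, and a year of standby from that same 30mAh works out to roughly 3.4uA average, so sleep current has to be down in the low single-digit microamps.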

I did a little prototyping with a Kinetis L0 (which are ridiculously cheap/low power) before realizing that the SPI interface on those things is so terrible as to be almost useless and giving up on it out of disgust (even though I could probably trick it into half-working). So, now I'm looking for processors that integrate all my external components into a single package, hopefully to save a bit on cost.

It's still just a little side hobby project, but it's interesting enough to start really trying to nail down the ideal parts.

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

TI's MSP430 series has a lot of low power options.
http://www.ti.com/lsds/ti/microcontrollers_16-bit_32-bit/msp/products.page

Slanderer
May 6, 2007
Does anyone have any good references/reading on data versioning in an embedded systems context? I need to manage a nonvolatile data store that is more mutable than the ones I've used in the past, as it is tied in with our system for passing data structures between threads (and processors).

We have a system in place, but it relies heavily on engineers knowing all the side effects of modifying any of the data structure definitions shared between multiple CPUs, and also how to write conversion functions for upgrading the nonvolatile memory store from an older version (which, in turn, requires fun stuff like manually grabbing data offsets via the debugger). This system is still way early in prototype, but already I am fed up with it and I'm hoping to be able to point the engineers responsible in a more productive direction. Ideally, we would like a robust system that makes it hard for us to accidentally break something without generating a ton of errors at compile-time. However, I've only had limited experience with this, and none of the references I can find are geared to embedded systems (or at least to C/C++ software).
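
To give a sense of the direction I'm hoping for (rough sketch, assuming C11 is available, and all the names here are made up): pin the layout of each shared struct down with static assertions right next to its definition, so that changing a definition without also touching the version number and the conversion code refuses to compile.
code:
#include <assert.h>   /* C11 static_assert */
#include <stddef.h>   /* offsetof */
#include <stdint.h>

/* Shared between CPUs and with the NV store; bump on any layout change. */
#define CAL_DATA_VERSION  3u

typedef struct {
    uint16_t version;      /* must equal CAL_DATA_VERSION when written */
    uint16_t reserved;
    int32_t  offset_uv;
    uint32_t gain_ppm;
    uint32_t crc32;        /* over everything above it */
} cal_data_t;

/* If anyone reorders or adds fields, these fail at compile time until the
 * size/offset expectations (and CAL_DATA_VERSION) are updated together. */
static_assert(sizeof(cal_data_t) == 16, "cal_data_t layout changed: bump CAL_DATA_VERSION");
static_assert(offsetof(cal_data_t, crc32) == 12, "cal_data_t layout changed: bump CAL_DATA_VERSION");

The upgrade path would then just switch on the stored version field and run one conversion function per step, instead of anyone manually grabbing data offsets via the debugger.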

Tan Dumplord
Mar 9, 2005

by FactsAreUseless
I've implemented a ring buffer for flushing messages to a UART. The flushing is driven by the FIFO Empty interrupt handler, while message insertion occurs in synchronous code.

One of my goals is to be able to displace a lower priority message with a higher priority one when the ring buffer is full. I use a status variable to detect whether the buffer is still full when I've searched for the message I want to replace. Once I have the address, if the interrupt has not fired, I replace the message.

My problem is that I can only narrow down this operation to a few instructions, which leaves room for the interrupt to mess things up.

code:
if (read_addr == ringbuf_read_addr) { // true only if ISR has not run
    *lowest = message;
}
Is my only option to disable the interrupt while I update the ring buffer? Is there any way to make this atomic on ARM?
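
For reference, the disable-interrupts version of that update would look roughly like this (just a sketch, using the CMSIS PRIMASK intrinsics; message_t is a stand-in for my real message type):
code:
#include <stdint.h>
/* Assumes a CMSIS device header is included elsewhere, providing
 * __get_PRIMASK(), __disable_irq() and __set_PRIMASK(). */

typedef struct { uint8_t payload[8]; } message_t;        /* placeholder type */
extern volatile message_t * volatile ringbuf_read_addr;  /* advanced by the ISR */

static void try_replace(volatile message_t *lowest, message_t message,
                        const volatile message_t *read_addr)
{
    uint32_t primask = __get_PRIMASK();   /* remember whether IRQs were enabled */
    __disable_irq();                      /* enter critical section */

    if (read_addr == ringbuf_read_addr) { /* true only if ISR has not run */
        *lowest = message;
    }

    __set_PRIMASK(primask);               /* restore; re-enables only if it was on */
}

LDREX/STREX would cover a single-word update, but replacing a whole message is multiple stores, so a short critical section seems like the honest answer.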

Slanderer
May 6, 2007

sliderule posted:

Is my only option to disable the interrupt while I update the ring buffer? Is there any way to make this atomic on ARM?

I don't see how. Is there any particular reason you don't want to disable the ISR? This is roughly similar to an RTOS briefly disabling interrupts to allow atomic access to its internals, and the problem of needing to safely access a message is particularly relevant here.

Tan Dumplord
Mar 9, 2005

by FactsAreUseless
If I don't have to ever disable the ISR, I don't have to worry about restarting the buffer flush (also I don't have to ever question the throughput). I was trying to avoid complication where possible. It's easy enough to solve by disabling the ISR, I was just thinking wishfully. :allears:

carticket
Jun 28, 2005

white and gold.

sliderule posted:

If I don't have to ever disable the ISR, I don't have to worry about restarting the buffer flush (also I don't have to ever question the throughput). I was trying to avoid complication where possible. It's easy enough to solve by disabling the ISR, I was just thinking wishfully. :allears:

On every ARM micro I've used, you can clear the RX interrupt enable (FIFO or otherwise), update your buffer as needed, and then re-enable it; it should fire the interrupt at that point. It sounds like your buffer holds complete messages rather than characters, though. I've always used character buffers, so the contents can be updated uncritically and you only need a critical section around the read/write pointer and count updates.

I guess I'm not entirely sure of your concern about disabling the interrupt.

Tan Dumplord
Mar 9, 2005

by FactsAreUseless
It's the TX FIFO Empty interrupt that I'm working with -- whenever the FIFO gets empty, feed it from the ring buffer.

One of my design goals is to maximize throughput. My concern here is that if the FIFO becomes empty while the interrupt is disabled and the ring buffer is full, there may be a loss in throughput if it takes too long to resume filling the FIFO. It's not a super huge concern, as 1 symbol at the data rate is hundreds of clock cycles, but it's a concern nonetheless.

You suggest that the interrupt will fire if I enable it while the interrupt condition exists. I can't find anything to suggest this in the docs, but NXP docs are pretty bad.

Edit: Oh wait I can't read:

quote:

A THRE interrupt is set immediately if the UART THR FIFO has held two or more characters at one time and currently, the U0THR is empty

Tan Dumplord fucked around with this message at 13:06 on Sep 22, 2015

Popete
Oct 6, 2009

This will make sure you don't suggest to the KDz
That he should grow greens instead of crushing on MCs

Grimey Drawer
Regarding the "why didn't X chip take off?" discussion: Atmel documentation is second to none; it should be the model for any device manufacturer. I've convinced my boss to advocate for using AVRs in any project needing a micro because I've spent so much hobby time with them. He's a former Motorola engineer, and when he used an AVR for the first time, with the easy toolchain and access to Arduino libraries, he was convinced.

Also, never underestimate engineers' laziness. If you know how to design for X architecture, you don't want to switch to a new one every project. It's time-consuming to learn and to get developers up to speed. A new architecture has to either have huge advantages or be very easy to pick up for anyone to want to switch.

carticket
Jun 28, 2005

white and gold.

sliderule posted:

It's the TX FIFO Empty interrupt that I'm working with -- whenever the FIFO gets empty, feed it from the ring buffer.

One of my design goals is to maximize throughput. My concern here is that if the FIFO becomes empty while the interrupt is disabled and the ring buffer is full, there may be a loss in throughput if it takes too long to resume filling the FIFO. It's not a super huge concern, as 1 symbol at the data rate is hundreds of clock cycles, but it's a concern nonetheless.

You suggest that the interrupt will fire if I enable it while the interrupt condition exists. I can't find anything to suggest this in the docs, but NXP docs are pretty bad.

Edit: Oh wait I can't read:

Yeah, don't know how I got RX/TX backwards. Anyway, if throughput is your concern, rather than keeping a queue of messages, keep a queue of pointers to messages. That way an incoming hipri message just has its address inserted at the front of the queue, which should take just a few instructions inside the critical section.

Edit: you can also keep two queues, one hipri, and when servicing the TX interrupt decide which queue to pull from.
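
Something like this, with made-up names for the FIFO accessors (not tested, just the shape of it):
code:
#include <stdbool.h>
#include <stdint.h>

#define QLEN 16u

typedef struct {
    uint32_t buf[QLEN];
    uint32_t head, tail;            /* head == tail means empty */
} queue_t;

static queue_t txq_hi, txq_lo;      /* filled from thread context in a critical section */

bool uart_tx_fifo_has_space(void);  /* hypothetical HAL query */
void uart_write_fifo(uint32_t w);   /* hypothetical HAL write */

static bool queue_pop(queue_t *q, uint32_t *out)
{
    if (q->head == q->tail)
        return false;
    *out = q->buf[q->tail];
    q->tail = (q->tail + 1u) % QLEN;
    return true;
}

/* TX-FIFO-empty handler: always drain the high-priority queue first. */
void uart_tx_isr(void)
{
    uint32_t word;
    while (uart_tx_fifo_has_space()) {
        if (!queue_pop(&txq_hi, &word) && !queue_pop(&txq_lo, &word))
            break;                  /* both queues empty */
        uart_write_fifo(word);
    }
}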

carticket fucked around with this message at 01:58 on Sep 23, 2015

Tan Dumplord
Mar 9, 2005

by FactsAreUseless
Unfortunately, memory is also a concern. Pointers would double RAM usage :( I've thought about using a second queue before, which would actually improve memory efficiency by 50% for those messages at the cost of space for lower priority messages.

This got me thinking about it, though. Each message source can only send 1 message per instant, so maybe I'll just "register" every sender and peek at their data from the ISR to see what needs sending. It will scale memory linearly at half efficiency (two 32-bit pointers per sender (linked list) compared to one 32-bit int per message), but at least it will exactly accommodate the number of senders in the system, which is user-specified. Priority would be achieved via insertion sort into the LL.

It will save memory for many simple cases but use more past half of the previous fixed buffer size. The ISR might chew more CPU, too, with a few more dereferences and a guaranteed O(n) case to scan the LL whenever there is any message to be sent. Also I'll need a global flag to idle the ISR.

Time to put together a reference implementation and benchmark, I guess.

edit:

Actually I could use a flat array of pointers if I'm crafty at registration time.

Tan Dumplord fucked around with this message at 15:41 on Sep 23, 2015

JawnV6
Jul 4, 2004

So hot ...
If I'm understanding correctly, you're going to traipse through a linked list in an ISR context? I generally don't try to do anything unbounded like that in an ISR, normally just setting some state for application code to fix up later. Even adding to the head of a doubly linked list that's consumed from the tail seems better.

Tan Dumplord
Mar 9, 2005

by FactsAreUseless
No, I changed my mind on that one. Even so it's not as bad as it sounds. The list would have been fixed, and the important (audio) code runs in an ISR with the highest priority.

I've come up with a design that has me traversing a dynamically-allocated flat array of pointers to messages (as suggested by Mr. Powers), as it occurred to me that each of the message-producing objects already stores its messages internally.

As the priority of a message is actually determined by the priority of the module (which is fixed at compile time), I can just order the list by module priority. Traversing the list from the beginning then fills the FIFO in priority order.

It will consume the same amount of memory as the ring buffer design (per potential message per instant), but will be sized according to the number of modules instead of some fixed size. I just have to be careful about holes in the heap.

Thanks everyone for helping me talk this out.
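
In code it'll end up looking roughly like this (sketch with made-up names; I've shown a fixed-size array of module pointers here, where mine will actually be allocated at registration time):
code:
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_MODULES 8u

typedef struct {
    uint8_t  priority;               /* fixed at compile time; lower = more urgent */
    bool     pending;                /* does this module have a message waiting? */
    uint32_t message;                /* stored inside the module itself */
} module_t;

static module_t *modules[MAX_MODULES];
static size_t    module_count;

bool uart_tx_fifo_has_space(void);   /* hypothetical HAL query */
void uart_write_fifo(uint32_t w);    /* hypothetical HAL write */

/* Insertion sort by priority at registration time, so the ISR can walk
 * the array front-to-back and fill the FIFO in priority order. */
void module_register(module_t *m)
{
    size_t i = module_count++;
    while (i > 0 && modules[i - 1]->priority > m->priority) {
        modules[i] = modules[i - 1];
        i--;
    }
    modules[i] = m;
}

void tx_fifo_empty_isr(void)
{
    for (size_t i = 0; i < module_count && uart_tx_fifo_has_space(); i++) {
        if (modules[i]->pending) {
            uart_write_fifo(modules[i]->message);
            modules[i]->pending = false;
        }
    }
}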

Internet Janitor
May 17, 2008

"That isn't the appropriate trash receptacle."
Hopefully this isn't too off-topic, but I'm helping to organize a little game jam that may be of interest to readers of this thread:

http://octojam.com

Armed with 3.5KB of program space and a 64x32 pixel display, your mission, should you choose to accept it, is to write a video game for a virtual machine from 1977. I've provided a high-level assembly language called Octo that lets contestants write structured code and use syntax familiar to most curly-bracket language users, but if it's not to your taste there is a wealth of other tools and resources available.

JawnV6
Jul 4, 2004

So hot ...
I'm selecting a protocol for an embedded sensing device to package up readings and deliver them to the cloud. The embedded side is an M4 in C, nothin' fancy like a file system. The cloud side is Python, Java, or Scala. The options I'm considering, along with my windbag opinions:

1) MessagePack - C support is good. Restricted to JSON isn't. I have 2-byte unique identifiers for most of the data types, going to single or double-character field names for the protocol to compress seems like a loss.

2) Protocol Buffers - Leading right now. Requires specifying format ahead of time, which isn't too bad. Someone kindly linked the spec for the wire format and I'm confident I can suffer the headache of linking that once or generate it to a specific target.

3) Cap'n Proto - The engineer who wrote protobufs wrote this afterwards. Meant as an in-memory storage as well as network transmission. I'm not really sold on it. No C support, less other-lang support than protobufs proper. As much as I like the idea of network transmission being as simple as a single pointer, mucking up every in-memory structure doesn't seem worth it.

I'm going to spin up one or more of these and see how it does with my real data. But I'd welcome other opinions about solving this problem.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:


Internet Janitor's Octo is really nice and polished, everyone should check it out if you haven't seen it before.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

JawnV6 posted:

I'm selecting a protocol for an embedded sensing device to package up readings and deliver them to the cloud. The embedded side is an M4 in C, nothin' fancy like a file system. The cloud side is Python, Java, or Scala. The options I'm considering, along with my windbag opinions:

1) MessagePack - C support is good. Restricted to JSON isn't. I have 2-byte unique identifiers for most of the data types, going to single or double-character field names for the protocol to compress seems like a loss.

2) Protocol Buffers - Leading right now. Requires specifying format ahead of time, which isn't too bad. Someone kindly linked the spec for the wire format and I'm confident I can suffer the headache of linking that once or generate it to a specific target.

3) Cap'n Proto - The engineer who wrote protobufs wrote this afterwards. Meant as an in-memory storage as well as network transmission. I'm not really sold on it. No C support, less other-lang support than protobufs proper. As much as I like the idea of network transmission being as simple as a single pointer, mucking up every in-memory structure doesn't seem worth it.

I'm going to spin up one or more of these and see how it does with my real data. But I'd welcome other opinions about solving this problem.

How are you transmitting the data? Neither ProtoBuf nor MsgPack has a way of framing data -- that's usually handled out of band.

Cap'n Proto isn't what you want -- it's more focused on RPC than data transfer.
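
The usual out-of-band answer over a raw socket is just a length prefix in front of each encoded payload -- something like this on the sender side (sketch, not any particular library's API; names are made up):
code:
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Prepend a 2-byte big-endian length to an already-encoded payload
 * (protobuf, MessagePack, whatever). Returns the total frame size,
 * or 0 if the output buffer is too small. */
size_t frame_message(const uint8_t *payload, size_t payload_len,
                     uint8_t *out, size_t out_cap)
{
    if (payload_len > UINT16_MAX || out_cap < payload_len + 2u)
        return 0;
    out[0] = (uint8_t)(payload_len >> 8);
    out[1] = (uint8_t)(payload_len & 0xFFu);
    memcpy(&out[2], payload, payload_len);
    return payload_len + 2u;
}

The receiver reads two bytes, then exactly that many more, and hands the payload to the decoder. If you go the REST route instead, the HTTP body length does the framing for you.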

JawnV6
Jul 4, 2004

So hot ...
e: either a REST endpoint or a socket. idc

The cloud side will have a dedicated upload endpoint for whatever we decide. The data is rich enough that any packet will have enough information for the cloud to act on it.

JawnV6 fucked around with this message at 18:49 on Sep 28, 2015

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
So, what's the downside of protobuf from your perspective? There's a tool to generate some pretty simple C code which you can use.

JawnV6
Jul 4, 2004

So hot ...

Suspicious Dish posted:

So, what's the downside of protobuf from your perspective? There's a tool to generate some pretty simple C code which you can use.

Hadn't seen that, and I'm generally wary of things that claim to be C++ only. It's failing to install with one of those delightful autoconf warnings now.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

JawnV6 posted:

I'm selecting a protocol for an embedded sensing device to package up readings and deliver them to the cloud. The embedded side is an M4 in C, nothin' fancy like a file system. The cloud side is Python, Java, or Scala. The options I'm considering, along with my windbag opinions:

1) MessagePack - C support is good. Restricted to JSON isn't. I have 2-byte unique identifiers for most of the data types, going to single or double-character field names for the protocol to compress seems like a loss.

2) Protocol Buffers - Leading right now. Requires specifying format ahead of time, which isn't too bad. Someone kindly linked the spec for the wire format and I'm confident I can suffer the headache of linking that once or generate it to a specific target.

3) Cap'n Proto - The engineer who wrote protobufs wrote this afterwards. Meant as an in-memory storage as well as network transmission. I'm not really sold on it. No C support, less other-lang support than protobufs proper. As much as I like the idea of network transmission being as simple as a single pointer, mucking up every in-memory structure doesn't seem worth it.

I'm going to spin up one or more of these and see how it does with my real data. But I'd welcome other opinions about solving this problem.

protobuf is pretty rock solid.

for other fun binary encoding times, consider avro, thrift or bond!

JawnV6
Jul 4, 2004

So hot ...

Malcolm XML posted:

protobuf is pretty rock solid.
I've written a lot of binary encoding schemes and I'm really glad to be rid of it. Now I'm agonizing over whether I'll need integers or if everything floats.

Malcolm XML posted:

for other fun binary encoding times, consider avro, thrift or bond!
But I'm not australian, macklemore, or dapper??

Slanderer
May 6, 2007

Internet Janitor posted:

Hopefully this isn't too off-topic, but I'm helping to organize a little game jam that may be of interest to readers of this thread:

http://octojam.com

Armed with 3.5KB of program space and a 64x32 pixel display, your mission, should you choose to accept it, is to write a video game for a virtual machine from 1977. I've provided a high-level assembly language called Octo that lets contestants write structured code and use syntax familiar to most curly-bracket language users, but if it's not to your taste there is a wealth of other tools and resources available.

This seems dope

Slanderer
May 6, 2007
I'm working on a design with two STM32 devices, and I tested out creating a JTAG chain with them last week solely through the existing programming headers. It worked, so now I'm having a cable made up so I can do simultaneous debugging of these two processors. However, I'm running into a little confusion with regard to the nJTRST line---specifically, whether it needs a pull-up or a pull-down. My STM32 reference manual says that there is an internal pull-up on that line (along with JTDI and JTMS), because the IEEE standard recommends pull-ups on those lines. However, other sources show pull-downs on nJTRST. Should I just leave my devices (with their internal nJTRST pull-ups) alone, or do I need to add an external pull-up or pull-down?

Also, can I safely ignore the nRESET signal from the JTAG connector? Both of my programming headers have nRESET (along with nJTRST), but I assume I only need that when I'm using Serial Wire debugging instead? I assume that I should be able to reset both CPUs via JTAG without any trouble.

EDIT: I'm also somewhat confused by the numbering of devices in the JTAG scan chain. Different sources are giving me different ordering, so I can't figure out if JTDI goes to the *first* device or the *last* device. This is not super-relevant, except when it comes to making sure my JTAG chain adapter cable has a properly terminated JTCK and an inline resistor on the proper point of JTDO.

From http://www.xjtag.com/support-jtag/dft-guidelines.php :

[image from the XJTAG DFT guidelines]

And from the manual for my J-Link debugger (https://www.segger.com/admin/uploads/productDocs/UM08001_JLink.pdf):

[image from the J-Link manual]
From my temporary test setup last week, I ended up with JTDI connected to *my* device 0, and JTDO going to device 1. In the debugger, device 0 was set up with 0 preceding bits, and device 1 was set up with 9 preceding bits (5 for the boundary scan TAP + 4 for the Cortex-M4 debug TAP). However, the tech ref shows JTDI going into the boundary scan TAP first---does this mean my debugger is just reversing the order of the JTAG chain, so that instead of counting up from the device connected to JTDI, it counts down from the device connected to JTDO?

Slanderer fucked around with this message at 20:11 on Oct 12, 2015

JawnV6
Jul 4, 2004

So hot ...

Internet Janitor posted:

Hopefully this isn't too off-topic, but I'm helping to organize a little game jam that may be of interest to readers of this thread:

http://octojam.com
I started messing around with this over the weekend, got as far as making graphics for dominoes. I'm just a simple man with simple terminals, what's the easiest way to load a local file into the browser emulator? The provided stuff looks like it'll only pull from a gist.

I've been copy-pasting my code into the index.html emulator because I'm a caveman web programmer.

My Rhythmic Crotch
Jan 13, 2011

JawnV6 posted:

I've written a lot of binary encoding schemes and I'm really glad to be rid of it. Now I'm agonizing over whether I'll need integers or if everything floats.

But I'm not australian, macklemore, or dapper??
I'm way late, but you want nanopb. I've been playing with it and it rocks. I made an RPC system, and then wrote a command line client so I can command/control stuff over USB or Ethernet.
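
The embedded encode side ends up being only a few lines. Rough sketch -- the SensorReading message and its fields here are invented; you'd get the real struct, the _init_zero macro, and the _fields descriptor from whatever nanopb generates out of your .proto:
code:
#include <pb_encode.h>
#include "sensor.pb.h"   /* nanopb-generated from a hypothetical sensor.proto */

#include <stddef.h>
#include <stdint.h>

/* Encode one reading into a caller-provided buffer; returns bytes written,
 * or 0 on failure (e.g. the buffer is too small). */
size_t encode_reading(uint8_t *buf, size_t buf_len, uint32_t id, float value)
{
    SensorReading msg = SensorReading_init_zero;
    msg.id = id;          /* hypothetical fields */
    msg.value = value;

    pb_ostream_t stream = pb_ostream_from_buffer(buf, buf_len);
    if (!pb_encode(&stream, SensorReading_fields, &msg))
        return 0;

    return stream.bytes_written;
}

Then you frame it or POST it however you like, and the cloud side decodes with the stock protobuf library generated from the same .proto.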

Internet Janitor
May 17, 2008

"That isn't the appropriate trash receptacle."

JawnV6 posted:

I started messing around with this over the weekend, got as far as making graphics for dominoes. I'm just a simple man with simple terminals, what's the easiest way to load a local file into the browser emulator? The provided stuff looks like it'll only pull from a gist.

I've been copy-pasting my code into the index.html emulator because I'm a caveman web programmer.

Honestly, copy and paste is the most straightforward approach right now if you prefer working in an external editor. Short of hacking up a local copy of Octo to read from a file, it's also possible to make your own Gist manually instead of via the "share" button, and then you can use git to push changes to it. If you install Node.js, Octo comes with a CLI wrapper for the compiler. Pairing the command line Octo compiler with most other Chip8 emulators would make it possible to entirely take the web browser out of the equation during development. (If you try this, be wary of emulation bugs -- nearly everything in the wild gets the shift and load/store instructions subtly wrong!) I would like to make a curses-style CLI frontend for the emulator as well, but I don't expect to have one done any time soon.

My apologies for any inconvenience with the current tooling.

JawnV6
Jul 4, 2004

So hot ...

My Rhythmic Crotch posted:

I'm way late, but you want nanopb. I've been playing with it and it rocks. I made an RPC system, and then wrote a command line client so I can command/control stuff over USB or Ethernet.
Thanks! I haven't integrated it yet, but I stumbled on that one too. It compiles cleanly, but I don't have anything set up to consume the output. I'm still messing around with the input side.

Internet Janitor posted:

My apologies for any inconvenience with the current tooling.
I've dealt with far worse :) It's been fun to play around with even a little. Thanks!

Slanderer
May 6, 2007
Anyone know how I can convince the IAR compiler to load two different applications onto the same processor with a debugger? Specifically, I have a boot loader application and a main application, which I manage as separate IAR projects, and load individually. For reasons not worth going into, the debugger is the only way to load either application right now, and I would like to ensure that people don't accidentally forget to update the bootloader by forcing it to be downloaded by the debugger every time. Anyone know if this is possible?


muon
Sep 13, 2008

by Reene

Slanderer posted:

Anyone know how I can convince the IAR compiler to load two different applications onto the same processor with a debugger? Specifically, I have a boot loader application and a main application, which I manage as separate IAR projects, and load individually. For reasons not worth going into, the debugger is the only way to load either application right now, and I would like to ensure that people don't accidentally forget to update the bootloader by forcing it to be downloaded by the debugger every time. Anyone know if this is possible?

I haven't used IAR, but the way I did it at a previous job was to have the bootloader project build a binary file. That .bin was added to the main application project, so hitting debug would download the .bin in addition to the application code. This still forces you to build the bootloader to actually have the latest version, though.
