Luigi Thirty
Apr 30, 2006

Emergency confection port.

hendersa posted:

You'll get all that and more through his Patreon. This is the sort of stuff that I am happy to be a patron for. :eng101:

Here's the page describing it.


Volguus
Mar 3, 2009
Earlier in the thread I posted about how I was implementing protobuf on a Synopsys-chipset board over wifi. Nanopb got recommended, I was using lwIP, everything worked, and life was good. Now I've got the task of implementing the same protobuf protocol over bluetooth, using the same board and some Roving Networks bluetooth module. Got the basics to work: I can talk with the chipset in command mode, the chipset talks back, and all was fine.
But when I'm not in command mode, that is, when I'm ready to send and receive protobuf data, life is suddenly not so rosy anymore.
I am using Android to talk to the board over bluetooth (the same application I was using before over wifi, but now just using the bluetooth socket). I can pair with the bluetooth module and I can connect to it (for some reason a few tries are needed, but whatever, it works).
The problem I'm having is that the bytes I'm getting over the wire from Android don't make any sense and protobuf throws decoding errors. Sample errors that I see in minicom on the board are: "Decode failed: invalid wire_type", "Decode failed: io_error". Looking at the nanopb source, it simply gets the wrong bits and therefore has no idea what to do with them.

What could possibly be causing this? Does the bluetooth serial transmission (RFCOMM, the Android documentation calls it) add additional bits to the data that I'm sending? Is it possible that when I ask the device to uart_read(buf, count), it reads more bytes than I told it to? Or fewer bytes, without bothering to tell me so?

Somewhere, somehow the drat bits get messed up and I have no idea where. Does anyone have any experience with this?

Thanks.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

What happens when you send non-protobuf data through the same pipe? Try sending more bytes than a packet, a small number of bytes, just a single packet, lots of packets really fast, etc., and make sure the input and output match. According to the super quick reading on RFCOMM I did, it is supposed to be reliable and emulate an RS232 port, so I wouldn't expect any extra data. Are you possibly running into a threading/synchronization issue? For example, reading a buffer while it is being written to.
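
A minimal loopback check along those lines, as a sketch; send_over_bt()/recv_over_bt() are placeholders for whatever send/receive calls the setup actually provides:
code:
	/* Send a known counting pattern and verify the echo byte-for-byte. */
	#include <stdint.h>
	#include <stdio.h>

	extern void     send_over_bt(const uint8_t *buf, uint32_t len);   /* hypothetical */
	extern uint32_t recv_over_bt(uint8_t *buf, uint32_t len);         /* hypothetical */

	void loopback_test(void)
	{
		uint8_t tx[256], rx[256];

		for (uint32_t i = 0; i < sizeof(tx); i++)
			tx[i] = (uint8_t)i;          /* 0, 1, 2, ..., 255: gaps are easy to spot */

		send_over_bt(tx, sizeof(tx));
		uint32_t got = recv_over_bt(rx, sizeof(rx));

		printf("sent %d bytes, got %d back\n", (int)sizeof(tx), (int)got);
		for (uint32_t i = 0; i < got; i++)
			if (rx[i] != tx[i])
				printf("mismatch at %d: sent %d, got %d\n", (int)i, tx[i], rx[i]);
	}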

Volguus
Mar 3, 2009

taqueso posted:

What happens when you send non-protobuf data through the same pipe? Try sending more bytes than a packet, a small number of bytes, just a single packet, lots of packets really fast, etc., and make sure the input and output match. According to the super quick reading on RFCOMM I did, it is supposed to be reliable and emulate an RS232 port, so I wouldn't expect any extra data. Are you possibly running into a threading/synchronization issue? For example, reading a buffer while it is being written to.

Well, I shouldn't have any threading/synchronization issues, but I'd better double check. I will, however, try your idea first: send known simple data over the air and see what I get on the other side.

JawnV6
Jul 4, 2004

So hot ...

Volguus posted:

send known simple data over the air and see what I get on the other side.

Great place to start. Another handy thing for bringup with protobuf is to get a Python or other scripting language binding so you can quickly check byte patterns on a command line.

Volguus
Mar 3, 2009
Holy mother ....
So, I started with basic things: send 4 bytes to the board and see what I get. So I sent 1, 2, 3 and 4.
On the board, all I got was 1 byte (nothing else available on the RX line): values between 241 and 243, seemingly at random.
From the board, I then sent 4 bytes back to Android (1, 2, 3 and 4 again). On Android I did get 4 bytes back, but they were: -128, 2, 3, 4 (-128 is Java-speak for 128, since Java bytes are signed).
That's a head-scratching WTF right there.
Then I remembered that the chipset says that by default it runs at a 115,200 baud rate. But, and I quote:

quote:

The set UART baud rate command sets the baud rate where <value> is 1200, 2400, 4800, 9600, 19.2, 28.8, 38.4, 57.6, 115K, 230K, 460K, or 921K. You only need to specify the first 2 characters of the desired baud rate.
So I had set it (a few hours ago) to 921K, since I thought I was sending quite a bit of data over the air; why not go with the highest speed this thing supports? So, among other things I tried while debugging, I factory-reset it. This set it back to 115K. I re-tried the application and now I nicely get 4 bytes on the board side, the correct 4 bytes (1, 2, 3, 4), and I get 4 bytes on the Android side, the correct 4 bytes (1, 2, 3, 4).

WTF chipset? Is it then safe to assume that this thing probably doesn't support 921K? Maybe it doesn't support anything above 115K and the manual is just full of poo poo (as usual)?
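
For what it's worth, one common way a too-high setting falls apart is integer divisor error: the UART clock can't always divide down to the requested rate. A quick back-of-the-envelope check, assuming (purely for illustration) a 16x-oversampling UART fed from a 50 MHz clock:
code:
	/* Baud divisor arithmetic with a hypothetical 50 MHz UART clock:
	 * divisor = f_clk / (16 * baud), rounded to an integer. At 115200 the
	 * rounding error is well under 1%; at 921600 it is about 13%, far more
	 * than a UART can tolerate, so one side just sees garbage. */
	#include <stdio.h>

	int main(void)
	{
		const double fclk    = 50e6;
		const double rates[] = { 115200.0, 921600.0 };

		for (int i = 0; i < 2; i++) {
			int    div    = (int)(fclk / (16.0 * rates[i]) + 0.5);
			double actual = fclk / (16.0 * div);
			printf("%8.0f baud: divisor=%d actual=%.0f error=%+.1f%%\n",
			       rates[i], div, actual, 100.0 * (actual - rates[i]) / rates[i]);
		}
		return 0;
	}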

SeXTcube
Jan 1, 2009

I've never had good results getting rando hardware functional over 115.

Phobeste
Apr 9, 2006

never, like, count out Touchdown Tom, man
My current place uses Kinetis Design Studio and Processor Expert. This is the Eclipse rebrand and codegen setup for NXP. It succeeded CodeWarrior, which was also, IIRC, a rebranded Eclipse with Processor Expert. We're still using it even though, after Freescale was bought by NXP, they moved on to yet another setup which I assume is another rebranded Eclipse. Meanwhile, Processor Expert doesn't allow you to do some things, like enabling the 48 MHz oscillator to run USB independently from the main clock, has seven options for everything with half of them marked DEPRECATED, and is incredibly slow.

Sometimes I hate my job. At least it's not PSoC or FPGA tooling. We need to switch to plain old God Damned gcc; it's just a CM4.

ToxicFrog
Apr 26, 2008


Volguus posted:

WTF chipset? Is it then safe to assume that this thing probably doesn't support 921K? Maybe it doesn't support anything above 115K and the manual is just full of poo poo (as usual)?

It's always this one.

csammis
Aug 26, 2003

Mental Institution

Embedded Programming Microthread: Another Rebranded Eclipse

Popete
Oct 6, 2009

This will make sure you don't suggest to the KDz
That he should grow greens instead of crushing on MCs

Grimey Drawer

Phobeste posted:

My current place uses Kinetis Design Studio and Processor Expert. This is the Eclipse rebrand and codegen setup for NXP. It succeeded CodeWarrior, which was also, IIRC, a rebranded Eclipse with Processor Expert. We're still using it even though, after Freescale was bought by NXP, they moved on to yet another setup which I assume is another rebranded Eclipse. Meanwhile, Processor Expert doesn't allow you to do some things, like enabling the 48 MHz oscillator to run USB independently from the main clock, has seven options for everything with half of them marked DEPRECATED, and is incredibly slow.

Sometimes I hate my job. At least it's not PSoC or FPGA tooling. We need to switch to plain old God Damned gcc; it's just a CM4.

Kinetis Processor Expert/CodeWarrior/whatever the gently caress they call it now. It all sucks rear end; automated code generators are often more trouble than they are worth, and they never properly deprecate or support old code. You'll upgrade your IDE one day and your project will be horribly broken because it forces you to use their new auto-generated libraries, and you'll spend a week digging through the generated code/headers to find what is hosed now, and then you'll forever have to remember to go in and make that fix every time you click the "generate code" button.

This isn't just a Freescale problem; tons of IDEs with code generators or built-in libraries are terribly broken and never properly supported. But I ran into a bunch of annoying bugs with Kinetis Design Studio the one time I had to use it, and it made me hate Processor Expert with a fiery passion.

carticket
Jun 28, 2005

white and gold.

Keil MDK5 is a steaming pile, but it sure is handy for testing CDC-NCM stuff. If only the USB worked.

Phobeste
Apr 9, 2006

never, like, count out Touchdown Tom, man

Popete posted:

Kinetis Processor Expert/CodeWarrior/whatever the gently caress they call it now. It all sucks rear end; automated code generators are often more trouble than they are worth, and they never properly deprecate or support old code. You'll upgrade your IDE one day and your project will be horribly broken because it forces you to use their new auto-generated libraries, and you'll spend a week digging through the generated code/headers to find what is hosed now, and then you'll forever have to remember to go in and make that fix every time you click the "generate code" button.

This isn't just a Freescale problem; tons of IDEs with code generators or built-in libraries are terribly broken and never properly supported. But I ran into a bunch of annoying bugs with Kinetis Design Studio the one time I had to use it, and it made me hate Processor Expert with a fiery passion.

This is my exact experience after being there for only about a month. Here are some of my least favorite parts of it, which are mixed between its eclipseness and its use of processor expert, in no particular order:

KDS/ProcExp is incredibly unfriendly to collaboration:
- Because Eclipse stores paths inside its project files, and because some files are stored in the Kinetis local library inside the Eclipse install, making many small modifications causes Eclipse to change its project settings files. These have to be checked in to source control, because more important things like build settings and file locations are in them, so you're constantly, pointlessly churning these god damned project files.
- When you create new components or edit components in Processor Expert, it will autogenerate control files for you and physically put them in your source tree (these are files that are named after your component and contain boilerplate for enabling/disabling them; they're fine, whatever). But it will also add initialization and common files to your project as, essentially, symlinks to files that are stored in the Kinetis install, without copying them and with no indication of that fact. Their behavior is usually controlled by preprocessor macros generated elsewhere. And these files have some important stuff in them that it's frequently necessary to look at, because the docs are trash or nonexistent, and that you occasionally might want to modify: the vector table is defined here, the low-power modes and clock generation are defined here, and it's good to look at what your options are doing. If you want, you can even edit this code! And it's edited, forever, in your Eclipse install, applying to all projects that symlink in that file, and invisible to source control, so nobody else has your changes. Have fun debugging why other people can't build when you're totally sure you checked everything in!

KDS/ProcExp is incredibly unfriendly to build automation:
- Basically all of the above applies here, too: doing a build, especially if your build server runs Linux and your devs run Windows or vice versa, will often include rerunning Processor Expert's code generation, which changes a whole lot of files in the source tree; that is not something I'm comfortable with, and it's a pain to work around anyway.
- Eclipse does not have very good support in general for running headless builds, and Processor Expert has even less; it used to have more with CodeWarrior, but that's literally two rebranded Eclipses from NXP ago.

ProcessorExpert is mediocre at best at code generation:
- When you're making a configurator tool like this, you have two options for making it good: either you make drat sure it covers everything, or you document what it doesn't cover and expose well-defined, supported places and hooks for users to write their own configuration. ProcExp takes the third way of covering about 95-98% of things (IME, YMMV, SMDFTB) but making you think it has everything, leading to days of searching through slow, mouse-only, unsearchable dialogs to find something like how to enable the internal 48 MHz dedicated oscillator for the USB, before realizing it just can't do it and having to hack configuration in and around what it's generated.
- It has two versions of most components, which seem to work equally well but have differences in how they're called, like whether they take a pointer to some private data (which of course is always NULL) or not, and half of them say (Legacy) next to them. There are also some components that can only be configured as legacy components, like port settings for things like internal pull resistors.

Finally, it's very difficult to search for solutions to these problems, because when you search for Processor Expert the results you get are about 40% CodeWarrior, 40% people telling you this is deprecated and to use the SDK, and 20% actually useful.

We're going to try to switch to the new route, which is a static SDK and either (if we listen to them) something called MCUXpresso, which is yet another rebranded Eclipse, or (if we listen to me) biting the bullet and writing our build system in CMake so we are freed from this hell and able to do automated builds. This should be better, but I'm suspicious, because when you go to their website and try to get an SDK, you have to sign in and request an "SDK build" that involves specifying your host operating system. This had better just be for analytics or I'll literally explode.

tl;dr Tooling is the absolute worst part of embedded development.

carticket
Jun 28, 2005

white and gold.

Kinetis talk: on the upside, the standalone KSDK is pretty good without generated code.

iospace
Jan 19, 2038


Has anyone here had the joy of working with boards that use this crap?



Because holy poo poo CPCI sucks.

Popete
Oct 6, 2009

This will make sure you don't suggest to the KDz
That he should grow greens instead of crushing on MCs

Grimey Drawer

iospace posted:

Has anyone here had the joy of working with boards that use this crap?



Because holy poo poo CPCI sucks.

The last place I worked at developed COTS embedded form-factor units that had these and other insane connectors and were often plugged into a backplane like that. It was hell trying to pull a board out, and often you'd cut up your hands when it finally came out.

Also pray to God you don't bend or break a pin.

iospace
Jan 19, 2038


Popete posted:

The last place I worked at developed COTS embedded form-factor units that had these and other insane connectors and were often plugged into a backplane like that. It was hell trying to pull a board out, and often you'd cut up your hands when it finally came out.

Also pray to God you don't bend or break a pin.

Why do you think I hate cPCI so much? VME was OK; it was pins, but it felt like they were durable. cPCI, you could look at the pins funny and they'd bend.

Also, I'm pretty sure the one that drove you nuts was VPX, right?

Popete
Oct 6, 2009

This will make sure you don't suggest to the KDz
That he should grow greens instead of crushing on MCs

Grimey Drawer

iospace posted:

Why do you think I hate cPCI so much? VME was OK; it was pins, but it felt like they were durable. cPCI, you could look at the pins funny and they'd bend.

Also, I'm pretty sure the one that drove you nuts was VPX, right?

I had to remember which one VPX was, and it made my hands hurt thinking about pulling those things off their backplanes. Either your backplane had the two side posts to help guide the board into place without breaking the "fins", in which case the board was a bitch to pull back out, or you didn't have the side posts and the board was easy to pull out, but you ran the risk of bending and breaking pin fins or seating it at a bad angle.

iospace
Jan 19, 2038


Popete posted:

I had to remember which one VPX was, and it made my hands hurt thinking about pulling those things off their backplanes. Either your backplane had the two side posts to help guide the board into place without breaking the "fins", in which case the board was a bitch to pull back out, or you didn't have the side posts and the board was easy to pull out, but you ran the risk of bending and breaking pin fins or seating it at a bad angle.

Yup, VPX was great because holy hell was it rugged unlike the other major types, but a pain in the rear end because of it. I remember having to wiggle it out.

Tan Dumplord
Mar 9, 2005

by FactsAreUseless
I administer a Motorola telecom switch that uses CPCI. Goddamnit those boards can be sharp.

Phobeste
Apr 9, 2006

never, like, count out Touchdown Tom, man

Mr. Powers posted:

Kinetis talk: on the upside, the standalone KSDK is pretty good without generated code.

Yeah, and it almost adds to the frustration: we have a bootloader for the same product that uses the Kinetis SDK and it's a comparative joy to work with, so simple. We're gonna switch the main project to it sooner or later, because Processor Expert isn't really supported anymore, and we already have an NXP soft-device implementation that comes with its own CMSIS stack, and a USB stack that also comes with its own SDK and CMSIS implementation, and at least we'll be able to unify those.

The really frustrating thing is that there's just no value add! This is a CM4 part, the code is already built using gcc, the only thing that using processor expert and kinetis adds is an inability to do easily replicated builds that don't pollute the source tree. I'm mostly mad at my predecessors for using this garbage in the first place.

iospace
Jan 19, 2038


sliderule posted:

I administer a Motorola telecom switch that uses CPCI. Goddamnit those boards can be sharp.

At least the switch has guide rails. When you don't have them, you're more than likely to bend pins, which is :rip: and how.

movax
Aug 30, 2008

iospace posted:

Has anyone here had the joy of working with boards that use this crap?



Because holy poo poo CPCI sucks.

I used to design cPCI backplanes + SBCs; granted we tacked on connectors to add PCIe and other higher-speed I/O as well, but definitely remember cPCI J1 and J2 very well.

Software group was always good for returning hardware with mangled / ripped out connectors on both the SBC and backplane side.

Volguus
Mar 3, 2009
Earlier in the thread I posted about my woes with a Bluetooth Pmod that uses UART, for which I have to write the driver. I finally got it working from the application, though protobuf is still broken. After taking my mind off it for a few days, I came back to investigate more. My lack of knowledge in the embedded field is starting to show its teeth here.

The basic read function in my driver is implemented as follows:
code:
	uint32_t rd_avail;

	/* Ask the UART how many bytes are waiting on the RX side. */
	rn42_uart->uart_control(UART_CMD_GET_RXAVAIL, (void *)(&rd_avail));
	/* Never ask uart_read() for more than is actually available. */
	cnt = (cnt > rd_avail) ? rd_avail : cnt;
	if (cnt > 0) {
		return rn42_uart->uart_read(buf, cnt);
	}
	return cnt;
That is, ask the RX line how many bytes are available to read; if the requested amount (cnt) is bigger than what's available, just read what's available. It's pretty simple, and I hope it doesn't have any bugs in it that escape me (since I've been looking at it for a week now).
Now, the strange thing that's happening is that no matter how many bytes I ask that function to read, it never reads more than 2. That is, the GET_RXAVAIL returns at most 2, even though I know for a fact that I have more in there.
However, if I ask it to read only one byte, it returns to me that it read one byte. But (and here's where it messes with my head) it appears that when I'm pushing 1K per second and reading one byte at a time, bytes get lost (seemingly at random). Like, when asked to read one byte, and there are a bunch of them incoming, it reads 2, reports that it read only one, and the other one just vanishes. If I read an even number of bytes at a time, I can push even 3K per second and everything is fine.

Is that ... normal? Has anyone ever seen something like this before? This particular behaviour means I can't run protobuf on the thing, since that protocol reads only one byte relatively often. Would the solution then be for me to implement some form of "caching" or a buffer or something: read 2 bytes and, if the caller only asked for one, remember the other byte for later, for the next call?

The other strange thing that I noticed is that every now and then it reports that there aren't any bytes to be read (GET_RXAVAIL returns 0). At the moment, I have made an algorithm that only gives up on reading after 500ms of the line returning zero; the timer resets the moment I get something. Again, same question: is that normal? Is it the UART? The bluetooth chip? Bluetooth itself? It is an over-the-air transmission, so I guess that can happen?

If you guys have seen this before and consider it normal, then my read functionality will have to be quite a monstrous function: buffers, timers, and all kinds of checks. Or am I just dumb and not seeing something obvious somewhere? Is reading bytes from a line really that complicated?

Volguus fucked around with this message at 12:09 on Jun 16, 2017

Phobeste
Apr 9, 2006

never, like, count out Touchdown Tom, man
That part does look fine, so I'd go deeper into how that uart_control function pointer is defined and all the rest of the behind-the-scenes stuff. At some point, as a basic sanity check, you could also hook a logic analyzer (you can get a $100 one from https://www.saleae.com/ that rocks) up to those UART lines and see what's happening at the physical level.

JawnV6
Jul 4, 2004

So hot ...

Volguus posted:

Is that ... normal? Has anyone ever seen something like this before? This particular behaviour means I can't run protobuf on the thing, since that protocol reads only one byte relatively often. Would the solution then be for me to implement some form of "caching" or a buffer or something: read 2 bytes and, if the caller only asked for one, remember the other byte for later, for the next call?
"Normal" isn't the word I'd use. Every part tries to support a breadth of available communication, everything's slightly different, and you end up burning a lot of time on weird subtle issues like this. Even assuming each chunk is 8 bits is a contrivance.

I would guess there's an internal FIFO that you're reading from. When it says it has 2b, you read 1b, and the other disappears, I'd check that the assembly access to that register is actually doing a single-byte access. One explanation could be that you're doing a double-word read, the HW thinks you've pulled both bytes, and the compiler just masks off the one you explicitly asked for without realizing that second byte isn't in the HW any more. With embedded work there's a lot of HW masquerading as memory-mapped addresses, a lot of careful attention must be paid to that interface. Sometimes it helps to step back from the immediate problem and run experiments. Just like you checked getting 0x01020304 across, try larger buffers with known quantities that aren't protobufs.

One thing, when you say protobufs won't work because of single-byte accesses, are you significantly memory constrained? If not I'd definitely just have the driver dump the full byte stream somewhere in memory then have the decoder unpack it after all the bytes are through the channel. Don't try to point the decoder at the raw UART MMIO unless it's absolutely necessary.
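
nanopb can decode straight out of a RAM buffer with pb_istream_from_buffer(), so the dump-then-decode split is cheap. A minimal sketch; the SensorReading message type and its generated header are made up for the example:
code:
	/* Decode a protobuf message from a RAM buffer that the UART driver filled
	 * earlier, instead of pointing the decoder at the UART itself. */
	#include <stdbool.h>
	#include <stddef.h>
	#include <stdint.h>
	#include <pb_decode.h>
	#include "sensor.pb.h"   /* hypothetical nanopb-generated header */

	bool decode_from_ram(const uint8_t *buf, size_t len)
	{
		SensorReading msg    = SensorReading_init_zero;
		pb_istream_t  stream = pb_istream_from_buffer(buf, len);

		return pb_decode(&stream, SensorReading_fields, &msg);
	}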

I'm not sure you have physical wires to use a LA, but that's another tool that should be in everyone's kit.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

It's definitely 'normal' to run into problems like this :)

Volguus
Mar 3, 2009
The entire UART control/reading code is available at https://github.com/foss-for-synopsys-dwc-arc-processors/embarc_osp/blob/master/device/designware/uart/dw_uart.c since they uploaded it there.

Phobeste posted:

That part does look fine, so I'd go deeper into how that uart_control function pointer is defined and all the rest of the behind-the-scenes stuff. At some point, as a basic sanity check, you could also hook a logic analyzer (you can get a $100 one from https://www.saleae.com/ that rocks) up to those UART lines and see what's happening at the physical level.
That uart_control function is just a switch on the available control commands. My value ends up calling this function:
code:
	int32_t rx_avail = 0;
	DW_UART_REG *uart_reg_ptr = (DW_UART_REG *)(uart_ctrl_ptr->dw_uart_regbase);

	if (uart_ctrl_ptr->rx_fifo_len <= 1) {
		/* No RX FIFO: at most one byte can be waiting. */
		if (dw_uart_getready(uart_reg_ptr) == 1) {
			rx_avail = 1;
		} else {
			rx_avail = 0;
		}
	} else {
		/* RX FIFO present: RFL reports how many bytes it holds. */
		rx_avail = uart_reg_ptr->RFL;
	}
	return rx_avail;
Where that uart_reg_ptr is just a big structure that does the actual hardware access. So, I guess that the RFL member maybe just has the value 2 in there. Maybe that's the architecture.

JawnV6 posted:

I would guess there's an internal FIFO that you're reading from. When it says it has 2b, you read 1b, and the other disappears, I'd check that the assembly access to that register is actually doing a single-byte access. One explanation could be that you're doing a double-word read, the HW thinks you've pulled both bytes, and the compiler just masks off the one you explicitly asked for without realizing that second byte isn't in the HW any more. With embedded work there's a lot of HW masquerading as memory-mapped addresses, a lot of careful attention must be paid to that interface. Sometimes it helps to step back from the immediate problem and run experiments. Just like you checked getting 0x01020304 across, try larger buffers with known quantities that aren't protobufs.
That's exactly what I'm doing now. I have implemented my own protocol after all (bypassing protobufs) where I send/receive that 1-2K of data. When I try to read the entire chunk (with the aforementioned loops and timers and so on), everything is fine: I read what I sent on the line, nothing more, nothing less. Reading the same thing one byte at a time is where I saw the problems, which is why I concluded that this is probably what makes protobufs not work.

JawnV6 posted:

One thing, when you say protobufs won't work because of single-byte accesses, are you significantly memory constrained? If not I'd definitely just have the driver dump the full byte stream somewhere in memory then have the decoder unpack it after all the bytes are through the channel. Don't try to point the decoder at the raw UART MMIO unless it's absolutely necessary.

I'm not sure you have physical wires to use a LA, but that's another tool that should be in everyone's kit.
I am definitely not memory constrained. They say I have 128MB at my disposal, so ... yeah, I can go to town. This is probably the best thing I can do: read as much as possible, hold it in a buffer in memory, and serve the caller from there.

Diving into the guts of the thing, in the same C file as above, the actual read function (dw_uart_read) is implemented like this:
code:
	while (i < len) {
		p_charbuf[i++] = dw_uart_prcv_chr(uart_reg_ptr);
	}
So, deep down, it itself reads one byte at a time. But dw_uart_prcv_chr is even better: it calls dw_uart_getchar, which is just:
code:
Inline int32_t dw_uart_getchar(DW_UART_REG *uart_reg_ptr)
{
	return (int32_t)uart_reg_ptr->DATA;
}
where DATA is a uint32_t. So it masquerades reading a char by converting a uint to an int, then to a char. I am completely confused.
A logic analyzer is probably the best thing I can get, though we'd have to buy one since we don't have any. I guess I will have to learn how to use the thing; it looks like it is a bit better than an oscilloscope (which I haven't used since university, more than 20 years ago).

Thanks for the help though, this conversation gives me ideas.

carticket
Jun 28, 2005

white and gold.

Some UARTs keep flags with characters in the fifo. The lower 8 bits will be data, the upper 24 could contain status flags like parity error, framing error, breaks, overruns, etc.

It sounds to me like you have a fifo depth of 2 and you're overflowing.

RFL is almost certainly the receiver fifo level.
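
If the part in question really does pack status bits alongside the data like that (the exact layout here is hypothetical; the datasheet is authoritative), splitting one FIFO entry would look like this, reusing the uart_reg_ptr from the snippets above:
code:
	/* Fragment: assumes data in bits 7:0 and error/status flags above that. */
	uint32_t entry = uart_reg_ptr->DATA;          /* pops one FIFO entry      */
	uint8_t  ch    = (uint8_t)(entry & 0xFFu);    /* the received character   */
	uint32_t flags = entry >> 8;                  /* parity/framing/overrun   */

	if (flags != 0) {
		/* count or log the error instead of silently dropping data */
	}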

Volguus
Mar 3, 2009

Mr. Powers posted:

Some UARTs keep flags with characters in the fifo. The lower 8 bits will be data, the upper 24 could contain status flags like parity error, framing error, breaks, overruns, etc.

It sounds to me like you have a fifo depth of 2 and you're overflowing.

RFL is almost certainly the receiver fifo level.

RFL is indeed /*!< Receive FIFO level */.
So, you are saying that when I read only 1 byte at a time, because I'm sending so much from the other end, some bytes will simply get dropped? That actually makes a lot of sense; I never thought about it that way. So then, what JawnV6 suggested, an internally kept buffer, would probably be the best way to go about it: read a set amount from the pipe (1K? 4K?) and give it to the caller (protobuf) from there instead of from the actual line.

carticket
Jun 28, 2005

white and gold.

Volguus posted:

RFL is indeed /*!< Receive FIFO level */.
So, you are saying that when I read only 1 byte at a time, because I'm sending so much from the other end, some bytes will simply get dropped? That actually makes a lot of sense; I never thought about it that way. So then, what JawnV6 suggested, an internally kept buffer, would probably be the best way to go about it: read a set amount from the pipe (1K? 4K?) and give it to the caller (protobuf) from there instead of from the actual line.

Yep. I always implement it in the receive interrupt (if there's a fifo trigger interrupt, that's even better). I empty the fifo into a circular buffer and use a semaphore to signal anyone waiting.
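
A minimal sketch of that pattern, with uart_rx_fifo_level()/uart_read_byte() standing in for the real driver hooks: the RX interrupt drains the hardware FIFO into a software ring buffer, and the protobuf side only ever reads from the ring. Keeping head writable only by the ISR and tail writable only by the reader is what keeps the single-producer/single-consumer ring safe without extra locking.
code:
	#include <stdint.h>

	#define RING_SIZE 1024u                      /* power of two: index math is a mask */

	static volatile uint8_t  ring[RING_SIZE];
	static volatile uint32_t head;               /* written only by the ISR    */
	static volatile uint32_t tail;               /* written only by the reader */

	extern uint32_t uart_rx_fifo_level(void);    /* hypothetical HAL calls */
	extern uint8_t  uart_read_byte(void);

	/* Hook this up as the UART receive interrupt / RX callback. */
	void uart_rx_isr(void)
	{
		while (uart_rx_fifo_level() > 0) {
			uint8_t  b    = uart_read_byte();
			uint32_t next = (head + 1u) & (RING_SIZE - 1u);

			if (next != tail) {              /* if the ring is full, drop the byte */
				ring[head] = b;
				head = next;
			}
		}
	}

	/* Called by the application (or a protobuf input-stream callback). */
	uint32_t ring_read(uint8_t *buf, uint32_t cnt)
	{
		uint32_t n = 0;

		while (n < cnt && tail != head) {
			buf[n++] = ring[tail];
			tail = (tail + 1u) & (RING_SIZE - 1u);
		}
		return n;
	}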

Volguus
Mar 3, 2009

Mr. Powers posted:

Yep. I always implement it in the receive interrupt (if there's a fifo trigger interrupt, that's even better). I empty the fifo into a circular buffer and use a semaphore to signal anyone waiting.

Hmm, there is a receive callback capability (UART_CMD_SET_RXCB). Haven't used it yet, but this may be the best place to take advantage of it. Thanks a bunch.

Volguus
Mar 3, 2009
Thank you very much for all your help, everyone. Indeed, a ring buffer that constantly feeds itself from the RX line, with everyone just reading from whatever is in that buffer, makes protobuf work like a peach. No more bytes lost into the ether, as long as the buffer is big enough for my application.

Phobeste
Apr 9, 2006

never, like, count out Touchdown Tom, man
I hate watchdog failures, especially in RTOS systems. I hate dealing with them. God dammit. Some days this job makes me want to throw things.

Spatial
Nov 15, 2007

No problem, just do what we do in our products. The very first line of code disables the watchdog! :downs:

Le0
Mar 18, 2009

Rotten investigator!
I've been working in embedded software for nearly 10 years at the same company, but the big problem is that we work on a single type of CPU (SPARC for space, basically). I'd like to learn some of the stuff the cool kids use nowadays, also because I'd like to change companies in the not-too-distant future.
I started a course where we use FreeRTOS on an Arduino, which is a good start, but one of the problems I have is that I have a hard time finding ideas for projects to build so I can learn stuff.
What do you guys usually build for learning purposes on a new architecture?

Tan Dumplord
Mar 9, 2005

by FactsAreUseless
Robotics ticks a lot of boxes for me. Embedded is all about the hardware that's connected to it.

Le0
Mar 18, 2009

Rotten investigator!

sliderule posted:

Robotics ticks a lot of boxes for me. Embedded is all about the hardware that's connected to it.

I've been thinking about this for a little while because, well, robots are fun. However, I've never done anything like that; would you have any resources, books, or websites to recommend?

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

Le0 posted:

What do you guys usually build for learning purposes on a new architecture?
Just write an emulator.
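
For a flavor of what that involves, here is a toy fetch/decode/execute loop over a made-up four-instruction machine (everything in it is invented for the example):
code:
	#include <stdint.h>
	#include <stdio.h>

	enum { OP_HALT, OP_LOADI, OP_ADD, OP_PRINT };

	int main(void)
	{
		/* program: r0 = 2; r1 = 3; r0 += r1; print r0; halt */
		const uint8_t prog[] = { OP_LOADI, 0, 2,  OP_LOADI, 1, 3,
		                         OP_ADD, 0, 1,  OP_PRINT, 0,  OP_HALT };
		uint8_t reg[4] = { 0 };

		for (uint32_t pc = 0; prog[pc] != OP_HALT; ) {
			switch (prog[pc]) {
			case OP_LOADI: reg[prog[pc + 1]]  = prog[pc + 2];      pc += 3; break;
			case OP_ADD:   reg[prog[pc + 1]] += reg[prog[pc + 2]]; pc += 3; break;
			case OP_PRINT: printf("%d\n", reg[prog[pc + 1]]);      pc += 2; break;
			}
		}
		return 0;
	}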


JawnV6
Jul 4, 2004

So hot ...

peepsalot posted:

Just write an emulator.

Boring. The correct answer is "sumo robots": https://www.youtube.com/watch?v=QCqxOzKNFks
