  • Locked thread
Volguus
Mar 3, 2009
I am working with a Synopsys ARC board which will have to communicate with an Android application. Since I have a lot of memory on the board, and I have FreeRTOS, a network stack and all, I wrote essentially a TCP server that can communicate with the Android application, following my own protocol. And it ... works, as a basic thing. However, I would like (if possible) to avoid manually serializing my data structures as I'm doing now (sending doubles over the network between different languages/architectures is a peach). Is there any serialization library out there that I could use in my embedded system that would be able to read/write to a Java program? I looked at msgpack, but there are only 3rd-party and (they say) not very reliable embedded implementations.

What are people using for this?


Volguus
Mar 3, 2009

muon posted:

Protobufs!

For embedded?

Volguus
Mar 3, 2009

feedmegin posted:

Good news, 64-bit IEEE floating point is standard absolutely everywhere these days unless you're, I dunno, talking to a frigging VAX or something; the only thing you've got to worry about is endianness.

Yes, endianness was the only thing I worried about so far. Good to know that I shouldn't care about anything else though :).
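If you do end up hand-packing doubles anyway, the usual trick is to ship the IEEE-754 bits in one fixed byte order so host endianness stops mattering. A minimal sketch, assuming both sides use IEEE-754 binary64 (true on the ARC board and on Java's `Double.longBitsToDouble`); the helper names are mine, not from any library:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helpers: serialize a double as 8 bytes in a fixed
 * little-endian order, regardless of host endianness. */
static void pack_double_le(double value, uint8_t out[8])
{
    uint64_t bits;
    memcpy(&bits, &value, sizeof bits);      /* reinterpret, no UB */
    for (int i = 0; i < 8; i++)
        out[i] = (uint8_t)(bits >> (8 * i)); /* byte 0 = least significant */
}

static double unpack_double_le(const uint8_t in[8])
{
    uint64_t bits = 0;
    for (int i = 0; i < 8; i++)
        bits |= (uint64_t)in[i] << (8 * i);
    double value;
    memcpy(&value, &bits, sizeof value);
    return value;
}
```

The `memcpy` reinterpretation avoids the strict-aliasing trap of casting `double*` to `uint64_t*`, and compilers optimize it away.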

Volguus
Mar 3, 2009

JawnV6 posted:

Yeah? Sure, spend some time hand-rolling a serialization protocol in two separate languages in 2017 if you want to, but there are plenty of bindings for a variety of formats for different use cases and languages available.

nanopb does protobufs in C; capnproto is from the same guy who did protobufs and uses a compact in-memory format suitable for slamming out over a wire; and CoAP is a good fit if other parts of the system are REST.

But sure, embedded serialization is a special snowflake that nobody's tended to.

The entire reason I asked the question is that I imagined my problem was already solved. I thought (wrongly) that maybe protobufs were not quite designed for lovely CPUs. Is nanopb the recommended protobuf implementation for embedded? What are you using?

Volguus
Mar 3, 2009

JawnV6 posted:

I used nanopb at my last company, the server side had a scala library that inflated it to something native. We weren't really compute performance constrained (100+MHz Cortex-M4) so I can't speak to that. And if you genuinely have ~12 bytes you're trying to get across a channel that you're writing the other side of and have a PoC up and running, it might not be worth it. But it reduced the data by a surprising amount. The guts are worth a brief scan, i.e. it condenses uint32_t's into 8 bits if the value is small enough. That does make the size dependent on the content, which may or may not pose a problem.

It takes the .proto and spits out generated code. I had to fill in a few callbacks to populate the data. After that it was a binary blob, I'd fill in a flash page with as many as would fit and kick them up to the server. No other framing, stack blobs and shoot them out.

Thanks for the nanopb suggestion. Got it up and running on both my dev Linux machine and on the board (I am really lucky, they implement almost everything in the POSIX specification; only a couple of #ifdefs were needed). While the data I'm sending right now is quite trivial (X*12 bytes, where X is variable), I do expect it to become more complex in the future. And not having to worry about endianness, the protocol itself and everything related to it is surely a bonus.
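The size-dependent-on-content behaviour JawnV6 describes is protobuf's base-128 varint encoding: 7 payload bits per byte with a continuation bit in the MSB, so values under 128 fit in a single byte. A sketch of the encoder side (illustrative, not nanopb's actual code):

```c
#include <stdint.h>
#include <stddef.h>

/* Base-128 varint encoding, the mechanism protobuf/nanopb use to shrink
 * small uint32 values: 7 payload bits per byte, MSB set on every byte
 * except the last. Writes at most 5 bytes for a uint32_t. */
static size_t varint_encode(uint32_t value, uint8_t *out)
{
    size_t n = 0;
    do {
        uint8_t byte = value & 0x7F;  /* low 7 bits */
        value >>= 7;
        if (value)
            byte |= 0x80;             /* continuation bit: more to come */
        out[n++] = byte;
    } while (value);
    return n;
}
```

For example, 300 encodes as the two bytes 0xAC 0x02, which matches the worked example in Google's protobuf encoding docs.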

Volguus
Mar 3, 2009
After a couple of days of figuring out the correct incantation to get the RN42 bluetooth module working, I finally saw the LEDs blinking, the commands being written and the responses being read. It's loving black magic and the documentation writers are the laziest people on the planet. If you change UART pin 0 to be pin 4, please loving update the drat poo poo so that I don't lose my mind. Or if the default state of the device is X instead of Y, that's fine, but would letting me know about it kill ya?

Sigh. I'm on my way to finally having a bluetooth driver working. I need a beer :cheers: .

Volguus
Mar 3, 2009
Earlier in the thread I posted about how I was implementing protobuf on a Synopsys-chipset board over wifi. Nanopb got recommended, I was using lwIP, everything worked and life was good. Now I got the task to implement the same protobuf protocol over bluetooth, using the same board and a Roving Networks bluetooth module. Got the basics to work: I can talk with the chipset in command mode, the chipset talks back and all was fine.
But when I'm not in command mode, that is, when I'm ready to send and receive protobuf data, life is suddenly not so rosy anymore.
I am using Android to talk to the board over bluetooth (the same application I was using before over wifi, but now using the bluetooth socket). I can pair with the device and I can connect to it (for some reason a few tries are needed, but whatever, it works).
The problem I'm having is that the bytes I'm getting over the wire from Android don't make any sense and protobuf throws decoding errors. Sample errors that I see in minicom on the board are: "Decode failed: invalid wire_type", "Decode failed: io_error". Looking at the nanopb source, it simply gets the wrong bits and therefore has no idea what to do with them.

What could possibly be causing this? Does the bluetooth serial transmission (RFCOMM, it says on the Android documentation side) add additional bits to the data that I'm sending? Is it possible that when I call uart_read(buf, count) the device reads more bytes than I told it to? Or fewer bytes, without bothering to tell me so?

Somewhere, somehow the drat bits get messed up and I have no idea where. Does anyone have any experience with this?

Thanks.

Volguus
Mar 3, 2009

taqueso posted:

What happens when you send non-protobuf data through the same pipe? Try sending more bytes than a packet, a small number of bytes, just a single packet, lots of packets really fast, etc. and make sure the input and output match. According to the super quick reading on RFCOMM I did, it is supposed to be reliable and emulate an RS232 port, so I wouldn't expect any extra data. Are you possibly running into a threading/synchronization issue? For example, reading a buffer while it is being written to.

Well, I shouldn't have any threading/synchronization issues, but I'd better double-check. I will, however, try your idea first: send known simple data over the air and see what I get on the other side.
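A minimal sketch of that kind of known-data check, assuming you can capture what arrives on the other side into a buffer; the helper names are hypothetical, and reporting the *index* of the first bad byte tells you where the stream went wrong, not just that it did:

```c
#include <stdint.h>
#include <stddef.h>

/* Fill a buffer with a deterministic ramp pattern: 0,1,2,...,255,0,1,... */
static void fill_ramp(uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] = (uint8_t)(i & 0xFF);
}

/* Compare sent vs. received; return index of first mismatch, -1 if clean. */
static long first_mismatch(const uint8_t *sent, const uint8_t *got, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (sent[i] != got[i])
            return (long)i;
    return -1;
}
```

Running this at different chunk sizes and rates (one byte, one packet, bursts) is exactly the experiment taqueso describes.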

Volguus
Mar 3, 2009
Holy mother ....
So, I started with basic things: send 4 bytes to the board and see what I get. So I sent 1, 2, 3 and 4.
On the board, all I got was 1 byte (nothing else available on the RX line), with values ranging from 241 to 243, seemingly random.
From the board, I then sent 4 bytes back to Android (1, 2, 3 and 4 again). On Android I did get 4 bytes back, but they were: -128, 2, 3, 4 (-128 is Java-speak for the unsigned byte 128).
That's a head scratching WTF right there.
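For what it's worth, the -128 on the Android side is pure sign interpretation: Java's byte is signed two's-complement, so the wire byte 0x80 (unsigned 128) prints as -128. The same reinterpretation, sketched in C:

```c
#include <stdint.h>

/* Java's byte is signed, so wire byte 0x80 prints as -128 even though
 * the UART sent unsigned 128. The equivalent reinterpretation in C: */
static int signed_view(uint8_t wire_byte)
{
    return (int8_t)wire_byte;   /* two's-complement reinterpretation */
}

/* Recovering the unsigned value from a Java-style signed byte
 * (the Java-side fix is the idiom `b & 0xFF`). */
static unsigned unsigned_view(int java_byte)
{
    return (uint8_t)java_byte;
}
```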
Then I remembered that the chipset says that by default it runs at a 115,200 baud rate. But, and I quote:

quote:

The set UART baud rate command sets the baud rate where <value> is 1200, 2400, 4800, 9600, 19.2, 28.8, 38.4, 57.6, 115K, 230K, 460K, or 921K. You only need to specify the first 2 characters of the desired baud rate.
So I set it (a few hours ago) to 921K, since I thought I was sending quite a bit of data over the air. Why not go with the highest speed this thing supports? So, among the other things I tried while debugging, I factory-reset it. This set it back to 115K. I re-tried the application and now I nicely get 4 bytes on the board side, the correct 4 bytes (1,2,3,4), and I get 4 bytes on the Android side, the correct 4 bytes (1,2,3,4).

WTF, chipset? Is it then safe to assume that this thing probably doesn't support 921K? Maybe it doesn't support anything above 115K and the manual is just full of poo poo (like usual)?

Volguus
Mar 3, 2009
Earlier in the thread I posted about my woes with a Bluetooth Pmod that uses UART, for which I have to write the driver. I finally got it working from the application, though protobuf is still broken. After I took my mind off of it for a few days, I came back to investigate more. My lack of knowledge in the embedded field is starting to show its teeth here.

The basic read function in my driver is implemented as follows:
code:
	uint32_t rd_avail;
	rn42_uart->uart_control(UART_CMD_GET_RXAVAIL, (void *)(&rd_avail));
	cnt = (cnt > rd_avail) ? rd_avail : cnt;	/* clamp to what's available */
	if (cnt > 0) {
		return rn42_uart->uart_read(buf, cnt);
	}
	return cnt;
That is, ask the RX line how many bytes are available to read. If the requested amount (cnt) is bigger than what's available, then just read what's available. It's pretty simple and I hope it doesn't have any bugs that escape me (since I've been looking at it for a week now).
Now, the strange thing that's happening is that no matter how many bytes I ask that function to read, it never reads more than 2. That is, GET_RXAVAIL returns at most 2, even though I know for a fact that I have more in there.
However, if I ask it to read only one byte, it returns to me that it read one byte. But (and here's where it messes with my head) it appears that when pushing 1K per second and reading one byte at a time, bytes get lost (seemingly at random). Like, when asked to read one byte while a bunch of them are incoming, it reads 2, reports that it read only one, and the other one just vanishes. If I read an even number of bytes at a time, I can push even 3K per second and everything is fine.

Is that ... normal? Has anyone ever seen something like this before? This particular behaviour means I can't run protobuf on the thing, since that protocol reads only one byte relatively often. Would the solution then be for me to implement some form of "caching" or a buffer or something: read 2 bytes and, if the caller only asked for one, remember the other byte for the next call?

The other strange thing I noticed is that every now and then it reports that there aren't any bytes to be read (GET_RXAVAIL returns 0). At the moment, I have an algorithm that only gives up on reading after 500ms of the line returning zero; the timer resets the moment I get something. Again, same question: is that normal? Is it the UART? The bluetooth chip? Bluetooth itself? It is an over-the-air transmission, so I guess that can happen?

If you guys have seen this before and consider it normal, then my read functionality will have to be quite a monstrous function: buffers, timers and all kinds of checks. Or am I just dumb and not seeing something obvious somewhere? Is reading bytes from a line really that complicated?

Volguus fucked around with this message at 12:09 on Jun 16, 2017

Volguus
Mar 3, 2009
The entire UART control/reading code is available at https://github.com/foss-for-synopsys-dwc-arc-processors/embarc_osp/blob/master/device/designware/uart/dw_uart.c since they uploaded it there.

Phobeste posted:

That part does look fine so I'd go deeper into how that uart_control function pointer is defined and all the rest of the behind the scenes stuff. At some point as a basic sanity check you could also hook a logic analyzer (you can get a $100 one from https://www.saleae.com/ that rocks) up to those uart lines and see what's happening on the physical level.

That uart_control function is just a switch on the available control commands. My value ends up calling this function:
code:
	int32_t rx_avail = 0;
	DW_UART_REG *uart_reg_ptr = (DW_UART_REG *)(uart_ctrl_ptr->dw_uart_regbase);

	if (uart_ctrl_ptr->rx_fifo_len <= 1) {
		if (dw_uart_getready(uart_reg_ptr) == 1) {
			rx_avail = 1;
		} else {
			rx_avail = 0;
		}
	} else {
		rx_avail = uart_reg_ptr->RFL;
	}
	return rx_avail;
Where that uart_reg_ptr is just a big structure that does the actual hardware access. So, I guess that the RFL member maybe just has the value 2 in there. Maybe that's the architecture.

JawnV6 posted:

I would guess there's an internal FIFO that you're reading from. When it says it has 2 bytes, you read 1, and the other disappears, I'd check that the assembly access to that register is actually doing a single-byte access. One explanation could be that you're doing a double-word read, the HW thinks you've pulled both bytes, and the compiler just masks off the one you explicitly asked for without realizing that the second byte isn't in the HW any more. With embedded work there's a lot of HW masquerading as memory-mapped addresses, and a lot of careful attention must be paid to that interface. Sometimes it helps to step back from the immediate problem and run experiments. Just like you checked getting 0x01020304 across, try larger buffers with known quantities that aren't protobufs.

That's exactly what I'm doing now. I have implemented my own protocol after all (bypassing protobufs) where I send/receive that 1-2K of data. When I try to read the entire chunk (with the aforementioned loops and timers and so on) everything is fine: I read what I sent on the line, nothing more, nothing less. Reading the same thing one byte at a time is where I saw the problems. Which is why I concluded that this is probably what makes protobufs not work.

JawnV6 posted:

One thing, when you say protobufs won't work because of single-byte accesses, are you significantly memory constrained? If not I'd definitely just have the driver dump the full byte stream somewhere in memory then have the decoder unpack it after all the bytes are through the channel. Don't try to point the decoder at the raw UART MMIO unless it's absolutely necessary.

I'm not sure you have physical wires to use a LA, but that's another tool that should be in everyone's kit.

I am definitely not memory constrained. They say I have 128MB at my disposal, so ... yeah, I can go to town. This is probably the best thing I can do: read as much as possible into a buffer in memory, then serve the caller from there.

Diving into the guts of the thing, in the same C file as above, the actual read function (dw_uart_read) is implemented like this:
code:
while (i < len) {
		p_charbuf[i++] = dw_uart_prcv_chr(uart_reg_ptr);
}
So, deep down, it itself reads one byte at a time. But dw_uart_prcv_chr is even better: it calls dw_uart_getchar, which is just:
code:
Inline int32_t dw_uart_getchar(DW_UART_REG *uart_reg_ptr)
{
	return (int32_t)uart_reg_ptr->DATA;
}
where DATA is a uint32_t. So it fakes reading a char by converting a uint to an int, then to a char. I am completely confused.
A logic analyzer is probably the best thing I can get, though we'd have to buy one since we don't have any. I guess I will have to learn how to use the thing; it looks like it's a bit better than an oscilloscope (which I haven't used since university, more than 20 years ago).

Thanks for the help though, this conversation gives me ideas.

Volguus
Mar 3, 2009

Mr. Powers posted:

Some UARTs keep flags with characters in the fifo. The lower 8 bits will be data, the upper 24 could contain status flags like parity error, framing error, breaks, overruns, etc.

It sounds to me like you have a fifo depth of 2 and you're overflowing.

RFL is almost certainly the receiver fifo level.

RFL is indeed /*!< Receive FIFO level */.
So, you are saying that when I read only 1 byte at a time, because I'm sending so many from the other end, some bytes will simply get dropped? That actually makes a lot of sense; I never thought about it that way. So then, what JawnV6 suggested, an internally kept buffer, would probably be the best way to go about it. Read a set amount from the pipe (1K? 4K?) and give the caller (protobuf) data from there instead of from the actual line.

Volguus
Mar 3, 2009

Mr. Powers posted:

Yep. I always implement it in the receive interrupt (if there's a fifo trigger interrupt, that's even better). I empty the fifo into a circular buffer and use a semaphore to signal anyone waiting.

Hmm, there is a receive callback capability (UART_CMD_SET_RXCB). Haven't used it yet, but this may be the best place to take advantage of it. Thanks a bunch.

Volguus
Mar 3, 2009
Thank you very much for all your help, everyone. Indeed, a ring buffer that constantly feeds itself from the RX line, with everyone just reading from whatever is in that buffer, makes protobuf work like a peach. No more bytes lost into the ether, as long as the buffer is big enough for my application.
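For anyone landing here later, a minimal single-producer/single-consumer ring buffer along these lines might look like the following; the names and size are illustrative, not from the embARC API, and it assumes one writer (the RX callback) and one reader:

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal SPSC ring buffer: the RX callback pushes every byte the UART
 * FIFO hands it; the protobuf stream callback pops as few or as many
 * bytes as it likes. Size must be a power of two so the index masks work. */
#define RB_SIZE 1024u

typedef struct {
    uint8_t  data[RB_SIZE];
    volatile uint32_t head;   /* written only by the producer (ISR) */
    volatile uint32_t tail;   /* written only by the consumer */
} ringbuf_t;

static int rb_push(ringbuf_t *rb, uint8_t b)
{
    if (rb->head - rb->tail == RB_SIZE)
        return -1;                        /* full: caller decides what to drop */
    rb->data[rb->head & (RB_SIZE - 1)] = b;
    rb->head++;
    return 0;
}

static size_t rb_pop(ringbuf_t *rb, uint8_t *out, size_t want)
{
    size_t got = 0;
    while (got < want && rb->tail != rb->head) {
        out[got++] = rb->data[rb->tail & (RB_SIZE - 1)];
        rb->tail++;
    }
    return got;                           /* may be fewer than asked for */
}
```

The indices run freely and are only masked on access, so `head - tail` is always the fill level even after the counters wrap.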

Volguus
Mar 3, 2009

ratbert90 posted:

It's just eclipse. But at this point, if you aren't using pure eclipse-cdt you are doing it wrong.
What's wrong with eclipse-cdt? I converted to CLion a few months ago, but it doesn't support GNU Make projects, so when I do kernel/u-boot dev, I go back to eclipse. It works fine and I haven't had any major issues with it since ... Mars, I think. Oxygen is pgood. :shrug:

My only problem with the vendor IDE (eclipse + some plugin) was that it was old as hell and I would have trouble running it on my latest Fedora. On CentOS, though, it was working fine. And, depending on what you do, their plugins can be quite helpful, if not outright necessary. CLion ... that thing had trouble parsing C++ code last time I checked it. Throw a template or two at it and it goes belly up (to be fair, it's been a while since I checked).


Volguus
Mar 3, 2009

Fanged Lawn Wormy posted:

Ah yeah, forgot to do the membering there.

And I wouldn't have to do a return, correct? Because this is manipulating the original data, rather than passing a copy.

Yes, you are manipulating the original data. No need for a return (though you could return an error code to tell the caller whether you succeeded). If you had your method signature like this:
code:
void Read_Player(struct Player ThePlayer) {
}
Then a copy would be made for the method call and you would need to return the modified struct (which would mean yet another copy made for the return). I am not sure if return-value optimization would kick in and optimize that away in release mode; that's compiler dependent.
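A contrasting sketch of the two signatures, using a hypothetical Player struct: the by-value version mutates its own copy and the caller never sees the change, while the pointer version mutates the original and needs no return.

```c
/* Hypothetical struct for illustration. */
struct Player {
    int  score;
    char name[16];
};

static void read_player_copy(struct Player p)   /* copy: change is lost */
{
    p.score = 100;
}

static void read_player_ptr(struct Player *p)   /* pointer: change sticks */
{
    p->score = 100;
}
```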
