|
I am working with a Synopsys ARC board which has to communicate with an Android application. Since I have a lot of memory on the board, and I have FreeRTOS, a network stack and all that, I wrote essentially a TCP server that can talk to the Android application, following my own protocol. And it ... works, as a basic thing. However, I would like (if possible) to avoid manually serializing my data structures as I'm doing now (sending doubles over the network between different languages/architectures is a peach). Is there any serialization library out there that I could use in my embedded system and that would be able to read/write to a Java program? I looked at msgpack, but there are only 3rd-party and (they say) not very reliable embedded implementations. What are people using for this?
|
# ¿ May 25, 2017 18:25 |
|
muon posted: Protobufs!

For embedded?
|
# ¿ May 25, 2017 18:44 |
|
feedmegin posted: Good news, 64-bit IEEE floating point is standard absolutely everywhere these days unless you're, I dunno, talking to a frigging VAX or something; the only thing you've got to worry about is endianness.

Yes, endianness was the only thing I worried about so far. Good to know that I shouldn't care about anything else, though.
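Given the "IEEE-754 everywhere, only endianness matters" point above, a minimal sketch of shipping a double portably, assuming both ends use binary64 (true on ARC, x86, ARM and the JVM). The helper names are made up; on the Java side this byte order matches what `DataOutputStream.writeDouble` produces, since Java's wire format is big-endian:

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Serialize a double into 8 big-endian bytes. The swap is done by hand,
 * so the host's own endianness no longer matters. */
static void put_double_be(double d, uint8_t out[8]) {
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);          /* type-pun safely via memcpy */
    for (int i = 0; i < 8; i++)
        out[i] = (uint8_t)(bits >> (56 - 8 * i));
}

static double get_double_be(const uint8_t in[8]) {
    uint64_t bits = 0;
    for (int i = 0; i < 8; i++)
        bits = (bits << 8) | in[i];
    double d;
    memcpy(&d, &bits, sizeof d);
    return d;
}
```

On the Java end, `DataInputStream.readDouble` (or `ByteBuffer` with the default `BIG_ENDIAN` order) reads these 8 bytes back directly.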
|
# ¿ May 25, 2017 19:06 |
|
JawnV6 posted: Yeah? Sure, spend some time hand-rolling a serialization protocol in two separate languages in 2017 if you want to, but there are plenty of bindings for a variety of formats for different use cases and languages available.

The entire reason I asked the question is that I imagined my problem is already solved. I thought (wrongly) that maybe protobufs were not quite designed for lovely CPUs. Is nanopb the recommended protobuf implementation for embedded? What are you using?
|
# ¿ May 26, 2017 02:10 |
|
JawnV6 posted: I used nanopb at my last company; the server side had a Scala library that inflated it to something native. We weren't really compute performance constrained (100+MHz Cortex-M4) so I can't speak to that. And if you genuinely have ~12 bytes you're trying to get across a channel that you're writing the other side of and have a PoC up and running, it might not be worth it. But it reduced the data by a surprising amount. The guts are worth a brief scan, i.e. it condenses uint32_t's into 8 bits if the value is small enough. That does make the size dependent on the content, which may or may not pose a problem.

Thanks for the nanopb suggestion. Got it up and running on both my dev Linux machine and on the board (I am really lucky, they implement almost everything in the POSIX specification; only a couple of #ifdef's were needed). While the data I'm sending right now is quite trivial (X*12 bytes, where X is variable), I do expect it to become more complex in the future. And not having to worry about endianness, the protocol itself and everything related to it is surely a bonus.
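The "condenses uint32_t's if the value is small enough" behaviour is protobuf's varint encoding, which nanopb implements. A rough standalone sketch of the idea (hypothetical helper names, not nanopb's actual API): each byte carries 7 payload bits, and the high bit flags "more bytes follow".

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Encode a uint32 as a protobuf varint. Small values take one byte;
 * the worst case for 32 bits is five bytes. */
static size_t varint_encode(uint32_t v, uint8_t *out) {
    size_t n = 0;
    do {
        uint8_t b = v & 0x7F;
        v >>= 7;
        if (v) b |= 0x80;    /* continuation bit: more bytes follow */
        out[n++] = b;
    } while (v);
    return n;                /* bytes written: 1..5 */
}

static uint32_t varint_decode(const uint8_t *in, size_t *used) {
    uint32_t v = 0;
    size_t n = 0;
    int shift = 0;
    uint8_t b;
    do {
        b = in[n++];
        v |= (uint32_t)(b & 0x7F) << shift;
        shift += 7;
    } while (b & 0x80);
    if (used) *used = n;
    return v;
}
```

This is also why the wire size depends on the content, as noted above: the value 5 costs one byte, 300 costs two, 0xFFFFFFFF costs five.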
|
# ¿ May 26, 2017 21:29 |
|
After a couple of days of figuring out the correct incantation to get the RN42 Bluetooth module working, I finally saw the LEDs blinking, the commands being written and the responses being read. It's loving black magic and the documentation writers are the laziest people on the planet. If you change UART pin 0 to be pin 4, please loving update the drat poo poo so that I don't lose my mind. Or if the default state of the device is X instead of Y, that's fine, but would letting me know about it kill ya? Sigh. I'm on my way to finally having a Bluetooth driver working. I need a beer.
|
# ¿ Jun 6, 2017 05:52 |
|
Earlier in the thread I posted how I was implementing protobuf on a Synopsys-chipset board over WiFi. Nanopb got recommended, I was using lwIP, everything worked and life was good. Now I got the task to implement the same protobuf protocol over Bluetooth, using the same board and a Roving Networks Bluetooth module. Got the basics to work: I can talk with the chipset in command mode, the chipset talks back and all was fine.

But when I'm not in command mode, that is ... when I'm ready to send and receive data for protobuf, life is suddenly not so rosy anymore. I am using Android to talk to the board over Bluetooth (same application I was using before over WiFi, but now using the Bluetooth socket). I can pair with the device and I can connect to it (for some reason a few tries are needed, but whatever, it works). The problem I'm having is that the bytes I'm getting over the wire from Android don't make any sense, and protobuf throws decoding errors. Sample errors that I see in minicom on the board: "Decode failed: invalid wire_type", "Decode failed: io_error". Looking at the nanopb source, it simply gets the wrong bits, therefore it has no idea what to do with them.

What could possibly be causing this? Does the Bluetooth serial transmission (RFCOMM, it says on the Android documentation side) add additional bits to the data that I'm sending? Is it possible that when I ask the device uart_read(buf, count), it reads more bytes than I told it to? Or fewer bytes, without bothering to tell me so? Somewhere, somehow the drat bits get messed up and I have no idea where. Does anyone have any experience with this? Thanks.
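When nanopb reports "invalid wire_type", a useful first step is to hex-dump exactly what arrived on the board and diff it against what the Android side sent. A tiny hypothetical helper (not part of nanopb) that formats a buffer as hex:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Format 'len' bytes as space-separated uppercase hex into 'out'.
 * 'out' must hold at least 3*len bytes. */
static void hex_dump(const uint8_t *buf, size_t len, char *out) {
    for (size_t i = 0; i < len; i++)
        sprintf(out + 3 * i, "%02X ", buf[i]);
    if (len)
        out[3 * len - 1] = '\0';   /* trim the trailing space */
    else
        out[0] = '\0';
}
```

For reference, the bytes 08 96 01 are the canonical protobuf example: field number 1, wire type 0 (varint), value 150. If the dump on the board doesn't start with a sane tag byte like that, the corruption is below protobuf, in the transport or the driver.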
|
# ¿ Jun 7, 2017 21:48 |
|
taqueso posted: What happens when you send non-protobuf data through the same pipe? Try sending more bytes than a packet, a small number of bytes, just a single packet, lots of packets really fast, etc., and make sure the input and output match. According to the super quick reading on RFCOMM I did, it is supposed to be reliable and emulate an RS-232 port, so I wouldn't expect any extra data. Are you possibly running into a threading/synchronization issue? For example, reading a buffer while it is being written to.

Well, I shouldn't have any threading/synchronization issues, but I'd better double-check. I will, however, try your idea first: send known simple data over the air and see what I get on the other side.
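The known-data test suggested above can be sketched with two small helpers: fill a buffer with a predictable pattern before sending, and report the first corrupted index on the receive side. The names are hypothetical and the actual send/recv calls depend on the driver in use:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Fill a buffer with an incrementing byte pattern (wraps at 255). */
static void fill_pattern(uint8_t *buf, size_t len, uint8_t seed) {
    for (size_t i = 0; i < len; i++)
        buf[i] = (uint8_t)(seed + i);
}

/* Returns the index of the first corrupted byte, or -1 if rx matches. */
static long first_mismatch(const uint8_t *rx, size_t len, uint8_t seed) {
    for (size_t i = 0; i < len; i++)
        if (rx[i] != (uint8_t)(seed + i))
            return (long)i;
    return -1;
}
```

Running this at different sizes and rates (a few bytes, more than one packet, bursts) localizes whether bytes are dropped, duplicated, or reordered, and roughly where in the stream it happens.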
|
# ¿ Jun 7, 2017 22:19 |
|
Holy mother ... So, I started with basic things: send 4 bytes to the board and see what I get. So I sent 1, 2, 3 and 4. On the board, all I got was 1 byte (nothing else available on the RX line), with values ranging from 241 to 243, seemingly random. From the board I then sent 4 bytes back to Android (1, 2, 3 and 4 again). On Android I did get 4 bytes back, but they were: -128, 2, 3, 4 (-128 is Java-speak for the unsigned byte 0x80, i.e. 128, since Java bytes are signed). That's a head-scratching WTF right there.

Then I remembered that the chipset says that by default it is running at a 115,200 baud rate. But, and I quote: "The set UART baud rate command sets the baud rate, where <value> is 1200, 2400, ...". Once I matched the baud rate on my side to 115,200, I nicely get 4 bytes on the board side, the correct 4 bytes (1, 2, 3, 4), and I get 4 bytes on the Android side, the correct 4 bytes (1, 2, 3, 4). WTF chipset? Is it then safe to assume that this thing probably doesn't support 921K? Maybe it doesn't support anything above 115K and the manual is just full of poo poo (like usual)?
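A side note on the -128 seen on Android: Java's byte is signed, so the wire byte 0x80 prints as -128 (0xFF would print as -1, and 255 is simply not representable in a Java byte). The standard fix on the Java side is masking with 0xFF; the same arithmetic can be shown in C with int8_t:

```c
#include <stdint.h>
#include <assert.h>

/* Reinterpret a signed (Java-style) byte as its unsigned wire value.
 * The & 0xFF promotes to int and masks: -128 -> 128, -1 -> 255. */
static int unsigned_value(int8_t java_byte) {
    return java_byte & 0xFF;
}
```

In Java the equivalent is `int v = b & 0xFF;` (or `Byte.toUnsignedInt(b)`), which is worth doing before printing received bytes for debugging.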
|
# ¿ Jun 7, 2017 23:43 |
|
Earlier in the thread I posted about my woes with a Bluetooth Pmod that uses UART, for which I have to write the driver. I finally got it working from the application, though protobuf is still broken. After I took my mind off it for a few days, I came back to investigate more. My lack of knowledge in the embedded field is starting to show its teeth here. The basic read function in my driver is implemented as follows: code:
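The code block in this post did not survive the archive. From the description below, the read path polls GET_RXAVAIL and drains whatever is there; this is a guess at its shape, with the hardware replaced by an in-memory FIFO so it is self-contained. All names are hypothetical except GET_RXAVAIL, which mirrors the embARC ioctl discussed in the post:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Stand-in for the UART RX FIFO. */
static uint8_t fifo[16];
static size_t fifo_len = 0;

static size_t get_rxavail(void) { return fifo_len; }   /* GET_RXAVAIL */

static uint8_t fifo_pop(void) {
    uint8_t b = fifo[0];
    memmove(fifo, fifo + 1, --fifo_len);
    return b;
}

/* Poll the available count and read what is there; stop when 'count'
 * bytes have arrived or the line goes quiet. A real driver would also
 * time out instead of returning immediately on an empty FIFO. */
static size_t my_uart_read(uint8_t *buf, size_t count) {
    size_t got = 0;
    while (got < count && get_rxavail() > 0)
        buf[got++] = fifo_pop();
    return got;
}
```

The failure described below (asking for 1 byte, hardware handing over 2, one vanishing) happens underneath a loop like this, in how the FIFO register itself is accessed.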
Now, the strange thing that's happening is that no matter how many bytes I ask that function to read, it never reads more than 2. That is, GET_RXAVAIL returns at most 2, even though I know for a fact that I have more in there. However, if I ask to read only one byte, it returns to me that it read one byte. But (and here's where it messes with my head) it appears that when pushing 1K per second and reading one byte at a time, bytes get lost (seemingly at random). Like, when asked to read one byte, and there are a bunch of them incoming, it reads 2, reports that it read only one, and the other one just vanishes. If I read an even number of bytes at a time, I can push even 3K per second and everything is fine. Is that ... normal? Has anyone ever seen something like this before? This particular behaviour means I cannot run protobuf on the thing, since that protocol reads only one byte relatively often. Would the solution then be for me to implement some form of "caching", or a buffer or something: read 2 bytes and, if the caller only asked for one, remember the other byte for the next call?

The other strange thing I noticed is that every now and then it reports that there aren't any bytes to be read (GET_RXAVAIL returns 0). At the moment, I have made an algorithm that only gives up on reading after 500ms of the line returning zero; the timer resets the moment I get something. Again, same question: is that normal? Is it the UART? The Bluetooth chip? Bluetooth itself? It is an over-the-air transmission, so I guess that can happen? If you guys have seen this before and consider it normal, then my read functionality will have to be quite a monstrous function: buffers, timers and all kinds of checks. Or am I just dumb and not seeing something obvious somewhere? Is reading bytes from a line that complicated? Volguus fucked around with this message at 12:09 on Jun 16, 2017 |
# ¿ Jun 16, 2017 12:06 |
|
The entire UART control/reading code is available at https://github.com/foss-for-synopsys-dwc-arc-processors/embarc_osp/blob/master/device/designware/uart/dw_uart.c since they uploaded it there.

Phobeste posted: That part does look fine, so I'd go deeper into how that uart_control function pointer is defined and all the rest of the behind-the-scenes stuff. At some point, as a basic sanity check, you could also hook a logic analyzer (you can get a $100 one from https://www.saleae.com/ that rocks) up to those uart lines and see what's happening at the physical level. code:
JawnV6 posted: I would guess there's an internal FIFO that you're reading from. When it says it has 2b, you read 1b, and the other disappears, I'd check that the assembly access to that register is actually doing a single-byte access. One explanation could be that you're doing a double-word read, the HW thinks you've pulled both bytes, and the compiler just masks off the one you explicitly asked for without realizing that the second byte isn't in the HW any more. With embedded work there's a lot of HW masquerading as memory-mapped addresses, and a lot of careful attention must be paid to that interface. Sometimes it helps to step back from the immediate problem and run experiments. Just like you checked getting 0x01020304 across, try larger buffers with known quantities that aren't protobufs.

JawnV6 posted: One thing, when you say protobufs won't work because of single-byte accesses, are you significantly memory constrained? If not, I'd definitely just have the driver dump the full byte stream somewhere in memory, then have the decoder unpack it after all the bytes are through the channel. Don't try to point the decoder at the raw UART MMIO unless it's absolutely necessary.

Diving into the guts of the thing, in the same C file as above, the actual read function (dw_uart_read) is implemented like this: code:
code:
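The dw_uart_read listings above were lost from the archive (the real code is at the GitHub link). JawnV6's word-read theory can, however, be illustrated with a toy model: if the RX register pops a FIFO byte on every access, a 32-bit access drains up to four bytes at once, and masking off the low byte makes the rest silently vanish. All names here are hypothetical; this only demonstrates the failure mode, not the actual DesignWare hardware:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Stand-in for a FIFO-backed RX register: every access consumes data. */
static uint8_t hw_fifo[8];
static size_t hw_len = 0;

static uint8_t pop8(void) {            /* correct: single-byte access */
    uint8_t b = hw_fifo[0];
    memmove(hw_fifo, hw_fifo + 1, --hw_len);
    return b;
}

static uint32_t pop32(void) {          /* word access drains up to 4 bytes */
    uint32_t w = 0;
    for (int i = 0; i < 4 && hw_len; i++)
        w |= (uint32_t)pop8() << (8 * i);
    return w;
}

/* The buggy pattern: read a word, keep only the byte you asked for. */
static uint8_t read_byte_buggy(void) { return (uint8_t)(pop32() & 0xFF); }
```

On real hardware the fix is making sure the register access compiles to a single-byte load, e.g. going through a `volatile uint8_t *` rather than a wider type.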
A logic analyzer is probably the best thing I can do, though we'd have to buy one since we don't have any. I guess I will have to learn how to use the thing; it looks like it's a bit better than an oscilloscope (which I haven't used since university, more than 20 years ago). Thanks for the help though, this conversation gives me ideas.
|
# ¿ Jun 16, 2017 19:42 |
|
Mr. Powers posted: Some UARTs keep flags with characters in the FIFO. The lower 8 bits will be data; the upper 24 could contain status flags like parity errors, framing errors, breaks, overruns, etc.

RFL is indeed /*!< Receive FIFO level */. So, you are saying that when I read only 1 byte at a time, because I'm sending so much from the other end, some bytes will simply get dropped? That actually makes a lot of sense; I never thought about it that way. So then, what JawnV6 suggested, an internally kept buffer, would probably be the best way to go about it: read a set amount from the pipe (1K? 4K?) and give the caller (protobuf) data from there instead of from the actual line.
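Mr. Powers' "flags with characters" layout can be sketched like this: each FIFO entry is a word with the data byte in the low 8 bits and error flags above it. The bit positions here are made up for illustration; the real positions come from the UART's register map:

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical per-character FIFO word layout: data low, flags high. */
#define RX_DATA(w)      ((uint8_t)((w) & 0xFF))
#define RX_PARITY_ERR   (1u << 8)     /* illustrative flag positions */
#define RX_FRAMING_ERR  (1u << 9)
#define RX_OVERRUN      (1u << 10)

/* A character is usable only if no error flag accompanies it. */
static int rx_ok(uint32_t w) {
    return (w & (RX_PARITY_ERR | RX_FRAMING_ERR | RX_OVERRUN)) == 0;
}
```

The practical consequence is the one discussed above: an overrun flag means the FIFO filled faster than it was drained and bytes were already dropped in hardware, which is exactly what a driver-side buffer prevents.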
|
# ¿ Jun 16, 2017 20:35 |
|
Mr. Powers posted: Yep. I always implement it in the receive interrupt (if there's a FIFO trigger interrupt, that's even better). I empty the FIFO into a circular buffer and use a semaphore to signal anyone waiting.

Hmm, there is a receive callback capability (UART_CMD_SET_RXCB). I haven't used it yet, but this may be the best place to take advantage of it. Thanks a bunch.
|
# ¿ Jun 17, 2017 00:33 |
|
Thank you very much for all your help, everyone. Indeed, a ring buffer that constantly feeds itself from the RX line, with everyone reading from that buffer instead, makes protobuf work like a peach. No more bytes lost into the ether, as long as the buffer is big enough for my application.
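A minimal sketch of the fix that worked: the RX side pushes every received byte into a ring buffer, and protobuf reads however many bytes it wants from the buffer instead of the line. Single producer, single consumer; sizes and names are illustrative, and in the interrupt-driven version `rb_put` would be called from the UART_CMD_SET_RXCB callback mentioned above:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

#define RB_SIZE 256                   /* power of two simplifies the mask */

static uint8_t rb[RB_SIZE];
static volatile size_t rb_head = 0;   /* written by the RX side */
static volatile size_t rb_tail = 0;   /* read by the application */

/* RX side: push one byte; returns -1 if the buffer is full. */
static int rb_put(uint8_t b) {
    size_t next = (rb_head + 1) & (RB_SIZE - 1);
    if (next == rb_tail) return -1;   /* overflow: caller must handle it */
    rb[rb_head] = b;
    rb_head = next;
    return 0;
}

/* Application side: read up to 'want' bytes, return how many were there. */
static size_t rb_get(uint8_t *out, size_t want) {
    size_t got = 0;
    while (got < want && rb_tail != rb_head) {
        out[got++] = rb[rb_tail];
        rb_tail = (rb_tail + 1) & (RB_SIZE - 1);
    }
    return got;
}
```

With this in place, protobuf's frequent one-byte reads are served from memory, so the hardware FIFO's access quirks and overruns no longer matter as long as the RX side keeps the buffer drained into `rb`.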
|
# ¿ Jun 20, 2017 19:58 |
|
ratbert90 posted: It's just Eclipse. But at this point, if you aren't using pure eclipse-cdt you are doing it wrong.

My only problem with the vendor IDE (Eclipse + some plugin) was that it was old as hell and I would have trouble running it on my latest Fedora. On CentOS, though, it was working fine. And, depending on what you do, their plugins can be quite helpful, if not outright necessary. CLion ... that thing had trouble parsing C++ code last time I checked it: throw a template or two at it and it goes belly up (to be fair, it's been a while since I checked).
|
# ¿ Sep 29, 2017 12:30 |
|
Fanged Lawn Wormy posted: ah yeah forgot to do the membering there.

Yes, you are manipulating the original data. No need for a return (you could return an error code to tell the caller whether you succeeded, though). It would be different if you had your method signature like this: code:
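The quoted snippet was lost, but the point is pass-by-reference versus pass-by-value. In C terms (the original was likely C++ with a reference parameter) the distinction looks like this, with hypothetical function names:

```c
#include <assert.h>

/* Takes a pointer: mutates the caller's variable in place, so no
 * return value is needed (or an error code can be returned instead). */
static void double_in_place(int *x) { *x *= 2; }

/* Takes a copy: the change happens to the local copy and is lost
 * when the function returns. */
static void double_copy(int x) { x *= 2; }
```

With the by-value signature, the only way to get the result back out would be returning it, which is the case the quote is contrasting against.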
|
# ¿ May 28, 2018 17:07 |