 
  • Locked thread
BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Star War Sex Parrot posted:

It's gonna end up some stupid IoT thing, I just know it

pull up swsp

pull up!

(suggestion: if you are into digital design, build a high quality true random number generator in a fpga. i did this a couple years ago for $job and it would probably have been a good senior project if i was in school. you get to do a combination of really low level stuff to build a random bit source out of digital fpga fabric, and higher level crypto stuff to post process its output)
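(the post-processing half of that is easy to sketch in plain C. von Neumann debiasing is the classic first stage for a biased-but-independent raw bit source; a real design would follow it with a proper cryptographic conditioner. function name made up:)

```c
#include <stddef.h>
#include <stdint.h>

/* Von Neumann debiasing: consume raw bits in pairs; emit 0 for a 01 pair,
 * 1 for a 10 pair, and discard 00/11 pairs. Removes bias from an
 * independent-but-biased source at a heavy throughput cost (expect ~n/4
 * output bits). Bits are one per byte here for clarity.
 * Returns the number of output bits written. */
size_t vn_debias(const uint8_t *raw, size_t n, uint8_t *out)
{
    size_t n_out = 0;
    for (size_t i = 0; i + 1 < n; i += 2) {
        if (raw[i] != raw[i + 1])
            out[n_out++] = raw[i]; /* 10 -> 1, 01 -> 0 */
    }
    return n_out;
}
```

(only works if the raw bits are actually independent; correlated ring-oscillator samples need whitening beyond this)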


MancXVI
Feb 14, 2002

make a blender that's a bluetooth speaker and have it adjust the blade speed to play sounds

big shtick energy
May 27, 2004


is this a group project? how long do you have

edit: does the business case for it matter at all or are you just marked on it as a technical exercise?

Base Emitter
Apr 1, 2012

?
as long as it annoys/confuses Leland it's all good

Star War Sex Parrot
Oct 2, 2003

DuckConference posted:

is this a group project? how long do you have

edit: does the business case for it matter at all or are you just marked on it as a technical exercise?
not sure about the business requirements. yeah it's a group project, so far it's two other EEs and then me, but I'm afraid of being reduced to the group's code monkey so I'm trying to get one more person that's also comfortable doing embedded software.

fall quarter is mostly proposals and prep, winter quarter is when you actually build the thing and present it at the end, and spring quarter is more like a technical writing class about it. but you're free to start working on it ahead of time, so my group is pondering ideas already

longview
Dec 25, 2006

heh.
at the risk of starting a new topic, how do you more professional coders like to define bitfields like external HW config registers?

since everyone who is good at coding is on vacation and i have HW that needs testing i've been coding C/C++ for the ARM processor on my embedded board

my definition for a generic bitfield register now looks like this:
code:
union SOMEDEVICE_SOMEREGISTER
{
    struct {
        uint16_t bit1   : 1;
        uint16_t bit2   : 1;
        uint16_t value3 : 4;
        uint16_t rest   : 10; /* etc. to 16 bits */
    } bits;
    uint16_t intval;
};
i know it's not portable between systems with different endianness, but that's not a huge deal for embedded.
is there another good way to define a register like that where I need to be able to read and set single bits/very small int values for cases where multiple bits are grouped and still allow easy access to the word level data I need to send to the underlying I2C/SPI controller?

the other way I know how to do this is to use some compiler macros to automate the bit-shift boilerplate code and define each register in the preprocessor like
#define SOMEDEVICE_SOMEREGISTER_BIT1 1
which works fine for small 8-bit stuff

Jerry Bindle
May 16, 2003
longview, it sounds like you know enough about the pitfalls of the union to go off and use it happily. it's how all the hw registers are mapped for PICs; i don't see any problems with it when it's targeting a specific set of hardware registers and compiler.

afaik the "portable c" option is to wrap up the bitwise operators with functions or macros.
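for reference, a minimal sketch of that portable style (field names made up): each field is a shift + mask pair, and the helpers do the read-modify-write, so nothing depends on how the compiler packs bit-fields:

```c
#include <stdint.h>

/* hypothetical field: 4-bit VALUE3 sitting at bits [5:2] of a 16-bit reg */
#define SOMEREGISTER_VALUE3_SHIFT 2u
#define SOMEREGISTER_VALUE3_MASK  (0xFu << SOMEREGISTER_VALUE3_SHIFT)

static inline uint16_t reg_get_field(uint16_t reg, uint16_t mask,
                                     uint16_t shift)
{
    return (uint16_t)((reg & mask) >> shift);
}

static inline uint16_t reg_set_field(uint16_t reg, uint16_t mask,
                                     uint16_t shift, uint16_t value)
{
    /* clear the field, then OR in the new value, clipped to the mask */
    return (uint16_t)((reg & ~mask) | ((uint16_t)(value << shift) & mask));
}
```

reg_set_field(0, SOMEREGISTER_VALUE3_MASK, SOMEREGISTER_VALUE3_SHIFT, 0xA) gives you the raw word to hand straight to the I2C/SPI layer, with no endianness or packing surprises.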

Bloody
Mar 3, 2013

longview posted:

at the risk of starting a new topic, how do you more professional coders like to define bitfields like external HW config registers?

since everyone who is good at coding is on vacation and i have HW that needs testing i've been coding C/C++ for the ARM processor on my embedded board

my definition for a generic bitfield register now looks like this:
code:

union SOMEDEVICE_SOMEREGISTER
{
    struct {
        uint16_t bit1   : 1;
        uint16_t bit2   : 1;
        uint16_t value3 : 4;
        uint16_t rest   : 10; /* etc. to 16 bits */
    } bits;
    uint16_t intval;
};

i know it's not portable between systems with different endianness, but that's not a huge deal for embedded.
is there another good way to define a register like that where I need to be able to read and set single bits/very small int values for cases where multiple bits are grouped and still allow easy access to the word level data I need to send to the underlying I2C/SPI controller?

the other way I know how to do this is to use some compiler macros to automate the bit-shift boilerplate code and define each register in the preprocessor like
#define SOMEDEVICE_SOMEREGISTER_BIT1 1
which works fine for small 8-bit stuff

I like this approach but frequently find myself mashing bits around with masks

Hunter2 Thompson
Feb 3, 2005

Ramrod XTreme
I like doing stuff like this in C++. I think it's easy to understand but there's probably a fancier way that uses less code and is harder to understand.

code:
// Alternatively don't use namespaces if you're doing C
namespace DevicenameRegister0 {
static const uint8_t GAIN_MASK = 0x0E;
static const uint8_t GAIN_1 = 0x00;
static const uint8_t GAIN_2 = 0x02;
static const uint8_t GAIN_4 = 0x04;
static const uint8_t GAIN_8 = 0x06;
static const uint8_t GAIN_16 = 0x08;
static const uint8_t GAIN_32 = 0x0A;
static const uint8_t GAIN_64 = 0x0C;
static const uint8_t GAIN_128 = 0x0E;
// ... More XXX_MASK and XXX_SETTING stuff
}
//...
namespace DevicenameRegister1 {
// Same as above, but for register 1
}
Then you can write a register-setting function like this one I made up that abstracts the operation.

code:
void set_register(uint8_t register_number, uint8_t mask, uint8_t setting) {
    uint8_t reg = m_spi->read8(register_number); // 'register' is a C++ keyword, so 'reg'
    reg = (reg & ~mask) | setting;
    m_spi->write8(register_number, reg);
}
If the device will always be memory-mapped then this isn't really necessary. You could just do DEVICE_REG_X = (DEVICE_REG_X & ~THING_MASK) | THING_SETTING; in your code directly.
I don't think this is considered dirty in driver code.

Edit: You can get fancy and cache the register values (if that makes sense for your device) to avoid needing to read over the bus before modifying. It's really all up to you.

My point is that I prefer coding all the register bits as constant unsigned integers of the correct width and naming them similarly to the datasheet's names.
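The caching idea from the edit could look something like this in plain C (device struct and bus stub are made up): keep a shadow copy of each register so a modify operation never needs a bus read first.

```c
#include <stdint.h>

#define NUM_REGS 8

/* hypothetical device handle holding a shadow copy of every register */
struct device {
    uint8_t shadow[NUM_REGS]; /* last value written to each register */
};

/* bus stub; a real driver would do the SPI/I2C transaction here */
static void bus_write8(uint8_t reg, uint8_t val) { (void)reg; (void)val; }

/* read-modify-write against the shadow copy: no bus read needed */
static void dev_update_register(struct device *dev, uint8_t reg,
                                uint8_t mask, uint8_t setting)
{
    uint8_t val = dev->shadow[reg];
    val = (uint8_t)((val & ~mask) | (setting & mask));
    dev->shadow[reg] = val;
    bus_write8(reg, val);
}
```

Only safe when the hardware can't change the register behind your back; read-sensitive or status bits still need a real bus read.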

Hunter2 Thompson fucked around with this message at 20:50 on Jul 14, 2016

longview
Dec 25, 2006

heh.
i've already run into one pitfall; earlier today i was unfucking the I2C driver the SW people wrote (to be fair, their intention was correct, but the underlying driver is poorly documented and does unexpected poo poo)

anyway i needed to read a flash device ID since the code needs to know what size chip we put in, the ID is 24-bit so i defined another union structure to make the decoding look nicer and i used uint16s for the individual segments (12-bit mfg id and less than 8 bit for the rest) and unioned it with a uint32
to try to avoid issues i also set up an 8-bit wide uint16 to cover the upper byte the uint32 has available

no idea how it actually mapped things, but the lower bits were fine, and then it looks like it left-aligned the 12-bit mfg id or something

in any case, redefining the structure to use all uint32s (1 bit uint32 just seems wrong...) made it map correctly

the drivers for this platform have a really nasty tendency to do unexpected stuff, like instantiating a MAC driver to talk to a PHY will automatically set the PHY to SGMII mode for some unknown reason (this is strangely not part of the lovely autogenerated documentation, which only covers the highest level interfaces)

another really weird thing is the I2C controller address setting; now, you can't just set an 8-bit address in 7-bit mode and expect it to work, sure, but what i didn't see coming was how the HW handles it
the drat thing decided that bit shifting the address 1 to the left was correct behaviour, so my reads and writes to 0xF8 and 0xF9 ended up going out on the line as 0xF0 and 0xF3 (it sets the R/W bit correctly at least).

that might be common behaviour, but it still seems retarded to me. ended up patching our driver layer to offset the addresses to avoid having to manually offset every single address relative to what the data sheets and schematics say
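(for anyone hitting the same thing, the conversion is trivial; sketch below, helper names made up. datasheets usually quote the "8-bit" address, i.e. the 7-bit address already shifted left with R/W in bit 0, while controllers that take a "7-bit" address do the shift themselves:)

```c
#include <stdint.h>

/* Datasheet "8-bit" address -> the 7-bit value a shifting controller wants. */
static inline uint8_t i2c_7bit_from_datasheet(uint8_t addr8)
{
    return (uint8_t)(addr8 >> 1);
}

/* First byte actually driven on the wire for a given 7-bit address:
 * address in bits [7:1], R/W flag in bit 0 (1 = read). */
static inline uint8_t i2c_wire_byte(uint8_t addr7, int is_read)
{
    return (uint8_t)((uint8_t)(addr7 << 1) | (is_read ? 1u : 0u));
}
```

i2c_7bit_from_datasheet(0xF8) gives 0x7C, and i2c_wire_byte(0x7C, 0) puts 0xF8 back on the line (0xF9 for a read).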

Bloody
Mar 3, 2013

longview posted:

their intention was correct, but the underlying driver is poorly documented and does unexpected poo poo

embedded dot text

longview
Dec 25, 2006

heh.
more strangeness (documented but still weird), the TCA6408A I/O expander has a nice feature where it can automatically flip the polarity of I/O pins

really nice since I can define this and provide an interface to the higher level code where all the logical signals are active high. except it only bit flips reads, not writes (this is at least documented, if not very clearly)

where it gets fun is if you define both inputs and outputs as bit flipped and try to do read-modify-write operations; setting an inverted output high will then make it read as low (while the actual state on the line is high), so that operation will flip every inverted output when you write it back

simple enough to work around though, just keep a copy of the polarity mask and XOR it with the data before writing it but still, gotta wonder who thought that was a good idea
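(a sketch of that workaround in C; register addresses are from the TCA6408A datasheet, the bus write is a stub. since the inversion register only applies to reads, XOR the outgoing word with a stored copy of the polarity mask:)

```c
#include <stdint.h>

#define TCA6408A_REG_OUTPUT   0x01u /* output port register */
#define TCA6408A_REG_POLARITY 0x02u /* polarity inversion: affects reads only */

struct tca6408a {
    uint8_t polarity; /* shadow copy of the polarity inversion register */
};

/* The chip inverts reads but not writes, so to drive logical (active-high)
 * states we have to un-invert the inverted bits ourselves. */
static uint8_t tca6408a_physical_outputs(const struct tca6408a *dev,
                                         uint8_t logical)
{
    return (uint8_t)(logical ^ dev->polarity);
}

/* i2c stub; a real driver would do the bus transaction here */
static void i2c_write_reg(uint8_t reg, uint8_t val) { (void)reg; (void)val; }

static void tca6408a_write_outputs(const struct tca6408a *dev, uint8_t logical)
{
    i2c_write_reg(TCA6408A_REG_OUTPUT, tca6408a_physical_outputs(dev, logical));
}
```

with polarity = 0x0F, writing logical 0xFF goes out as 0xF0, so the four inverted lines get driven to their asserted (physical low) state as intended.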

Hunter2 Thompson
Feb 3, 2005

Ramrod XTreme
That's hosed up

ChiralCondensate
Nov 13, 2007

what is that man doing to his colour palette?
Grimey Drawer
as I've learned over and over again: bits don't respond to janitoring when the voltage has dropped too far by the time it gets to the bit janitoring place

Bloody
Mar 3, 2013

thats the thing with bits: theyre actually analog. hosed up, but true

Zopotantor
Feb 24, 2013

...und ist er drin dann lassen wir ihn niemals wieder raus...
The craziest problem I ever had to debug was one that manifested on a backplane bus with a specific type of board plugged in a specific slot on the backplane and having a specific pattern of data (IIRC, alternating words of 0x00000000 and 0xffffffff) downloaded to it. As far as I understand it (I'm a software guy), the ground plane of the board had an issue that caused it to slowly charge, and at some point there wasn't enough margin between the ground and the low threshold of the bus transceivers, causing data corruption.

longview
Dec 25, 2006

heh.
that type of problem is common enough to have a nickname: ground bounce

primary issue is the inductance of the ground plane in combination with the capacitance of the line; to quickly switch a CMOS output you have to charge the capacitance of the line very quickly which causes a lot of problems with inductance in the ground reference

that current pull is also a big factor in designing the power supply decoupling network for modern chips since it has to be able to supply the current that goes out on the line (potentially every single output switching at the same instant)

using balanced transmission (LVDS, ECL, CML etc.) largely takes care of that specific issue but doesn't mean a free pass since there's a lot of other fun signal integrity issues that can pop up
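(back-of-envelope sketch of the mechanism, numbers and helper made up: the bounce is just V = L * di/dt applied to the shared return inductance, and it scales with how many outputs switch at once:)

```c
/* Ground bounce estimate: n outputs each slewing di amps in dt seconds
 * through a shared return inductance of l henries; V = L * d(i_total)/dt. */
static double ground_bounce_volts(double l_henry, int n_outputs,
                                  double di_amps, double dt_seconds)
{
    return l_henry * ((double)n_outputs * di_amps) / dt_seconds;
}
```

2 nH of shared return path with 16 outputs each moving 20 mA in 1 ns is already ground_bounce_volts(2e-9, 16, 0.020, 1e-9) = 0.64 V of bounce, easily enough to eat into a single-ended logic threshold.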

Bloody
Mar 3, 2013

signal integrity is the most obnoxious bullshit garbage i hate it ugh

spankmeister
Jun 15, 2008






Uuugh these loving laws of physics always get in the way. GAWD

longview
Dec 25, 2006

heh.
it's really not that difficult with modern circuitry and well designed standards

most single ended signals are either too slow to really matter, or they can be series terminated easily enough

for low speed board-board use RS-422 for slow poo poo and LVDS with shielded cables (depending on speed and length) for higher speed

for really high speed board-board either give up, use ethernet, or use optical (at 10 Gbit you're not gonna run unshielded twisted pair for more than a few inches so forget that idea)

also make sure your reference planes are in order, and make sure you've checked the need for power planes (scrub tier high speed stuff will require a power plane capacitor and less than 1 ohm source impedance at 100 MHz).
be sure to get accurate mounting inductances for your caps and for anything above ~2V check the CV coefficient to make sure your high CV ceramics are actually useful

for gods sake don't run high speed over a cut in either the power or ground plane

parallel terminate and use HSTL where possible (DDR3 single rank is set up like this and it makes it fairly easy)

don't forget to plan your stackup to be able to break out the signals without via stubs destroying everything (at 10 Gbit/s two good via stubs can be enough to mess it up completely)

pushing 30 Gbit? better check the weave in your laminate and make sure to get the right kind of copper for your planes!!

see, it's easy

also just build everything in ECL, the best logic

Bloody
Mar 3, 2013

longview posted:

well designed standards

lol let me tell you about NIH syndrome

JawnV6
Jul 4, 2004

So hot ...
that is the same site as my initial objection

Sapozhnik
Jan 2, 2005

Nap Ghost

i almost understood all of that

man i wish i could get paid to design hardware. but on the other hand i do like being able to debug my poo poo

The Eyes Have It
Feb 10, 2008

Third Eye Sees All
...snookums
Hey is paralleling modern switching DC-DC converters as a way to increase ability to supply current dumb / even worth looking at? I always thought it was a no-no because it fucks with how switching regulators feedback from the output but I seem to remember that's less of an issue nowadays for some reason.

I need to supply a weird voltage like 19VDC at like up to 20A and that's like easily 10x more than I have ever needed to do serious work with so I'm a little at a loss shopping in my usual places.

I thought maybe some hardcore instrumentation / industrial aimed DC-DC converter might do it but in that world only 12V, 24V, 36 and 48 seem to exist.

Raluek
Nov 3, 2006

WUT.

Mister Sinewave posted:

Hey is paralleling modern switching DC-DC converters as a way to increase ability to supply current dumb / even worth looking at? I always thought it was a no-no because it fucks with how switching regulators feedback from the output but I seem to remember that's less of an issue nowadays for some reason.

I need to supply a weird voltage like 19VDC at like up to 20A and that's like easily 10x more than I have ever needed to do serious work with so I'm a little at a loss shopping in my usual places.

I thought maybe some hardcore instrumentation / industrial aimed DC-DC converter might do it but in that world only 12V, 24V, 36 and 48 seem to exist.

arent there some dell sff boxes that have external power supplies that are around that spec?

e: no. they're 12V. dang

longview
Dec 25, 2006

heh.

Mister Sinewave posted:

Hey is paralleling modern switching DC-DC converters as a way to increase ability to supply current dumb / even worth looking at? I always thought it was a no-no because it fucks with how switching regulators feedback from the output but I seem to remember that's less of an issue nowadays for some reason.

I need to supply a weird voltage like 19VDC at like up to 20A and that's like easily 10x more than I have ever needed to do serious work with so I'm a little at a loss shopping in my usual places.

I thought maybe some hardcore instrumentation / industrial aimed DC-DC converter might do it but in that world only 12V, 24V, 36 and 48 seem to exist.

you can but only if the mfg says you can. in general i think it's current mode controllers that work best for parallel operation

two choices i know are good:
two of these in parallel
http://www.delta-elektronika.nl/en/products/s280-series.html

or one of these
http://www.delta-elektronika.nl/en/products/sm800-series.html

The Eyes Have It
Feb 10, 2008

Third Eye Sees All
...snookums
Thanks, I'm looking for DC to DC though - I guess this is the sort of thing some phone calls to sales reps might have to solve.

longview
Dec 25, 2006

heh.
oh yeah, completely missed that

so get a DC/DC boost converter module from whatever source voltage you have to 300VDC and use a switch mode AC/DC converter :v:

Vicor makes a lot of nice converters but I couldn't find any that were fully adjustable at first glance

silence_kit
Jul 14, 2011

by the sex ghost

What is the fastest speed used in on-board signaling in consumer electronics? What causes that speed limit--is it FR-4 attenuation, affordable transceiver speed x power limits, or is it a need to reduce system complexity and avoid obsessing over how to do the wiring?

Zopotantor
Feb 24, 2013

...und ist er drin dann lassen wir ihn niemals wieder raus...

longview posted:

using balanced transmission (LVDS, ECL, CML etc.) largely takes care of that specific issue but doesn't mean a free pass since there's a lot of other fun signal integrity issues that can pop up

that's fine if you can design something from the ground up; unfortunately for me our backplane was originally designed more than two decades ago, and had one update since then (I think because the original bus transceivers or buffers or whatever they're called were no longer available, lol)

did I mention that a lot of chips in your smartphone, graphics card, etc. were tested on this hardware? :unsmigghh:

longview
Dec 25, 2006

heh.

silence_kit posted:

What is the fastest speed used in on-board signaling in consumer electronics? What causes that speed limit--is it FR-4 attenuation, affordable transceiver speed x power limits, or is it a need to reduce system complexity and avoid obsessing over how to do the wiring?

Varies a lot I guess, DDR3 is fairly complicated but even modern PCs don't tend to run that at more than 1 GHz (data strobe/clock), bandwidth comes from the large bus width and the complexity is matching delays and getting that delay matching without too much cross talk.
They're really pushing it with DDR4 though; since the signalling is single ended and PCs require multiple devices on the same line, it's starting to get really challenging to manage the impedances and reflections. They already had to introduce differential clocks with DDR3.
Fair chance the current DDR spec will be replaced with something fairly different in a few years, but that's just speculation. The DIMM based solution is probably running about as fast as we can make it without changing the architecture (like one controller or transceiver set per DIMM).

Gigabit Ethernet is either 125 MHz (R)GMII or 625 Mbit/s SGMII so not too fast really, a good ground reference plane and impedance matching the traces takes care of signals at that speed with no problem (at least for smaller designs).

As a guess for most common fast link I'd say PCI-E, the latest version apparently does close to 2 Gbit/s per lane which is a respectable speed. Doable with standard laminates if your manufacturer is good, but the real difficulty is probably in designing the connectors and managing skew between pairs for multi-lane cards.

10 gigabit ethernet and similar fiber based protocols aren't really common in consumer gear yet but those are "faster" per link since it's just a single pair for each direction which has to run at 10-16 Gbit/s.

The challenge with parallel buses is to manage skew and still get the impedances right, and with longer buses you really need to think carefully about crosstalk since it becomes a very real problem if not managed. Wiring, essentially.

Above ~5 Gbit/s inside a board is the point to start worrying about via stubs (obviously always avoid, but even 1-2 can destroy the signal above those speeds), losses at 10 Gbit/s inside a standard board are manageable with the current transceivers, at least for links shorter than a few inches.
Transceivers for those speeds almost always use or can use pre-emphasis and fancy equalizers on both transmit and receive to compensate for losses in the board, and this helps a lot in dealing with resistive and dielectric losses.
What they will struggle to deal with is an impedance discontinuity in the middle of the line, since this usually shows up as a notch in the frequency response (and a nice bit of edge-destroying phase distortion), connectors are the biggest source in most cases.

10 Gbit/s per link seems to be the current speed limit for even fairly high end gear; the transceivers are becoming fairly common (most high end FPGAs will have dedicated hardware for 4-16 links). To reach higher speeds the current trend seems to be paralleling up the links (see quad SFP modules). 30 Gbit/s per link is possible in FR-4 but not very common at all.

Zopotantor posted:

that's fine if you can design something from the ground up; unfortunately for me our backplane was originally designed more than two decades ago, and had one update since then (I think because the original bus transceivers or buffers or whatever they're called were no longer available, lol)

did I mention that a lot of chips in your smartphone, graphics card, etc. were tested on this hardware? :unsmigghh:

ooh, ooh, let me guess:
the replacement transceivers were at least 5x faster and nothing worked afterwards

The Eyes Have It
Feb 10, 2008

Third Eye Sees All
...snookums

longview posted:

Vicor makes a lot of nice converters but I couldn't find any that were fully adjustable at first glance

poo poo, Vicor has way more stuff than I realized. I found some nice stuff there (their min & max line) that's adjustable enough as well as being parallel-friendly :toot:

I just wish they were less expensive, but that's life.

big shtick energy
May 27, 2004


longview posted:

it's really not that difficult with modern circuitry and well designed standards

alternatively you can always decide to save like $2 on a $5k BOM by making the daughter board for your >100MHz SERDES chip 2-layer (neither of them having much ground plane on them) instead of 4-layer. there was no amount of eldritch patterns made with copper tape that could make that poo poo pass FCC and we had to just get a 4-layer board

also for the guy who wants to parallel DC supplies: if you want to cowboy it, you can always just use regular supplies and put like a 0.1-ish-ohm, many-loving-watt resistor between each supply and the voltage rail. i think the sense node should still be on the power supply side of the resistor but if it smokes then probably it should have been the other way
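rough sketch of the current-sharing math for that cowboy approach (function made up; assumes two identical supplies behind equal ballast resistors):

```c
/* Two supplies with setpoint mismatch dv volts, each behind a ballast
 * resistor of r ohms, feeding a shared rail drawing i_load amps.
 * From V1 - I1*R = V2 - I2*R and I1 + I2 = i_load: each supply ideally
 * carries half the load, and the mismatch shifts dv / (2*r) amps from
 * one supply to the other. */
static void ballast_share(double dv, double r_ohm, double i_load,
                          double *i1, double *i2)
{
    double imbalance = dv / (2.0 * r_ohm);
    *i1 = i_load / 2.0 + imbalance;
    *i2 = i_load / 2.0 - imbalance;
}
```

with 0.1 ohm ballasts, a 20 A load, and 50 mV of mismatch you get 10.25 A vs 9.75 A, and each resistor burns about I^2 * R, roughly 10 W, hence many-loving-watt.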

JawnV6
Jul 4, 2004

So hot ...

longview posted:

As a guess for most common fast link I'd say PCI-E, the latest version apparently does close to 2 Gbit/s per lane which is a respectable speed. Doable with standard laminates if your manufacturer is good, but the real difficulty is probably in designing the connectors and managing skew between pairs for multi-lane cards.
???

even gen3 was 8GT/s, but there's a lot of equalization tricks like 8b/10b on <gen2 or 128b/130b on gen3 that take the data rate down

im guessing the layout guy doesn't care much about the logical pipe's transfer rate
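for reference, the arithmetic being described (helper name made up): usable per-lane rate is the raw transfer rate times the line-coding efficiency.

```c
/* Usable per-lane data rate = raw transfer rate x line-coding efficiency
 * (payload bits per coded symbol group). */
static double pcie_data_rate_gbps(double gt_per_s, double payload_bits,
                                  double coded_bits)
{
    return gt_per_s * payload_bits / coded_bits;
}
```

pcie_data_rate_gbps(2.5, 8, 10) gives 2.0 for gen1 after 8b/10b (likely where the 2 Gbit/s figure upthread came from), gen2 works out to 5.0 * 8/10 = 4.0, and gen3 to 8.0 * 128/130, about 7.88 Gbit/s per lane.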

spankmeister
Jun 15, 2008






DuckConference posted:

alternatively you can always decide to save like $2 on a $5k BOM by making the daughter board for your >100MHz SERDES chip 2-layer (neither of them having much ground plane on them) instead of 4-layer. there was no amount of eldritch patterns made with copper tape that could make that poo poo pass FCC and we had to just get a 4-layer board

What's an eldritch pattern?

movax
Aug 30, 2008

JawnV6 posted:

???

even gen3 was 8GT/s, but there's a lot of equalization tricks like 8b/10b on <gen2 or 128b/130b on gen3 that take the data rate down

im guessing the layout guy doesn't care much about the logical pipe's transfer rate

I've done a bunch of Gen3 designs, skew isn't too bad; early designs i used to just have the fab house tilt the fiberglass 45 degrees to avoid any untoward effects from running parallel to it, but religiously following the intel layout guides ended up just fine. didn't hurt that we could afford 16+ layer boards to ensure everyone got the planes they needed.

si is cool, there's a lot of bullshit out there but it's a necessary evil. all comes down to how fast your signal slews from low to high / high to low

longview
Dec 25, 2006

heh.

JawnV6 posted:

???

even gen3 was 8GT/s, but there's a lot of equalization tricks like 8b/10b on <gen2 or 128b/130b on gen3 that take the data rate down

im guessing the layout guy doesn't care much about the logical pipe's transfer rate

yeah i messed up there; i've never done a pci-e design so i have very little experience with it and grabbed the wrong number

i also forgot to mention that sata 3 is pretty fast and runs over cables, which probably took a fair amount of engineering, but again i've never tried to design a motherboard so i don't know much about what type of transceivers are used

one thing i like a lot is that the same transceivers can be used for pci-e, 10g ethernet, jesd204b (i think), probably sata too.
really simplifies design and means the same FPGA can handle several functions without wasting resources on too many special function pins (obviously the high speed transceivers are special function, but they're a requirement in a lot of modern designs)

silence_kit
Jul 14, 2011

by the sex ghost

Why weren't the PCI-E and DDRX standards set to be 10x higher in speed? What is the technological limitation? I'm assuming that there is a technological limitation, and people are always wanting for more communication capacity between chips in PCs, etc. Maybe I'm wrong about that.

silence_kit fucked around with this message at 17:30 on Jul 20, 2016

JawnV6
Jul 4, 2004

So hot ...

movax posted:

I've done a bunch of Gen3 designs, skew isn't too bad; early designs i used to just have the fab house tilt the fiberglass 45 degrees to avoid any untoward effects from running parallel to it, but religiously following the intel layout guides ended up just fine. didn't hurt that we could afford 16+ layer boards to ensure everyone got the planes they needed.

si is cool, there's a lot of bullshit out there but it's a necessary evil. all comes down to how fast your signal slews from low to high / high to low

yeah I did some gen3/gen2 bringup work years ago, the protocol can handle lane to lane skew relatively well, something like 40 symbol clocks? but on the first rev boards, imagine the ridiculous headaches trying to probe those and figure out what's going on

my favorite EE switching story is turbo codes. for decades people were aware of the shannon limit but thought it was this unreachable thing and 70% was pretty good. then this french team comes out and starts publishing papers along the lines of "how many 9's do u want" and nobody believes them on a first pass

now the core techniques of sampling the incoming signal as analog and taking the bitstream history into account are standard practice, a startuppy guy was telling me about his new thing for LTE where all they're doing is spitting out analog values across a channel


Zopotantor
Feb 24, 2013

...und ist er drin dann lassen wir ihn niemals wieder raus...

longview posted:

ooh, ooh, let me guess:
the replacement transceivers were at least 5x faster and nothing worked afterwards

I don't think that speed was the problem (at 40MHz), but they did some weird stuff to get signal levels to match. That probably contributed to the ground bounce issue.
