movax
Aug 30, 2008

isr posted:

XC16 is okay compared to XC8, and 99.9% of the time both are fine. The dsPIC/PIC24 core was (supposedly) designed with C in mind, so it's rare that you run into a compiler bug. The 8-bit PICs just aren't C-friendly at all. It's amazing the compilers work as well as they do, but you'll often be asking yourself "is this a compiler bug?" when your project isn't working and you don't know why...

Programming 8-bit PICs in assembly isn't fun, either. The ~special~ function registers are banked, so you have to keep track of which bank you're in or you might clear a timer when you thought you were moving a pointer (or something).

I have heard horror stories about the PIC16 in particular among the 8-bit PICs; apparently its architecture has some incredibly asinine/weird design decisions that make it hard to write compilers for, which is why they suck.


movax
Aug 30, 2008

Rescue Toaster posted:

Are you sure you want to encode serial numbers into the actual program memory rather than a separate EEPROM or something?


On an aside, the PICkit 3 is really driving me nuts. What a piece of crap.

It works OK if I plug it in AFTER the circuit is powered, but if I power-cycle the circuit later while the PICkit is connected, the PICkit stops talking to my PC (not just the PICkit failing to talk to the chip -- MPLAB X can't talk to the PICkit either) until I unplug and reconnect the USB cable. Totally crazy.

The circuit is totally floating, so there's no weird ground bounce or loop (it even has a low inter-winding-capacitance transformer). Windows doesn't see the USB device disconnect or anything; the PICkit just stops responding until I cycle it. That's going to make it almost impossible to use for real in-circuit debugging.

Sounds like a grounding issue; post the ICSP part of your circuit if you can. I am one of the thread's resident PIC fanboys.

movax
Aug 30, 2008

For cost, you really can't beat spending $5-$10 (with shipping) on an MSP430 LaunchPad, if you're OK with putting in more effort on your end reading docs, getting environments set up, etc.

movax
Aug 30, 2008

Everyone and their mom is shitting out ARM "devboards" left and right, with software support ranging from non-existent to laughable to only half-shitty. It's a neat architecture and certainly very popular in the market, but there is a level of complexity to keep in mind. If you are just starting out, I really think the best choice is a "simple" MCU like a PIC/AVR/MSP430/etc. The devtools are cheap, and you will learn a lot of basics that carry upwards to bigger and more powerful chips. I am partial to PICs because I think Microchip has the best-organized documentation of anyone.

With AVRs, of course, Arduino uses the ATmega328, but I would urge you to learn bare-metal C. AVRs IMO are less forgiving if you fuck up your fuses, because then you have to rig up your programmer with copious extra wires for high-voltage programming to clear them, whereas PICs and MSP430s can (for the most part) recover over their standard debug ports. The most common example is accidentally selecting the wrong clock source for your chip (i.e. an external crystal when you don't have one).

If you're willing to put in work and deal with poorly organized documentation, the MSP430 LaunchPad is a great choice. If you want better documentation and want to spend a little more, a devboard based around a PIC18 or PIC24 that supports the PICKit3 is what I would recommend.

movax
Aug 30, 2008

mfny posted:

I am a little puzzled as to why your recommendation would be so strongly in favour of Microchip/PIC. You have said that the software for PICs is a buggy piece of shit, and software described that way does not inspire confidence from an ease-of-use/learning perspective.

I'm thinking that dev tools which work well and aren't a buggy piece of shit would be important, or maybe I'm being naive in my newbieness?

Did I? I know I hated on the Atmel tools here. The Microchip tools can be buggy, yes, but for starting out I think they'll do fine. The secret is that almost all of the design suites (and EDA tools in general) tend to suck and have shit wrong with them. The userbase is tiny (but generally well-educated), there are no alternatives, so vendors can charge obscene amounts of money, etc.

movax
Aug 30, 2008

isr posted:

In some market research I saw, embedded developers are still reporting plans for new designs using old-ass architectures like 68k/PPC/186/286/386/486/Z80/8085, you name it. Why? Hell if I know. I suspect it's to leverage existing code bases. For example, military contractors have to write everything in assembly, so migrating just wouldn't make sense. Converting Z80 ASM to MIPS would be very enjoyable, I wish someone would pay me to do it...

The old archs have a lot of heritage (flight and otherwise); combined, you've probably got centuries of real-world usage time. Well-known errata, characterized performance, etc. And yeah, probably the same greybeards working at a company for like thirty years.

I don't know if it's a benefit or not, but those old ones also aren't very mixed-signal, if at all (unless modern variants slap the IP core on there and glue logic/peripherals to it). Want an ADC or something else? 8-bit parallel address/data interface time :corsair:

movax
Aug 30, 2008

Corla Plankun posted:

I just got a job doing embedded code with a dsPIC, and I am really interested in doing things the right way instead of just doing them. My work involves signal processing, serial communication, and a host of digital outputs -- I know how to do all of these things one at a time, but I am worried about getting them all to take turns nicely. Does anyone know of a good resource (free or paid) that could help me think about this constructively? I think I need to develop an interrupt-driven scheme to keep everything working when individual modules take too much time, but it's tricky to tell which parts should be interrupts and which should be in the main loop.

For what it's worth, I'm using MPLAB X and programming in C. Most of the resources I have found are either too basic or they use assembly and it takes too much mental effort to translate that to C.

Sounds like you need an RTOS? Basic scheduling, priority queues, interrupt tiers... all things an RTOS can provide (though admittedly you can do the interrupt priorities yourself with the IPCx registers).

The dsPIC doesn't give you a free shadow register set for priority-7 interrupts like the PIC32 does, right? I think that's a MIPS thing.
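
For the IPCx bit, something like this (an XC16-style sketch with register/bit names from memory -- double-check the IPC assignments against your exact part's datasheet):

code:
#include <xc.h>

/* Sketch: give the UART RX interrupt a higher priority than the
 * timer tick so bytes aren't dropped while the tick handler runs.
 * (7 = highest user priority, 0 = interrupt effectively disabled.) */
void interrupt_priorities_init(void)
{
    IPC0bits.T1IP   = 4;   /* Timer1 tick: medium priority */
    IPC2bits.U1RXIP = 6;   /* UART1 RX: preempts the tick handler */

    IEC0bits.T1IE   = 1;   /* enable Timer1 interrupt */
    IEC0bits.U1RXIE = 1;   /* enable UART1 RX interrupt */
}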

movax
Aug 30, 2008

I'd love to hear more about alternatives to superloops... you need a while(1) somewhere, right? My code generally looks like:

code:
int main(void)
{
    init_peripherals();          /* init stuff / functions */
    setup_interrupts();          /* set up interrupts and shit */

    while (1) {
        do_small_repetitive_logic();
    }
}
And then everything else is interrupt-driven; seems really OK for simple stuff?
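
For anyone lurking, the usual shape of that, sketched out -- ISRs just set flags and the loop drains them (a toy example; the flag names and init helpers are placeholders):

code:
#include <stdbool.h>
#include <stdint.h>

/* ISRs only set flags / stash data; the superloop does the real work */
static volatile bool    tick_pending = false;  /* set by a timer ISR */
static volatile bool    rx_pending   = false;  /* set by a UART RX ISR */
static volatile uint8_t rx_byte;               /* stashed by the RX ISR */

int main(void)
{
    /* init_clocks(); init_uart(); init_timer();  -- placeholder init */
    while (1) {
        if (tick_pending) {
            tick_pending = false;
            /* periodic housekeeping goes here */
        }
        if (rx_pending) {
            rx_pending = false;
            /* consume rx_byte here */
        }
        /* optionally sleep until the next interrupt fires */
    }
}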

movax
Aug 30, 2008

Delta-Wye posted:

I read that book recently. It's a really neat application of state machines; the hierarchical nature gives a ton of flexibility and power. It's nice not having to handle every event in every state: unrecognized events can bubble up the hierarchy until they get handled. That said, it was used for a C++ app that ran on a relatively powerful computer (64K was not a lot there), so the overhead worry was negligible.

I'm going to borrow that from one of the guys here at work; our senior software guys curse at superloops and I want to learn more about the alternatives. Seems like it's got some really good advantages for safety-critical / other fault-sensitive/tolerant applications, though I can't say I saw too much of that in the auto industry.
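
For anyone who hasn't seen the pattern, the "bubble up the hierarchy" idea is easy to sketch in C (toy states, nothing to do with the book's actual framework):

code:
#include <stdio.h>

typedef enum { EV_TICK, EV_FAULT } Event;

typedef struct State State;
struct State {
    const char  *name;
    const State *parent;                        /* NULL at the root */
    int        (*handle)(const State *, Event); /* returns 1 if handled */
};

static int top_handle(const State *s, Event e)
{
    printf("%s: default handling for event %d\n", s->name, e);
    return 1;                       /* root state catches everything */
}

static int run_handle(const State *s, Event e)
{
    if (e == EV_TICK) { printf("%s: tick\n", s->name); return 1; }
    return 0;                       /* unhandled -> bubbles up */
}

static const State top = { "top", NULL, top_handle };
static const State run = { "run", &top, run_handle };

/* dispatch walks up the parent chain until someone handles the event */
static void dispatch(const State *s, Event e)
{
    for (const State *cur = s; cur; cur = cur->parent)
        if (cur->handle(cur, e))
            return;
}

int main(void)
{
    dispatch(&run, EV_TICK);        /* handled in "run" */
    dispatch(&run, EV_FAULT);       /* bubbles up to "top" */
    return 0;
}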

movax
Aug 30, 2008

You've got an LXR setup for your source tree(s), right?

e: fuck me, it's late and I can't read. I had to do similar stuff, backporting a lot of newer features (mostly PCI Express related) into a 2.6.x-something kernel. At least it was x86, which made life a little easier.

movax
Aug 30, 2008

JawnV6 posted:

My hardware guy didn't even give me LEDs on the last board. :argh:

Why do you need LEDs? You have a debugger, and a scope to probe a test point / some other pin if you need it :angel:

- hw guy

movax
Aug 30, 2008

Slanderer posted:

Our hardware guys think actual test points are a luxury we don't deserve (or if they do include them, they hide them underneath daughterboards / on the underside of boards mounted to test fixtures).

I do a lot of my own layout these days, and I usually lay things out in a way that will make my life easier in ~two weeks when I'm in the lab doing bring-up on the damn thing. No reason to make myself suffer, so it's usually win-win-win!

Except I'm fighting an uphill battle at my new job in terms of making readable schematics... we've got guys treating it like software and making these ridiculously over-abstracted hierarchical abortions where you can't even use Ctrl+F to follow net names around the schematic :nms:

movax
Aug 30, 2008

Mr. Powers posted:

I have learned far more about DDR memory and its signalling over the past two weeks than I ever wanted to know. The current result of all this work and research: never be the first customer. Not only are there a million barely documented knobs to turn in software, but you've got silicon that may or may not work regardless of all the knobs.

Hahah, you poor bastard. Which vendor's controller?

movax
Aug 30, 2008

Mr. Powers posted:

Microsemi.

:downs:

ProASIC3 or IGLOO2?

movax
Aug 30, 2008

Mr. Powers posted:

SmartFusion 2. Same controller as the IGLOO2, but with the ARM core enabled. Allegedly this is the same die as the other packages, but there's the possibility that DM[2:3] are brought out in place of DM[0:1]. It's the only reason I can think of for why it's so screwed up.

:stare: Fuuuck. I've always been curious how well they manage/control their die + package combinations, considering how in-depth the IGLOO guides can get (Die A with elements B in package C does D and F, but A + B in Package G does E and F...).


minidracula posted:

I guess it's for reasons like this that I still end up sitting in conversations with national labs and their partners where someone mentions offhandedly that Group X's NSF-funded research project is going with Vendor Y's Part Z just because Memory Controllers Are Hard (and said Part Z already has one, despite it probably not being what they would want/need if they got to do their own).

I mean, I don't intend to make it sound like they are trivial or anything, that's totally not the case (ask me how I know). But still.

I think that's why a lot of people decide to roll their own. I'd say it works less than half the time, though; the rest think they've outperformed the manufacturer / commercially available cores, but then act confused when they start hitting corner cases and realize that maybe there's something to validation after all...

Manufacturer package issues aside, I think some folks screw up their layout too, be it fucking up controlled impedance or simply not following length-matching rules/restrictions.

movax
Aug 30, 2008

Mostly finished a side project I'm working on, just waiting on some final connector specs:



DMX512-controlled interface for a laser + 3 motors. Should be a fun SW project writing the DMX512/RDM slave interface code.

Purple to simulate the prototypes from OSH Park; expecting the final boards to be matte black.

movax
Aug 30, 2008

My Rhythmic Crotch posted:


Have high speed GPIO, 2.8MHz. Hurray.

Edit: hendersa, do you know essentially how /dev/mem works? Is it sort of virtualized or protected, so that one process cannot modify memory belonging to another? I wonder if the GPIO bandwidth is limited by hardware or some kernel construct (such as virtualization/protection) of the memory interface. None of this is for my job btw, I'm just tinkering. :)

You running an RT kernel?
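
And on the /dev/mem question: it's just a char device exposing raw physical address space (hence needing root to open it), so the usual per-process memory protection doesn't apply to what you map through it. A minimal sketch of the mmap() dance -- the GPIO base address here is a made-up example, check your SoC's TRM:

code:
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define GPIO_BASE 0x4804C000UL   /* example physical base, SoC-specific */
#define MAP_SIZE  4096UL

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *gpio = mmap(NULL, MAP_SIZE,
                                   PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, GPIO_BASE);
    if (gpio == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* raw register read of the first word in the mapped block */
    printf("first register: 0x%08x\n", (unsigned)gpio[0]);

    munmap((void *)gpio, MAP_SIZE);
    close(fd);
    return 0;
}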

movax
Aug 30, 2008

JawnV6 posted:

When's the last time you wrote a program that had an expected runtime longer than a week?

In the past, the auto industry -- some nodes' lifetime is limited only by when you happen to swap out the vehicle battery, or when it drops out of regulation.

Nowadays it's all secret stuff.

movax
Aug 30, 2008

Mr. Powers posted:

If you have a low data rate, you could use I2C with multiple masters if needed. For higher data rates you could use a multidrop UART, but that really needs a single master to avoid collisions.

Do you have actual experience with multi-master I2C? I swear everyone claims their shit will play nice in a multi-master environment, but nobody ever fucking validates that. Looking at you, Xilinx.

movax
Aug 30, 2008

Mr. Powers posted:

This is an unpopular opinion, but I think the MSP is going to start fading from the market if ARM keeps going with the CM0+ type devices. Arduino has already pushed the AVR into that slot for hobbyists, and those hobbyists will go on to design around what they know. The MSP requires an investment in tools for a limited range, whereas an ARM setup covers everything from a tiny CM0+ up to CM3/4/7 and A5. I don't think shops that have been using MSPs will phase them out, but I don't think it's that appealing an option for a new design or a startup that needs to purchase tools.

Edit: the CM0+ being 32-bit also helps portability across different scales of platform. I inherited some MSP code that needed to be ported to a platform running an ARM9, and ran into issues because the code depended on storing an address in two bytes. Admittedly that's just bad coding practice, but not having to worry about it is appealing.

I'm not an MSP fan at all, but TI has customers buying fuckloads of the things, somehow. The FRAM parts are absurd when it comes to power savings -- without even trying (i.e. not using low-power modes at all), the damn thing consumes 1 mA at 10 MHz with the core + I2C running. I think ARM's got the advantage of sheer numbers + a wider tool ecosystem though, which is nothing to sneeze at.

movax
Aug 30, 2008

JawnV6 posted:

How do I have 2 different JTAG cables that won't work with the chip (MSP430F2) that I'm using? Atmel and ST aren't playing nice.

I'm assuming this FTDI part won't have any restrictions and will easily be compatible with various toolchains and chips. This strategy should delay further tears until at least Monday of next week.

e: and someone just threw a FET430UIF on my desk for completely unrelated reasons :v didn't know we had one

MSPs aren't JTAG compliant -- you probably get out 0x99 (if anything at all) for every shift into it, or another fixed byte that indicates the MSP CPU core version. They can't be part of a standard system-wide chain; we use FPGAs to debug and program them in-system over a standard chain.

Once you enter debug mode (proper timed pulses on TEST and RESET_B), the behaviour is a little better, but still not ideal. The FET430UIF (grey or black one?) is your only hope. I don't remember if the F2 shares the XV2 core with the FRAM parts, but the JTAG behaviour is probably similar.

Also, yeah, leakage current is a bitch on these guys (I think I posted about that before); a good reset supervisor may help avoid corruption if you can't avoid said leakage.

movax
Aug 30, 2008

ante posted:

Hey, so give me two good reasons to use anything other than internal oscillators.


I would assume better stability / accuracy, but other than that... ?

High stability
Higher clocks (if you can't PLL the internal FRC like some PICs)
Synchronous operation in a distributed system
Running chip and peripherals at highest possible speed

Lowering BOM cost is possible if you write software that can run calibration routines over temperature, or leverage the automatic clock tuning/recovery some vendors advertise for low-cost USB MCUs.

movax
Aug 30, 2008

I suppose it's worth clarifying that there are both external crystals and external oscillators for generating clocks. With the former, you have to carefully match the load capacitance, and depending on the amount specified, you may have to factor in the additional capacitance of the MCU's pins + your PCB traces. The MCU has an internal oscillator circuit that will "hammer" that external crystal at the desired frequency. Crystals require no external power directly; they are basically two-terminal devices attached to your MCU.
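
For reference, the usual load-cap arithmetic (standard Pierce oscillator rule of thumb, nothing exotic): with two equal caps C1 = C2 loading the crystal, the crystal sees CL = C1/2 + Cstray. So for a crystal specced at CL = 18 pF with a guess of ~4 pF stray from pins and traces, you want C1 = C2 = 2 x (18 - 4) = 28 pF, i.e. fit the nearest standard value, 27 pF.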

An oscillator in this context is generally a device that generates a single-ended clock (of course there are many types, LVPECL, etc.) that you feed into a single MCU I/O pin intended for that purpose. It does require external power (clean power, too -- decoupling + a ferrite or RC filter is a good idea), and most have an enable pin. They're generally 4-pin devices: VCC, GND, enable, and clock output. Depending on the frequency and how much you load it down, they can draw a decent amount of power; in low-power applications that require one, you'll probably want to tie the enable pin to your MCU, so the micro can enable/disable it before/after switching to it from the internal RC.

Everyone probably knew this already, but figured I'd post it for any folks lurking.

movax
Aug 30, 2008

evensevenone posted:

This might be stretching the limits of this thread, but does anyone have experience with Yocto or similar "embedded" Linux distributions? My work has a fairly decent-sized project that runs on what have become basically headless Linux boxes running Debian, but I'd like to further automate the process of building install images (right now we just install everything to the physical device and dd the boot drive, then install to other devices off that, but it's kinda time consuming). I'd like to get to a point where building the image is 100% automated and hooked into CI.

Buildroot is pretty slick -- at my last job I maintained a simple Linux distro with it (as a hardware engineer), and it got the job done effectively.

At my current job we use Yocto -- it's more "softwarey" and a pain in the ass in my opinion, but sure as hell, the SW guys have it hooked into the build system as just one more component that gets built. It's a bit heavier-weight in terms of infrastructure, but it's certainly more flexible than Buildroot.

movax
Aug 30, 2008

xilni posted:

I've always wondered too, are they just a bunch of NAND or NOR gates plus input/outputs?

Generally CPLDs are much, much smaller / less powerful / less complex than full-blown FPGAs. At one point the architectural difference was that a CPLD was a handful of macrocells fed by wide AND/OR product-term arrays that you could hook up in various shapes, whereas FPGAs are based on a modular block architecture, where LUTs of various sizes form the logic functions.
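
To make the LUT bit concrete: a 4-input LUT is literally just a 16-entry truth table, indexed by the input bits. In C terms:

code:
#include <stdint.h>

/* A 4-input LUT is a 16-entry truth table: the four inputs form an
 * index, and the selected bit is the "logic gate" output. */
static int lut4(uint16_t truth_table,
                unsigned a, unsigned b, unsigned c, unsigned d)
{
    unsigned index = ((d & 1u) << 3) | ((c & 1u) << 2)
                   | ((b & 1u) << 1) |  (a & 1u);
    return (truth_table >> index) & 1;
}

/* e.g. truth_table = 0x8000 realizes a 4-input AND gate:
 * only index 15 (all inputs high) returns 1 */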

movax
Aug 30, 2008

Slanderer posted:

It's also important to note that CPLDs are non-volatile, whereas FPGAs need to be reconfigured from external memory at reset. I've seen a CPLD (or maybe it was a PLA, but the distinction isn't critical) used as glue logic for a hardware power-source manager (where using discrete components would be too big). In that situation it is important to have the logic available as soon as power is present (even before the regulators stabilize) so the hardware can be set up properly and allow the rest of the system to boot.

That applies to SRAM-based FPGAs (the majority); there are flash-based FPGAs like the Actel (Microsemi) IGLOO/IGLOO2, whose selling point is being live at power-on and requiring no external configuration memory. You also have anti-fuse devices, which of course have their configuration burned into them via the Silicon Sculptor / programming tool.

movax
Aug 30, 2008

Popete posted:

Do most FPGAs have NAND or eMMC packaged onto the die so you don't physically need an external chip?

Nope, just a variety of ways to load the configuration bitstream. Design security is done via AES, eFUSEs, etc. Generally it's serial flash memory, but there are also some parallel schemes floating around, and some systems dispense with the memory entirely (at least directly) and have the FPGA programmed in-circuit (ISP) by some other processor.

Altera/Xilinx FPGAs are all SRAM -- there's no other way to get the desired performance. The majority of the bitstream is eaten up configuring all the BRAMs. Their CPLDs have internal memory to store configuration.

Aforementioned Microsemi parts are flash -- no external memory needed. Helps with design security too.

movax
Aug 30, 2008

Has anyone used non-Altera EPCS devices with their FPGAs? Cyclone IV E in particular?

I scrubbed through the instruction sets of my proposed alternate part, and I think it will work as a configuration device (meets AC timing, etc.), but it will not work with the stock Altera Serial Flash Loader (SFL) because the silicon ID won't match. AFAIK, though, the FPGA silicon doesn't perform any kind of ID check and just tries to read data from whatever serial memory is attached.

I'll probably have to write my own loader that gets chunked on there via JTAG and then uses a virtual JTAG interface to program the SPI memory (the same way the SFL does it), but I wondered if anyone here has tried it yet. I've got a DE0-Nano board on the way that I'm going to rework my memory onto, to see if it still boots.

movax
Aug 30, 2008

Mr. Powers posted:

I think we just use the Micron M25P series with our Spartan-6s, and I've used an MT25P with a Lattice ECP5. I'm pretty sure we did something similar on older Cyclone III products, but I can't say for certain. Unless Altera is an outlier, they all pretty much comply with the standard SPI flash interface.

Hm, OK -- I just got my DE0-Nano in, so I'll find out soon enough, I suppose. Interested in seeing whether it can boot from an FRAM SPI memory or not.

movax
Aug 30, 2008

So I've crapped on the Cyclone V SoC in the past, mostly because they lost to Xilinx in getting to market first, and I've been doing Zynq stuff for (fuck me) the past 5 years or so.

But now I'm seriously thinking about dropping the Zynq, because the Cyc V has ECC support in almost every data store, the L2 included. Also, my Xilinx support sucks ass, whereas I have an Altera guy chomping at the bit to get me to switch over.

The TRM is confusing as fuck, though -- the Cyc V definitely supports 1 GB of RAM (max?) attached to the HPS? Is that even with ECC enabled? The Zynq tops out at 1 GB, 512 MB if you turn on ECC.

And with 2 PCIe hard IP blocks, which one (if any) on the Cyclone V can be used as a PCIe root complex that Linux can talk to?

movax
Aug 30, 2008

Kvaser or ATI, maybe? IXXAT didn't have a horrible API, either.

movax
Aug 30, 2008

Most computer engineers I've met are either embedded software, FPGA engineers, or ASIC / digital logic designers (comp arch, basically). Their electrical engineering abilities generally vary with how much they like that stuff / how well they did at it in school, from being utterly useless outside the software realm to being able to do their own boards / circuit design for their MCUs / FPGAs.

Personally I majored in both, because I could get the credits to line up easily and I'm comfortable living across the boundary doing low-level software work, IC design, PCBs or system architecture. Try out each area until you find the one you like, or keep moving around because that's how you stay sharp / up-to-date.

movax
Aug 30, 2008

iospace posted:

Has anyone here had the joy of working with a board that used this crap?



Because holy shit, cPCI sucks.

I used to design cPCI backplanes + SBCs; granted, we tacked on connectors to add PCIe and other higher-speed I/O as well, but I definitely remember cPCI J1 and J2 very well.

The software group was always good for returning hardware with mangled / ripped-out connectors on both the SBC and backplane side.

movax
Aug 30, 2008

Le0 posted:

I've been working in embedded software for nearly 10 years at the same company, but the big problem I have is that we work on a single type of CPU (SPARC for space, basically). I'd like to learn some of the stuff the cool kids use nowadays, also because I'd like to change companies in the not-too-distant future.
I started a course where we use FreeRTOS on an Arduino, which is a good start, but one problem I have is that I have a hard time coming up with ideas for projects to build so I can learn.
What do you guys usually build for learning purposes on a new architecture?

LEON3/4(FT)?

ARM Cortex-M4F, man -- there are even radiation-tolerant variants being built. It cannot hurt to be familiar with the basic MCU cores (Cortex-M0 through M4) that are in fucking everything these days.

movax
Aug 30, 2008

Le0 posted:

Yeah, I've been working on LEON processors, but we mostly worked on the LEON2 until a year ago, when I started working on our first LEON3 (GR712RC) board. Are you working with LEON CPUs also?
I'd love to give ARM a try; I'll take a look.

LEON3 and 4FT, IP core and GR740, yep.

The other thing to check out could be RISC-V; as an open ISA, it could see some future development work in the fault-tolerant area...

movax
Aug 30, 2008

Popete posted:

These Xilinx tools are driving me insane -- nothing but auto-generated broken software and libraries.

I spent a week trying to figure out why the MicroBlaze ethernet (with the lwip library) was no longer working when nothing in my software had changed. I had gotten a new FPGA image that added another DMA block instance (one for the ethernet and one for transferring audio data). Well, what happens is that the lwip library in the Xilinx SDK is hard-coded to always use XPAR_AXIDMA_0_DEVICE_ID (the device ID assigned to a DMA instance), which works fine with only one DMA instance, since device 0 is then always the ethernet DMA. But when you add the second DMA instance, the default name it chooses means the ethernet DMA instance comes second (alphanumerically), and thus XPAR_AXIDMA_0_DEVICE_ID points at the wrong DMA instance.

The lwip library won't report any errors, since it has a valid DMA instance to use -- it just happens to be the wrong one -- so the network interface comes up and looks fine but doesn't actually work.

Once I changed the hard-coded value in the lwip library from XPAR_AXIDMA_0_DEVICE_ID to XPAR_AXIDMA_1_DEVICE_ID, it worked. Hooray!

Now, to go for broke, get your entire setup into Git and building as part of scons or some other build system without shooting yourself.

Amazingly, the workflow / tools for Zynq-based projects used to be worse. I was at Xilinx HQ a few months ago, and at least some of their senior guys admitted "yeah, we kind of fucked up" when it came to making Vivado and the tools play nice with VCS.
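
To make the quoted fix concrete, the change is roughly this -- the file and exact placement are from memory, so treat the location as approximate, though the XAxiDma calls are the standard driver API:

code:
/* somewhere in the SDK's lwip port (xaxiemacif_dma.c or similar) */
#include "xaxidma.h"
#include "xparameters.h"

static XAxiDma axidma;

int ethernet_dma_init(void)
{
    /* was XPAR_AXIDMA_0_DEVICE_ID -- silently the wrong instance
     * once a second AXI DMA block exists in the design */
    XAxiDma_Config *cfg = XAxiDma_LookupConfig(XPAR_AXIDMA_1_DEVICE_ID);
    if (cfg == NULL)
        return -1;
    return (XAxiDma_CfgInitialize(&axidma, cfg) == XST_SUCCESS) ? 0 : -1;
}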

movax
Aug 30, 2008

Harik posted:

I want to clarify this a bit.



This is the bit of schematic that's wrong. The problem is that Capture has no concept of exports and imports, so those VCC rails don't have to be connected to anything. +5V and +3.3V were called something different on the power controller, because this codec design was pulled in from a different board with different names. Nothing indicated that +5V was a dead end, because it could have been an export.

There are all sorts of static-analysis tools for programming to make sure this exact error doesn't happen, but for circuit boards you're supposed to manually check every part of sometimes incredibly complex schematics, which is just nuts. We do, but the focus is on the complete-failure parts - DDR, eMMC, power delivery to the processor. If a peripheral turns out to have a problem, we can whitewire it and fix the schematic without blowing weeks redoing the board. If the DDR doesn't work, we're dead in the water.

I'm currently enjoying a perusal of the device tree to try to figure out which GPIO is misconfigured and causing instant wakeups. If only there were a wakeup reason recorded that wasn't clearly bogus. ("CPU 0 performance counter" is not a valid wakeup reason.)

We use Altium, and the way I protect against stuff like this is strictly enforcing naming conventions and having a Python linter that parses the netlist to catch exactly these problems. That data feeds directly into a Python-based simulator that, among other things, executes a power and thermal model drawing its data straight from the source: the schematic.
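
The dead-end-net check itself is genuinely simple; the core of it looks something like this (a toy sketch in C for illustration -- the real linter is Python -- assuming a flattened netlist of "NETNAME pin pin ..." lines):

code:
#include <stdio.h>
#include <string.h>

/* Read a flattened netlist on stdin, one net per line:
 *   NETNAME refdes.pin refdes.pin ...
 * and warn about any net with fewer than two pins (a dead end). */
int main(void)
{
    char line[1024];
    while (fgets(line, sizeof line, stdin)) {
        char *net = strtok(line, " \t\r\n");
        if (!net)
            continue;                /* blank line */
        int pins = 0;
        while (strtok(NULL, " \t\r\n"))
            pins++;
        if (pins < 2)
            printf("WARNING: net %s has %d pin(s) -- possible dead end\n",
                   net, pins);
    }
    return 0;
}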

Also, in layout, I think you'd notice instantly if a key power connection were missing / there were a power "island" somewhere.

Altium is the best (i.e. least shitty) schematic / layout tool available today, IMHO. Cadence is surviving on legacy / enterprise customers for PCB layout, but we built a shim to plug Altium into our ERP system. I still think the IC layout and PCB / schematic areas of EDA are ripe for disruption, but it'll be such an uphill battle. (Foundries don't want to play with another tool for their PDKs == get fucked and keep suffering with Virtuoso.)

I'm hyper-OCD / anal about the hardware development process for my engineers, but in four years we've not had to respin a board in response to a design error. The trade-off is additional time spent in review / upfront dev for the tools, but builds are not cheap and they take time.

movax
Aug 30, 2008

Odette posted:

Anyone think RISC-V or the other FOSS MCUs will eventually become a force to be reckoned with?

I think so, for RISC-V. Aero and space are looking hard at RISC-V for rad-hard (RH) processors, and I bet Chinese chip makers would love to stop paying ARM license fees at some point.

If anything, maybe it’ll get more people centralized around it instead of split between MicroBlazes, Nios IIs, Tensilicas, etc...

movax
Aug 30, 2008

feedmegin posted:


What does RISC-V have over ARM as an ISA for RH stuff?

It's more that, since it's an open ISA, you don't have to shell out for the tier of ARM license that gives you the access you need to do your own layout, add TMR to register files, and implement scrubbers and EDAC in your SRAMs and peripheral memories. In layout, you want to be using DICE or other non-commercial cell topologies instead of standard 6T SRAM cells to ensure some kind of SEE immunity. As process sizes shrink, TID tolerance has generally gone way up, but SEE performance gets worse (as in, more upsets) because less charge is needed to cause one.

Then again, Cortex-A53 has been selected and is in development for the next American “high-performance” space processor (essentially similar to Zynq MPSoC) and is taping out in the next year or so from Boeing, IIRC.

It’s also appealing because it may finally overtake SPARC V8 in Europe as the high-performance CPU of choice.

NVIDIA turned it into the Falcon replacement (or rather, replaced the Falcon with it) as its GPU housekeeping MCU now. Smart move, IMO.



movax
Aug 30, 2008

Harik posted:

Never did get back to replying to this.


Yeah. 3-man shop, though. The EE does all the hardware, I do the software, and the other guy manages the hosting/cloud VMs. We've done some pretty cool designs for such a small team; I'll do a writeup at some point. Pretty varied work: the guts of a change machine (the protocols to the readers and dispensers are bad), crane safety equipment running on embedded micros connecting over 900 MHz wireless to an Android-based display, a slick little pill dispenser that can call your kids when you forget to take your meds, AR-15 skins for feeding Simunition training inputs into Unreal/VBS for virtual soldier training, dozens of little one-offs for NFC readers, RS485 <--> ethernet bridges, etc.

A small team explains how things like this get missed - you focus resources on the board-killers like DDR, core voltages, etc. TBF, it was an inductor to isolate the audio from the system noise; it just somewhat over-succeeded in its design goal: zero audio noise. :laugh:

It's why I love/hate embedded. Hardware sucks and I've released more magic smoke than I like to admit, but you get to work on some neat projects.



This sounds like a cool little firm to work with. How do you do business development?
