 
  • Locked thread
carticket
Jun 28, 2005

white and gold.

Yeah, it's no fun using multiple platforms of EW. Also EWARM is the only one that sounds cool.


carticket
Jun 28, 2005

white and gold.

armorer posted:

I have zero experience with it, but you could always stick it in a VM if you want to try it out without interfering with another install.

The interference is just file associations. Only one instance can be opened when you double-click ewp/eww files. I have 5 versions of EWARM installed at work for different projects that are stuck to specific versions, plus an install of EW430. It doesn't cause problems at all, since I don't try to double-click the workspace files to open them.

It does suck when you accidentally open a 5.x workspace in 6.x. You basically have to roll back via version control or recreate the workspace in 5.

Edit:

This week I learned to love Duff's Device. I've been trying to optimize a bit-banged parallel NAND driver because we need better performance. With Duff's Device, the read loop went from 6 instructions per byte down to 4.125. After doing all that work, I found out that the three instructions that hit the peripheral bus take so much longer than the loop overhead that there was only about a 10% boost.
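For anyone who hasn't seen it, here's roughly what that read loop looks like with Duff's Device. This is a sketch, not our actual driver: the `port` pointer stands in for the NAND data register, and `count` must be nonzero.

```c
#include <stddef.h>
#include <stdint.h>

/* Unrolled-by-8 read loop using Duff's Device. The switch jumps into
 * the middle of the unrolled do/while body to handle the leftover
 * count % 8 reads first, so there's only one loop test per 8 bytes. */
static void duff_read(volatile const uint8_t *port, uint8_t *dst, size_t count)
{
    size_t n = (count + 7) / 8;   /* total loop iterations */
    switch (count % 8) {          /* count must be > 0 */
    case 0: do { *dst++ = *port;
    case 7:      *dst++ = *port;
    case 6:      *dst++ = *port;
    case 5:      *dst++ = *port;
    case 4:      *dst++ = *port;
    case 3:      *dst++ = *port;
    case 2:      *dst++ = *port;
    case 1:      *dst++ = *port;
            } while (--n > 0);
    }
}
```

On a host you can fake the peripheral with a plain byte; on target, `port` would point at the data register and each read would clock the NAND.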

carticket fucked around with this message at 05:23 on Nov 30, 2013

carticket
Jun 28, 2005

white and gold.

So, lately I've been picking up VHDL and general techniques for making FPGAs do what you want. Given my entire background is in sequential execution of instructions, it's definitely interesting. Do you think there may be interest in a write up on parallels between the two and making the jump from software to HDL?

carticket
Jun 28, 2005

white and gold.

Sagacity posted:

Mr. Powers: That would definitely be very interesting. A lot of the existing stuff out there is geared toward EE types but there's surprisingly little that talks about things like "Know how you would do X in C? Well, in VHDL,"

Yeah, that's the trouble I've run into. I have an EE background, so it's not totally foreign, but professionally I am a software engineer and the concepts are completely different. It helps having a team of FPGA guys at work to answer my questions.

carticket
Jun 28, 2005

white and gold.

JawnV6 posted:

I don't think you get any mileage out of trying to teach an HDL like it's C with some deltas. More likely to end up with someone stuck trying to make a recursive function work.

I know I'm veering into "uphill both ways" territory but I learned on a GUI and dagnabit that's good enough for everyone. You're describing how a circuit should be built, not creating a list of instructions.

How complicated were the things you were doing? I know someone who started with schematic capture, but if you're trying to do complicated things, that gets very uphill very fast. The project from college that I'm attempting to tackle again is driving a VGA output that shows an analog clock with the current time. I can't imagine trying to do that graphically.

carticket
Jun 28, 2005

white and gold.

JawnV6 posted:

Right, I use a HDL for everything now. The biggest project I did purely by GUI was an embedded Tetris game, took a sega genesis controller and spit VGA out, stored the board in SRAM. But for learning purposes, telling everyone C is close will hamper more than help.

I didn't mean to suggest that they are similar, but there are some parallels in terms of design patterns, i.e., if you want to do X in C, here is how you do the same X in VHDL. That's what's been the most difficult for me. It would be geared towards people who have a software background and would help bridge the gap that exists because the two are so different. There are still skills that apply to both, just differently.

Also, that is a ridiculous project to do graphically.

carticket
Jun 28, 2005

white and gold.

OK, an example, and this is an incredibly simple lab I did where the same task was done first in software and then in an FPGA: blinking an LED.

You want to blink an LED on your micro. If that's all you're doing you can just use a timer and watch the value. From the count you can decide if the LED should be on or not. It's very simple code even if you didn't have a timer and just counted yourself.

Now you want to do it in your FPGA. You don't have a timer peripheral, you've just got a clock. Your LED output code will appear superficially similar to the C code in that you will decide the state of the output based on the counter's value, but you also have the process to handle counting clocks.

It's an overly simple example, I know, but if I were putting time into it, I would continue building the example up to more complex systems. It's not about saying "here is C code and here is how to do it in VHDL", but rather "here is something you want to do, and here is how you might do it in C. This is how you could do the same thing in VHDL, and here is why it's done this way." It would help people like me get over the first hurdles of going from a software background into FPGA design.
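To make the software half of that lab concrete, the C version boils down to something like this (a sketch; `BLINK_PERIOD` and the helper names are made up):

```c
#include <stdbool.h>
#include <stdint.h>

#define BLINK_PERIOD 50000u   /* ticks per half-cycle; made-up value */

/* Pure decision logic: LED is on for the first half of each period,
 * off for the second half, based only on a free-running counter. */
static bool led_state(uint32_t count)
{
    return (count % (2u * BLINK_PERIOD)) < BLINK_PERIOD;
}

/* On the micro, the main loop would just poll the timer:
 *     for (;;) set_led(led_state(read_timer()));
 * In the FPGA, the counter itself becomes a clocked process, and the
 * LED is driven from the same comparison on the counter's value. */
```

The interesting part of the translation is that the counter, which the micro's timer peripheral gave you for free, becomes your own clocked process in VHDL, while the comparison stays essentially the same.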

Really, I just find the information out there to be either overly simple (here's how to make a 3-input AND), lacking practical examples (here's a list of all the syntactic structures of the language), or lacking explanation (here's a complicated design I did this way, but I'm not telling you why).

I figure if I document my learning, I can go back and revise when I have an Aha! moment down the road. Most of what I have come across is written by an expert who has probably forgotten what it was like when they first learned.

Going the route of "I'm designing a circuit" is great for people with a circuits/logic background, but it's a completely new skill for software guys. If you tailor the material to someone with a software background, you can leverage the skills they have and teach the ones that are missing.

Edit: I think this is an appropriate thread since posters in this thread who want to jump into HDLs would really be the target audience. Although, arguing about whether it's C-like doesn't really belong anywhere since we're pretty much in agreement that it is not. If you want more examples then it's really just a matter of me actually following through and writing stuff up, since the learning is still an ongoing process for me.

carticket fucked around with this message at 21:08 on Dec 5, 2013

carticket
Jun 28, 2005

white and gold.

JawnV6 posted:

I need a little utility to dump the flash off a connected MSP430 without disturbing it. This will be run by someone else out in the field, so the less complexity the better. It's totally fine for it to be cygwin->msp430-gcc, but if there's some way for CCS to do this out of the box that would be better.

Nothing's jumping out at me besides hopping into debug mode (trashing whatever image is on the flash) and poking through the disassembly. Agents are waiting outside the warehouse and I need to send this interrupt.

IAR EWARM can connect with a debugger without downloading code or interrupting execution. You can then halt and save a memory dump. I just uninstalled EW430 so I can't check whether it has the same options, but I would imagine it does. The Kickstart edition may be able to do this for you, too. It's worth checking into.

carticket
Jun 28, 2005

white and gold.

There are some great applications for things like templates related to nonvolatile storage, or shared variables with an RTOS. ChibiOS has a full C++ API, if you're looking for an example. Generally, though, some of the features are costly in resources (vtables, templates), and some techniques (dynamic memory) are frowned upon in embedded systems.

carticket
Jun 28, 2005

white and gold.

The first step is basically become an expert at USB, then specifically Mass Storage class (which entails learning a lot about SCSI). Next, you'll need to get some sort of inkling as to what micro is on the stick. Then you probably need to be a top-of-the-profession reverse engineer to go further than that. You might luck out and find a part number in there for a micro with a USB device that has exploitable errata. You could try sending vendor specific setup requests to see if you get anything back. Really, once you're here it's either going to be extreme skill, luck, or extreme patience and perseverance.

carticket
Jun 28, 2005

white and gold.

Otto Skorzeny posted:

I wound up writing an SPI interface on a Cortex-M3 about a year ago, despite my employer having paid the license fees and having the parallel spec in front of me, because the hardware guy on the project didn't want to use more pins :suicide:

You can run the SD spec on the same number of pins as running it in SPI mode and on most CM3s you should be able to run the interface at 24 MHz or 48 MHz if your card supports it. You have to go out of your way to flip a card into a parallel mode.

Sir_Substance posted:

edit: while we are here, I was having interesting thoughts about using them as hardware encryption keys by generating entropy based on the layout of their bad sectors. Viable?

I'm not sure the bad block locations will actually be sufficiently random. Based on the maps we end up with on bare NANDs at work, they tend to cluster a bit. I don't know how much entropy is needed to be cryptographically secure, though. SD cards are a whole different story from bare NAND, and I have no idea if they would exhibit similar patterning.

carticket fucked around with this message at 03:18 on Feb 5, 2014

carticket
Jun 28, 2005

white and gold.

Tres Burritos posted:

What's the best choice device for learning / playing with? I've been working on a pcb design and I'm about ready to get it made by OSH Park. In the short term I just need to be able to write to a couple of pins every couple milliseconds. In the long term I'd like to be able to take the code that I've written and port it to a standalone device/microcontroller without too much work. Linux + Open source is good.

I've been thinking about a Beaglebone black, but would I be able to say write a program on it and then port it to a smaller avr? Or should I be looking at an arduino or something like that?

poo poo, is buying a cheap rear end DIP avr and a breadboard/Jtag just as good? I'm not looking to do anything too complicated in the near or far future.

I'm a fan of the STM32 discovery boards. Some of them are as cheap as $15, and they have an STLink V2 debugger on board which is nice. They also have a header and can be used to program other boards with STM32s as well.

carticket
Jun 28, 2005

white and gold.

Slanderer posted:

Our hardware guys think actual testpoints are a luxury that we do not deserve (or if they do, they hide them underneath daughterboards / on the underside of boards mounted to test fixtures)

Blame the layout guy. He saw that big area with a low clearance and that's the perfect place to put all those zero height test points.

carticket
Jun 28, 2005

white and gold.

Tiger.Bomb posted:

This is a ton of fun. Just got to the 100-point levels.

This is pretty rad. The lovely browser at work is making it slow and some solutions just don't work (enter the same password twice and it works the second time...). I'll play with this more when I get home.

carticket
Jun 28, 2005

white and gold.

Tiger.Bomb posted:

The first ten levels are _very_ basic. You're pretty much always doing some kind of overflow. Figuring out what you need to overflow it with is the trick.

I just got to the first overflow level. I need to remember to look at the conditions for unlock before digging into the login/password function. Hanoi would have been much faster if I had approached it that way.

carticket
Jun 28, 2005

white and gold.

Speaking of things that aren't basic. Has anyone used ChibiOS? I'm getting a crash at startup and it looks like it is switching to a corrupted context. It started out frequently happening at startup, but now it's maybe a 1/50 sort of thing. I had a similar problem when I first set the project up and it was related to interrupt stacks, and it being intermittent is driving me crazy.

carticket
Jun 28, 2005

white and gold.

It sounds like running SD in SPI mode is more of a pain in the rear end than running it in SD mode with a real SD controller peripheral. And running it in SD mode is a pretty big pain in the rear end, too.

carticket fucked around with this message at 14:08 on Feb 20, 2014

carticket
Jun 28, 2005

white and gold.

Even beyond that, though, the SD controller peripherals will take care of the precise clock counts of things and handle the CRC generation and checking. If you use a SPI library, you don't have to write that code, but it's still the software that is calculating all that.
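For a sense of what the peripheral is doing for you, here's the command CRC from the SD spec (CRC-7, polynomial x^7 + x^3 + 1) done in software. This is the textbook bitwise version, a sketch rather than tuned code:

```c
#include <stddef.h>
#include <stdint.h>

/* CRC-7 as used on SD command frames: polynomial 0x09 (x^7 + x^3 + 1),
 * MSB first, initial value 0. The byte that actually goes on the wire
 * is (crc << 1) | 1 -- the CRC plus the frame's end bit. */
static uint8_t sd_crc7(const uint8_t *data, size_t len)
{
    uint8_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        for (int bit = 7; bit >= 0; bit--) {
            uint8_t in  = (data[i] >> bit) & 1;  /* next message bit */
            uint8_t msb = (crc >> 6) & 1;        /* bit shifting out */
            crc = (uint8_t)((crc << 1) & 0x7F);
            if (in ^ msb)
                crc ^= 0x09;                     /* apply the polynomial */
        }
    }
    return crc;
}
```

Running this over the CMD0 frame (0x40 00 00 00 00) gives 0x4A, which becomes the well-known 0x95 CRC byte once shifted and OR'd with the end bit. The data-block CRC is a separate 16-bit CRC-CCITT, which is the bigger win to have in hardware.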

carticket
Jun 28, 2005

white and gold.

Sir_Substance posted:

That's apparently not a thing in SPI mode. The first command you send needs a correct CRC but after that, it ignores the CRC part of the packet unless you specifically tell it to pay attention.

Huh. From what I had read, I thought the calculation was still done and transmitted. I don't know why they don't throw down SD controllers, though. It's becoming so prevalent, and the new cards don't support SPI. I would imagine a library for an SD peripheral would be smaller (code size) than doing it over SPI, and you could even mux the pins as an alternate function. Probably a cost driver, though.

carticket
Jun 28, 2005

white and gold.

I have learned far more about DDR memory and its signalling over the past two weeks than I ever wanted to know. The current result of all this work and research is this: never be the first customer. Not only are there a million barely documented knobs to turn in software, but you've got silicon that may or may not work regardless of all the knobs.

carticket
Jun 28, 2005

white and gold.

movax posted:

Hahah, you poor bastard. Which vendor's controller?

Microsemi.

carticket
Jun 28, 2005

white and gold.

movax posted:

:downs:

ProASIC3 or IGLOO2?

SmartFusion 2. Same controller as the IGLOO2, but with the ARM core enabled. Allegedly this is the same die as the other packages, but there's the possibility that DM[2:3] are brought out in place of DM[0:1]. It's the only reason I can think of that it's so screwed up.

carticket
Jun 28, 2005

white and gold.

It seems I have to backtrack on my complaints earlier. It is very likely our board design, and not Microsemi's fault in any way. Their documentation is still terrible, though, with the DDR controller documentation split across several files and still missing bits.

carticket
Jun 28, 2005

white and gold.

I'd suggest the STM32. They have a wide range, and if I recall correctly, the STM32L1 Discovery board (with on board debugger) runs $12.

carticket
Jun 28, 2005

white and gold.

Why not use something like an Archimedes Screw: http://en.m.wikipedia.org/wiki/Archimedes'_screw

Drop food in one end with gravity and then you just run a stepper a certain number of turns to portion the food? You'd need to fabricate the screw most likely, or maybe if you're lucky you could find something appropriately sized to go inside a standard sized pipe.

carticket
Jun 28, 2005

white and gold.

Slanderer posted:

Time for a very specific question:

I'm working on a project with 2 microcontrollers (STM32F400 and STM32F300 series) and like 4 independent power sources. However, I currently don't have the ability to load code via the bootloaders, so it's JTAG all the way for now. However, since most of the time I'm not using a debugger for debugging (I'm making optimized builds that don't debug well anyway), I need to remove all 4 power sources from the device (which takes a minute or two) every time I load new code onto one of the microcontrollers.

I'm using IAR embedded workbench with a J-Link debugger (pretty standard). Anyone know if I can set it up to start the CPU automatically after downloading? If I were doing this from the command line utility it would be trivial...

I've never seen an option for that. If you disable run-to-main, it will just stop at your reset vector. You may be able to use a C-SPY macro file to do this, but I'm not familiar with the macros beyond displaying some info on log breakpoints. There's probably a resume directive you can just put in your debugger .mac file.

carticket
Jun 28, 2005

white and gold.

Slanderer posted:

I looked at the macros and tried issuing a reset at different times (after the flashloader, at least in theory) but it wasn't working out...

Then I figured out that I can semi-reliably reset the CPU by one of the following:
disconnecting from the jtag port
connecting to the jtag port
disconnecting my debugger from USB
connecting my debugger from USB

It's not consistent (reset lines are weird idk), but it's good enough for me.

Looking at a macro file I have, execUserSetup should execute after download, you could either send a GDB command with __gdbserver_exec_command or you could try __hwReset with a halt time of a big number.
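For reference, the macro-file approach I meant would look something like this. Untested sketch, so double-check the hook and system macro names against the C-SPY documentation for your EW version:

```
// project.mac -- C-SPY macro file, attached under Project > Options > Debugger
execUserSetup()
{
    // Runs after the image is downloaded. Reset the part and give it a
    // long halt delay so it effectively free-runs; tune the value.
    __hwReset(60000);
}
```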

carticket
Jun 28, 2005

white and gold.

Arachnamus posted:

I'm looking at putting something together which controls focus on a camera from an arduino. This involves rotating a ring on the lens, which can be quite delicate.

I'm from a heavy software background so I'm trying to figure out what the best peripheral is to do this. At first I looked at brushless motors controlled via an ESC, but they look to be very much speed-oriented which makes sense given their use in RC planes etc. They also don't provide any position or other feedback. Given the delicacy of camera lenses I'd like to be able to detect resistance that indicates hitting either end of the ring's rotation.

Would a servomotor be the right tool for this job? I'd quite like to transfer the power via a belt for ease of installation and reduced risk of damage if the motor overruns the focus ring's stop points, but I could manage toothed gears if it makes it easier to map the rotation of a lens to the rotation of the motor.

I'm also looking for quite a small form factor for the motor, maybe 5cm cubed or thereabouts.

They make some pretty tiny stepper motors. I think one of the companies is micromo. You'd need a stepper driver, too.

carticket
Jun 28, 2005

white and gold.

Spatial posted:

Not having an inc instruction: :negative:

What's wrong with adding one? Are you concerned about atomicity?

carticket
Jun 28, 2005

white and gold.

Spatial posted:

Sorry, was half asleep when I posted that. The instruction I was yearning for was inc on a memory location. No such luck on the Cortex M0 though. Load-store all the way, baby.

I doubt the M0 has it, but if you need atomicity, you can get it with an LDREX/STREX loop without disabling interrupts. It becomes totally non-deterministic if you're going for real time, though.
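On an M3/M4 you don't even have to write the LDREX/STREX loop by hand; GCC and Clang's `__atomic` builtins emit it for you. A host-testable sketch:

```c
#include <stdint.h>

/* Atomic increment of a shared counter. On ARMv7-M (Cortex-M3/M4) the
 * compiler lowers this builtin to an LDREX/STREX retry loop, so no
 * interrupt masking is needed; on a desktop it becomes a locked add. */
static uint32_t atomic_inc(volatile uint32_t *p)
{
    return __atomic_add_fetch(p, 1, __ATOMIC_SEQ_CST);  /* value after inc */
}
```

The retry loop is exactly where the non-determinism comes from: an interrupt between the LDREX and STREX makes the store fail and the loop go around again.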

carticket
Jun 28, 2005

white and gold.

Internet Janitor posted:

Structuring programs as a series of coroutines that swap out the PC register to interleave tasks appears to be pretty common. The 8-bit ALU/accumulator contrasted with the 16-bit general purpose registers is sort of a pain, though.

Is that in lieu of context frames? You just switch which reg is the PC in your scheduling interrupt (or on a yield I suppose) instead of saving and restoring a context? I haven't done much with coroutines, but they don't have a typical context with their own stack, right?

carticket
Jun 28, 2005

white and gold.

Do you need to do ISR(int Name)? Normally your function attributes come before return type, but I don't know if that is required. I guess it depends on what the ISR macro expands to.

carticket
Jun 28, 2005

white and gold.

I wrote a software watchdog that worked by kicking the hardware watchdog from an interrupt. It had multiple watchdog channels, each with individual timeouts. The interrupt basically did the software checks for kicks from the application tasks, and kicked the hardware watchdog if all task channels were up to date. In the event that one fell out of date, it simply let the hardware watchdog trigger the reset (which could be detected at startup).

I can't think of another reason to do it in an interrupt unless there's some weird configuration, but even then it sounds like it is set up to monitor the health of the periodic interrupt. Maybe there's a chance the handler can lock up?
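The bookkeeping for the multi-channel scheme I described is tiny. A sketch (all names invented; tick math relies on unsigned wraparound):

```c
#include <stdbool.h>
#include <stdint.h>

#define SWWDT_CHANNELS 4

static uint32_t timeout_ticks[SWWDT_CHANNELS]; /* per-channel timeout */
static uint32_t last_kick[SWWDT_CHANNELS];     /* tick of each last kick */

static void swwdt_init(int ch, uint32_t timeout, uint32_t now)
{
    timeout_ticks[ch] = timeout;
    last_kick[ch] = now;
}

/* Each monitored task calls this from its main loop. */
static void swwdt_kick(int ch, uint32_t now) { last_kick[ch] = now; }

/* Called from the periodic interrupt: kick the hardware watchdog only
 * if every channel is fresh. Returning false means "do nothing and let
 * the hardware watchdog reset the part". */
static bool swwdt_all_alive(uint32_t now)
{
    for (int ch = 0; ch < SWWDT_CHANNELS; ch++)
        if (now - last_kick[ch] > timeout_ticks[ch]) /* wraps safely */
            return false;
    return true;
}
```

On target, the interrupt body is just `if (swwdt_all_alive(tick)) kick_hw_watchdog();`, and the reset-cause register tells you at the next boot that a task starved.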

carticket
Jun 28, 2005

white and gold.

Based on the trace, I don't think you'll be able to go much faster than 10 MHz and not have your highs and lows start to get squashed.

carticket
Jun 28, 2005

white and gold.

There should be virtually no porting required for a Cortex-M build. You just have to get the ISRs registered, and maybe set up a timer for tickless operation.

carticket
Jun 28, 2005

white and gold.

I write C for battery powered thermal weapon sights and night vision goggles.

carticket
Jun 28, 2005

white and gold.

If you have a low data rate, you could use I2C, with multiple masters if needed. For higher data rates you could use a multidrop UART, but that really needs a single master to avoid collisions.

carticket
Jun 28, 2005

white and gold.

movax posted:

Do you have actual experience with multi-master I2C? I swear everyone claims their poo poo will play nice in a multi-master environment, but nobody ever loving validates that. Looking at you Xilinx.

At the time I suggested it, I was under the impression that each module would be developed by poeticoddity and would be a microcontroller network under their control. I wouldn't suggest it if you're mixing and matching micro vendors or adding third party I2C devices to the bus. I basically thought it was the frictionless spherical chickens in a vacuum scenario where everything is controlled and it becomes easy.

carticket
Jun 28, 2005

white and gold.

meatpotato posted:

Going to the msp430 is not backwards, it's just different! Pursue the area that interests you the most, finding a job in either won't be difficult for the foreseeable future. The msp isn't going away soon and neither is embedded Linux on ARM.

This is an unpopular opinion, but I think the MSP is going to start fading from the market if ARM keeps going with the CM0+ type devices. The Arduino has already pushed AVRs into the MSP's place for hobbyists, and those hobbyists will go on to design around what they know. The MSP requires an investment in tools for a limited range, whereas an ARM setup gets you everything from a tiny CM0+ up to CM3/4/7 and A5 parts. I don't think shops that have been using MSPs will phase them out, but I don't think it's that appealing an option for a new design or a startup that needs to purchase tools.

Edit: CM0+ being 32-bit also helps portability across different scales of platform. I inherited some MSP code that needed to be ported into a platform running an ARM9, and ran into issues because the code was written to depend on storing an address in two bytes. Admittedly, that's just bad coding practice, but not having to worry about that is appealing.
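The bug in that inherited code, boiled down (a contrived sketch, not the actual code):

```c
#include <stdint.h>

/* On an MSP430 a data pointer fits in 16 bits, so code that stashed an
 * address in a uint16_t "worked". On a 32-bit ARM9 the same cast
 * silently truncates the address. uintptr_t is the portable way to
 * round-trip a pointer through an integer. */
static int *roundtrip(int *p)
{
    uintptr_t handle = (uintptr_t)p;  /* always wide enough for a pointer */
    /* uint16_t handle = (uint16_t)(uintptr_t)p;   <- the MSP430 habit */
    return (int *)handle;
}
```

The commented-out line is the original sin: it compiles cleanly on both platforms and only corrupts pointers on the wider one.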

carticket fucked around with this message at 08:10 on Jan 12, 2015


carticket
Jun 28, 2005

white and gold.

Temperature. That's pretty much the biggest reason, aside from required clocks like the 32 kHz clock used for the RTC. You can dial in an internal RC oscillator to make it satisfactory for USB (which has tight timing requirements), but unless you calibrate it over temperature, it probably won't function for USB at cold. The new Kinetis low-power parts have a USB clock recovery feature that lets you run USB crystal-less over temperature (or so they claim).
