No Gravitas
Jun 12, 2013

by FactsAreUseless
Here is an interesting question. I'm creating a neat thing in VHDL and I'm stuck in the home stretch.

Imagine that you have a very obscure 8-bit C-friendly microcontroller. You have no idea if the CPU core works correctly. You also have no idea if the C compiler is correct, but the assembler is known to be correct. The crt0 file is known NOT to be correct, but that can be trivially fixed. No C stdlib exists, but (for example) newlib could be used. We can make the assumption that the MCU peripherals are all correct, no need to test those.

How would you go about creating a test program that proves the core to be functioning? Assembly? If so, what patterns of instructions to test? Or maybe C, and risk wasting a lot of time on a crap compiler? What program to use for the testing? Any book recommendations?

Once I'm done with this project I will probably release it as open source, but I don't want to release a crap thing. The thing has at most historical interest, but it was a neat part of the past.

JawnV6
Jul 4, 2004

So hot ...
I'd start from the known good assembler and build up from there. Hand-write or generate assembly, run it on the core, test against ???.

Do you have a simulator? Or a real core you're trying to replicate? It's not clear from your description what the DUT is. I'm guessing you've built something and want to test that your own code is good, but having a "close" compiler and crt0 makes it seem like you might have real hardware you're trying to puzzle out.

As for patterns, have some basic handwritten checkout to make sure it's alive. Then you want to focus on likely buggy areas. Fill up queues, exercise bypasses, cause backpressure, do tricky control flow. If you're doing anything generated, coverage metrics are going to be crucial. I'm much more familiar with the SVTB side of things than VHDL, but there should be some way to easily collect coverage on simulated runs, and you can instrument it yourself if you're running on an FPGA.
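Something like this for the first "is it alive" pass -- shown in C just because it's quick to read, and given your sketchy compiler you'd probably hand-assemble the equivalent. The mailbox address is made up, point it at whatever RAM you can actually inspect:
code:
/* Minimal smoke test: known-answer ALU checks, pass/fail to a mailbox.
   RESULT_ADDR is invented -- adjust to your memory map. */
#define RESULT_ADDR ((volatile unsigned char *)0x0100)

int main(void)
{
    volatile unsigned char a = 0x5A, b = 0x0F;  /* volatile defeats constant folding */
    unsigned char ok = 1;

    ok &= ((unsigned char)(a + b)  == 0x69);    /* add   */
    ok &= ((unsigned char)(a & b)  == 0x0A);    /* and   */
    ok &= ((unsigned char)(a ^ b)  == 0x55);    /* xor   */
    ok &= ((unsigned char)(a << 1) == 0xB4);    /* shift */

    *RESULT_ADDR = ok ? 0xAA : 0xEE;            /* signature you can read back */
    for (;;) ;                                  /* park */
}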

The Verilog Megathread can probably provide more help than this thread, although plenty of us frequently check both.

hendersa
Sep 17, 2006

No Gravitas posted:

Here is an interesting question. I'm creating a neat thing in VHDL and I'm stuck in the home stretch.

Imagine that you have a very obscure 8-bit C-friendly microcontroller. You have no idea if the CPU core works correctly. You also have no idea if the C compiler is correct, but the assembler is known to be correct. The crt0 file is known NOT to be correct, but that can be trivially fixed. No C stdlib exists, but (for example) newlib could be used. We can make the assumption that the MCU peripherals are all correct, no need to test those.

How would you go about creating a test program that proves the core to be functioning? Assembly? If so, what patterns of instructions to test? Or maybe C, and risk wasting a lot of time on a crap compiler? What program to use for the testing? Any book recommendations?

Once I'm done with this project I will probably release it as open source, but I don't want to release a crap thing. The thing has at most historical interest, but it was a neat part of the past.
If you're creating a VHDL core, then you should perform your testing of the core via VHDL simulation and verify that the core is correct prior to moving up to higher-level testing at the assembly level. Basically, you'll set up a testbench that instantiates your core and then loops through a set of hardcoded asm instructions. From the core's point of view, it is fetching instructions from memory. But the testbench doesn't really care about that, since it is simulating the memory accesses and feeding your hardcoded cases directly into the core. The testbench is a VHDL loop construct that iterates through two data structures: the data that you're pushing into the core on each cycle and the results that you expect back. Add cases in the testbench for each opcode (one that exercises the carry, overflow, etc. flags, others for the extra addressing modes, and so on).

I will be the first to admit this takes a long time to set up. But it also serves as a regression test for future modifications. Just make your core changes, rerun the testbench, and see if you broke anything. Once you verify that the opcodes are behaving properly, THEN you can move on to building an assembler that generates correct binaries.
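To make those two data structures concrete, the shape of it looks something like this (sketched in C because it's quicker to type than VHDL; every opcode byte and expected value here is invented, and in the actual testbench you'd typically split this into the two constant arrays the process walks in lockstep):
code:
/* One entry per test case: stimulus in, expected state out. */
struct test_vector {
    unsigned char opcode[3];    /* instruction bytes fed on fetch      */
    unsigned char exp_result;   /* expected destination register value */
    unsigned char exp_flags;    /* expected C/Z/V/N flag bits          */
};

static const struct test_vector vectors[] = {
    /* opcode bytes          result  flags */
    { { 0x12, 0x5A, 0x0F },  0x69,   0x00 },  /* hypothetical ADD, no carry */
    { { 0x12, 0xFF, 0x01 },  0x00,   0x03 },  /* hypothetical ADD, C and Z  */
    /* ... one case per opcode, addressing mode, and flag corner ... */
};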

JawnV6
Jul 4, 2004

So hot ...
He has a known good assembler. And writing something that can validate a core at the pin-IO level is equivalent to writing a full simulator, which he hasn't said he has, and building one would duplicate his VHDL effort in another language. Much easier to run a few ASM snippets through and check for correctness.

Delta-Wye
Sep 29, 2005

hendersa posted:

If you're creating a VHDL core, then you should perform your testing of the core via VHDL simulation and verify that the core is correct prior to moving up to higher-level testing at the assembly level. Basically, you'll set up a testbench that instantiates your core and then loops through a set of hardcoded asm instructions. From the core's point of view, it is fetching instructions from memory. But the testbench doesn't really care about that, since it is simulating the memory accesses and feeding your hardcoded cases directly into the core. The testbench is a VHDL loop construct that iterates through two data structures: the data that you're pushing into the core on each cycle and the results that you expect back. Add cases in the testbench for each opcode (one that exercises the carry, overflow, etc. flags, others for the extra addressing modes, and so on).

I will be the first to admit this takes a long time to set up. But it also serves as a regression test for future modifications. Just make your core changes, rerun the testbench, and see if you broke anything. Once you verify that the opcodes are behaving properly, THEN you can move on to building an assembler that generates correct binaries.

I like doing this, but I use SystemVerilog instead. gently caress doing file access in VHDL, and I haven't used a modern compiler that can't handle mixed-mode projects.

No Gravitas
Jun 12, 2013

by FactsAreUseless
I don't have a simulator, but I might be able to get one. It won't be easy, which is why I haven't done it yet. I don't have, and probably cannot ever obtain, the core I'm replicating. Most likely I will have nothing to compare against, though. For now it is safe to assume that I have the documentation and that is it.

The DUT is a CPU I wrote in VHDL that I'm running on my crappy FPGA. I acquired a compiler, but it is the worst gcc port I have ever seen. Nothing is done in a standard way. crt0 is outright wrong and has no right to work. I had to do some hacking and make some decisions to get it working. I have zero gcc porting experience, so I'm not at all sure anything works after my changes. I mean, it does not ICE and the generated code looks reasonable, but I'm just not sure. The broken crt0 does not inspire confidence that C was used much either.

I have started writing out some hardcoded testbench patterns by hand and I got frustrated with how ad-hoc it all felt. I'm not sure how to do coverage given the VHDL tools that can be had for free, which is my budget. I could instrument the source code by adding coverage metrics by hand, but that just feels dirty... At any rate, I do have testbench regression testing in, it has saved me dozens of times already. It just... so... tedious... losing... power... ...so... ... ad... ....hoc... :effort:

Thanks for the help, I appreciate it.

JawnV6
Jul 4, 2004

So hot ...
gcc is a mess, on purpose. LLVM is fantastic and gives you plenty of hooks to see what it's doing. Starting over with LLVM may well save you time in the end despite having a "close" gcc starting point.

For quick and dirty coverage, just have a macro that you can slap on a signal that routes any hit to some location in memory. The hardest part is probably going to be getting the results back over to the host PC. I'm handwaving here since I'm not familiar with VHDL and what it offers for something like this, but you get the idea? You want to focus coverage on corner cases like bypasses where any activation of that signal means the test was 'good' and exercised a potentially buggy area.
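For the readback half, I'm imagining the software side looks something like this -- COV_BASE, COV_BYTES, and uart_putc() are all stand-ins for whatever your design actually provides:
code:
/* Dump a memory-mapped coverage bitmap over the serial port as hex.
   Assumes the HDL macro ORs each cover point into a bit of this region. */
#define COV_BASE  ((volatile unsigned char *)0x0200)  /* invented address */
#define COV_BYTES 8                                   /* 64 cover points  */

extern void uart_putc(char c);                        /* assumed to exist */

void dump_coverage(void)
{
    static const char hex[] = "0123456789ABCDEF";
    for (int i = 0; i < COV_BYTES; i++) {
        unsigned char v = COV_BASE[i];
        uart_putc(hex[v >> 4]);                       /* high nibble */
        uart_putc(hex[v & 0x0F]);                     /* low nibble  */
    }
    uart_putc('\n');
}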

What's your end goal? Matching the documented behavior? Running old programs written for the old core?

hendersa
Sep 17, 2006

JawnV6 posted:

He has a known good assembler. And writing something that can validate a core at the pin-IO level is equivalent to writing a full simulator, which he hasn't said he has, and building one would duplicate his VHDL effort in another language. Much easier to run a few ASM snippets through and check for correctness.
Well, what I suggested was this:

JawnV6 posted:

I'd start from the known good assembler and build up from there. Hand-write or generate assembly, run it on the core, test against ???.
... except that I filled in that "???" with "VHDL simulation via testbench". All I did was fill in some more details on using VHDL simulation for the process. If he would like to run pieces of asm through his core, how will he be able to verify correctness accurately? I'm not saying that he couldn't run a few samples of asm through and get a feel for how the core performs, but for a full validation of core correctness, the VHDL simulation testbench is the way to go. I agree with you that it is a lot of work. But it does have the benefit of allowing him to record the register and flag states during simulation and then use a waveform viewer to watch the internals of the core, evaluating internal side effects like setting flags and the like. I've just been bitten too many times by subtle corner cases; I'd rather put 20 hours of validation testbench into a core than spend 200 hours swearing at higher-level systems (more complex, with additional components) that misbehave at random points.

No Gravitas posted:

At any rate, I do have testbench regression testing in, it has saved me dozens of times already. It just... so... tedious... losing... power... ...so... ... ad... ....hoc... :effort:
Trust me... I can sympathize. On the bright side, if you release your project as open source, everyone will love you for the extra effort. So that's good, right? I've hardcoded testbenches in VHDL and used ghdl to simulate and GTKWave to view the waveforms for the bargain basement price of $0. Good for learning the fundamentals, but much, much more effort than using professional tools.

No Gravitas
Jun 12, 2013

by FactsAreUseless

JawnV6 posted:

gcc is a mess, on purpose. LLVM is fantastic and gives you plenty of hooks to see what it's doing. Starting over with LLVM may well save you time in the end despite having a "close" gcc starting point.

For quick and dirty coverage, just have a macro that you can slap on a signal that routes any hit to some location in memory. The hardest part is probably going to be getting the results back over to the host PC. I'm handwaving here since I'm not familiar with VHDL and what it offers for something like this, but you get the idea? You want to focus coverage on corner cases like bypasses where any activation of that signal means the test was 'good' and exercised a potentially buggy area.

What's your end goal? Matching the documented behavior? Running old programs written for the old core?

How well does LLVM handle 8-bit CPUs with some registers used for indexing memories? In the end I'd probably be OK without having C, I just thought it might be a shortcut to doing validation.

My end goal is having fun with the finished core. I aim for binary compatibility. This thing is probably the last one of this type of core ever designed from scratch without being a derivative of something. The ISA has a lot of things that make me salivate. Very elegant. Unless you try to implement it, that is. It has more bypasses than a hospital full of bacon-addicted cardiac patients...

Lots of sequels in the MCU world right now, so this is nice and refreshing.

hendersa posted:

On the bright side, if you release your project as open source, everyone will love you for the extra effort.

If I ever make it to the end. But yeah, I do want to share the goods. I thought about selling it, but no one wants to buy an 8-bit CPU core made by someone self-taught in the age of ARM... so I might as well share. I could use the money, but it ain't gonna happen.

JawnV6
Jul 4, 2004

So hot ...

hendersa posted:

Well, what I suggested was this:

... except that I filled in that "???" with "VHDL simulation via testbench".
Because he hadn't specified. Now he has. He doesn't have a simulator. He doesn't have a real core to test against. He has documentation. Which means that "using simulation" amounts to writing a known-good simulator in VHDL or another language, and given that he's already done that once, I don't think the best way to test the first system is to build a second. It doesn't strike me as being "a lot of work"; it strikes me as untenably optimistic and impossible in practice.

Best case, he writes the same bugs into the simulator and the duplication of effort is entirely wasted. I'm amazed that you're still pushing this "method" despite his end goal being left unstated and knowing he doesn't have anything to easily compare against.

JawnV6
Jul 4, 2004

So hot ...

No Gravitas posted:

How well does LLVM handle 8-bit CPUs with some registers used for indexing memories? In the end I'd probably be OK without having C, I just thought it might be a shortcut to doing validation.
You're still going to be writing the layer between TAC and assembler. It's just that LLVM will expose nice handles for you to grab hold of and work with while gcc explicitly obfuscates this area.

No Gravitas posted:

My end goal is having fun with the finished core. I aim for binary compatibility. This thing is probably the last one of this type of core ever designed from scratch without being a derivative of something. The ISA has a lot of things that make me salivate. Very elegant. Unless you try to implement it, that is. It has more bypasses than a hospital full of bacon-addicted cardiac patients...
Where do you expect the bugs to be? How far are you from running an old binary through?

Coverage should be your starting point. If there are a lot of bypasses, watch them and make sure they're exercised. Write a test per bypass if you have to, though I really think randomly-generated assembler is the easiest path to a high volume of running code. I'm kinda guessing this is an old game system and the old binaries are games?

JawnV6
Jul 4, 2004

So hot ...
edit: welp, it DID get posted the first time

No Gravitas
Jun 12, 2013

by FactsAreUseless

JawnV6 posted:

Where do you expect the bugs to be? How far are you from running an old binary through?

Coverage should be your starting point. If there are a lot of bypasses, watch them and make sure they're exercised. Write a test per bypass if you have to, though I really think randomly-generated assembler is the easiest path to a high volume of running code. I'm kinda guessing this is an old game system and the old binaries are games?

I expect the bugs to be in the bypasses and in the address generation unit. I don't have any old binaries. I don't think any exist to speak of. The thing was not popular when it came out, let alone now. But I am pretty close to completion. I ran some trivial code on the gate array and it ran just fine.

To write:
  • Multiplier
  • Overflow flag
  • Interrupts
  • Conditional moves
  • TEST/CMP instructions
  • Clock speed control instructions and halting

To test:
  • The shifter.
  • More testing on jumps/calls.
  • AGU writeback. Software stack writeback.

Basically, most of the thing is there. I'm about a month of concentrated effort from finishing.

And no, this isn't an old game system. It is just a workhorse MCU. You know, the run-a-washing-machine kind of device? Yeah, like that.

hendersa
Sep 17, 2006

JawnV6 posted:

Best case, he writes the same bugs into the simulator and the duplication of effort is entirely wasted. I'm amazed that you're still pushing this "method" despite his end goal being left unstated and knowing he doesn't have anything to easily compare against.
Sorry. I didn't mean to get you all riled up. I was basing my suggestion on two things that were originally stated:

No Gravitas posted:

You have no idea if the CPU core works correctly.
... and ...

No Gravitas posted:

How would you go about creating a test program that proves the core to be functioning?
I only suggested that running a VHDL testbench on the core, to prove that each opcode functions correctly before moving to higher-level testing, would be a good approach. I didn't suggest that he write a simulator, but that he use a series of test cases in his testbench to exercise each operation. The results would be compared against the original spec for the microcontroller. I'm not suggesting that he use a VHDL testbench for higher-level testing. If he doesn't need that level of verification, or if the testbench he has already written is adequate for his purposes, then he is already familiar with the spirit of my advice and is past the point where my comment was useful.

Slanderer
May 6, 2007
All this talk is making me regret having to drop digital design back in school....

Delta-Wye
Sep 29, 2005

No Gravitas posted:

Basically, most of the thing is there. I'm about a month of concentrated effort from finishing.

And no, this isn't an old game system. It is just a workhorse MCU. You know, the run-a-washing-machine kind of device? Yeah, like that.

You're blueballing me man :( I'm hoping this is some awesome eccentric PLC controller or the like.

JawnV6
Jul 4, 2004

So hot ...

No Gravitas posted:

I expect the bugs to be in the bypasses and in the address generation unit. I don't have any old binaries. I don't think any exist to speak of. The thing was not popular when it came out, let alone now. But I am pretty close to completion. I ran some trivial code on the gate array and it ran just fine.
I'd start by writing some snippets of code targeting different sections of memory and execution units, then a randomizer to throw different flow control around them before submitting the whole batch to the core and checking memory afterwards.

Like have a 5-instruction sequence that grabs a value from memory, multiplies it, then stores it back. Toss that chunk and a few others that focus on cmov, overflow, clock modification, etc. at the core in a few configurations with different branch types between them. Use coverage to figure out what's getting hit. Write more tests or adjust the randomizer to hit areas that the first random regression didn't hit. Pull some of the test cases that show good coverage back into simulation so you can get a full trace and make sure that your coverage points actually imply interesting behavior is happening on the FPGA.
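As a sketch of what I mean by the randomizer -- host-side C, emitting assembly text with placeholder mnemonics you'd swap for the real ISA's:
code:
/* Stitch hand-written snippets together with random branch glue and
   print one test program to stdout.  Forward-only jumps so the test
   always terminates; a real generator would do smarter flow control. */
#include <stdio.h>
#include <stdlib.h>

static const char *chunks[] = {
    "  ld   r0, (data0)\n  mul  r0, r1\n  st   (data0), r0\n", /* load/mul/store */
    "  cmp  r2, r3\n  cmov r4, r5\n",                          /* cmov corner    */
    "  add  r6, r7\n  adc  r6, r7\n",                          /* carry chain    */
};

int main(void)
{
    const int nchunks = (int)(sizeof chunks / sizeof chunks[0]);
    srand(1234);                        /* fixed seed => reproducible run */
    puts("; generated test program");
    for (int i = 0; i < 20; i++) {
        printf("label_%d:\n%s", i, chunks[rand() % nchunks]);
        if (rand() % 2)                 /* taken branch vs fall-through */
            printf("  jz   label_%d\n", i + 1);
    }
    printf("label_20:\n  halt\n");
    return 0;
}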

hendersa posted:

I only suggested that running a VHDL testbench on the core, to prove that each opcode functions correctly before moving to higher-level testing, would be a good approach.
Except it's not. Getting a pin-accurate model of a pipelined CPU is tantamount to writing a full simulator, and writing a test for each opcode separate from the bypasses isn't going to catch anything but the most trivial errors. There's no good way to do what you're saying, no matter which stupid thing it turns out you're saying.

hendersa posted:

If he doesn't need that level of verification, or if the testbench he has already written is adequate for his purposes, then he is already familiar with the spirit of my advice and is past the point where my comment was useful.
Good, we agree that your advice was vacuous at best.

No Gravitas
Jun 12, 2013

by FactsAreUseless

Delta-Wye posted:

You're blueballing me man :( I'm hoping this is some awesome eccentric PLC controller or the like.

It isn't anything great. Really. There is zero reason to use this CPU.

The people who designed it came up with a great ISA, but it really does not work well with gate arrays. This thing won't run fast on an FPGA. Unless you fab it, it won't run low-power either. I'm building it optimized for 4-LUT gate arrays, so good luck fabbing it. Despite my VHDL being pretty darn good, the core runs at about 50 MIPS, and that is without ROM and data memory setup times accounted for. We are talking 25 MIPS tops once the dust settles, if not less. Maybe more on a better FPGA, but I just have the Nexys2 to run stuff on.

I wish I could say more, but I'm worried it will be Chrono Resurrection all over again. I only took the public documentation and made a circuit that works. Even though I don't think I'm doing anything wrong, I'm paranoid. :can:

To change the topic a bit: Have any of you guys ever used a tiny compression algorithm for the data in a program? I have seen some cute decompressors that fit in under 150 bytes of ROM, but I just cannot decide which is a good candidate... Any hints?

isr
Jun 13, 2013
What type of data are you interested in compressing? What's the end goal? Saving NVM/RAM/EEPROM/communication time?

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.
I've personally had some good results with LZO and its variants. Never in an embedded scenario, though.

evensevenone
May 12, 2001
Glass is a solid.
What kind of strategies do people use for sticking things like serial numbers in a binary? I'm using gcc, which produces ELFs. I had assumed that if I just made a global variable, it would be easy to find a tool that would let me edit the .elf file in some sort of scriptable way, but amazingly there doesn't seem to be anything.

The only thing I've found so far is making a .hex file and using the symbol table to figure out what offset to edit.

I also tried writing my own tool using libelf, but for some reason the ELF file I get is half the size of the original, and the libelf docs are pretty bad.

chippy
Aug 16, 2006

OK I DON'T GET IT
edit: Sorry, thought this was the electronics thread.

Pan Et Circenses
Nov 2, 2009

evensevenone posted:

What kind of strategies do people use for sticking things like serial numbers in a binary? I'm using gcc, which produces ELFs. I had assumed that if I just made a global variable, it would be easy to find a tool that would let me edit the .elf file in some sort of scriptable way, but amazingly there doesn't seem to be anything.

The only thing I've found so far is making a .hex file and using the symbol table to figure out what offset to edit.

I also tried writing my own tool using libelf, but for some reason the ELF file I get is half the size of the original, and the libelf docs are pretty bad.

Just discovered this thread (I love and work in embedded systems), so sorry if this is too late to be useful, but just use a constant with a known value and search/replace. No need to putz around with ELF files; if you declare a "static const char[32]..." somewhere, it doesn't really matter what's actually in there when the program runs, as long as it's valid data. In other words, try the following:
code:
#include <stdio.h>

static const char serial[] = "SERIAL";

int main (int argc, const char* argv[])
{
    printf("%s\n", serial);
}
Prints out "SERIAL" when run obviously. Then just use sed to replace SERIAL with whatever you actually want the value to be, padding unused space with nulls:

sed -i "s/SERIAL/v1.1\x0\x0/g" mybinary

Run it again and it prints "v1.1". Use a simple bash script or whatever to customize the process to your heart's content. Just make sure you have a string that's sure to be unique in your executable file. Did I get the question right I hope?

Rescue Toaster
Mar 13, 2003
Are you sure you want to encode serial numbers into the actual program memory rather than a separate EEPROM or something?


As an aside, the PICKit 3 is really driving me nuts. What a piece of crap.

It works OK if I plug it in AFTER the circuit is powered, but if I power-cycle the circuit later while the pickit is connected, the pickit stops talking to my PC (not just pickit can't talk to chip, but MPLAB X can't talk to the pickit) until I unplug & reconnect the USB cable. Totally crazy.

The circuit is totally floating so there's no weird ground bounce or loop (it has a low inter-winding capacitance transformer, even). Windows doesn't see the USB device disconnect or anything, the pickit just stops responding until I cycle it. That's going to make it almost impossible to use it for real in-circuit debugging.

Pan Et Circenses
Nov 2, 2009
Maybe a transient voltage spike or something like that during the power cycle is knocking the PICKit out? Do you have access to a scope you can hook the circuit->PICKit connections up to, so you can watch for naughty spikes or other weirdness on power-up? Or maybe add some protection diodes to those signals? That's the only thing I can guess off the top of my head.

Edit: Now that I think, that's the sort of behavior I usually see when I connect a USB thing's ground to not-ground (which I do embarrassingly often with my USB scope).


movax
Aug 30, 2008

Rescue Toaster posted:

Are you sure you want to encode serial numbers into the actual program memory rather than a separate EEPROM or something?


As an aside, the PICKit 3 is really driving me nuts. What a piece of crap.

It works OK if I plug it in AFTER the circuit is powered, but if I power-cycle the circuit later while the pickit is connected, the pickit stops talking to my PC (not just pickit can't talk to chip, but MPLAB X can't talk to the pickit) until I unplug & reconnect the USB cable. Totally crazy.

The circuit is totally floating so there's no weird ground bounce or loop (it has a low inter-winding capacitance transformer, even). Windows doesn't see the USB device disconnect or anything, the pickit just stops responding until I cycle it. That's going to make it almost impossible to use it for real in-circuit debugging.

Sounds like a grounding issue; post the ICSP part of your circuit if you can. I am one of the thread's resident PIC fanboys.

Rescue Toaster
Mar 13, 2003
I added a 20 gauge solid copper wire from target ground to my PC ground. That didn't solve it. The circuit's power supply is a pretty standard linear supply. The power transformer is a split bobbin so the primary<->secondary capacitance is really low. There shouldn't be much if any ground bounce/spike when switching on or off.

The circuit on the board is about as simple as possible. MCLR has a 10K pullup and goes to pin 1, pin 2 is 3.3v, pin 3 is ground, 4 & 5 go to PGED3 and PGEC3. There is no other circuitry on MCLR or the IO lines at all. Target ground is tied to PC system ground with the jumper I just added, and 3.3v goes to the output of a 7833 linear regulator. That's really all there is at this point.

The PICKit 3 stops responding as soon as I shut the supply off. The supply has quite a bit of capacitance on the bridge, so if the PICKit is trying to back-feed at all there's an awful lot of capacitance on the other side of the regulator, which is not reverse-current protected. It's set to target power only but it could be conducting through the IO lines, I suppose.


evensevenone
May 12, 2001
Glass is a solid.

Pan Et Circenses posted:

Run it again and it prints "v1.1". Use a simple bash script or whatever to customize the process to your heart's content. Just make sure you have a string that's sure to be unique in your executable file. Did I get the question right I hope?

Yeah, I thought about doing that but it seemed a little dicey. Instead I made a linker section, put the variable in that section (something gcc does support), and then made a script that gets the offset of that section in the binary and changes that location. That also let me put the data in a specific location in flash, which might be handy for debugging.
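For anyone curious, the C half of that boils down to something like this (the section name and the patch commands are just how I'd sketch it, not verbatim from my script):
code:
/* Pin the serial number into its own linker section so a script can
   find it by section offset and patch it in place. */
__attribute__((section(".serialno"), used))
static const char serial[32] = "UNSET";

/*
 * Then at "manufacturing time", roughly:
 *   objdump -h firmware.elf    # read the .serialno "File off" column
 *   dd if=serial.bin of=firmware.elf bs=1 seek=<offset> conv=notrunc
 */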

Rescue Toaster
Mar 13, 2003

movax posted:

Sounds like a grounding issue; post the ICSP part of your circuit if you can. I am one of the thread's resident PIC fanboys.

Yes it was grounding. I'm still not sure why, but there you go.

A PIC-specific question (dsPIC33, actually). Absolutely everywhere online, people say that a normal instruction takes 4 clock cycles. However, in the oscillator section of the reference manual, Microchip very clearly shows that Fp is 1/2 of Fosc, and even has a diagram extremely clearly showing that one instruction is fetched and one is executed every cycle of Fp, and thus every 2 cycles of Fosc. Most basic instructions only take 1 cycle (two, technically, with the two-stage pipeline).

I would just say that people are dumb and don't understand pipelines, but when using any of the delay functions, you actually do have to use the 1/4 factor apparently... or is that just by convention? For instance, I have a 50MHz external oscillator with no dividers or PLL active, so I would expect Fp to operate at 25MHz, which means with a two-stage pipeline the processor actually executes 25 million instruction cycles per second. Yet when using any of the delay functions I've found (INCLUDING Microchip's own) I have to put in 12500000 as the clock frequency if I want the time to be right. Quite bizarre.

This guy seems to confirm that it works the way I think it should on the dsPIC: http://ctjhai.wordpress.com/2008/11/02/duration-of-an-instruction-cycle/ Seems like it's just history/convention/tradition that people assume each instruction takes 4 cycles, so they work that into the math for the delay routines, and you have to correct for it.


isr
Jun 13, 2013

Rescue Toaster posted:

Yes it was grounding. I'm still not sure why, but there you go.

A PIC-specific question (dsPIC33, actually). Absolutely everywhere online, people say that a normal instruction takes 4 clock cycles. However, in the oscillator section of the reference manual, Microchip very clearly shows that Fp is 1/2 of Fosc, and even has a diagram extremely clearly showing that one instruction is fetched and one is executed every cycle of Fp, and thus every 2 cycles of Fosc. Most basic instructions only take 1 cycle (two, technically, with the two-stage pipeline).

I would just say that people are dumb and don't understand pipelines, but when using any of the delay functions, you actually do have to use the 1/4 factor apparently... or is that just by convention? For instance, I have a 50MHz external oscillator with no dividers or PLL active, so I would expect Fp to operate at 25MHz, which means with a two-stage pipeline the processor actually executes 25 million instruction cycles per second. Yet when using any of the delay functions I've found (INCLUDING Microchip's own) I have to put in 12500000 as the clock frequency if I want the time to be right. Quite bizarre.

This guy seems to confirm that it works the way I think it should on the dsPIC: http://ctjhai.wordpress.com/2008/11/02/duration-of-an-instruction-cycle/ Seems like it's just history/convention/tradition that people assume each instruction takes 4 cycles, so they work that into the math for the delay routines, and you have to correct for it.

Okay, so on 8-bit PICs (PIC10/12/16/18) the instruction cycle is 4 oscillator cycles, FCY = 1/4 FOSC. For 16-bit parts (PIC24/dsPIC), FCY = 1/2 FOSC. So if you have a dsPIC33 with a 4MHz xtal and turn on the 8x PLL, FOSC will be 32MHz and FCY will be 16MHz.

As for Microchip's delay functions, they may be programmed with the 8-bit parts in mind. If only FOSC is specified (which is common for Microchip stuff), they might not know whether to divide by 2 or by 4.
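If you're on Microchip's C30/XC16 toolchain, the usual idiom is to tell libpic30 the real FCY yourself instead of trusting anyone else's divide-by-4 math. For your 50MHz-oscillator case that would look like:
code:
/* FCY = Fosc / 2 on PIC24/dsPIC: 50MHz / 2 = 25MHz instruction clock. */
#define FCY 25000000UL      /* must be defined before the include */
#include <libpic30.h>

void wait_a_bit(void)
{
    __delay_ms(100);        /* scaled from FCY -- no 1/4 fudge needed */
}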

Not directly related to your question (which I'm not sure I answered completely), but another important thing to note is that nearly every instruction is 1 cycle, except for gotos, branches, and calls, which are 2 cycles if taken.

So consider this implementation of strlen:
code:
_strlen:
; w0 = string pointer
; w1 = return value pointer
  push w0                      ; save the start address
Lstrlen_check_for_null:
  cp0.b [w0++]                 ; test byte, post-increment pointer (1 cyc)
  bra z, Lstrlen_end           ; 1 cyc not taken, 2 cyc taken
  bra Lstrlen_check_for_null   ; 2 cyc
Lstrlen_end:
  pop w2                       ; w2 = start address
  sub w0, w2, w0               ; bytes scanned, including the null
  dec w0, w0                   ; drop the terminator from the count
  mov w0, [w1]                 ; store the length
  return
This is pretty good as it is, but if you repeat the cp0 and bra z pair some number of times (unrolling the loop), you'll use more program memory but execute far fewer 2-cycle loop branches.


Phobeste
Apr 9, 2006

never, like, count out Touchdown Tom, man
Has anybody here ever used an AVR XMEGA A series? Specifically, as an SPI slave, and specifically specifically as an SPI slave using DMA so you don't have to interrupt every byte? Interrupting every byte sucks. But the problem is the SPI timing isn't guaranteed, so there has to be a full DMA transaction per byte, a single-shot burst transfer. And the DMAC disables a channel at the end of its transaction. So it seems like I'd have to interrupt every byte anyway to turn the DMA back on.

Is there a way to get around this that anybody here knows?

isr
Jun 13, 2013

Phobeste posted:

Has anybody here ever used an AVR XMEGA A series? Specifically, as an SPI slave, and specifically specifically as an SPI slave using DMA so you don't have to interrupt every byte? Interrupting every byte sucks. But the problem is the SPI timing isn't guaranteed, so there has to be a full DMA transaction per byte, a single-shot burst transfer. And the DMAC disables a channel at the end of its transaction. So it seems like I'd have to interrupt every byte anyway to turn the DMA back on.

Is there a way to get around this that anybody here knows?

The REPCNT register and the REPEAT bit in the DMA controller will let you repeat the DMA transaction in several different modes; check out section 5.4 in this PDF: http://www.atmel.com/Images/doc8077.pdf
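Roughly like this, though I'm going from memory of the avr-libc header names, so double-check everything against the doc8077 register descriptions (address setup and the rest of the channel config omitted):
code:
#include <avr/io.h>

/* REPCNT = 0 with the REPEAT bit set should mean "repeat forever",
   i.e. the channel re-arms itself after each one-byte block. */
static void spi_dma_setup(void)
{
    DMA.CH0.REPCNT  = 0;                       /* 0 = unlimited repeats  */
    DMA.CH0.TRFCNT  = 1;                       /* one byte per block     */
    DMA.CH0.TRIGSRC = DMA_CH_TRIGSRC_SPIC_gc;  /* SPI on port C triggers */
    DMA.CH0.CTRLA   = DMA_CH_ENABLE_bm | DMA_CH_REPEAT_bm
                    | DMA_CH_SINGLE_bm | DMA_CH_BURSTLEN_1BYTE_gc;
}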

Edit: I just read the bit about your SPI timing not being guaranteed. Do you mean that the individual bytes of some data structure are arriving in separate SPI transactions?


yippee cahier
Mar 28, 2005

Hey guys, glad I found the thread. I'm using Atmel's SAM3 & SAM4 chips at work right now and it'll be great to have a resource to figure out what's going on in the real world. Haven't touched microcontrollers since school so I'm still getting up to speed.

Anyways, I'm using Atmel Studio as an IDE for now, but others use Eclipse on the same project, so I can't get too locked into one particular IDE. I've cooked up a makefile to pull in a bunch of ASF to replace the half-baked drivers and libraries that were being used before, and it's already a lot better. On the other hand, I've also already noticed an issue mentioned in the thread when porting over -- no timeouts in the I2C library! Has anyone tried filing improvement requests with Atmel? How responsive are they? Are there any projects out there that aim to provide proper platform-independent midlevel peripheral libraries that I could port to the register interface of my chip, or should I just be patching Atmel's and calling it a day? I see all the other interesting platforms out there and worry about getting too deep into Atmel's stuff...

EpicCodeMonkey
Feb 19, 2011

quote:

Has anyone tried filing improvement requests with Atmel? How responsive are they?

It's varied a bit recently due to internal resource restructuring, but the SAM3* and SAM4* have a heavy focus right now so you're much more likely to get bugs assigned and fixed for those devices. File your bugs here:

http://asf.atmel.com/bugzilla/

Slanderer
May 6, 2007

sund posted:

Hey guys, glad I found the thread. I'm using Atmel's SAM3 & SAM4 chips at work right now and it'll be great to have a resource to figure out what's going on in the real world. Haven't touched microcontrollers since school so I'm still getting up to speed.

Anyways, I'm using Atmel Studio as an IDE for now, but others use Eclipse on the same project, so I can't get too locked into one particular IDE. I've cooked up a makefile to pull in a bunch of ASF to replace the half-baked drivers and libraries that were being used before, and it's already a lot better. On the other hand, I've also already noticed an issue mentioned in the thread when porting over -- no timeouts in the I2C library! Has anyone tried filing improvement requests with Atmel? How responsive are they? Are there any projects out there that aim to provide proper platform-independent midlevel peripheral libraries that I could port to the register interface of my chip, or should I just be patching Atmel's and calling it a day? I see all the other interesting platforms out there and worry about getting too deep into Atmel's stuff...

I think part of the reason for the lack of timeouts is that I2C doesn't actually specify that feature, unlike SMBus. I think they expect you to either implement it yourself or use the watchdog.

isr
Jun 13, 2013
Take this with a grain of salt, because I'm much more familiar with Microchip libraries than Atmel's.

Don't wait for them to patch the library, because you'll be waiting a very long time. Just go ahead and patch the library yourself. For I2C, I'd recommend polling the clock line. You don't necessarily need to poll at twice the Nyquist rate; you can poll at a much lower rate if you set the timeout higher. So, for example, if your bus rate is 400kHz you could poll at 1kHz. If you read the I2C clock as low for, say, 1-2 seconds, you can be reasonably sure that something has caused the bus to hang, at which point you can rectify the situation however you want.
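A rough sketch of the idea, with read_scl() and i2c_bus_reset() standing in for whatever your part actually provides:
code:
/* Call from a slow (say 1kHz) timer tick; declare the bus hung after
   N consecutive low samples and kick off recovery. */
#define SCL_STUCK_TICKS 1000u        /* 1000 ticks @ 1kHz = 1 second */

extern int  read_scl(void);          /* SCL pin level (assumed helper)    */
extern void i2c_bus_reset(void);     /* e.g. clock out 9 pulses (assumed) */

static unsigned int scl_low_ticks;

void i2c_watchdog_tick(void)
{
    if (read_scl() == 0) {
        if (++scl_low_ticks >= SCL_STUCK_TICKS) {
            i2c_bus_reset();         /* bus hung -- recover */
            scl_low_ticks = 0;
        }
    } else {
        scl_low_ticks = 0;           /* bus alive, restart the count */
    }
}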


mfny
Aug 17, 2008
I have decided to do something I've been threatening to do for years and get into this embedded thing. The main thing I am struggling to decide on is my "foot in the door" hardware, as it were, i.e. a dev kit, evaluation board, that sort of thing. However, there are so many of them that it's hard to filter the crap from the good. So I'll just list what my googling/reading/research so far points me to as what I'm looking for:

Inexpensive

"Plug in and Play" programming via USB and a simple clean easy to set up and use development environment available. Want to spend time learning rather then getting the tools to work. I also think I should be looking at something with a bit more "kick" to it then Arduino as well, though not something that will leave me totaly overwhelmed, a "middle ground" so to speak.

I am pretty much a blank slate/newbie... I did dabble a little in 68000 ASM years ago on the Mac (or Macintosh rather, as that's how long ago it was), and that's about the most I've ever done in terms of programming. It didn't really go anywhere at all, mainly due to it being a rather niche thing indeed.

ante
Apr 9, 2005

SUNSHINE AND RAINBOWS
Go get yourself a $30 PicKit 3 and MPLAB. My preference is for the original MPLAB, rather than MPLAB X, but they're both kinda buggy pieces of poo poo. That's pretty much par for the course.

PICs will take you the full gamut from "pretty easy" to "industry standard" solution.


There will almost definitely be points right at the beginning where you'll feel overwhelmed, but just take a break when that happens and come back to it. Or ask here. I like answering easy questions because then I'm like 90% sure I have the right answer and won't look dumb

isr
Jun 13, 2013

ante posted:

Go get yourself a $30 PicKit 3 and MPLAB. My preference is for the original MPLAB, rather than MPLAB X, but they're both kinda buggy pieces of poo poo. That's pretty much par for the course.

PICs will take you the full gamut from "pretty easy" to "industry standard" solution.


There will almost definitely be points right at the beginning where you'll feel overwhelmed, but just take a break when that happens and come back to it. Or ask here. I like answering easy questions because then I'm like 90% sure I have the right answer and won't look dumb

I am with this poster. Also, check out the Microstick II: it comes with a few 16-bit PIC24 MCUs and a 32-bit PIC32, all in DIP packages. The Microstick II is basically a PicKit 3, so if you want to program a non-DIP part, that's also possible. MPLAB X sucks in Windows, but it flies in Linux. Some other things to check out are TI's MSP430; Atmel is popular too (they can't ship parts, but if you're a hobbyist this doesn't matter); and there are a load of ARM things around. I mainly work with Microchip PIC24 parts. I have a Real ICE, a PicKit 2, a PicKit 3, an ICD3, and a Microstick II. I mainly use the Microstick II.

If you want to have a good time doing embedded development, you'll need a few other things. At minimum, get one of these logic analyzers: http://www.saleae.com/logic/ . The 8-channel logic analyzer will let you view digital signal waveforms and decode communications protocols. The most frustrating thing about learning embedded dev is that you have to get so many things working in a system before you can tell if anything at all is working. The logic analyzer will help you see if gently caress all is happening.

The Saleae is $150. There are <$25 clones that will work with the Saleae software. The software basically bootloads itself onto the hardware, so if you are really on a budget get the clone; otherwise, please support Saleae, because the product is worth way more than $150.

So the Microstick II + Saleae comes to $180, and you'll basically be set for a very long time.

After that, get a scope. Any scope will do. It doesn't have to be fast, pretty, or calibrated. It just has to be good enough to get a sense of rise times and fall times, and to view signals in real time.

Also, I can't recommend Total Phase tools enough. I have the Beagle 480 USB analyzer, the Beagle SPI/I2C analyzer, and two of the Aardvark SPI/I2C host adapters. The software is first-class, and they make a really easy-to-use API so you can control the tools programmatically.

Those things make up my embedded dev toolbox. Depending on what you want to do you will probably choose different or additional tools, but figuring out what tools you need is a very large part of getting done what you want to get done without hating life, getting discouraged, and quitting.


mfny
Aug 17, 2008

ante posted:

Go get yourself a $30 PicKit 3 and MPLAB. My preference is for the original MPLAB, rather than MPLAB X, but they're both kinda buggy pieces of poo poo. That's pretty much par for the course.

PICs will take you the full gamut from "pretty easy" to "industry standard" solution.


There will almost definitely be points right at the beginning where you'll feel overwhelmed, but just take a break when that happens and come back to it. Or ask here. I like answering easy questions because then I'm like 90% sure I have the right answer and won't look dumb

The PICkit 3 is a programmer only? Not really the kind of hardware I was looking for, I don't think...

Something along the lines of a launchpad/eval board was what I had in mind, i.e. MSP430 LaunchPads/eval boards from TI perhaps?
