movax
Aug 30, 2008

also, every development board ever:

movax
Aug 30, 2008

so i think i've got all the photodiode / other poo poo figured out 'enough' to know that poo poo is definitely possible, plus what all the signals do

now i'm thinking about the caching / memory scheme though for loading an ISO -- i can certainly run a fast enough clock to generate EFM/EFM+ encoded data on the fly, but like -- i can't put 16GB of RAM on this thing to hold an entire image in there. based on the motor and sled signals, i can translate those into a position, so i'd need some kind of predictive guesswork to figure out where the sled is slewing to and pre-load that chunk of data

how did console programmers usually do this poo poo from a sw pov? the toc is mastered onto the disc after authoring -- did they write their own low-level cd-rom code?

most of the lag in seeking in an optical drive comes from physically moving the laser head and then re-focusing; i have no way of knowing the target destination, so i have to simulate that, and then i have as long as the re-focus takes before i'm slower than the real thing.

e: now i wonder what like daemon tools and virtual iso drives do -- just some file handles and the underlying os handles digging up the required data? at least with that scheme you know the exact command being sent to the drive and you can choose to interpret it as you see fit
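
e2: roughly the prefetch scheme i'm imagining, as a C sketch -- every name and number in here is made up:

code:
#include <stdint.h>

#define SECTOR_SIZE    2352   /* raw CD sector, bytes */
#define WINDOW_SECTORS 64     /* guess at the slew landing zone */

static uint8_t cache[WINDOW_SECTORS * SECTOR_SIZE];

/* hypothetical backing-store read (sd card, spi flash, whatever) */
extern void storage_read(uint32_t lba, uint8_t *dst, uint32_t nsectors);

/* crude position -> LBA model: constant linear density along the
 * spiral means LBA grows roughly with the square of sled position */
static uint32_t sled_to_lba(uint32_t pos, uint32_t pos_max, uint32_t total_lba)
{
    uint64_t p = pos;
    return (uint32_t)((p * p * total_lba) / ((uint64_t)pos_max * pos_max));
}

/* call this while the sled is still moving, so the window is warm
 * by the time the fake "re-focus" finishes */
void prefetch_on_slew(uint32_t pos, uint32_t pos_max, uint32_t total_lba)
{
    uint32_t lba   = sled_to_lba(pos, pos_max, total_lba);
    uint32_t start = (lba > WINDOW_SECTORS / 2) ? lba - WINDOW_SECTORS / 2 : 0;
    storage_read(start, cache, WINDOW_SECTORS);
}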

movax
Aug 30, 2008

I'd love to use Ethernet and do everything via nfs/smb, but for initial development, an sd card or big SPI flash might be the best option. can get one of those games that's only a few hundred mb in size to use as a test case; if I design my interfaces properly, it can be pretty modular.

when the console drive slews, say, from the beginning of the disc to the middle, I'll know where I should be at the end of that sequence, but then I'm screwed because I need to come up with the data pretty loving fast; on a real disc, the physical bits are just sitting there waiting, of course.

movax
Aug 30, 2008

bump lol

movax
Aug 30, 2008

i wish i had time to gently caress around with my optical drive idea thing but grad school is killing me

i'm learning shitloads about dsp though -- signals are fun

movax
Aug 30, 2008

Poopernickel posted:

Tangent bump: This is the best book on DSP I have literally ever read - I refer back to it almost every time I need to implement something tricky:
http://www.amazon.com/Understanding-Digital-Signal-Processing-Edition/dp/0137027419

What's great about it is that the book isn't academic at all. It targets the working engineer (me) who's way too lazy to digest a bunch of math just to get to a conclusion (also me)

yeah this book is dope, i'm using it in addition to the main course textbook (oppenheim and schafer)


DuckConference posted:

I've now reached the stage of madness where one becomes convinced that the termination provided by attaching a scope probe is fixing everything on the bus

it almost makes sense if you forget that it's an I2C bus running at 3.3V and 10kHz and shouldn't be anywhere near that sensitive

add the scope probe equivalent circuit to your schematic and it should make sense

the fact you need that filter on the line is weird as hell. are your reflections / ringing really so bad that devices are seeing double clocks?
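
napkin math on why the probe "fixes" it (all values below are guesses, not specs) -- a 10x passive probe hangs another ~12pF on the line, and with your pull-up that's enough extra RC to round the edges off:

code:
#include <stdio.h>

int main(void)
{
    double r_pullup = 4.7e3;   /* typical i2c pull-up, ohms (guess) */
    double c_bus    = 50e-12;  /* bus + pin capacitance, farads (guess) */
    double c_probe  = 12e-12;  /* 10x passive probe tip, farads (guess) */

    /* 10%-90% rise time of an RC edge is ~2.2*R*C */
    printf("rise time w/o probe: %.0f ns\n", 2.2 * r_pullup * c_bus * 1e9);
    printf("rise time w/ probe:  %.0f ns\n",
           2.2 * r_pullup * (c_bus + c_probe) * 1e9);
    return 0;
}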

movax
Aug 30, 2008

hobbesmaster posted:

that was probably their target...

also that's a lot of noise

movax
Aug 30, 2008

Sweevo posted:

no, the reason why people want IoT is because they are idiot shitlords who think wifi toast racks and web apps to remotely monitor your washer fluid are things people actually want

movax
Aug 30, 2008

Bloody posted:

will 2016 be the year of twitter on the refrigerator?

nah

2014?

movax
Aug 30, 2008

Star War Sex Parrot posted:

this thing is my enemy



blast to the past

actel 54sx anti-fuse fpga, og altera max cpld, and the classic tms320 dsp (and its lovely programming environment)

which debugger are you stuck using?

movax
Aug 30, 2008

Bloody posted:

why is it a colossal pain in the rear end to integrate 10/100/1000 ethernet into a thing

alternatively why is isolating usb a colossal pain in the rear end

because they picked nrzi signaling as their physical layer (re: usb)
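
nrzi buries the clock in the data (for usb: 0 = transition, 1 = hold level), so an isolator can't just pass logic levels wire-by-wire -- the line state depends on history. toy encoder to show it:

code:
#include <stdint.h>
#include <stdio.h>

/* toy NRZI encoder, usb flavor: 0 toggles the line, 1 holds it */
void nrzi_encode(const uint8_t *bits, int n, uint8_t *line)
{
    uint8_t level = 1;                 /* idle (J) state */
    for (int i = 0; i < n; i++) {
        if (bits[i] == 0)
            level ^= 1;                /* 0 -> transition */
        line[i] = level;               /* 1 -> no transition */
    }
}

int main(void)
{
    uint8_t data[8] = {1, 0, 0, 1, 1, 0, 1, 0};
    uint8_t line[8];
    nrzi_encode(data, 8, line);
    for (int i = 0; i < 8; i++)
        printf("%d", line[i]);         /* prints 10111001 */
    printf("\n");
    return 0;
}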

movax
Aug 30, 2008

gonna cross post

quote:

So I've crapped on the Cyclone V SoC in the past, mostly because they lost to Xilinx in getting to market first and I've been doing Zynq stuff for (gently caress me) the past 5 years or so.

But, now I'm seriously thinking about dropping the Zynq, because the Cyc V has ECC support on the L2 and almost every other data store.

The TRM is confusing as gently caress though -- the Cyc V definitely supports 1GB of RAM (max?) attached to the HPS? Is that even with ECC enabled? The Zynq tops out at 1GB, 512MB if you turn on ECC.

And, with 2 PCIe Hard IP blocks, which one (if any) on the Cyclone V can be used as a PCIe root complex that Linux can talk to?

movax
Aug 30, 2008

Bloody posted:

medical device r&d ¯\_(ツ)_/¯
some day this will be a hard realtime system with no pc or human operator involved but unfortunately that day is not for at least a year
they are being buffered (albeit not for very long -- right now the buffer is like 3 ms deep, improving ~1000x in the next iteration) but they still need some sort of lossless channel to a pc so that an operator can quickly react to problematic signals

pcie! :getin:

movax
Aug 30, 2008

yeah, it's not galvanic isolation, but data and clock are AC coupled

also you can do PCIe over fiber

movax
Aug 30, 2008

question for fpga dudes

have a 14-bit parallel video interface into 7-series fabric that will come in at most 640x512, 30fps. data pipeline is this interface, axi to arm core, and then stream over ethernet (zynq-7000)

1) simple parallel interface that AXI DMAs data into the ARM's DDR, no additional frame buffer (or use the DDR that the CPU is using and steal a few megabytes)
1a) I bet it'll be annoying from a sw pov to steal that chunk of address space from linux

2) simple parallel interface + an SRAM or SDRAM frame buffer in FPGA land (32Mbit maybe, so roughly 6 frames -- napkin math below), and then DMA into DDR for the CPU to do stuff

any canonical implementations or app notes to steal from?
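
e: sizing numbers, assuming 14-bit pixels get padded out to 16:

code:
#include <stdio.h>

int main(void)
{
    const int w = 640, h = 512, fps = 30;
    const int bytes_per_px = 2;                     /* 14 bits -> 16 */
    long frame = (long)w * h * bytes_per_px;        /* bytes per frame */
    long rate  = frame * fps;                       /* bytes per second */

    printf("frame: %ld KB\n", frame / 1024);        /* ~640 KB */
    printf("rate:  %.1f MB/s\n", rate / 1e6);       /* ~19.7 MB/s */
    printf("frames in 32 Mbit: %ld\n",
           (32L * 1024 * 1024 / 8) / frame);        /* ~6 frames */
    return 0;
}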

movax
Aug 30, 2008

BobHoward posted:

re 1a, that came up at work and apparently a viable solution (for an embedded system with known HW) is to pass a parameter to the Linux kernel via your boot loader that makes it use only X bytes of physical memory instead of the default which is to use all of it. you can then do w/e you like with the extra mem above the region Linux occupies

we're planning on using this with x86 Linux but I can't think of a reason why it wouldn't also work on zynq

if you don't do this and you need a rly large dma buffer you'll have to implement scatter gather dma because the Linux kernel will not let you allocate huge amounts of contiguous physical ram

hmm yeah, I recall that kernel arg. raw video bandwidth is ~150 mbit per second at 14 bits per pixel -- wonder how rough fighting the cpu for DDR controller priority will be; latency doesn't matter, so an additional 32Mbit framebuffer seems reasonable -- got to see how much fabric a MIG instance takes up if not using SRAM
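
e: for future me, the bootarg trick looks roughly like this -- boot a 1GB part with mem=768M so linux leaves the top 256MB alone, then map the reserved chunk from userspace. addresses here are invented, check your own memory map (and this needs root / non-strict devmem):

code:
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define RESERVED_BASE 0x30000000UL   /* 768MB mark -- hypothetical */
#define RESERVED_SIZE (256UL << 20)  /* the 256MB linux never sees */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    uint8_t *buf = mmap(NULL, RESERVED_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, RESERVED_BASE);
    if (buf == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* frames DMA'd in by the fabric land here; cpu reads them out */
    printf("first pixel word: 0x%04x\n", *(volatile uint16_t *)buf);

    munmap(buf, RESERVED_SIZE);
    close(fd);
    return 0;
}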

movax
Aug 30, 2008

i used a CAN core from them once, it was fairly decent

movax
Aug 30, 2008

whatever is the least amount of effort from a sw pov (toolchains, etc)

every mcu out there can do that easily

how does it integrate into the system? could also get a big gently caress-off spi adc/system monitor chip and talk to it with literally any mcu

movax
Aug 30, 2008

Bloody posted:

in my inbox: "microchip acquires atmel"

:rip:

i thought this happened last year, or is it now 100% official official?

movax
Aug 30, 2008

Poopernickel posted:

The latest in mergergate - Qualcomm is apparently in talks to buy Xilinx

jesus christ please god no

seriously, why would they? do they want xilinx's serdes that badly, to build some kind of giant backbone/virtex-ultrascale-like part that xilinx already basically produces just for cisco/juniper/etc?

movax
Aug 30, 2008

ah poo poo, guess it's old news -- there's articles going back to early last year

cool tidbit about monitoring stock filings and such to predict what might be going down though

movax
Aug 30, 2008

JawnV6 posted:

my team won the FLIR hackathon a few weeks back, the dev kit just came in

important thermography research is in progress


can u tell where the sunbeam is




ici after re-settling



handprint on cat fur


http://imgur.com/a/VUntm whole album

terrible picture subject aside (why not dogge), which flir sensor is it? lepton? parallel/serial interface? looks like a dope sensor

movax
Aug 30, 2008

JawnV6 posted:

yeah, it's a lepton

comes with a bare board & screen that are just enough to take thermal images. there's a breakout board that exposes a SPI interface, you can get 60x80 images at 8hz

do you think it can detect farts?

movax
Aug 30, 2008

JawnV6 posted:

???

even gen3 is only 8GT/s, and there's line-coding overhead -- 8b/10b on gen1/gen2, 128b/130b on gen3 -- that takes the effective data rate down

i'm guessing the layout guy doesn't care much about the logical pipe's transfer rate
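
the coding overhead per lane, for reference:

code:
#include <stdio.h>

int main(void)
{
    /* gen1/gen2 use 8b/10b (80% efficient), gen3 uses 128b/130b */
    printf("gen1: %.2f Gbit/s\n", 2.5 * 8.0 / 10.0);     /* 2.00 */
    printf("gen2: %.2f Gbit/s\n", 5.0 * 8.0 / 10.0);     /* 4.00 */
    printf("gen3: %.2f Gbit/s\n", 8.0 * 128.0 / 130.0);  /* 7.88 */
    return 0;
}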

I've done a bunch of Gen3 designs, skew isn't too bad; on early designs i used to have the fab house tilt the fiberglass 45 degrees to avoid any untoward effects from routing parallel to the weave, but religiously following the intel layout guides ended up just fine. didn't hurt that we could afford 16+ layer boards to ensure everyone got the planes they needed.

si is cool, there's a lot of bullshit out there but it's a necessary evil. all comes down to how fast your signal slews from low to high / high to low

movax
Aug 30, 2008

http://www.bloomberg.com/news/articles/2016-07-26/analog-devices-said-in-advanced-talks-to-buy-linear-technology

fuuuuuuuuuuuuck

i love ltc parts, i'm ehhh on ADI's stuff (outside of SDR)


many, many parts are gonna get EOL'd like a motherfucker

movax
Aug 30, 2008

Bloody posted:

oh boy theres nothing like trying to bring up undocumented asics with flaky as poo poo interfaces

POTENTIAL ERRATA: will draw arbitrarily large amounts of current on digital I/O if they come up with the supply; I/O have to be held low while the supply comes up and can't be used for XX time after power-on

aka if you try to bring them up with CS/nRST high lol @ u

this is why reset supervisors exist

movax
Aug 30, 2008

hobbesmaster posted:

PSA: if a data sheet has "DRAFT" as the background for every page and SUBJECT TO CHANGE as the footer do not do a board layout based on the pinouts
:staredog:

movax
Aug 30, 2008

mostly out of idle curiosity, is there a public changelog of what changed between, say, ARM Cortex-A9 r3p0, r4p0, and r4p1?

movax
Aug 30, 2008

nevermind, i think i found it

movax
Aug 30, 2008

Poopernickel posted:

in tyool 2017 my employer is going into production on a brand new product, designs started from scratch in 2015

what did they pick to drive their analog knobs, leds, and buttons? an attiny88 featuring:
- 512 bytes of ram
- SPI controller with no FIFO and a shared data register between transmit and receive
- no UART
- obsolete toolchain
- 8MHz clock speed
- two PWM channels (and the board has 4 user-facing LEDs)
- saves maybe 30 cents versus an equivalent m0, on a product that will cost several hundreds of dollars

also none of the LEDs are on either of the PWM pins

wish I had a hot tub time machine so I could go back and punch that designer right in the dick

hahahahahaha

was this picked because someone prototyped with some kind of ~~maker~~ product?

also lol at saving 30 cents on something that (I assume) isn't gonna break 10K units -- how many engineering hours does that $3K even buy?
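
napkin math (the loaded rate is a guess):

code:
#include <stdio.h>

int main(void)
{
    double units = 10000, saving = 0.30;   /* $0.30 saved per unit */
    double rate  = 150.0;                  /* $/hr loaded cost, guess */
    double total = units * saving;         /* $3000 */
    printf("saved $%.0f = %.0f engineering hours\n", total, total / rate);
    return 0;
}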

movax
Aug 30, 2008

Spatial posted:

nvidia is using RISC-V to replace their onboard microcontroller Falcon. guess it's got legs

i think some of the chinese fab houses may be inclined to move towards it -- no licensing fees for something you make 100M+ of? woo hoo!

movax
Aug 30, 2008

BobHoward posted:

yeah it's this i was thinking of, most cmos gpios on µCs are just going to self limit. it's an iffy thing to do but you can sometimes get away with it. not recommended for mass production

a better low-component-count option is inverse logic: connect the µC gpio to the LED cathode, and vcc through a current-limit resistor to the LED anode. microcontroller gpios can often sink much more than they can source, and if the µC supports open-drain outputs, this trick lets you run the led from a higher voltage than the microcontroller's vcc

yep this is the way to go -- drive LEDs with a "____LED_B" signal and call it a day. lol i guess at the LEDs (potentially) briefly flickering while the mcu is in reset / booting up

$0.001 for the requisite resistor ain't no thang
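
for completeness, the active-low drive plus the resistor sizing -- gpio register names/addresses below are totally made up, swap in your mcu's actual gpio block:

code:
#include <stdint.h>

#define GPIO_ODR  (*(volatile uint32_t *)0x40020014u)  /* hypothetical */
#define LED_B_PIN (1u << 5)

/* led anode -> resistor -> vcc, cathode -> pin (open-drain).
 * pin low = sink current = LED on */
static inline void status_led(int on)
{
    if (on)
        GPIO_ODR &= ~LED_B_PIN;   /* sink: LED on */
    else
        GPIO_ODR |=  LED_B_PIN;   /* release: LED off */
}

/* resistor: R = (Vcc - Vf) / I = (5.0 - 2.0) / 0.010 = 300 ohms */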
