 
  • Locked thread
Tan Dumplord
Mar 9, 2005


Switzerland posted:

Thanks again for the advice.

One question tho re: capital-H HARD - if it's hard, why has nobody come up with a small Arduino-ish thing that takes the hard out of "make your own mechanical-y watch"?

Electronics aren't the BIG issue. Mechanical stuff is hard. You want a slick watch, you're probably not building it yourself IMO.


Aurium
Oct 10, 2010

Switzerland posted:

Thanks again for the advice.

One question tho re: capital-H HARD - if it's hard, why has nobody come up with a small Arduino-ish thing that takes the hard out of "make your own mechanical-y watch"?

Electromechanical watch parts are hyper-specialized. They have to be, to run at such low voltages and tiny currents.

If you want a personalized mechanical-esque watch you'd be best off buying one and then putting a new face / hands on it. You'll probably go through more than one while learning to fit new hands.

If you want to do weird stuff with a watch, like DIY radio protocols (which isn't your interest), the most interesting one is TI's eZ430-Chronos.

Or you could always get a smart watch and make your own homescreen.

If you want to hack a cheap digital watch (which you don't), give up: even the LCDs on those are so specialized that swapping in a different one probably won't work, and all of the functionality lives on a single chip anyway, so there's nothing to change unless they didn't break out a button for, say, the backlight. And even that will be designed around tiny power consumption, probably around one particular light.

ante
Apr 9, 2005

Getting electronics down to that size, even digital, is pretty tricky. You're not going to be able to solder components that small yourself (or at least, not at first), and if you can't build the hardware yourself, you could hack on software instead. There's probably something on Tindie like that if you want to try your hand at it.

Also, sleep states for low power consumption on Arduino are hard to get right. It's better with native C/C++ and the vendor abstraction layers, but still not exactly beginner stuff.

Popete
Oct 6, 2009

If all you wanna do is make a digital clock, get an 8-bit ATmega chip and one of these.

https://www.adafruit.com/product/3013

It's a temperature-compensated RTC that will get you pretty close. It'll still drift over time, but much less.

Then get a 4-digit 7-segment LED panel to display the digits.
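For the display side, the usual trick is a lookup table mapping digits to segment patterns, then multiplexing the four digits. A minimal sketch, assuming a common-cathode panel wired with segment a in bit 0 through segment g in bit 6 (your panel's wiring may differ):

```c
#include <stdint.h>

/* Common-cathode segment patterns, bit 0 = segment a ... bit 6 = segment g.
 * (A common-anode panel wants these inverted.) */
static const uint8_t SEG_DIGITS[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, /* 0..4 */
    0x6D, 0x7D, 0x07, 0x7F, 0x6F  /* 5..9 */
};

/* Split HH:MM into four digit patterns for a multiplexed 4-digit panel. */
void clock_to_segments(int hours, int minutes, uint8_t out[4])
{
    out[0] = SEG_DIGITS[hours / 10];
    out[1] = SEG_DIGITS[hours % 10];
    out[2] = SEG_DIGITS[minutes / 10];
    out[3] = SEG_DIGITS[minutes % 10];
}
```

The multiplex loop then just strobes one digit at a time from a timer interrupt, fast enough that all four look lit.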

Aurium
Oct 10, 2010
For all the hype around the accuracy of RTCs, their real value is the battery backup, which lets them keep the time and keep ticking while the rest of the system is powered down. That particular one has some interesting calendar features as well.

If you don't care about that, the math to calibrate your system clock vs absolute time is borderline trivial, and can easily get you to the same ballpark as an external RTC, for no added components. Many uCs also have the ability to talk directly to a watch crystal.

It may have much worse thermal drift, but if you're using it indoors the temperature won't change that much. Outdoors it can change a lot, though.
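The calibration math mentioned above really is borderline trivial: time the local clock against a reference (NTP, GPS, even a stopwatch over a day) once, then scale future readings. A sketch, with illustrative numbers:

```c
#include <stdint.h>

/* One-point calibration of the system clock against an external reference.
 * Positive ppm means the local clock runs fast. */
double clock_error_ppm(uint64_t local_elapsed_us, uint64_t reference_elapsed_us)
{
    return ((double)local_elapsed_us - (double)reference_elapsed_us)
           * 1e6 / (double)reference_elapsed_us;
}

/* Apply the measured error to turn a raw local duration into real time. */
uint64_t corrected_us(uint64_t local_us, double error_ppm)
{
    return (uint64_t)((double)local_us / (1.0 + error_ppm / 1e6));
}
```

Gaining half a second over a day is about 5.8 ppm, which is in the same ballpark as an uncompensated watch crystal, so one measurement like this gets you most of the benefit of an external RTC's factory trim.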

taqueso
Mar 8, 2004



^^ and you could add a temp sensor for a few cents and with a little work compensate for temperature in software, too.

OP, I would recommend just making the clock display part first. You don't need to care if it loses time or is much less accurate below freezing. Hell, you don't care if it can track time at all if you have no way to show the result. Everything can be improved as you go, you will find out what you need to work on. Just make something first. Then see what could be better and make that part better.
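To put numbers on the temperature-compensation idea: a 32.768 kHz tuning-fork crystal typically follows a parabolic error curve around a turnover point near 25 °C, with a coefficient around -0.034 ppm/°C². Both constants here are typical datasheet values, not universal; check your crystal's. A sketch:

```c
/* Typical 32.768 kHz tuning-fork crystal: frequency error is a parabola
 * around a ~25 degC turnover point. The constants are common datasheet
 * values, not universal - check your crystal's. */
double crystal_error_ppm(double temp_c)
{
    const double turnover_c = 25.0;
    const double k_ppm_per_c2 = -0.034;
    double dt = temp_c - turnover_c;
    return k_ppm_per_c2 * dt * dt;
}

/* Seconds a clock falls behind after running `seconds` at `temp_c`. */
double seconds_lost(double seconds, double temp_c)
{
    return -crystal_error_ppm(temp_c) * seconds / 1e6;
}
```

At 0 °C that's about -21 ppm, or nearly two lost seconds per day, which is exactly the drift the software compensation claws back.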

Aurium
Oct 10, 2010

taqueso posted:

^^ and you could add a temp sensor for a few cents and with a little work compensate for temperature in software, too.

OP, I would recommend just making the clock display part first. You don't need to care if it loses time or is much less accurate below freezing. Hell, you don't care if it can track time at all if you have no way to show the result. Everything can be improved as you go, you will find out what you need to work on. Just make something first. Then see what could be better and make that part better.

Many uCs, including most Atmegas, already have a temperature sensor onboard.

yippee cahier
Mar 28, 2005

The RTC peripheral on the STM32 chip I'm working with supports being disciplined with a 50 or 60 Hz reference signal from the utility grid. That's assuming your grid is reliable... https://arstechnica.com/tech-policy/2018/03/ovens-across-europe-display-the-wrong-time-due-to-a-serbia-kosovo-grid-dispute/. You might have to come up with something else when you move to battery operation.

https://en.wikipedia.org/wiki/Radio_clock has some other sources of time signals that could be interesting. I've only ever used GPS receivers, which often have a 1 pulse per second output that coincides with a second rollover event. While that's great for a clock reference, the added benefit is the receiver will provide an absolute timestamp that could be used to set your clock automatically. If you want local time you'll have to maintain a timezone database or be able to set a static UTC offset.
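The static-UTC-offset option is the easy path. A sketch using an offset in minutes (to cover the half-hour zones), with no DST handling, which is what the full timezone database buys you:

```c
#include <stdint.h>

/* Apply a static UTC offset, in minutes (to cover half-hour zones),
 * to a Unix timestamp from the GPS receiver. */
int64_t utc_to_local(int64_t unix_utc, int32_t offset_minutes)
{
    return unix_utc + (int64_t)offset_minutes * 60;
}

/* Break a local timestamp down to wall-clock H:M:S for the display. */
void local_hms(int64_t local, int *h, int *m, int *s)
{
    int64_t sod = ((local % 86400) + 86400) % 86400; /* seconds of day */
    *h = (int)(sod / 3600);
    *m = (int)((sod % 3600) / 60);
    *s = (int)(sod % 60);
}
```

The double-modulo keeps the seconds-of-day positive even when a negative offset pushes the local timestamp before the epoch.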

ante
Apr 9, 2005

All y'all are over engineering this. The real one true method is to use an ESP8266 dev board with Arduino and read NTP time
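Whatever library fetches the packet, the conversion at the end is the same: NTP timestamps count seconds from 1900-01-01, so you subtract 2,208,988,800 to get Unix time. A sketch of parsing the transmit timestamp out of a raw 48-byte NTP response (field offsets per RFC 5905):

```c
#include <stdint.h>

/* NTP counts seconds since 1900-01-01; Unix since 1970-01-01. The gap is
 * a fixed 2,208,988,800 seconds (until the 2036 era rollover, anyway). */
#define NTP_UNIX_OFFSET 2208988800UL

/* Pull the transmit-timestamp seconds field (bytes 40..43, big-endian)
 * out of a raw 48-byte NTP response and convert it to Unix time. */
uint32_t ntp_to_unix(const uint8_t pkt[48])
{
    uint32_t ntp_secs = ((uint32_t)pkt[40] << 24) | ((uint32_t)pkt[41] << 16)
                      | ((uint32_t)pkt[42] << 8)  |  (uint32_t)pkt[43];
    return ntp_secs - (uint32_t)NTP_UNIX_OFFSET;
}
```

On the ESP8266 the usual Arduino NTP examples do exactly this after a UDP read; the rest is just re-syncing every few minutes and letting millis() free-run in between.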

carticket
Jun 28, 2005


I would actually use a high intensity IR source to bathe a room in a pulsed signal encoded with the time (and also function as a PPS signal) such that your watch auto syncs when in the room.

Hunter2 Thompson
Feb 3, 2005

I worked on the firmware of a "smart" mechanical watch and there's nothing fancy going on in firmware at all. Most of the time the microcontroller sleeps, waiting for the RTC to interrupt it so it can move the motor.

However, as others have said, the complexity lies in the motor and gearbox package (the "movement") and the miniaturized PCB. Movements are extremely precise mechanical devices, so you'll certainly want to buy one instead of trying to make your own. Good luck finding one for sale to the public though, they're specialty parts sold by only a few OEMs to other businesses that make watches.

I don't know how to make a PCB so somebody else can comment on the difficulty of that, but smaller usually means harder to design and more expensive to manufacture.

E: you can easily buy a complete movement as replacement parts, but that also includes whatever circuitry drives the stepper motor. I'm assuming you want to DIY the motor driver so maybe you could remove it and replace it with your own.

Hunter2 Thompson fucked around with this message at 09:13 on Mar 24, 2018

taqueso
Mar 8, 2004



A watch sized pcb design and build is totally possible for an amateur, but it would be the culmination of quite a few hours of learning and experimenting.

dougdrums
Feb 25, 2005

JawnV6 posted:

It's a lepton I won at a hackathon, we had a RPi talking to it over SPI. The kit came with an impressive little mounting board with a LCD screen and did the color-correction stuff, along with the module mount with the pin breakouts. I'm trying for something much smaller, got some knockoff ESP8266's that I *think* I can wrangle a SPI bus out of.
https://twitter.com/jawnv6/status/938450932395753472

It does, you've just gotta get a board without an antenna and forget.

The "AI Cloud Inside" refers to firmware updates from some arbitrary source, iirc. I imported a bunch of these and translated docs when they came out ...

dougdrums fucked around with this message at 21:58 on Mar 27, 2018

unpacked robinhood
Feb 18, 2013

Switzerland, in addition to what was said, if you still have clock-specific questions, some people in the YLLS watch thread are surprisingly knowledgeable.

Slanderer
May 6, 2007
This might be a better question for another thread, but I figure I might try here first: I'm on an AM335x-based platform running Linux, and I'm trying to get a splash screen to show up as early as possible (and stay up!). I was able to get the splash screen to show up in U-Boot (thanks to Texas Instruments-provided code), and also in the kernel (the hardest part of each was getting the image converted to the proper format lol). However, the screen turns off around the time U-Boot loads the kernel / the kernel starts up, and doesn't turn back on until the LCD controller driver is loaded.

The problem is that I know next-to-nothing about kernel startup and the device drivers (besides messing with some device tree stuff). Obviously something is getting reconfigured in a way I didn't expect, but I'm not sure what, and more importantly, I'm not sure how to even begin debugging this.

In order for an image to be displayed, the following needs to be in place:
1. GPIO need to be configured (for multiple enable lines to devices)
2. LCD clock PLLs configured
3. LCD driver registers configured
4. IO for LCD signals setup
5. Image loaded into memory

So at some point I'm losing one or more of these things, but I'm not sure what. On a non-linux embedded platform, I'd just set a breakpoint during init and check registers + break out a multimeter.

Anyone experienced with embedded linux have any thoughts?

Popete
Oct 6, 2009

What does the device tree entry for your LCD panel look like? Check whether there's a reset line in the device tree entry for the LCD panel, then take a look at the driver associated with that entry and see if it's toggling the reset line. The kernel driver is likely not expecting the LCD panel to be up at boot (sounds like U-Boot is setting it up?), so it's gonna reset the LCD controller chip.

There might be an option to tell it the panel is already configured, but more than likely you'll have to edit the driver to not reset/reconfigure the panel chip. Otherwise you could take the LCD initialization out of U-Boot and just wait for Linux to come up and bring up the panel first.

hendersa
Sep 17, 2006

Slanderer posted:

This might be a better question for another thread, but I figure I might try here first: I'm on an AM335x-based platform running Linux, and I'm trying to get a splash screen to show up as early as possible (and stay up!). I was able to get the splash screen to show up in U-Boot (thanks to Texas Instruments-provided code), and also in the kernel (the hardest part of each was getting the image converted to the proper format lol). However, the screen turns off around the time U-Boot loads the kernel / the kernel starts up, and doesn't turn back on until the LCD controller driver is loaded.

The problem is that I know next-to-nothing about kernel startup and the device drivers (besides messing with some device tree stuff). Obviously something is getting reconfigured in a way I didn't expect, but I'm not sure what, and more importantly, I'm not sure how to even begin debugging this.

In order for an image to be displayed, the following needs to be in place:
1. GPIO need to be configured (for multiple enable lines to devices)
2. LCD clock PLLs configured
3. LCD driver registers configured
4. IO for LCD signals setup
5. Image loaded into memory

So at some point I'm losing one or more of these things, but I'm not sure what. On a non-linux embedded platform, I'd just set a breakpoint during init and check registers + break out a multimeter.

Anyone experienced with embedded linux have any thoughts?

I've never had much luck with the AM3358/9 with U-Boot splashscreens. It's a dirty hack to get the image up in U-Boot, since you're just getting the bare minimum limping along, and you're really out of luck if you're running through something like an HDMI framer chip. I've had the best luck with doing a two-stage splash screen with the U-Boot approach on the front end and then replacing the kernel penguin logo with a 224 color splash. You get one splash, a brief black bit, and then the second splash screen comes up. I suspect it is the clock PLLs being initialized by Linux that's clobbering your U-Boot image.

You can also put an early runlevel rc.d script or systemd rule that calls a script to cat the framebuffer data into /dev/fb0. Here's how you'd convert an 8-bit PNG to the proper format to do that:

code:
$ sudo apt-get install imagemagick
$ convert -format PNG8 splash.png -separate -reverse -combine -format RGBA splash.rgba
... and the script to run at boot (say, splashscreen.sh):

code:
#!/bin/sh
/bin/cat /[PATH_TO_SPLASH]/splash.rgba > /dev/fb0
You could launch the script under systemd like this:

code:
[Unit]
Description=My Custom Splashscreen
DefaultDependencies=no

[Service]
ExecStart=/[PATH_TO_SPLASHSCREEN_SCRIPT]/splashscreen.sh
StandardInput=tty
StandardOutput=tty

[Install]
WantedBy=sysinit.target
To cut down on that blackout period, figure out which driver modules you're loading and move them into an initrd that you load in U-Boot. You're probably loading the tilcdc driver as a module, and pulling it into an initrd gets the kernel's video subsystem up and running quicker, shrinking the "black" portion of the boot process.

travelling wave
Nov 25, 2013

Slanderer posted:

This might be a better question for another thread, but I figure I might try here first: I'm on an AM335x-based platform running Linux, and I'm trying to get a splash screen to show up as early as possible (and stay up!). I was able to get the splash screen to show up in U-Boot (thanks to Texas Instruments-provided code), and also in the kernel (the hardest part of each was getting the image converted to the proper format lol). However, the screen turns off around the time U-Boot loads the kernel / the kernel starts up, and doesn't turn back on until the LCD controller driver is loaded.

The problem is that I know next-to-nothing about kernel startup and the device drivers (besides messing with some device tree stuff). Obviously something is getting reconfigured in a way I didn't expect, but I'm not sure what, and more importantly, I'm not sure how to even begin debugging this.

There's a config option to make the driver core print out stuff when binding drivers to devices (CONFIG_DEBUG_DRIVER?) which is a good place to start. From memory you need to screw with the dynamic debug settings to make it produce output though. There's also the initcall_debug command line parameter that causes the kernel to print out when it does various low-level inits. That is sometimes useful for debugging driver issues since modules that are built into the kernel have their module_init function converted into an initcall.

quote:

In order for an image to be displayed, the following needs to be in place:
1. GPIO need to be configured (for multiple enable lines to devices)
2. LCD clock PLLs configured
3. LCD driver registers configured
4. IO for LCD signals setup
5. Image loaded into memory

Odds are you're getting screwed by the pinmux driver undoing 1), the clock driver undoing 2), or both. You can probably hack around 2) with the "clk_ignore_unused" kernel command line param, but you should be able to fix both properly with devicetree changes.

CBD
Oct 31, 2012
Does anyone have experience working in Simplicity IDE/Keil? I'm working with a SiLabs C8051F560, using the F560_SPI0_Master example to test a 7-segment display board. I have a few communication problems that I can't seem to track down.

iospace
Jan 19, 2038


One part "no archives for you", one part amusement:

The ATmega328 comes in four different packages. The amusing thing is that the SPDIP can be bought individually direct from Microchip, but any other package has to be bought in multiples of at least 250.

Odette
Mar 19, 2011

iospace posted:

One part "no archives for you", one part amusement:

The ATmega328 comes in four different packages. The amusing thing is that the SPDIP can be bought individually direct from Microchip, but any other package has to be bought in multiples of at least 250.

I don't like how Microchip have just gutted Atmel's site; there were a lot of documents I was planning on grabbing from Atmel. Oh well.

iospace
Jan 19, 2038


I can't wait to get this I2C library done. It is a royal pain in my rear end (I'm writing it from scratch A. so I can demonstrate it to prospective employers, and B. because I'm a giant masochist).

e: at least I have an LCD screen to assist me in debugging. It's made my life so much easier.

iospace fucked around with this message at 06:20 on May 14, 2018

csammis
Aug 26, 2003

I wrote an I2C driver recently (for the same reasons as you) and I probably would have crushed the whole thing under a moving train if I hadn't had a logic analyzer to see what was going on :unsmith:

iospace
Jan 19, 2038


The problem is this magnetometer (it's a compass project) is being set to continuous conversion mode annnnnnnnnd it doesn't want to keep on spitting out data for whatever reason.

E: single byte read and write work, so I have that going for me.

iospace fucked around with this message at 21:54 on May 14, 2018

carticket
Jun 28, 2005


In my experience with I2C transducers, continuous conversion just means it keeps a register up to date with the latest conversion... unless it has a FIFO, but that's usually reserved for multi-sensor packages.

BattleMaster
Aug 14, 2000

Or does it do something weird like act as a master and expect your device to act as a slave and accept data whenever it wants to talk?

iospace
Jan 19, 2038


It's a slave device only. There's a second sensor (accelerometer) on it, but that has its own I2C address. My issue is it's continuously reading it but the data isn't changing.

The sensor, for what it's worth: https://www.adafruit.com/product/1120 (I bought this a while ago and only now am finishing this up).

iospace fucked around with this message at 01:03 on May 15, 2018

iospace
Jan 19, 2038


I got it working! :peanut:

Mostly because A. I realized the entirety of the data had to be read, and B. the coordinates are X Z Y, not X Y Z. Now to get it working with a multi-read, not single byte reads.

iospace fucked around with this message at 01:44 on May 15, 2018
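For what it's worth, the X, Z, Y surprise really is in the LSM303DLHC datasheet: the magnetometer's six output registers run X, Z, Y with the high byte first, while the accelerometer's run X, Y, Z low byte first. A sketch of unpacking a 6-byte multi-read starting at OUT_X_H_M:

```c
#include <stdint.h>

/* The LSM303DLHC magnetometer's six output registers run X, Z, Y with the
 * high byte first (per the datasheet), unlike its accelerometer's X, Y, Z
 * low-byte-first order. `raw` is 6 bytes from a multi-byte read starting
 * at OUT_X_H_M. */
void mag_unpack(const uint8_t raw[6], int16_t *x, int16_t *y, int16_t *z)
{
    *x = (int16_t)(((uint16_t)raw[0] << 8) | raw[1]);
    *z = (int16_t)(((uint16_t)raw[2] << 8) | raw[3]);
    *y = (int16_t)(((uint16_t)raw[4] << 8) | raw[5]);
}
```

Casting through uint16_t before the int16_t conversion keeps the sign extension well defined for negative readings.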

carticket
Jun 28, 2005


Enjoy magnetometer to heading hell!

iospace
Jan 19, 2038


How much of a bitch is SPI?

Asking for a friend.

carticket
Jun 28, 2005


Easy. A lot easier than I2C.

iospace
Jan 19, 2038


Alright.

I seem to be hanging on the multi-read: when I send the first ACK to the slave for the first data byte, it never seems to send the next byte back, so TWINT never goes high. If I don't wait on TWINT I get an error instead.

BattleMaster
Aug 14, 2000

SPI is mega-easy.

It's clocked serial, but instead of a shared line for bidirectional communications like I2C, it uses one line for transmitting and another line for receiving (relative to the master). That way it avoids rules to determine who gets to talk when - whenever the master is pulsing the clock signal, the slave knows it needs to be talking or listening. In theory the communication can be full duplex, but in practice it's usually half-duplex with one side transmitting data and the other side transmitting some ACK/NACK bytes so the transmitter knows things are working out. Instead of using an addressing system like I2C, SPI just uses a simple CS line for each slave.

There are all sorts of advantages over I2C. Aside from being a lot simpler, SPI can be clocked faster because it's not limited by a weak pull-up fighting bus capacitance. It also doesn't waste bus time on protocol overhead like transmitting addresses or waiting for acknowledgement bits between bytes. The main disadvantage is that it requires 3 lines common to all devices plus 1 CS line per slave, versus just 2 all-told for I2C. There's also no standard multi-master approach for it, but if that ever is necessary I'm sure you can figure something out easily enough.
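That "clocked shift register" model is also the easiest way to sanity-check bit order before touching hardware. A simulation sketch of one full-duplex byte exchange (mode 0, MSB first), treating the slave as an 8-bit shift register:

```c
#include <stdint.h>

/* One full-duplex SPI byte exchange (mode 0, MSB first), with the slave
 * modeled as an 8-bit shift register. Each clock, the master drives MOSI
 * from its current MSB and samples MISO from the slave's MSB, then both
 * sides shift. Handy for checking bit order before touching real pins. */
uint8_t spi_exchange_sim(uint8_t master_out, uint8_t *slave_reg)
{
    uint8_t master_in = 0;
    for (int bit = 7; bit >= 0; bit--) {
        int mosi = (master_out >> bit) & 1;  /* master drives MOSI */
        int miso = (*slave_reg >> 7) & 1;    /* slave drives MISO  */
        master_in = (uint8_t)((master_in << 1) | miso);
        *slave_reg = (uint8_t)((*slave_reg << 1) | mosi);
    }
    return master_in;
}
```

After eight clocks the two bytes have simply swapped places, which is all SPI fundamentally is.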

csammis
Aug 26, 2003


iospace posted:

B. the coordinates are X Z Y, not X Y Z.

I was all set to call BS on this but sure enough that’s what the datasheet says :psyduck:

e: and the accel is X Y Z what the hell

iospace
Jan 19, 2038


csammis posted:

I was all set to call BS on this but sure enough that’s what the datasheet says :psyduck:

I know! I'm all "Wait, what the gently caress? Who thought THIS was a good idea?"

The interesting thing, though this may be EMF from my computer so who knows, is my dad's iPhone was reading 186 when I was getting 180. Forgot to check North.

Aurium
Oct 10, 2010

Mr. Powers posted:

Easy. A lot easier than I2C.

The caveat is that I2C is I2C. With SPI, people can't really agree on how it should be implemented.

You get different clock polarities, and different edges being the important one. You even get manufacturers disagreeing about how chip select works, or whether it's used at all.

Practically, most chips are easy to get working, but they won't necessarily work like the last one you used. Every once in a while you'll get a really weird one.

If you were going to build a library, you'd have to cover all of the combinations, and you'll need to change settings as you move between chips.
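Those combinations boil down to the four standard SPI modes, so the usual answer is a per-device config you reload before each transaction. A sketch (the struct fields are illustrative, not any particular vendor's API):

```c
#include <stdint.h>

/* The four SPI modes: CPOL is the idle clock level, CPHA picks the
 * sampling edge. Encoded the conventional way: mode = (CPOL << 1) | CPHA. */
typedef enum {
    SPI_MODE0 = 0, /* idle low,  sample on rising edge  */
    SPI_MODE1 = 1, /* idle low,  sample on falling edge */
    SPI_MODE2 = 2, /* idle high, sample on falling edge */
    SPI_MODE3 = 3, /* idle high, sample on rising edge  */
} spi_mode;

/* Per-device settings, reloaded before each transaction. Fields are
 * illustrative placeholders. */
typedef struct {
    spi_mode mode;
    uint32_t max_hz;         /* this chip's clock limit               */
    int      cs_active_high; /* some parts even invert chip select    */
} spi_device_cfg;

static inline int spi_cpol(spi_mode m) { return (m >> 1) & 1; }
static inline int spi_cpha(spi_mode m) { return m & 1; }
```

Keeping the mode per device, rather than as a global bus setting, is what saves you when two chips on the same bus disagree.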

Popete
Oct 6, 2009

My time in RTP hell is finally coming to a close; I've been working on it on and off for over a year now. It's a MicroBlaze softcore processor inside a Xilinx FPGA. Softcore processors are new for me, and they have their own little pitfalls: lots of problems with drivers/IP components (especially the Ethernet IP) not resetting correctly and holding stale data, causing weird behavior during development.

My design is running FreeRTOS with LwIP networking library support. There are 5 encoded (outbound) RTP streams of stereo/5.1/7.1 audio all being streamed simultaneously out of the MicroBlaze/FPGA, and it supports 3 simultaneous decoded (inbound) stereo RTP streams. In reality only 1 of the 3 decoded streams is active at any given time, but the MicroBlaze can switch which RTP stream it listens to via an interrupt from the main Linux processor elsewhere on the board.

The difficult task was extrapolating a real audio frequency from the inbound stream. Ideally the streams would be either 44.1 kHz or 48 kHz, but due to network/UDP variability and audio sources running slightly off the ideal rate, the actual audio rate could vary. I feed the decoded RTP stream data into a DMA engine which contains a FIFO of unknown size, and I had no visibility into how full that internal FIFO was at any given time. That means I never knew if it was full or empty, either of which would cause audio artifacting.

My initial solution was to use a timer in the FPGA to measure the time between any two RTP packets received and, from that difference, extrapolate a frequency for the internal rate-adjust clock that pulls data out of the DMA FIFO. That way the FIFO would never overflow or underflow, since I was constantly adjusting the clock rate based on the timing of incoming packets. After a lot of fiddling this worked, until I went back and re-enabled the encoding task in FreeRTOS (I'd had it turned off during development to keep things simple). Task switching and OS scheduling threw off my timer calculations, since I could no longer reliably measure the time between two packets. Time to start all over again...

One of the FPGA guys came to my rescue and had the brilliant idea to add another FIFO on the output of the DMA FIFO, this FIFO would have a level value that I could read via GPIOs. My final implementation involved pre-filling the secondary FIFO to a known level (say 1 or 2 packets deep) and then turning on the clock so data started flowing out. This way anytime I received a new RTP packet I could check the FIFO fill level and adjust the clock faster/slower to get it back to the target fill level. With some adjusting based on difference between current fill level and target fill level I came up with a "2nd order" clock adjustment algorithm that is working great! Really happy to finally have this sorted out as I was dreading how I was gonna manage to pull this off without knowing the actual level of the DMA FIFO.

DMA, while very powerful, can be a real pain in the rear end to debug: once you turn it on there's no stopping it to take a look and poke around. Things tend to fall apart real fast and you're left looking at the broken mess trying to figure out what went wrong.
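The fill-level feedback loop described above is essentially a clamped proportional controller. A stripped-down sketch (the target level, gain, and clamp are made-up values you'd tune on real hardware):

```c
/* Clamped proportional clock trim driven by FIFO fill level: on each RTP
 * packet, nudge the output sample clock toward the rate that holds the
 * FIFO at its target depth. Gains and limits are made-up values to tune
 * on hardware. */
typedef struct {
    int    target_level;  /* desired FIFO depth, in packets            */
    double nominal_hz;    /* ideal sample clock, e.g. 48000.0          */
    double ppm_per_level; /* proportional gain                         */
    double max_ppm;       /* clamp so one burst can't slew the clock   */
} clk_trim;

double next_clock_hz(const clk_trim *c, int fifo_level)
{
    /* FIFO above target: drain faster (clock up); below: slow down. */
    double ppm = (double)(fifo_level - c->target_level) * c->ppm_per_level;
    if (ppm > c->max_ppm)  ppm = c->max_ppm;
    if (ppm < -c->max_ppm) ppm = -c->max_ppm;
    return c->nominal_hz * (1.0 + ppm / 1e6);
}
```

Adding a term driven by the level's rate of change on top of this is roughly what turns it into the "2nd order" adjustment described in the post.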

General_Failure
Apr 17, 2005
I asked this in the general programming questions thread but was told this may be a better fit. C-c C-v time. There we go.

Where to start... Okay. Does anybody have knowledge on writing .dts scripts for U-Boot?

From what I have been able to work out, U-Boot has support for loading and executing aarch32 code on supported aarch64 platforms. However, it appears to only be able to do this easily with the new FIT image format, from which it can detect the code type.
My real issue is I just want mkimage to take my binary like a good utility. Trouble is, although they tout the new format as being more flexible, I can't work out how the hell to use it. I don't want an FDT. I don't want a RAMdisk. None of that poo poo. I just need it to load my "ROM".

Background: I'm porting RISC OS to the Allwinner H3 SoC. It's 32 bit. Technically it should easily run on other Allwinner 32 bit SoCs. If I can get U-Boot to execute an image as 32 bit, it also would take very little work to make it run on things like the H5 and A64 which also interests me. Unfortunately for the life of me I cannot figure out how the hell to get U-Boot to do what I want.

I can wrap my binary in a legacy uImage, but it doesn't seem to help. The really irksome thing is the SoCs boot in 32-bit mode, and I think the second stage is 32-bit too. It's just U-Boot that slams it into aarch64 mode. I wish there were a 32-bit build, but there doesn't seem to be one.

So, does anyone here have the knowledge on how to make a simple .dts script that allows generation of a simple aarch32 header for a binary, and nothing else?
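For what it's worth, a minimal .its for a bare 32-bit binary can be this small. Treat this strictly as a sketch: the node names, load/entry addresses, and whether your U-Boot build actually has 32-bit-on-arm64 boot support (CONFIG_ARM64_SUPPORT_AARCH32 in mainline) all need checking against your tree. The os = "linux" line is the usual dodge to keep bootm happy with a non-Linux payload:

```dts
/dts-v1/;

/ {
    description = "RISC OS bare binary, 32-bit, for a 64-bit U-Boot";
    #address-cells = <1>;

    images {
        riscos {
            description = "RISC OS ROM image";
            data = /incbin/("riscos.bin");
            type = "kernel";
            arch = "arm";        /* 32-bit; a 64-bit payload would say arm64 */
            os = "linux";        /* bootm is pickiest with anything else */
            compression = "none";
            load = <0x40008000>; /* placeholder load/entry - set for your SoC */
            entry = <0x40008000>;
            hash-1 { algo = "crc32"; };
        };
    };

    configurations {
        default = "conf-1";
        conf-1 {
            description = "Boot RISC OS";
            kernel = "riscos";
        };
    };
};
```

You'd build it with `mkimage -f riscos.its riscos.itb` and boot the result with `bootm`; the `arch = "arm"` property is what tells a 64-bit U-Boot to drop to aarch32 before jumping in, assuming the build supports it.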

csammis
Aug 26, 2003


iospace posted:

The interesting thing, though this may be EMF from my computer so who knows, is my dad's iPhone was reading 186 when I was getting 180. Forgot to check North.

:getin:

Magnetometer calibration is a whole thing. There's hard iron interference - permanent magnets nearby that might be distorting any magnetic fields (think speaker magnets or lightly magnetized screws) - and soft iron interference - ferrous metals that have been temporarily magnetized unintentionally (by the Earth's magnetic field) or intentionally (speaker coils, transformers, what have you). These can usually be statically mapped out and compensated for in any given device.

Then there's the most hilarious gotcha of all: magnetic north is usually nowhere close to true north. The Earth's magnetic north pole isn't situated on its geographic pole, and depending on where you are on the Earth's surface the offset can vary from a fraction of a degree to several degrees. This is called magnetic declination. Your dad's iPhone is probably compensating for it because it knows, geographically, where it is. There are a few models used to calculate drift and issue corrections, but one standard approach is to precalculate a table of magnetic declination values by latitude and longitude and then interpolate based on your current location. NOAA has a service you can use to create that table. You can get as ridiculously fine-grained as you have the program space for.

This app note has some good calibration information. This app note is in my opinion the gold goddamn standard for tilt compensated e-compass calculation and calibration (and it's based around the LSM303).

Mr. Powers posted:

Enjoy magnetometer to heading hell!

Amen brother, amen


iospace
Jan 19, 2038


I knew that magnetic north is not true north, which is why, when I moved it off my desk and into the kitchen, it seemed a bit more accurate. Either way, the basic functionality is there; now it's time to get it really working!

(the feeling I got when I saw the numbers update constantly reminded me why I do this poo poo)

e: dicking around with it in class - holy hell, there have to be some really crazy magnetic fields in here, because it was going nuts.

iospace fucked around with this message at 22:00 on May 15, 2018
