echinopsis
Apr 13, 2004

by Fluffdaddy

rjmccall posted:

these are good posts


feedmegin
Jul 30, 2008

carry on then posted:

there are old issues of byte in the internet archive and the ads and reviews for things like dos memory managers make the experience of using a pc back then sound dire. there's no way i wouldn't have been a mac user if i were born early enough to have to make that decision back then

That would very much depend on how much money you had.

Sweevo
Nov 8, 2007

i sometimes throw cables away

i mean straight into the bin without spending 10+ years in the box of might-come-in-handy-someday first

im a fucking monster

extended vs expanded memory is easy

extended memory is the one people actually used. expanded memory is the one people only ever set up by mistake

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl
where did DEC go for chip fab? did they have their own plant, did they hire others?

echinopsis
Apr 13, 2004

by Fluffdaddy
I never understood the difference between chip and fast ram on the amiga

Radia
Jul 14, 2021

And someday, together.. We'll shine.
the turbo button makes your computer faster, it's right there in the name

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl

Lady Radia posted:

the turbo button makes your computer faster, it's right there in the name

i remember my dad being grumpy about "turbo" being used as a word for "fast"

"oh is there a turbine involved? no?? so why did you use the word turbo?!"

ultravoices
May 10, 2004

You are about to embark on a great journey. Are you ready, my friend?

echinopsis posted:

I never understood the difference between chip and fast ram on the amiga

chip ram is the stuff that's shared between the cpu and the video/DMA hardware.

fast ram was the next block of ram, directly addressable only by the CPU.

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

Farmer Crack-Ass posted:

where did DEC go for chip fab? did they have their own plant, did they hire others?

I believe they had their own fab in the 1970s-80s at least, don’t know about the 1990s and beyond

Radia
Jul 14, 2021

And someday, together.. We'll shine.
i like when companies have their own fab. i dont have a good reason as to why. i think it's neat.

carry on then
Jul 10, 2010

by VideoGames

(and can't post for 10 years!)

Lady Radia posted:

i like when companies have their own fab. i dont have a good reason as to why. i think it's neat.

it's great right up until you don't have enough to manufacture to offset their massive costs, which leads to fun things like ibm manufacturing the atari jaguar of all things just to keep the factory bringing money in (not sure if they were actually fabbing any chips for the jaguar but the principle is similar with any factory)

Kazinsal
Dec 13, 2011



AMD spun their fabs off in 2008 and sold a majority share in the resulting company to get rid of like a billion dollars in debt and it kept them afloat long enough to develop Zen, demand for which is high enough that AMD probably wishes they still had their own fabs lol


working on a post about interrupt routing in x86 over the years. not sure what I'll write about after that, kinda running out of actually interesting stuff

echinopsis
Apr 13, 2004

by Fluffdaddy

ultravoices posted:

chip ram is the stuff that's shared between the cpu and the video/DMA hardware.

fast ram was the next block of ram, directly addressable only by the CPU.

fast ram sucks it seems

Quebec Bagnet
Apr 28, 2009

mess with the honk
you get the bonk
Lipstick Apathy

Kazinsal posted:

working on a post about interrupt routing in x86 over the years. not sure what I'll write about after that, kinda running out of actually interesting stuff

you could always post about something cursed in x86 instead

SYSV Fanfic
Sep 9, 2003

by Pragmatica

echinopsis posted:

I never understood the difference between chip and fast ram on the amiga

Kinda the difference between having a choice between a motorcycle all to yourself and a car you share with your siblings Agnes, Angus, and Paula. Sometimes you absolutely need the car (and have to wait), especially if you're doing something together. Otherwise you can zip along on your bike.

Course the bike is the exact same speed as the car. You just can't share it.

SYSV Fanfic fucked around with this message at 07:26 on Jan 18, 2022

Kazinsal
Dec 13, 2011



Quebec Bagnet posted:

you could always post about something cursed in x86 instead

it's x86. all its components are cursed.

teaser: this one involves embedded bytecode in firmware that it's up to the OS to implement a VM for

echinopsis posted:

fast ram sucks it seems

fast ram was good for throwing code into because the CPU would never stall for a few bus cycles fetching instructions like it could if the code was in chip RAM (because the RAM may be busy being DMAed to/from or the video chip could be reading it to redraw a scanline or what have you).

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl

Kazinsal posted:

AMD spun their fabs off in 2008 and sold a majority share in the resulting company to get rid of like a billion dollars in debt and it kept them afloat long enough to develop Zen, demand for which is high enough that AMD probably wishes they still had their own fabs lol

i would think they're happier with being able to hire TSMC and get bleeding edge fabs, but i could be wrong

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl
were 8088s really substantially cheaper to manufacture than 8086s, or was that mostly a market segmentation thing?

Kazinsal
Dec 13, 2011



alright, interrupts on x86. I was hoping to get to this post sooner but instead of editing my draft last night I spent five hours in the ER but now I have antibiotics and painkillers so it's time to :justpost:

x86 chips have only a single general-purpose interrupt pin but they support up to 256 interrupts -- you can invoke these in software manually with the int imm8 instruction, and hardware can invoke a specific interrupt by writing the interrupt number to the low 8 bits of the data bus and raising the INTR pin (generally an interrupt controller does this and other devices ask the bus to tell the interrupt controller to do it for them).

back in the early days of the 8086, the PC had a really simple Programmable Interrupt Controller, the Intel 8259A (an enhanced version of the 8080-era 8259 that added 8086/8088-mode support). it had eight interrupt pins that would be hooked up to devices, an interrupt output pin that was hooked up to the CPU's interrupt pin, and 8 data pins for communicating with the CPU and software. during bootup you'd tell the 8259 what its base interrupt vector was and it would add that to whatever interrupt pin number it was signalling for when it told the processor to raise an interrupt. super convenient! except at that time Intel hadn't really codified the reserved set of interrupt numbers and the 8086/8088 only had seven faults/traps built in, so IBM's BIOS hooked up the 8259 to vectors 0x08 through 0x0F (and used the next 16 vectors for BIOS service routines, the cheeky fuckers).
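to make the vector math concrete, here's a minimal C sketch (the base is whatever you programmed into the 8259 at init; protected-mode OSes typically remap the base to 0x20 so PIC vectors stop colliding with CPU exceptions):

```c
#include <stdint.h>

/* the 8259A adds its programmed base vector to the IRQ line number.
 * IBM's BIOS used base 0x08; most protected-mode OSes remap to 0x20+. */
static inline uint8_t pic_irq_to_vector(uint8_t base, uint8_t irq)
{
    return (uint8_t)(base + irq);
}
```

so with IBM's mapping, IRQ 0 (the timer) lands on vector 0x08, which is exactly the 286's double-fault vector -- part of why the original layout aged so badly.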

this was alright for a bit, but suddenly PC users had a shitload of ISA cards in their systems and ran out of IRQs pretty quickly, especially because IBM used IRQ 0 for the Programmable Interval Timer, IRQ 1 for the keyboard controller, and 3 and 4 for serial ports. add in a floppy drive (usually IRQ 6) and there's only three IRQs free for other cards. sooooo in classic hack-something-together fashion, IBM bodged another handful of interrupts into the 286-based PC/AT by throwing *another* 8259A on the board, hooking its interrupt output pin to the first 8259's IRQ 2 pin, and letting IRQs 8-15 cascade through two interrupt controllers to get to the CPU. this *worked*, but it also meant that you now needed to keep track of whether your interrupt was being invoked by the primary PIC or the secondary PIC, and acknowledge the IRQ on the primary PIC in both cases plus the secondary PIC if it came in on that one.
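the cascade bookkeeping looks roughly like this in C (a sketch, not a real driver -- `outb` here is a caller-supplied port-write function rather than actual port I/O, so the logic isn't tied to hardware):

```c
#include <stdint.h>
#include <stdbool.h>

#define PIC1_CMD 0x20  /* primary 8259A command port */
#define PIC2_CMD 0xA0  /* secondary 8259A, cascaded through IRQ 2 */
#define PIC_EOI  0x20  /* end-of-interrupt command byte */

/* IRQs 8-15 arrive via the secondary PIC on the PC/AT */
static bool irq_is_on_secondary_pic(uint8_t irq)
{
    return irq >= 8;
}

/* ack the secondary PIC if the IRQ came in on it, and the primary always */
void pic_send_eoi(uint8_t irq, void (*outb)(uint16_t port, uint8_t val))
{
    if (irq_is_on_secondary_pic(irq))
        outb(PIC2_CMD, PIC_EOI);
    outb(PIC1_CMD, PIC_EOI);
}
```

forget the secondary EOI and IRQs 8-15 just silently stop arriving, which is exactly the kind of bug this era was full of.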

older yosposters will remember having to janitor IRQs. this is why. there simply weren't enough, even though all the way as far back as the 8086, the processor itself could happily address a full 256 interrupt vectors. and at the time the 8259A was one of the most advanced interrupt controllers available as a single integrated circuit, what with how reprogrammable it was. as plug and play capability started to show up with cards that had the ability to signal "yes, this is actually me raising an interrupt", driver and kernel developers got used to checking that in their interrupt routines to allow multiple devices to safely share a single IRQ.


enter the 486. at this point people have realized that multiprocessor systems are neat but expensive, so Intel baked in some multiprocessor support. they also released a new interrupt controller to support this called the 82489DX Advanced Programmable Interrupt Controller (APIC), which was a single chip that could handle either of two roles. one, the IOAPIC, took the place of the 8259s (and in fact would emulate them by default, so a freshly booted IBM-compatible system would still behave like a PC/AT). the other, the Local APIC, was attached to each individual CPU socket and handled local external interrupt routing and inter-processor interrupts (IPIs). they were physically the same chip, just configured with fuses to do one job or the other, and they had their own local bus for communication and interrupt routing separate from the system bus. the local APIC can raise interrupts on the CPU it's attached to on any vector from 32 to 255, meaning we were finally free of interrupt routing catastrophes, right?

haha you're like a third of the way through the post you know where this is going

so for starters, ISA cards were still in common use, and they could still only talk to the emulated 8259 side of the IOAPIC. more advanced PnP EISA cards and, by the Pentium, PCI cards could be set up to route through the IOAPIC to arbitrary interrupts, but this had a few issues. first, the Multi-Processor Tables generated by the system's firmware would have to be parsed to see if each device could actually do that properly (mostly for the legacy devices; I've never heard of a PCI card that *needs* legacy PIC interrupt routing). if so, you now had to turn on APIC mode on each local APIC and the IOAPIC, tell the device (or its configuration space on the PCI bus) that you're moving to APIC mode, and that it needs to generate its interrupts on a specific line. now your kernel needs to keep tabs on what interrupts are going where, because sometimes you'd STILL have multiple PCI cards on the same interrupt and they couldn't be reassigned for dumb and awful reasons like cheap bus controllers on the cards or whatnot. also you'd need your interrupt trampoline to tell the handler what interrupt it came from to make guessing the device easier. but it made it less likely you'd need to overload your IRQs, so it sucked less. right?
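programming the IOAPIC per-pin boils down to building a 64-bit redirection table entry. this sketch follows the later 82093AA-style register layout and only fills the fields a simple kernel cares about (fixed delivery, physical destination); it's an illustration, not a complete driver:

```c
#include <stdint.h>

/* build an IOAPIC redirection table entry: vector, trigger/polarity for the
 * pin, and which local APIC the interrupt should be delivered to.
 * bits 8-10 (delivery mode) and bit 11 (dest mode) are left 0 = fixed/physical. */
static uint64_t ioapic_redir_entry(uint8_t vector, uint8_t dest_apic_id,
                                   int level_triggered, int active_low)
{
    uint64_t e = vector;                          /* bits 0-7: vector */
    e |= (uint64_t)(active_low & 1) << 13;        /* bit 13: pin polarity */
    e |= (uint64_t)(level_triggered & 1) << 15;   /* bit 15: trigger mode */
    e |= (uint64_t)dest_apic_id << 56;            /* bits 56-63: destination */
    return e;
}
```

ISA-style interrupts are edge-triggered/active-high, PCI interrupt lines are level-triggered/active-low, which is exactly the sort of per-device detail those firmware tables had to tell you.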


in the mid 90s power management became a concern and the Advanced Configuration and Power Interface spec came about to replace the previous Advanced Power Management spec as well as the Multi-Processor Spec that the aforementioned MP Tables were part of and the Plug and Play BIOS spec that made IRQ and I/O port reconfiguration work on PnP-aware ISA cards. ACPI is this gigantic unwieldy beast of a spec that is composed of dozens of tables, some of which are easily parseable, like the base system information ones and the APIC list so you can tell how many CPUs you have and start them up. then there's the PCI routing table. the PCI routing table is not actually a table. it's bytecode.

see, ACPI has its own virtual machine spec called ACPI Machine Language (AML). AML bytecode was designed with the intent of being CPU-agnostic, so in theory the same ACPI code could be used on x86 or PowerPC or ARM or any other system that had ACPI compatibility. in practice, it's pretty much x86 only, but we're stuck with it, so OSes that wanted to properly support full multiprocessing with interrupt routing through different processors needed to have an AML virtual machine, because lol gently caress you the firmware doesn't have one for you. and AML is *dense*. it's not a general purpose virtual machine, but it's pretty close. it's a whole CISC machine in its own right, and it's part of the reason that power and multiprocessor routing on Linux sucked so much for so long -- the only people who got a whole AML virtual machine done were Microsoft and Intel. Intel later open sourced theirs as the ACPI Component Architecture, but it's an enormous library (I'm pretty sure at this point it's well north of a million lines of code) and integrating it into a kernel is just a nightmare, but if you want to do APIC routing properly, you gotta either import ACPICA and hook it up to your kernel or write your own subset of an AML virtual machine. once you've got that all ready to go, you use it to find and execute the bytecode that informs the ACPI-aware firmware that you're moving to IOAPIC mode, find and execute the bytecode to see what virtual interrupt pins (INTA# through INTD#) are being used on every device, then you need to finally program a redirection entry into the IOAPIC with the information you got from the ACPI tables so that the IRQs for those pins actually go to the correct interrupt vectors.
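even before you touch AML, just *trusting* the tables means validating them: the ACPI spec requires that every table's bytes (header included) sum to zero mod 256. a minimal C sketch of that check, which is about the only genuinely easy part of ACPI:

```c
#include <stdint.h>
#include <stddef.h>

/* ACPI checksum rule: all bytes of a table, including the checksum field
 * itself, must sum to 0 modulo 256 */
static int acpi_checksum_ok(const uint8_t *table, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = (uint8_t)(sum + table[i]);
    return sum == 0;
}
```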

if it's such a pain, why would you use the IOAPIC for interrupt routing in a multiprocessor system? well, at this point in history you have to. the 8259 PIC emulation mode only connects to CPU 0, so to implement multiprocessor interrupt routing with it you would have to dedicate CPU 0 to always being in charge of every interrupt that comes in and then use IPIs to tell another CPU to actually do the work. motherboard designers could also add more IOAPICs onto the system if it was expected that you'd be shoving enough devices in there to result in more than 16 IRQs (the max an 82489DX could handle) being needed. also, the IOAPIC has about one third the interrupt latency that the PIC/emulated PIC does, so you get significant performance improvements from it.

in the Pentium 4 the APIC was extended as the xAPIC (Intel is not exactly a creative bunch; it stands for Extended APIC), increasing the number of supported CPUs and using the high-speed system bus to communicate between local APICs instead of a separate, much slower (by the P4 era) out-of-band bus. the x2APIC showed up in Nehalem and theoretically increased the number of supported CPUs in a machine to 2**32-1. more importantly though, the x2APIC reduced inter-processor interrupt latency and complexity, and made virtualization of IPIs much faster as well. the x2APIC also makes the "clustered mode" method of addressing batches of CPUs/cores as a single logical destination much less annoying to use; instead of needing to parse a hierarchy of clusters to figure out what cluster mapped to what set of local APICs, you just check one of the CPUID functions (I think it's EAX=1F, ECX=0) and it tells you how to interpret cluster IDs.


but what if we don't want to do this? well, in some cases, we're in luck! the thoughtful shits at PCI-SIG added something called Message Signalled Interrupts in the PCI 2.2 spec and made it mandatory in PCI Express. MSI works by ignoring the IOAPIC altogether and letting you tell a device to simply write a data word to an arbitrary address on the physical address bus when there's an interrupt. since every local APIC in a system is exposed to the address bus, you can skip the IOAPIC and a driver can tell a device that supports MSI to just tell a specific CPU's local APIC to raise an interrupt on whatever vector you want it to. this is even faster than IOAPIC routing, so your latency is cut down to between 1/7 and 1/10 of emulated PIC routing, *and* you can just blat the interrupt exactly where it needs to go. if only PCI-SIG was also thoughtful enough to not charge thousands of dollars just to read the specifications for the system bus that practically every desktop, laptop, and server on the planet uses.
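the address/data encoding is pleasantly simple for the common fixed-delivery, edge-triggered case. a sketch following the Intel SDM layout (physical destination mode; the fancier redirection-hint and logical-mode bits are left zero):

```c
#include <stdint.h>

/* the local APICs decode writes into the 0xFEE00000 physical window;
 * bits 12-19 of the address select the destination APIC ID */
#define MSI_ADDR_BASE 0xFEE00000u

static uint32_t msi_address(uint8_t dest_apic_id)
{
    return MSI_ADDR_BASE | ((uint32_t)dest_apic_id << 12);
}

/* fixed delivery mode, edge trigger: the data word is just the vector */
static uint16_t msi_data(uint8_t vector)
{
    return vector;
}
```

a driver programs these two values into the device's MSI capability registers, and from then on the device "raises an interrupt" by doing an ordinary memory write.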

we can do better, though. MSI doesn't require support for 64-bit addressing, and a device can only allocate up to 32 interrupts with MSI. so the extended MSI-X spec fixes this by requiring 64-bit addressing support and up to 2048 interrupts per device through a table that supports sending different interrupts to different destinations (so you could have different queues on a NIC go to different processors, for example). sure, each processor itself can only support 224 non-fault interrupts per core, but that's fine! you've got loads of interrupts, and modern systems have lots of cores! go ham!

as a fun note, PCI Express doesn't actually support ISA and PCI style interrupt pins/lines. it just emulates them by using special in-band messages to tell the bus controller "hey, pretend I raised #INTA" etc.


so now we're at the modern state of interrupt routing on x86: the x2APIC being primarily used for local APIC functionality (which since the Pentium has been integrated into the CPU core), MSI/MSI-X being used to actually route interrupts wherever possible, and the IOAPIC for edge cases like legacy devices. but there's a few other neat features that local APICs provide.

since the local APIC is integrated into the CPU core it can benefit from the core's high precision multi-GHz timing. this means that the local APIC is technically capable of providing extremely high resolution timers, down to sub-microsecond scale. early local APICs had a few timer bugs and calibrating the local APIC timer is a bit of a pain (it doesn't actually tell you its frequency; you have to estimate it at bootup) but these days Windows uses it for internal kernel timing and Linux uses it to provide tickless real-time kernel functionality. each local APIC timer can be configured differently, and the timer supports three modes: one-shot, where it counts down for a specified period and then raises an interrupt; periodic, where it counts down for a specified period, raises an interrupt, then starts counting down again; and TSC deadline, where it waits until the CPU's internal cycle counter reaches a certain value and then raises an interrupt.
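the calibration itself is just arithmetic once you've let the timer free-run across an interval some other timer (PIT or HPET) measured for you. a sketch of the scaling step, keeping in mind the APIC timer counts *down*:

```c
#include <stdint.h>

/* estimate the APIC timer frequency: read the count at the start and end of
 * a known interval (measured in microseconds by a reference timer), then
 * scale the elapsed ticks up to ticks-per-second */
static uint64_t apic_timer_hz(uint32_t start_count, uint32_t end_count,
                              uint64_t interval_us)
{
    uint64_t ticks = (uint64_t)start_count - end_count;  /* counts down */
    return ticks * 1000000ull / interval_us;
}
```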

the local APICs also let you send arbitrary interrupts to other CPUs. this is excellent for telling other CPUs to synchronize to whatever one CPU just did. because translation lookaside buffers are per-CPU, if one CPU modifies a page table and invalidates the appropriate TLB entry, any possible TLB entries for that page will be unaffected on the other CPUs. so you assign an interrupt vector on all CPUs to "invalidate this TLB entry" or "invalidate all your TLB entries" and when you need to do a TLB shootdown globally you send an IPI to all processors except the originating one for that vector. anything you can think of that you need a fast method to get another CPU into kernel mode and have it do something you can assign a known interrupt vector for and just issue an IPI whenever needed. there's pre-defined interrupt destination groups for "broadcast", "broadcast except to self" and "self-IPI" if you need that for some reason, so most of the time you only need to issue one IPI and they should all theoretically arrive at all CPUs at the exact same time.
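those destination shorthands live in the interrupt command register (ICR). a sketch of the xAPIC low-dword encoding per the Intel SDM, covering the fixed-delivery broadcast-except-self case a TLB shootdown uses:

```c
#include <stdint.h>

/* xAPIC ICR low dword: bits 0-7 vector, bits 8-10 delivery mode (0 = fixed),
 * bit 14 level assert (must be 1 for non-INIT IPIs),
 * bits 18-19 destination shorthand */
#define ICR_LEVEL_ASSERT      (1u << 14)
#define ICR_DEST_SELF         (1u << 18)
#define ICR_DEST_ALL          (2u << 18)
#define ICR_DEST_ALL_BUT_SELF (3u << 18)

static uint32_t ipi_icr_low(uint8_t vector, uint32_t shorthand)
{
    return (uint32_t)vector | ICR_LEVEL_ASSERT | shorthand;
}
```

so a TLB shootdown is roughly: write your "invalidate" vector with `ICR_DEST_ALL_BUT_SELF` to the ICR, then spin until every other CPU acknowledges (the vector number itself is whatever your kernel assigned, not architectural).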

and the best part is that you don't even need to gently caress with ACPI's garbage virtual machine in order to use them.

echinopsis
Apr 13, 2004

by Fluffdaddy
:worship:

Kazinsal
Dec 13, 2011



Farmer Crack-Ass posted:

were 8088s really substantially cheaper to manufacture than 8086s, or was that mostly a market segmentation thing?

not particularly, because the pinout is the same and the die is the same size; the 8088 just has an 8-bit external data bus where the 8086 has a 16-bit one, so maybe it was a couple bucks less per unit at the "buying by the thousands" scale. I couldn't find good reliable numbers for 8086 per-unit launch prices but the 8088 was released at a list price of $124.80 per unit without volume discounts.

also the 8-bit data bus of the 8088 made wiring it up to MCS-85 family chips super easy so IBM basically slapped together a motherboard containing an 8088 and a bunch of 8080/8085 support chips and it just magically worked right. the IBM PC used the following MCS-80/85 components: 8237 DMA controller, 8253 programmable interval timer, 8255A programmable peripheral chip (basically a GPIO controller), 8259A programmable interrupt controller, and 8288 bus controller (the 8086-family counterpart of the 8080's 8228 system controller). the keyboard controller was just firmware running on an 8048 microcontroller on its own 400 kHz clock and everything else was an add-in card.

I've written code for a PC/XT and it's wonderfully quaint. everything's so slow you don't really have to worry about timing and synchronization all that much. kinda thinking about doing a quick and lovely unix clone for the 8088 as an extremely elaborate and niche april fools joke

EIDE Van Hagar
Dec 8, 2000

Beep Boop

Good Sphere posted:

my dad worked there for 20+ years, before moving onto Intel when they were bought up :)

i had a poster on my bedroom wall of the alpha processor

his mother asked him to look for a job. he opened up a phone book and saw digital equipment corporation and said "oh computers. i've heard of these". he first worked with hardware assembly, then was an instructor for various things, including using oscilloscopes, until he went onto technical writing. he made a silly (but very inspiring) video on a weekend there in his early days. there was a production rental department where he got cameras, and made a video about him stuck in a microprocessor, a journey through the manufacturing plant, and something with someone dressed in a gorilla costume. gotta post it someday

this should really all be on youtube

EIDE Van Hagar
Dec 8, 2000

Beep Boop

Farmer Crack-Ass posted:

i remember my dad being grumpy about "turbo" being used as a word for "fast"

"oh is there a turbine involved? no?? so why did you use the word turbo?!"

i got really really mad once when the nyt crossword tried to use “supercharged, as an engine” as a clue for “revved”. i remember a lot of website comments with other angry car people saying “you can rev any car but ‘supercharged’ has a specific technical meaning that implies use of a supercharger, which is orthogonal to throttle input”

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

this is yosposting

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
the times just had a clue like “aunt and uncle’s little girl” for “niece”, so

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

rjmccall posted:

the times just had a clue like “aunt and uncle’s little girl” for “niece”, so

whoops

git apologist
Jun 4, 2003

if you scroll real fast through kazinsal’s posts in greenpos it’s like that thing in popular science fiction movie “the matrix”

feedmegin
Jul 30, 2008

Kazinsal posted:

not particularly, because the pinout is the same and the die is the same size; the 8088 just has an 8-bit external data bus where the 8086 has a 16-bit one, so maybe it was a couple bucks less per unit at the "buying by the thousands" scale.

Yeah, if I recall, it's not so much the cost of the CPU itself - it's that wiring up an 8 bit databus is cheaper and involves fewer components and possibly less glue logic than 16 bits. You're only driving half the lines! And you don't have to worry about stuff like high and low byte lane selects.

FalseNegative
Jul 24, 2007

2>/dev/null
Kazinsal these posts are incredible, and I'm learning a ton about the computers I grew up on. Thank you!

Zlodo
Nov 25, 2006

feedmegin posted:

Yeah, if I recall, it's not so much the cost of the CPU itself - it's that wiring up an 8 bit databus is cheaper and involves fewer components and possibly less glue logic than 16 bits. You're only driving half the lines! And you don't have to worry about stuff like high and low byte lane selects.

there was this cool interview of some of the team that designed the 68k at Motorola and they said IBM chose the 8088 over the 68k at the time specifically because of the 8 bit bus since it helped make all the hardware cheaper (and they could use existing peripheral chips designed for 8 bit CPUs)

IBM asked Motorola to do the same and they refused, the Motorola guy in the interview said that in retrospect it would have been actually quite easy to do (and they did do it eventually, it was the 68008)

Silver Alicorn
Mar 30, 2008

𝓪 𝓻𝓮𝓭 𝓹𝓪𝓷𝓭𝓪 𝓲𝓼 𝓪 𝓬𝓾𝓻𝓲𝓸𝓾𝓼 𝓼𝓸𝓻𝓽 𝓸𝓯 𝓬𝓻𝓮𝓪𝓽𝓾𝓻𝓮
6B00B

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

Kazinsal posted:

I've written code for a PC/XT and it's wonderfully quaint. everything's so slow you don't really have to worry about timing and synchronization all that much. kinda thinking about doing a quick and lovely unix clone for the 8088 as an extremely elaborate and niche april fools joke

it’s a decent platform for that, that’s what Andy Tanenbaum & crew did to make MINIX and another group did to create PC/IX

I’d suggest a DEC RT-11 or HP RTE or MPE clone just to be different, should be just as easy

or heck TOPS-10 or TOPS-20 since those had lots of fans

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

rjmccall posted:

the times just had a clue like “aunt and uncle’s little girl” for “niece”, so

spoiler that poo poo man

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

Zlodo posted:

there was this cool interview of some of the team that designed the 68k at Motorola and they said IBM chose the 8088 over the 68k at the time specifically because of the 8 bit bus since it helped make all the hardware cheaper (and they could use existing peripheral chips designed for 8 bit CPUs)

IBM asked Motorola to do the same and they refused, the Motorola guy in the interview said that in retrospect it would have been actually quite easy to do (and they did do it eventually, it was the 68008)

that’s a weird assertion since the 68000 has had 6800-series peripheral support from the start

my understanding was that Motorola didn’t have second sourcing and full qualification set up early enough for IBM’s adoption

on the other hand there’s still just more poo poo to route, and you still always had to use 16 bits worth of RAM and ROM until the 68020’s dynamic bus sizing support

Sweevo
Nov 8, 2007

i sometimes throw cables away

i mean straight into the bin without spending 10+ years in the box of might-come-in-handy-someday first

im a fucking monster

the 68000 was also just really expensive at the time

SYSV Fanfic
Sep 9, 2003

by Pragmatica

Farmer Crack-Ass posted:

were 8088s really substantially cheaper to manufacture than 8086s, or was that mostly a market segmentation thing?

The cheapest ram at the time was 1 bit wide. Putting eight of these in parallel with a higher capacity was substantially cheaper than putting sixteen of them in parallel at a lower capacity. It also made it easier to perform IO operations with legacy 8-bit components.

IBM's PC team had other reasons to choose it. They didn't want the PC to threaten IBMs high margin 16-bit stuff, and Intel offered a discount.

Kazinsal
Dec 13, 2011



Sweevo posted:

the 68000 was also just really expensive at the time

also this. in 1979 the 68000 was $500 a piece, compared to the 8088's $125 and the 8086 being probably not terribly much more than that.

echinopsis
Apr 13, 2004

by Fluffdaddy

Gentle Autist posted:

if you scroll real fast through kazinsal’s posts in greenpos real fast it’s like that thing in popular science fiction movie “the matrix”

feedmegin
Jul 30, 2008

eschaton posted:

that’s a weird assertion since the 68000 has had 6800-series peripheral support from the start

my understanding was that Motorola didn’t have second sourcing and full qualification set up early enough for IBM’s adoption

on the other hand there’s still just more poo poo to route, and you still always had to use 16 bits worth of RAM and ROM until the 68020’s dynamic bus sizing support

Well, 16 bit data lines. The original 68k at an architectural level had a 32 bit address space but only brought out 24 address lines (limiting maximum addressable memory to 16 megabytes) - the top byte of an address would thus be ignored. This caused problems down the line when addressable memory was increased, because people would use that top byte for stuff like type tags; a similar thing happened with early ARM, which is why modern 64 bit architectures that don't use all their theoretical address space specify exactly what the unused bits should be and fault if you put anything else in there.
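The truncation is literally just a mask, which is why a tagged pointer and its untagged twin hit the same memory on a 68000 -- and why code relying on that broke the moment more address lines showed up. A sketch:

```c
#include <stdint.h>

/* original 68000: only 24 address lines reach the bus, so the top byte of
 * a 32-bit address is silently ignored by the hardware */
static uint32_t m68k_bus_address(uint32_t addr)
{
    return addr & 0x00FFFFFFu;
}
```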


SYSV Fanfic
Sep 9, 2003

by Pragmatica

feedmegin posted:

modern 64 bit architectures that don't use all their theoretical address space specify exactly what the unused bits should be and fault if you put anything else in there.

Just want to add that this check is the default behavior. You can disable these checks on x86_64 and AArch64, and Linux (probably Windows too) uses the highest bits to tag memory regions. The tag bits extend from left to right; when the architecture grows the canonical area, it grows from right to left. It's going to be a while before there is a risk of them overlapping.
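The canonical check is just "bits 63..47 are all copies of bit 47". A C sketch of the check plus the kind of top-byte tagging being described (the 48-bit width and 8-bit tag here are illustrative assumptions, not any particular kernel's layout):

```c
#include <stdint.h>
#include <stdbool.h>

/* x86-64 with 48-bit virtual addresses: an address is canonical iff
 * bits 63..47 are all zeros or all ones */
static bool is_canonical_48(uint64_t va)
{
    uint64_t top = va >> 47;            /* bits 63..47 as a 17-bit value */
    return top == 0 || top == 0x1FFFF;
}

/* hypothetical top-byte tag, assuming the high 8 bits are ignored/masked */
static uint64_t tag_ptr(uint64_t va, uint8_t tag)
{
    return (va & 0x00FFFFFFFFFFFFFFull) | ((uint64_t)tag << 56);
}

static uint8_t ptr_tag(uint64_t va)
{
    return (uint8_t)(va >> 56);
}
```

Note that a tagged user pointer is non-canonical, which is why the hardware features (ARM TBI, x86 LAM) exist to mask the tag before the canonical check happens.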

If anyone learns best through a hands on approach and wants to better understand computer architecture, I'm a big fan of the rc2014 classic ][ and sc126. Soldering one together, then breadboarding something as simple as a circuit that turns LEDs off and on goes a long way to giving people a framework to understand this stuff.

Edit:
If soldering scares you, "The Elements of Computing Systems" is good too, as long as you approach it as an experience where you aren't afraid to copy/paste other people's stuff when you get stuck.

SYSV Fanfic fucked around with this message at 02:54 on Jan 21, 2022
