LASER BEAM DREAM
Nov 3, 2005

Oh, what? So now I suppose you're just going to sit there and pout?

BlankSystemDaemon posted:

Striped mirrors are a way of increasing the IOPS in a RAID array, because spinning rust has a physical upper limit on how many IOPS it is capable of providing - but beyond that, striped mirrors also have a failure mode that striped data with distributed parity doesn't, which is that it can lose data if two specific disks die, whereas raid6/raidz2 will at least let you replace one of the failed drives without faulting the array, unless an URE occurs when there's no data availability.

Those last two words are important, by the way - because data availability is what RAID is for, whereas redundancy when it comes to storage is usually something that comes about in the datapath itself (from using SAS Multipathing with dual controllers for each disk, all the way up to having active-active high-availability systems).

Thank you for the fantastic write-up on this! I legitimately love learning about stuff like this.

That said, I'm still looking for a recommendation for an ethernet or USB-attached RAID enclosure that will play nice with a windows server running Linux VMs. Baseline storage would be 10TB.

Does this make sense, or should I just build a PC, slap in a decent RAID card, and manage it myself? That was my initial instinct, but I want to make sure I'm not wasting time doing stuff the old way. The PC would otherwise be consumer-grade components.

LASER BEAM DREAM fucked around with this message at 15:04 on Aug 16, 2022

Korean Boomhauer
Sep 4, 2008

Adolf Glitter posted:

Maybe, but that's also what is shown on stuff that's been out of stock for months/years.
What parts? There's probably a different place to get them from, or an alternative part.

ASRock Rack motherboard. I guess the RAM I'm looking at is on Amazon as well, but I'm not sure what other places have it that aren't terribly expensive. Digikey has the RAM but it's 70 bux more :negative:

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

LASER BEAM DREAM posted:

Thank you for the fantastic write-up on this! I legitimately love learning about stuff like this.
Use a triple mirror made of 10TB disks.

:v:

YerDa Zabam
Aug 13, 2016



Korean Boomhauer posted:

ASRock Rack motherboard. I guess the RAM I'm looking at is on Amazon as well, but I'm not sure what other places have it that aren't terribly expensive. Digikey has the RAM but it's 70 bux more :negative:

Prices on RAM seem to vary particularly wildly. I was looking at non-registered ECC DDR4 a while back and Scan (I think) was something like £140, Ebuyer £110, and I eventually got it off eBay for £90 (roughly; I don't recall the exact amounts, but it was that sort of range).
The eBay seller was a refurbisher and it had a decent warranty and returns policy.
RAM is always pretty volatile though, and regional pricing is a big thing. poo poo, the UK prices are now all so much higher than the US. I loving hate this place
Fingers crossed for you :-)

BlankSystemDaemon
Mar 13, 2009



Computer viking posted:

Sure, but I would have expected the problems to be "it's hard to get full speed over most cabling" or "it uses too much power", not "the hardware, firmware and drivers all seem to have been made by the less competent interns".
It's bad because 2.5G will happily drop down to 1000BaseT or even 100BaseTX at the slightest provocation, and sometimes even seemingly without any reason.

LASER BEAM DREAM posted:

Thank you for the fantastic write-up on this! I legitimately love learning about stuff like this.

That said, I'm still looking for a recommendation for an ethernet or USB-attached RAID enclosure that will play nice with a windows server running Linux VMs. Baseline storage would be 10TB.

Does this make sense, or should I just build a PC, slap in a decent RAID card, and manage it myself? That was my initial instinct, but I want to make sure I'm not wasting time doing stuff the old way. The PC would otherwise be consumer-grade components.
No worries; I've spent a few decades in storage, and it's one of the few fields where things change so slowly that I'm not strictly speaking out of date, even though my health means I haven't been able to work in the field for half a decade.

First, let's cover something: DAS = Direct Attached Storage, NAS = Network Attached Storage - and I think you want the first, so I'd recommend looking for a USB3.2 Gen2 JBOD DAS - something like this but with fewer disks?
The reason you want USB3.2 Gen2 specifically is that it's got 128b/132b encoding (instead of the 8b/10b encoding that 5Gbps USB3 uses, which gives 20% overhead), and it ensures you're getting the full 10Gbps link.
Ideally it'll also do USB Attached SCSI (UASP), instead of the Bulk-Only Transport protocol that is the default for most USB storage.

A DAS absolutely makes sense if you don't have multiple computers accessing the same data.
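To put rough numbers on that encoding difference, here's a back-of-the-envelope sketch. The line rates are the spec figures; real-world throughput will be lower still once protocol overhead and the drives themselves are factored in.

code:
# Payload bandwidth left over after line coding.
def payload_gbps(line_rate_gbps, data_bits, line_bits):
    """Usable bandwidth once line coding (data_bits per line_bits) is removed."""
    return line_rate_gbps * data_bits / line_bits

gen1 = payload_gbps(5.0, 8, 10)      # 5Gbps USB3 with 8b/10b     -> 20% overhead
gen2 = payload_gbps(10.0, 128, 132)  # USB3.2 Gen2 with 128b/132b -> ~3% overhead

print(f"Gen 1: {gen1:.1f} Gb/s (~{gen1 * 1000 / 8:.0f} MB/s)")
print(f"Gen 2: {gen2:.1f} Gb/s (~{gen2 * 1000 / 8:.0f} MB/s)")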

LASER BEAM DREAM
Nov 3, 2005

Oh, what? So now I suppose you're just going to sit there and pout?

BlankSystemDaemon posted:

It's bad because 2.5G will happily drop down to 1000BaseT or even 100BaseTX at the slightest provocation, and sometimes even seemingly without any reason.

No worries; I've spent a few decades in storage, and it's one of the few fields where things change so slowly that I'm not strictly speaking out of date, even though my health means I haven't been able to work in the field for half a decade.

First, let's cover something: DAS = Direct Attached Storage, NAS = Network Attached Storage - and I think you want the first, so I'd recommend looking for a USB3.2 Gen2 JBOD DAS - something like this but with fewer disks?
The reason you want USB3.2 Gen2 specifically is that it's got 128b/132b encoding (instead of the 8b/10b encoding that 5Gbps USB3 uses, which gives 20% overhead), and it ensures you're getting the full 10Gbps link.
Ideally it'll also do USB Attached SCSI (UASP), instead of the Bulk-Only Transport protocol that is the default for most USB storage.

A DAS absolutely makes sense if you don't have multiple computers accessing the same data.

Wow, that's pretty pricey! I'm thinking of just going with the old-school onboard SATA route in a Fractal Design R5 for easy access.

Since my original post I've been googling around the Home Server subreddit, and for people running Windows I've seen discussion of "File and Storage Services". Essentially you JBOD the disks and let Storage Services create virtual disks for you.

Does anyone have any experience with it, or is that a bad path to pursue?

I like Linux and will be using this PC to host VMs of it, but I don't want to troubleshoot the inevitable issues on the bare-metal machine when I'm only semi-competent in the OS.

LASER BEAM DREAM fucked around with this message at 01:39 on Aug 17, 2022

Klyith
Aug 3, 2007

GBS Pledge Week

LASER BEAM DREAM posted:

Since my original post I've been googling around the Home Server subreddit, and for people running Windows I've seen discussion of "File and Storage Services". Essentially you JBOD the disks and let Storage Services create virtual disks for you.

Does anyone have any experience with it, or is that a bad path to pursue?

I like Linux and will be using this PC to host VMs of it, but I don't want to troubleshoot the inevitable issues on the bare-metal machine when I'm only semi-competent in the OS.

Windows Storage Spaces has a kinda unsettling track record of MS update fuckups, but if your only alternative is Linux then ZFS-on-Linux is almost certainly worse. Btrfs can do mirrors fine but is still experimental on parity. BSD is the preferred OS for NAS servers.

If you're planning to use 10 for a while you're probably fine, since 10 is now in minimal-update maintenance mode. If you're gonna use 11 I would set a long delay for feature updates.

Korean Boomhauer
Sep 4, 2008

Korean Boomhauer posted:

ASRock Rack motherboard. I guess the RAM I'm looking at is on Amazon as well, but I'm not sure what other places have it that aren't terribly expensive. Digikey has the RAM but it's 70 bux more :negative:

i timed my complaining on this well, because the motherboard cropped up on Newegg with a back-in-stock date of next week yesssss

BlankSystemDaemon
Mar 13, 2009



Klyith posted:

Windows Storage Spaces has a kinda unsettling track record of MS update fuckups, but if your only alternative is Linux then ZFS-on-Linux is almost certainly worse. Btrfs can do mirrors fine but is still experimental on parity. BSD is the preferred OS for NAS servers.

If you're planning to use 10 for a while you're probably fine, since 10 is now in minimal-update maintenance mode. If you're gonna use 11 I would set a long delay for feature updates.
ZFS on FreeBSD and Linux are both OpenZFS; ZFS-on-Linux no longer exists as a separate project.
The OpenZFS repo is mostly system-independent code, plus some bits of system-dependent code for FreeBSD and Linux respectively.
The plan is that eventually, support for Windows, macOS, NetBSD and even Illumos will also be added.

I still think FreeBSD is the better option, because the tooling on Linux still isn't as integrated: you can't easily do boot environments on Linux, and there are still the device-by-id gotchas because of how Linux handles floppy support (ie. they won't go away until floppy support does, and only once someone fixes it after that). But sharing a codebase means there's less divergence in features, which is nice (specifically, it means that Linux can now use TRIM, which only FreeBSD could before).

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

BlankSystemDaemon posted:

I still think FreeBSD is the better option, because the tooling on Linux still isn't as integrated: you can't easily do boot environments on Linux,

There's the ZFSBootMenu project now, which supports boot environments for Linux-based systems. It's pretty cool and easy to get set up (for a nerd), although you need to jump through some pretty big hoops to avoid entering your decryption passphrase twice during boot.

BlankSystemDaemon
Mar 13, 2009



Keito posted:

There's the ZFSBootMenu project now, which supports boot environments for Linux-based systems. It's pretty cool and easy to get set up (for a nerd), although you need to jump through some pretty big hoops to avoid entering your decryption passphrase twice during boot.
Or you can use FreeBSD's standard loader, as it supports kload in 14-CURRENT - but that's still one more hoop than should be required to jump through.
The fundamental problem is that most Linux bootloaders appear to think that proper filesystem support is too hard, so they require a copy of the kernel and other ancillary files on the boot disk to be loaded into memory.

If FreeBSD has been doing it for decades, there's no reason anyone else can't do it other than not-invented-here syndrome.
Also, I just noticed that I'm the one who touched that document last. Perhaps I should look into updating it for UEFI? :ohdear:

Computer viking
May 30, 2011
Now with less breakage.

Worst thing is, grub2 has support for reading a whole bunch of file systems and is (as far as I can tell without trying) designed to make it easy to plug in more. They just like their convoluted initrd designs over in linux land, I guess. :shrug:

Hughlander
May 11, 2005

I don’t understand. I’ve been doing proxmox with zfs boot on Linux for the past 5 years. What’s lacking?

Computer viking
May 30, 2011
Now with less breakage.

Hughlander posted:

I don’t understand. I’ve been doing proxmox with zfs boot on Linux for the past 5 years. What’s lacking?

Nothing, if it works it works. It's just not a given in the larger Linux ecosystem - IIRC, Fedora routinely breaks ZFS if you install their recommended kernel upgrades.
Also, a more ZFS-first OS may have some neat extra tools. The boot environments mentioned are basically the ability to make clones of the boot drive before upgrades (or indeed at any point you want), and boot from any of them or roll back to them at will. It's possible to make this work on Linux, and it's not the end of the world to go without it ... but it is neat.
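For anyone curious what a boot environment actually is under the hood, it boils down to a snapshot plus a clone of the root dataset. A minimal sketch of the underlying ZFS operations follows - the dataset and environment names are made-up examples, and on FreeBSD bectl(8) wraps all of this (including the bootloader integration) for you.

code:
# Rough sketch: snapshot and clone the root dataset before an upgrade, so the
# old environment can still be booted if the upgrade goes sideways.
# Dataset/BE names are assumptions; bectl(8) on FreeBSD does this properly.
import subprocess

ROOT_DS = "zroot/ROOT/default"   # assumed root dataset
BE_NAME = "pre-upgrade"          # assumed boot environment name

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("zfs", "snapshot", f"{ROOT_DS}@{BE_NAME}")
run("zfs", "clone", f"{ROOT_DS}@{BE_NAME}", f"zroot/ROOT/{BE_NAME}")
# Rolling back means pointing the pool's bootfs property at the clone, e.g.:
# run("zpool", "set", f"bootfs=zroot/ROOT/{BE_NAME}", "zroot")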

Maneki Neko
Oct 27, 2000

LASER BEAM DREAM posted:

Wow, that's pretty pricey! I'm thinking of just going with the old-school onboard SATA route in a Fractal Design R5 for easy access.

Since my original post I've been googling around the Home Server subreddit, and for people running Windows I've seen discussion of "File and Storage Services". Essentially you JBOD the disks and let Storage Services create virtual disks for you.

Does anyone have any experience with it, or is that a bad path to pursue?

I like Linux and will be using this PC to host VMs of it, but I don't want to troubleshoot the inevitable issues on the bare-metal machine when I'm only semi-competent in the OS.

I ran a storage spaces setup for about 6 years and never had an issue. I eventually moved away from it in the last year or so primarily because the whole hardware setup was getting old and I just moved to unraid instead on a new system.

A Bag of Milk
Jul 3, 2007

I don't see any American dream; I see an American nightmare.
Last I heard ZFS expansion is still aiming for a "Q3 2022" release. Should I reasonably expect it to be available by end of year or would that be hopelessly naive?

BlankSystemDaemon
Mar 13, 2009



A Bag of Milk posted:

Last I heard ZFS expansion is still aiming for a "Q3 2022" release. Should I reasonably expect it to be available by end of year or would that be hopelessly naive?
You can follow the progress of it here, but I've never heard of Q3 2022 as a specific timeline - the only thing I can remember reading in the OpenZFS leadership meeting agenda is that it's meant for OpenZFS 3.0.
I imagine we'll know more after the OpenZFS developer summit coming up.

YerDa Zabam
Aug 13, 2016



Q3 was mentioned in this blog post.

https://freebsdfoundation.org/blog/raid-z-expansion-feature-for-zfs/

(I don't know if integration is the same as release though)

YerDa Zabam fucked around with this message at 11:15 on Aug 18, 2022

Olympic Mathlete
Feb 25, 2011

:h:


Thank you for the previous help, thread. I've got my little Synology 218play up and running and feeding the house various bits of content I've had sat around on external drives for far too many years (I have far too much music, holy poo poo). Picked up an Nvidia Shield TV Pro and am very impressed with that too; the combination of the NAS and the Shield is pretty much the solution I've been after for years but didn't quite realise it.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

BlankSystemDaemon posted:

You can follow the progress of it here, but I've never heard of Q3 2022 as a specific timeline - the only thing I can remember reading in the OpenZFS leadership meeting agenda is that it's meant for OpenZFS 3.0.
I imagine we'll know more after the OpenZFS developer summit coming up.

Nice, this is the only thing keeping me on unraid and not just running a Proxmox host with virtualized storage.

A Bag of Milk
Jul 3, 2007

I don't see any American dream; I see an American nightmare.

BlankSystemDaemon posted:

You can follow the progress of it here, but I've never heard of Q3 2022 as a specific timeline - the only thing I can remember reading in the OpenZFS leadership meeting agenda is that it's meant for OpenZFS 3.0.
I imagine we'll know more after the OpenZFS developer summit coming up.

OK, thanks for the info. Hopefully the last week of October will offer something more concrete. My poor raidz2 pool is at 82% and I hate looking at that little caution symbol

BlankSystemDaemon
Mar 13, 2009



A Bag of Milk posted:

OK, thanks for the info. Hopefully the last week of October will offer something more concrete. My poor raidz2 pool is at 82% and I hate looking at that little caution symbol
ZFS has done a lot to negate the things that caused people to conclude that 80% ~= 100% capacity - I've had several pools reach 100% capacity and recover just fine when I started deleting files.

The one thing ZFS can't do by itself, which needs admin intervention, is keep enough free space around to write transaction groups; since a delete is itself a write, a pool that's at 100% capacity and can't commit a transaction group to disk can't delete anything either. There are ways to fix it, but they're non-trivial.
I don't know why ZFS doesn't default to an N% reservation on the pool dataset like UFS does (it has 8% reserved that only the superuser can write to by default), but it probably should?
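If you want to give yourself that kind of headroom by hand, the usual trick is an empty dataset carrying a reservation you can shrink in an emergency. A minimal sketch - the pool name, capacity, and percentage are made-up examples, though reservation is the real ZFS property:

code:
# Sketch: size a small UFS-style reservation and print the zfs commands that
# would set it up. Pool name, capacity and percentage are assumptions.
pool = "tank"            # hypothetical pool name
pool_size_tib = 50       # hypothetical usable capacity
reserve_pct = 1          # even ~1% leaves plenty of headroom for deletes

reserve_gib = int(pool_size_tib * 1024 * reserve_pct / 100)

# An empty dataset with a reservation keeps that space from ever filling up
# with file data, so the pool can still write the transaction groups needed
# to delete things; shrink or drop the reservation if you ever hit 100%.
print(f"zfs create {pool}/reserved")
print(f"zfs set reservation={reserve_gib}G {pool}/reserved")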

Yaoi Gagarin
Feb 20, 2014

If you use striped mirrors you can expand your pool easily and you can even use different size drives as long as each mirrored pair is matched. It's very convenient
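For anyone who hasn't done it, the expansion being described is a one-liner per added pair. A hedged sketch - the pool and device names are made-up examples, but zpool add ... mirror is the real syntax:

code:
# Sketch: grow a pool of striped mirrors by adding one more mirror vdev.
# Pool and device names are made-up examples.
import subprocess

cmd = ["zpool", "add", "tank", "mirror", "/dev/ada4", "/dev/ada5"]
print("+", " ".join(cmd))
subprocess.run(cmd, check=True)
# The two new disks only need to match each other in size, not the existing
# pairs - which is the flexibility being described above.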

BlankSystemDaemon
Mar 13, 2009



VostokProgram posted:

If you use striped mirrors you can expand your pool easily and you can even use different size drives as long as each mirrored pair is matched. It's very convenient
Striped mirrors are always used to increase IOPS, almost never to increase data availability, because if two disks in the same mirror fail, you lose your entire pool.
With a large enough number of mirrored vdevs, you end up having a lower MTBDL (mean time between data loss) than a single disk.

Even if you buy a mix of vendors and don't use disks with serial numbers too close together, there's still a point at which striped mirrors don't make sense anymore.
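A rough way to see the MTBDL point: if each disk independently has some probability p of dying within a given window, a pool of N two-way mirrors is lost as soon as any one pair loses both of its disks. A toy calculation - the failure probability is an illustrative assumption, not a vendor figure, and real failures are rarely independent:

code:
# Toy model: probability of pool loss for striped two-way mirrors, assuming
# each disk fails independently with probability p within some window.
p = 0.03           # assumed per-disk failure probability (illustrative only)
single_disk = p    # baseline: one lone disk

for n_mirrors in (2, 10, 34, 60):
    pool_loss = 1 - (1 - p**2) ** n_mirrors   # any pair losing both disks
    print(f"{n_mirrors:3d} mirror vdevs: P(loss) = {pool_loss:.4f} "
          f"(single disk: {single_disk:.4f})")

# Once N exceeds roughly 1/p (about 33 pairs here), the striped-mirror pool
# is more likely to lose data in the window than a single bare disk.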

BlankSystemDaemon fucked around with this message at 22:33 on Aug 18, 2022

Yaoi Gagarin
Feb 20, 2014

BlankSystemDaemon posted:

It also means if two disks in a mirror fail, you lose your entire pool.

With a large enough number of mirrored vdevs, you end up having a lower MTBDL than a single disk.

You should have backups!

BlankSystemDaemon
Mar 13, 2009



VostokProgram posted:

You should have backups!
Yes, but by the same token, you also shouldn't build your storage to fail.

A Bag of Milk
Jul 3, 2007

I don't see any American dream; I see an American nightmare.

BlankSystemDaemon posted:

ZFS has done a lot to negate the things that caused people to conclude that 80% ~= 100% capacity - I've had several pools reach 100% capacity and recover just fine when I started deleting files.

The one thing ZFS can't do by itself, which needs admin intervention, is keep enough free space around to write transaction groups; since a delete is itself a write, a pool that's at 100% capacity and can't commit a transaction group to disk can't delete anything either. There are ways to fix it, but they're non-trivial.
I don't know why ZFS doesn't default to an N% reservation on the pool dataset like UFS does (it has 8% reserved that only the superuser can write to by default), but it probably should?

This is all great info. I suppose I can treat 90% as my new temporary 100% and not really worry about it. The difference between 80% and 90% is 5TB for me, a non-trivial amount that I can't imagine I'll fill before expansion drops.

VostokProgram posted:

If you use striped mirrors you can expand your pool easily and you can even use different size drives as long as each mirrored pair is matched. It's very convenient

I've preferred 6-drive raidz2 because it's far more economical and provides plenty of redundancy. There are (temporary?) tradeoffs in terms of flexibility, as we can see here, sure. But if I can expand a 6-drive raidz2 pool to 7 drives, that's still quite safe in terms of total pool size, and then I get all the space of the 7th drive for the cost of just one drive, with full redundancy. My goal is lots of space for cheap, and I'd never get there with mirroring.
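To put numbers on the economics - drive size and counts are just example figures, and ZFS metadata/padding overhead is ignored:

code:
# Usable-capacity comparison, ignoring ZFS metadata and padding overhead.
# Drive size and counts are example figures only.
drive_tb = 10
drives = 6

raidz2_usable = (drives - 2) * drive_tb    # two drives' worth of parity
mirror_usable = (drives // 2) * drive_tb   # two-way mirrors: half the raw space

print(f"{drives}x{drive_tb}TB raidz2 : {raidz2_usable} TB usable, survives any two failures")
print(f"{drives}x{drive_tb}TB mirrors: {mirror_usable} TB usable, survives one failure per pair")

# Once raidz expansion lands, adding a 7th drive to the raidz2 vdev adds a full
# drive of usable space; growing the mirrored pool costs two drives per step.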

LASER BEAM DREAM
Nov 3, 2005

Oh, what? So now I suppose you're just going to sit there and pout?

Maneki Neko posted:

I ran a storage spaces setup for about 6 years and never had an issue. I eventually moved away from it in the last year or so primarily because the whole hardware setup was getting old and I just moved to unraid instead on a new system.

Thanks for the endorsement! I'll post a trip report in the thread once it's up and going!

BlankSystemDaemon
Mar 13, 2009



A Bag of Milk posted:

This is all great info. I suppose I can treat 90% as my new temporary 100% and not really worry about it. The difference between 80% and 90% is 5TB for me, a non-trivial amount that I can't imagine I'll fill before expansion drops.
Set a reservation of a couple hundred megabytes, and never worry about it ever again.

LASER BEAM DREAM posted:

Thanks for the endorsement! I'll post a trip report in the thread once it's up and going!
If you want endorsements, the gold standard for a filesystem is when its creators start using it and when the company moves its entire business to it.
For ZFS, those happened when Jeff Bonwick and Matt Ahrens started using it for their /home after about a year of development, and when Sun moved its entire business to it in 2004-2005.
I'm not sure Microsoft uses either Storage Spaces or ReFS.

BlankSystemDaemon fucked around with this message at 01:22 on Aug 19, 2022

Klyith
Aug 3, 2007

GBS Pledge Week

BlankSystemDaemon posted:

If you want endorsements, the gold standard for a filesystem is when its creators start using it and when the company moves its entire business to it.
I'm not sure Microsoft uses either Storage Spaces or ReFS.

My dude I know you love ZFS and all, but there are good ways to show your love without badmouthing other systems.

Especially when you're saying things that make you look real stupid with a 30-second google. Yes, MS uses ReFS and Storage Spaces.

Wibla
Feb 16, 2011

I'm still using mdraid + XFS and it just works with no drama... :shrug:

BlankSystemDaemon
Mar 13, 2009



Klyith posted:

My dude I know you love ZFS and all, but there are good ways to show your love without badmouthing other systems.

Especially when you're saying things that make you look real stupid with a 30-second google. Yes, MS uses ReFS and Storage Spaces.
I think it came off differently than I meant it, and I apologize for that. I didn't mean to imply that Microsoft isn't using it, just that I wasn't aware of them doing so - but clearly they are at least using Storage Spaces, though there's no mention of ReFS.
There's also a slight difference from what I was talking about with ZFS and Sun and how they used it; Sun moved their entire business to using it internally, and nothing in the article suggests Microsoft does that with Storage Spaces and ReFS - though if they do, that's awesome.

What I was getting at is more the situation with BTRFS and Facebook; Facebook will readily tell you how they use it for their load-dependent scale-out servers (ie. spin up more servers when there's a spike in demand) - but what they don't tell you is that those systems are entirely transient and are assumed to be volatile.
The actual storage solution they use has changed from the combination of HDFS and Hadoop they used for many, many years to something called Tectonic, which, as near as anyone can tell, is a proprietary thing Facebook has no interest in opening up.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Storage Spaces is something that weirds me out. It feels like a knee-jerk reaction to ZFS to me, one that happened around the time the latter became famous. Last I remember, all the clustering and Storage Spaces Direct stuff, which came way later, sits in separate drivers on top of the initial SS stuff. Might as well be independent.

I agree with the ReFS sentiment. If it was worth anything, they'd roll it out to consumer machines. At least way back, that was considered an option. Nowadays, you're hard-pressed to find any info besides what was released back in the Windows 8 days.

Computer viking
May 30, 2011
Now with less breakage.

My impression of Microsoft and consumer filesystems is that they want to treat Windows PCs like fat clients, with the primary storage being OneDrive, or a NAS for business desktops. If local disk is only really for impersonal data (software that can be downloaded again) and checked-out copies from cloud/NAS, then there's less incentive to develop a fancier new filesystem.

Klyith
Aug 3, 2007

GBS Pledge Week

BlankSystemDaemon posted:

There's also a slight difference in what I was talking about with ZFS and Sun and how they used it; they moved their entire business to using it internally, and nothing about the article suggests Microsoft does that with storage spaces and ReFS - though if they do, that's awesome.

Ok yeah, that's true. MS doesn't think ReFS is a replacement for NTFS, because for applications where you need speed NTFS is faster.

But in the same way, did Sun really use ZFS for their entire business way back when? Nothing but ZFS on anything with storage? I super doubt it. ZFS and high-performance databases didn't mix for a long time and still needs careful setup, tuning, and tons of caching.

Also, Sun went out of business. So it's nice that they were extremely confident in the new FS they made, but maybe using it for everything wasn't actually the correct decision?

BlankSystemDaemon posted:

What I was getting at is more the situation with BTRFS and Facebook; Facebook will readily tell you how they use it for their load-dependent scale-out servers (ie. spin up more servers when there's a spike in demand) - but what they don't tell you is that those systems are entirely transient and are assumed to be volatile.
The actual storage solution they use has changed from the combination of HDFS and Hadoop they used for many, many years to something called Tectonic, which, as near as anyone can tell, is a proprietary thing Facebook has no interest in opening up.

On that level, I have no idea what MS is doing themselves to provide the storage back-end for Azure; I'd assume it's also all proprietary and custom. But when you're talking about hyperscale cloud storage, I don't think anybody is looking to add redundancy or error-correction at the base filesystem level. Oracle Exascale doesn't run on ZFS, it uses XFS.

I think the idea that one FS can be good for everything is self-evidently bogus. ZFS is great at a bunch of stuff, and for single-machine NAS like we talk about ITT it's the best! But the fact that a company that runs a giant platform at incomprehensible scale doesn't use btrfs or ReFS is no knock against those FSes. They don't run a mega-cloud on ZFS either. It's a fallacious argument.

BlankSystemDaemon
Mar 13, 2009



Klyith posted:

Ok yeah, that's true. MS doesn't think ReFS is a replacement for NTFS, because for applications where you need speed NTFS is faster.

But in the same way, did Sun really use ZFS for their entire business way back when? Nothing but ZFS on anything with storage? I super doubt it. ZFS and high-performance databases didn't mix for a long time and still needs careful setup, tuning, and tons of caching.

Also, Sun went out of business. So it's nice that they were extremely confident in the new FS they made, but maybe using it for everything wasn't actually the correct decision?

On that level, I have no idea what MS is doing themselves to provide the storage back-end for Azure; I'd assume it's also all proprietary and custom. But when you're talking about hyperscale cloud storage, I don't think anybody is looking to add redundancy or error-correction at the base filesystem level. Oracle Exascale doesn't run on ZFS, it uses XFS.

I think the idea that one FS can be good for everything is self-evidently bogus. ZFS is great at a bunch of stuff, and for single-machine NAS like we talk about ITT it's the best! But the fact that a company that runs a giant platform at incomprehensible scale doesn't use btrfs or ReFS is no knock against those FSes. They don't run a mega-cloud on ZFS either. It's a fallacious argument.
That's much the same reason why UFS is still in FreeBSD, though - there are places where it doesn't make sense to use ZFS, but if you want the highest data availability you can get without clustering for high-availability, ZFS is pretty much it.

Sun didn't go out of business, they got acquired by Oracle - but that had little to do with them using ZFS internally, and more to do with the foundations of their business disappearing under them as big iron went the way of the dodo and x86 roundly beat anything they could put out on the CPU side.
From what I've been told by several people who worked there (including Ahrens), they absolutely were using it internally for everything requiring storage - but I don't know the details well enough to tell you whether Postgres was being hosted on something else.

I've heard a few war stories shared over drinks from people who worked at Microsoft, but nothing that bears repeating here since it's probably out-of-date and is at any rate hearsay.

Zorak of Michigan
Jun 10, 2006


Sun was headed out of business, but I have to agree that it was not due to problems with their technology; it was that their tech was expensive and Linux was eating their lunch. Trying to compete by also making Solaris free was a decision that would have made a big difference five years earlier, but by the time they tried it, it was too little, too late.

BlankSystemDaemon
Mar 13, 2009



Zorak of Michigan posted:

Sun was headed out of business but I have to agree that it was not due to problems with their technology, it was that their tech was expensive and Linux was eating their lunch. Trying to compete by also making Solaris free was a decision that would have made a big difference five years earlier, but by the time they tried it, it was too little too late.
The ironic part is that the first time they tried to open-source Solaris was back in the 90s, but they couldn't, because there was a shitload of drivers in Solaris written by second-party companies or subcontractors from whom they couldn't get release forms.

Getting Pandora to come out of her box is not as easy as all that, and it's a marvel of evil that Larry Ellison managed to put her back in there once she got free.

Yaoi Gagarin
Feb 20, 2014

BlankSystemDaemon posted:

The ironic part is that the first time they tried to open-source Solaris was back in the 90s, but they couldn't, because there was a shitload of drivers in Solaris written by second-party companies or subcontractors from whom they couldn't get release forms.

Getting Pandora to come out of her box is not as easy as all that, and it's a marvel of evil that Larry Ellison managed to put her back in there once she got free.

nit: Pandora is the one who opens the box, she isn't in it

BlankSystemDaemon
Mar 13, 2009



VostokProgram posted:

nit: Pandora is the one who opens the box, she isn't in it
It's all made-up anyway, and I like my version better - but clearly the Greeks didn't know about the closet and how damaging it is. :colbert:

Let Pandora be free to do her thing!
