Computer viking
May 30, 2011
Now with less breakage.

VostokProgram posted:

Speaking of ashift - what is a good value for an SSD?

I have been wondering the same thing, and concluded that no matter what magic the drive does internally, the firmware will presumably be written to handle 4k-aligned blocks well.

I really should test this assumption, though.
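If you want to force it rather than trust the autodetection, it's just set at pool creation time. A rough sketch with made-up device names, from memory, so check the man pages:

  # force 4k sectors (2^12 = 4096) regardless of what the drive reports
  zpool create -o ashift=12 fastpool mirror /dev/ada0 /dev/ada1

  # double-check what the vdev actually ended up with
  zdb -C fastpool | grep ashift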

Computer viking
May 30, 2011
Now with less breakage.

Wibla posted:

Yeah the caveat is that you get to spend a fair chunk more money :v:

(At least over here in :norway: finding boards that support ECC is a pain in the rear end, and usually quite expensive)

My boyfriend is currently using an AM4 Ryzen on an ASRock desktop board with ECC RAM; ASRock unofficially supports ECC, and it does seem to be reported correctly to the OS. Though I have no idea where he found the ECC sticks, it's not like Komplett has a selection of them.

(Intel, though - hah, no. Maybe on ebay from the great outlands.)

Computer viking
May 30, 2011
Now with less breakage.

Wibla posted:

Yeah - I was referring to Intel :v:

I guess it might be time to grab an ASRock card and a cheap 5000-series CPU to replace my E5-2670 v3?

It's not a bad platform, though it's annoying that you have to choose between ECC and an embedded GPU; it's apparently a Ryzen Pro feature to have both. Not an issue for us, since we use it in a gaming PC - but annoying for a server. I guess you could throw in a cheap Intel Arc, I think they do transcoding decently well while being small and low power? (And for the sheer novelty of a reverse AMD/Intel setup.)

Computer viking
May 30, 2011
Now with less breakage.

Wibla posted:

I have a P400 that I can use, so that's not a problem. But I need a SAS HBA and a 10gbe NIC as well, that might be tough to fit in the more and more gimped PCIe layouts these cards come with...

The PCIe layout is one of the things that makes me want a Threadripper; they seem to be overflowing with lanes.

That said, the board CV2 is using is actually not an ASRock but a Gigabyte B550 Vision D-P, which also explicitly says it supports ECC. It has three x16 slots with some spacing between them, enough to fit a GPU, HBA, and NIC. You may be able to find a board with a 10Gbit NIC onboard, which would save a slot; this one only has 2x 2.5Gbit.
Looking at it, the only Gigabyte board with an onboard 10Gbit NIC seems to be the Aorus Xtreme (a cool 7.490,-) and the ASRock X570 Creator is an even more hair-raising 10.564,- ... and out of stock until March.

Computer viking fucked around with this message at 02:23 on Dec 21, 2023

Computer viking
May 30, 2011
Now with less breakage.

Comedy option: Lenovo will happily sell you a ThinkStation P620 with a Threadripper Pro, ECC RAM, an integrated 10Gbit NIC, four x16 slots, and I think you can fit five 3.5" drives in there if you pick the option to put one in the optical bay. You can configure it down to 27.500,- (about $2660 including Norwegian 25% sales tax) with 16 GB RAM and no drives or GPU.

Computer viking
May 30, 2011
Now with less breakage.

Twerk from Home posted:

Illumos is not only alive, there is a buzzy, funded startup building a brand-new multi-million dollar computing platform on it: https://oxide.computer/. They're using ZFS for storage, bhyve for the hypervisor, and illumos for the actual OS. I don't know if they're using ZFS for replication though; they may be doing it at a higher level in their storage application.

Kind of surprising that the BSD <-> Solaris code exchange is still going on, but I'm not opposed to people spending money in that area. Hopefully some of their code makes it back to FreeBSD.

Computer viking
May 30, 2011
Now with less breakage.

I've been using Kingston DC series disks as OS drives recently, but I can't yet say if they are any better in the long run. They claim endurance way beyond what I need, at least - and the 500 series I used were not that expensive.

Computer viking
May 30, 2011
Now with less breakage.

I have had no luck until I plugged it in myself - but may I ask what sort of PC you have without SATA ports? They still seem fairly standard on motherboards.

I guess you could find a thunderbolt SATA controller (or put a PCIe one in an eGPU enclosure), but that sounds pointlessly expensive.

Computer viking
May 30, 2011
Now with less breakage.

Diametunim posted:

Big oof on my part. I really do have the flu. Thanks for telling me to double check. I double checked and looked at the product documentation. My motherboard (Gigabyte Z590 Aorus Pro) does have SATA ports. They were obscured from view by my massive video card. All sorted now.

By far the easiest solution to your problem. :)

Computer viking
May 30, 2011
Now with less breakage.

TrueNAS as a file server with no other bells and whistles is pretty set-and-forget, and has been reliable for many years. As for rack servers - if you just want ~60TB of space, a 5-disk raidz1 of 20TB drives gets you something like 70TB of usable space. It's perfectly possible to fit five disks in a miditower (I've got one), though I don't know what's available new these days.
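For reference, once the disks are in, the whole pool is about one line from a shell, which is more or less what TrueNAS does for you behind the web UI (made-up device names):

  # five 20TB disks with one disk of parity -> roughly the ~70TB mentioned above
  zpool create -o ashift=12 tank raidz1 da0 da1 da2 da3 da4
  zfs set compression=lz4 tank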

It looks like Seagate Exos X20 drives are $319 on Newegg at the moment, while a 10TB (the cheapest being a WD Red) is $240, so going up to 20TB looks sensible unless you really need the extra spindles for performance.

Computer viking fucked around with this message at 16:33 on Dec 28, 2023

Computer viking
May 30, 2011
Now with less breakage.

Beve Stuscemi posted:

Is there an easy way to DIY a DAS? I have a little stack of 4TB drives hanging around and it would be kinda nice to be able to raid them together and hook them up over USB.

Basically, can you home build something like a Lacie?

Huh, good question. The least plug-and-play solution would be to export them with iSCSI and hook it up with a USB NIC, but bleh.

All the parts for what you want to do are actually available, with various degrees of polish.
- A USB port that can be put in device mode. With USB-C I think that's more common?
- The Linux Mass Storage Gadget kernel module does the main work: given a list of devices (or backing files), it exports each one as a mass storage device on any and all device-mode ports. I think. (Rough sketch after this list.)
- Some way to bolt those disks together into a single block device, like mdraid or ZFS
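Very roughly, and entirely untested by me, the gadget end would look something like this on a board whose USB controller actually supports device mode (made-up device names):

  # bolt the disks into one block device first - mdraid as an example
  mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]

  # then expose it on the device-mode port as a single USB mass storage LUN
  modprobe g_mass_storage file=/dev/md0 removable=1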

The smoothest solution seems to be something like the Kobol Helios64 (which I had never heard of five minutes ago) - take a look at the "USB under Linux" section of their documentation.

e: Ha, they shut down in 2021. At least they kept the documentation up, and it shows that it's possible and not even that hard?

Computer viking fucked around with this message at 01:50 on Jan 10, 2024

Computer viking
May 30, 2011
Now with less breakage.

The big problem seems to be that desktop and laptop USB-C controllers only do host mode (except specifically for power delivery to laptops); a controller that can be switched over to device mode seems to be only halfway common on ARM boards. I don't really get how this works with USB-C, since there are vague hints that "this is more of a software thing with USB-C". The only thing I can say is that after playing with the FreeBSD install on my laptop and my Windows desktop, I had zero luck getting the laptop to appear as a USB device. Though that may be me misunderstanding the FreeBSD documentation for this.

Computer viking
May 30, 2011
Now with less breakage.

Twerk from Home posted:

How are people doing modern flash NASes? I'd assume that parity based raid would be a bottleneck that limits writing to the array, and honestly striping would seem to add extra complexity that you don't really need because read/write speeds are already fast enough to saturate a 25gbit connection.

Couldn't you just pool the disks with lvm? SSD failure is less common than spinning disks and SSDs are so expensive that you probably have a backup somewhere else anyway, so why not run a flash NAS without any redundancy?

Eeeeh. SSDs die, especially when they get a lot of lifetime IO. I've got four M.2 NVME drives on a 4x card, and landed on raidz. The server only has a 2.5Gbit link anyway, it's more than fast enough.

On the other hand, you absolutely have a point - when you get to "2U server stacked full of enterprise NVME drives" that's a lot of parity calculation bandwidth. No idea how it stacks up to a modern server CPU.

Computer viking
May 30, 2011
Now with less breakage.

On several related notes, I had one of those days where everything failed at once.

First, a disk failed in our 9 year old fileserver. It did, of course, go in the most annoying possible way, where it hung when you tried to do IO, so just importing the zpool to see what was going on was super tedious. I ended up doing some xargs/smartctl/grep shenanigans to find the deadest-looking disk and pulled that, which immediately made things more pleasant. For good and bad, I configured this pool during the height of the "raid5 is dead" panic, so it's a raid10 style layout - which did at least make it trivial to get it back to a normal state; just zpool attach the new disk to the remaining half of the mirror. I'll try to remember how you remove unavailable disks later. Nevermind that I have run out of disks and had to pull the (new, blank) bulk storage drive from my workstation as an emergency spare.
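The shenanigans were roughly the following, reconstructed from memory, so treat it as a sketch rather than a recipe:

  # dump the scariest SMART counters for every disk and eyeball the worst one
  for d in $(sysctl -n kern.disks); do
    echo "== $d"
    smartctl -a /dev/$d | grep -iE 'reallocated|pending|uncorrect'
  done

  # with the dead disk pulled and the spare in: re-mirror against the survivor
  zpool attach tank da7 da12            # pool, surviving device, new device
  # ...and the part I keep forgetting - dropping the UNAVAIL one afterwards
  zpool detach tank <guid-from-zpool-status>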

Of course, the event that apparently pushed the disk over the edge was doing a full backup to tape, as opposed to the incrementals I've been doing since last January. It's 100TB of low-churn data, but I'm still not sure how smart that schedule is. Also, I do not really look forward to trying to remember how job management works in bacula; it's been a couple of years.

This file server does two things: It's exported with samba to our department, and with NFS over a dedicated (50 cm long) 10gbit link to a calculation server we use. Since the file server was busy and it's a quiet week, I thought I'd do a version upgrade on the calculation server, too.

FreeBSD upgrades from source are trivial, so that part went fine. However, it did not boot afterwards; the EFI firmware just went straight to trying PXE. Looking into it, the EFI partition was apparently 800 kB, which somehow has worked up to today? Shrinking the swap partition and creating a roomy 64 MB one, then copying over the files from the USB stick's EFI partition worked.
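For my own future reference, the repair was roughly this, done while booted from the memstick (partition indexes and sizes from memory, not gospel):

  # shrink swap and carve out a proper ESP behind it
  gpart resize -i 3 -s 14G ada0      # swap happened to be index 3 on this box
  gpart add -t efi -s 64M ada0
  newfs_msdos /dev/ada0p4

  # put the loader where the firmware looks for it
  mount -t msdosfs /dev/ada0p4 /mnt
  mkdir -p /mnt/efi/boot
  cp /boot/loader.efi /mnt/efi/boot/bootx64.efi   # or copy the memstick's ESP contents, which is what I actually did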

Getting it to boot revealed the next problem: both disks in the boot mirror have apparently died, to the point where a torrent of "retry failed" messages drowns out the console, despite everything seeming fine during the upgrade. I don't think a modest FreeBSD upgrade (13.1 to 13.2, I think) would massively break support for a ten year old Intel SATA controller, but ... idk, I turned it off and left.


And yes, we run a modern and streamlined operation that's definitely not me fixing things with (sometimes literal) duct tape and baling wire while also trying to do a different job.

e: Not mentioned is how the file server is a moody old HPE ProLiant that takes forever to boot and turns all fans to fire alarm style max if you hotplug/hot-pull drives without the cryptographically signed HPE caddies.

Computer viking fucked around with this message at 19:39 on Jan 11, 2024

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

Sounds like a real lovely day, friend - sorry you had to go through that :(

I'm amazed you managed to get a copy of FreeBSD that had boot1.efi as the default, instead of loader.efi - was this manually configured, or did it come that way?

Given the age of the machine, it was probably installed as 10.0 or 10.1 and continuously upgraded. The boot disks are a gmirror setup, so I suspect I may have done something manual instead of going with whatever the sysinstall defaults were at the time? I really can't remember, it's been a while and it has just quietly worked through upgrades without needing to think about the details before now.

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

Yeah, gmirror is definitely not the default for bsdinstall (sysinstall went away a long time ago, but they look very similar and both use dialog, though nowadays it's bsddialog).
As boot1.efi says, it's been deprecated from before it was included in 13.0-RELEASE, so it was only really a matter of time.

Oh yeah, I forgot they changed over at some point. Specifically for 9.0 in 2011, apparently.

Computer viking
May 30, 2011
Now with less breakage.

I think it's good practice for large installations to avoid having all your drives made on the same day, so they don't all die together if there was a problem with the production line. Using different makes or models should reduce the likelihood of simultaneous failures even further, I guess?

Computer viking
May 30, 2011
Now with less breakage.

Things we have learned today: we have a mix of SR and LR transceivers; they just happened to be split between rooms and switches in a way that worked out fine, and the switches were interlinked with copper. Also, the guys pulling fiber through the department chose singlemode, so all the SR transceivers were unhappy.

Neither hard nor expensive to fix (we need a single-digit number of transceivers), but it took us an embarrassingly long time trying to figure out why nothing worked before we noticed that some parts said 1310nm and some said 850nm.

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

This is violence at work.
Quick, someone call OSHA!

Eh, partially my fault. None of us have any idea what we're doing with fiber; it's a wonder it worked out at all.

On the positive side it's definitely a learning experience.

Computer viking
May 30, 2011
Now with less breakage.

The reason we use e.g. TrueNAS is that the only low-level config you need to do is writing a USB stick, booting from it, and picking which drive(s) to install to; the rest is done in the web interface. It's not as smooth as a Synology, but it's also not "27) sudo vim /usr/local/etc/samba4/smb.conf and write your share definitions" like you'd get if you wanted to do it on plain FreeBSD or Debian.
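To be fair, step 27 isn't even that bad - a bare-bones share is a handful of lines. Something like this, with made-up paths and names (and the exact config path depends on how samba was installed):

  # goes in /usr/local/etc/smb4.conf (or wherever your samba expects it)
  [data]
      path = /tank/data
      valid users = @staff
      writable = yes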

That doesn't mean a Synology is the wrong choice; I've used them before and will do so again. But the NAS distros have also come a long way. :)

Computer viking fucked around with this message at 00:11 on Feb 12, 2024

Computer viking
May 30, 2011
Now with less breakage.

Well Played Mauer posted:

I’m comfortable enough with docker and poo poo to install things I may want to use alongside the file system and I already janitor some other headless boxes from the command line. Am I missing any special features from TrueNAS or UnRAID that I couldn’t replicate with docker compose and ppvs?

At work I use TrueNAS so I don't have to set up samba+NFS+users against an ancient Active Directory by hand again. At home I use it in the hope that it will require less management overall - though I think I'll just run a normal FreeBSD install again next time. So nah, not really, unless you have complex file serving needs.

I'd suggest using the opportunity to do a bit of real world testing of different tools. Throw them in a ZFS pool and poke that for a bit. See what mdraid and lvm can do. Set up a btrfs raid just so you can say you've done it. After all, it's not that often that you have a stack of large empty drives to play with. :)
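If it helps, the poking-at-it part is only a couple of commands per tool - roughly this, with made-up device names, and it obviously eats whatever is on the disks:

  # ZFS
  zpool create scratch raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  zpool status scratch; zpool destroy scratch

  # mdraid with LVM on top
  mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
  pvcreate /dev/md0; vgcreate scratch /dev/md0
  lvcreate -l 100%FREE -n test scratch

  # btrfs raid1, just to say you've done it
  mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc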

Computer viking fucked around with this message at 08:42 on Feb 16, 2024

Computer viking
May 30, 2011
Now with less breakage.

Aware posted:

I thought 2.5g existed purely as a sop to wifi marketing speeds in excess of 1gbps that is easily pointed out as irrelevant given the 1gbps uplink port on many of them. I guess >1gbps home internet will eventually become more widely spread but it's gotta be a tiny percentage of the market for the next few years. I'd have assumed anyone who really needed more than 1gbps would have either discovered link aggregation or just bit the bullet on 10g gear.

In my case, we noticed that both our desktops had 2.5g, so we bought a switch and a 2.5g card for the file server. Much, much cheaper than a 10gbit upgrade, and over double the speed in file transfers and steam peer-to-peer installations. I would have liked 10gbit, but the hardware is still a bit too expensive for the realistic benefits at home.

Computer viking
May 30, 2011
Now with less breakage.

Harik posted:

Wanted to comment on this because people actually seem to believe this. Businesses operate on a herd mentality so every consumer cloud backup provider will go away within a 2 year span when the "common wisdom" is that it's not a growth market.

And that's assuming a vulture equity group doesn't take a few hundred billion and "consolidate" the market, strip it for any cash and burn the carcass.

"cloud" can be part of a home-user backup strategy but it should be considered ephemeral and unreliable.

The risk of both OneDrive and Google Drive disappearing at all, never mind so quickly that I don't have time to download everything, is near zero. Note that he said cloud providers, not specifically cloud backup providers.

Sure, trusting either MS or Google to stick with projects is also folly - but they're not getting bought out, and neither of them runs their storage solution as a main income source in the first place.

Computer viking
May 30, 2011
Now with less breakage.

Oh goddammit the SAS controller for the external ports in the file server at work seems to have died. The expander box works fine connected to another machine, and everything looks happy here - but it just insists there's nothing connected. (I've tried both ports on the controller and all four on the MD1600, yes.)

Oh well I can probably find something compatible for sale somewhere.

Computer viking
May 30, 2011
Now with less breakage.

Moey posted:

This a powervault expansion shelf?

Sorry, typo - it's an MD1400. And yes. :)


BlankSystemDaemon posted:

One habit I never grew out of, even after getting rid of hardware RAID controllers and sticking solely to software RAID, was always having a tested-compatible backup controller.

Even if it can't be run hot-spare (in theory, it can - but PCIe hotplug can be the kind of Fun you get in Dwarf Fortress), just having the peace of mind of knowing that there's a known-good device to test with is more than worth it.

It is literally software RAID (ZFS on plain FreeBSD), but indeed. Having a spare sitting around would be really nice right now; nobody has anything reasonable in stock. The best option looks like buying a Lenovo-branded card that's probably a rebranded LSI 9300-8e and just hoping it'll work.
Alternatively, I have another newer fileserver sitting around connected to a sequencing machine - they won't run out of storage space on the actual instrument for months, so I could probably borrow that one to tide me over until I can get my hands on something proper.

e: Keep in mind that this is academia - nothing matters, the stakes are made up, we have no customers. It'll annoy a handful of postdocs a bit until I get it back up.

Computer viking
May 30, 2011
Now with less breakage.

Moey posted:

I was gonna say, that's a new one to me.

Now, guessing just newer faster expansion cards on the 1400 vs the 1200?

I've never used an MD1200, but that sounds about right. Looking at Google, the MD1400 is 12Gbit with SFF-8644 connectors, while the MD1200 is 6Gbit with SFF-8088 connectors. I've got it connected to a 6Gbit controller with an 8644-to-8088 cable anyway, so it's kind of academic. :)

Computer viking
May 30, 2011
Now with less breakage.

the spyder posted:

I have at least a dozen LSI external HBA's from my decom ZFS server. Full height and Low profile. I believe they are all 9300-8E, but can verify later.

I'd happily buy one off you, but I'm not sure how annoying shipping to Norway will be. :)

Computer viking
May 30, 2011
Now with less breakage.

On a positive note, I borrowed a 9300-16e from another machine and it just immediately worked, so that's nice.

E: the old card works fine driving another MD1400 in another server. I don't even know anymore but I'll replace it anyway just out of spite.

Computer viking fucked around with this message at 14:04 on Mar 20, 2024

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

One trick I was taught early was to truncate a small handful of files, give them GEOM gate devices so they're exposed via devfs the same way memory devices are, and create a testing pool to try commands on.

I still periodically do it if it's been a while since I've done some administrative task and want to make sure I'm doing it right.
This, of course, goes hand in hand with using the -n flag at least once before running it without, on any administrative command.

Something about playing with a large stack of real disks (before putting them to their final use) feels good, though. I can't explain why.
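(For anyone who wants to try it, the file-backed version looks roughly like this on FreeBSD - from memory, so double-check the man pages:)

  # a handful of sparse files exposed as /dev/ggate0..3
  for i in 0 1 2 3; do
    truncate -s 1G /tmp/disk$i.img
    ggatel create -u $i /tmp/disk$i.img
  done

  # a throwaway pool to practice commands on
  zpool create testpool raidz1 ggate0 ggate1 ggate2 ggate3
  zpool destroy testpool   # plus ggatel destroy -u N when you're done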

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

But truncate can do arbitrary-sized files???

Sure, but they don't make fun disk access noises. :colbert:

Computer viking
May 30, 2011
Now with less breakage.

As for adding another vdev to a pool: it's nice to avoid adding new vdevs to almost-full pools, for performance reasons. The pool will prioritize the new vdev until all the vdevs are about equally full, which slows down writes compared to spreading them over the entire pool. Reading that data back later will also be slower (since it's spread over fewer disks), but depending on your access pattern that may not matter - for bulk storage it's probably fine, doubly so if it's only connected over a Gbit network.
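If you want to see how lopsided a pool is, or watch the new vdev soak up the writes, something like this shows it per vdev:

  # per-vdev capacity, to see how unevenly full the pool is
  zpool list -v tank
  # per-vdev IO every five seconds, to watch where writes actually land
  zpool iostat -v tank 5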

Computer viking
May 30, 2011
Now with less breakage.

For the NVMe side, do any ITX motherboards support PCIe bifurcation? If so, you can get PCIe cards that split an x16 slot into four proper NVMe M.2 slots.

I have an Asus one, and apart from being the size of an old GPU it works fine in the ASRock motherboard I'm using. I had to fiddle with the BIOS to get bifurcation working; it would only show me the first drive until that was sorted. Otherwise it Just Works, the drives show up as normal and I have a pretty fast zpool on them.

IIRC from last time they came up, there are cheap AliExpress cards that work about the same.

e: This is a completely different scale from the "four NVME drives on an SBC" systems, of course; you would probably need one of the gaming ITX cases to make this fit. Cute compared to a tower or rackmount, but a big hunk of steel compared to an RPi-style board.

Computer viking fucked around with this message at 14:06 on Apr 14, 2024

Computer viking
May 30, 2011
Now with less breakage.

ryanrs posted:

Several years ago I needed to build a mini-server with bifurcation. I'm pretty sure I couldn't find a mini-itx board that did it, and had to move up to microATX (which is bigger than mini-itx).

It was for a 10-core Xeon server on a backpack frame, powered by an e-bike battery. We used it to record raw video from an 8-camera 3D imaging system.

e: As I recall, besides bifurcation, most of the mini-itx boards had major architectural issues with keeping 4 NVMEs fed. Things like number of memory channels, slots for incoming i/o, etc. If your multiple SSDs are mostly for capacity, not pure speed, maybe it doesn't matter.

Oh neat, that's a much more reasonable use than mine, which boils down to "this SATA controller seems to be failing and there's a sale on NVME drives".

Computer viking
May 30, 2011
Now with less breakage.

mekyabetsu posted:

With ZFS, do mirrored vdevs need to be the same size? Let's say I have three mirrored vdevs setup like so:

vdev1: 2x 8 TB drives
vdev2: 2x 10 TB drives
vdev3: 2x 2 TB drives

I would end up with a single 20 TB pool. Right?

Also, it's not a problem to add a new pair of drives as a mirrored vdev after the pool has been created and is in use, correct? I understand that you aren't really meant to add drives to expand the size of a pool in a RAIDZ setup, but if I'm just using mirrored pairs of drives, adding a new vdev is a simple and expected use case, right?

Sorry for the newbie questions. I'm slowly going through ZFS documentation, but there's a lot of it and I'm dumb. :(

You're mostly right - you will get 20TB, and you can add any vdevs you want to an existing pool. The debatable part is "not a problem": ZFS tries to keep all vdevs roughly equally full, so the new mirror will get near enough 100% of the write load until it catches up with the rest. Whether that's a problem depends on your use.
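The mechanics themselves are a one-liner (made-up device names):

  # grow the pool with another mirrored pair - existing data stays where it is,
  # and new writes mostly land on the new vdev until usage evens out
  zpool add tank mirror /dev/ada6 /dev/ada7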

Computer viking
May 30, 2011
Now with less breakage.

Anime Schoolgirl posted:

I'm not sure the CPU of a NAS is something you'd ever upgrade unless you were doing some madcap "i'm delivering content to 100 users on the LAN" setup.

Depends on if you use it as your everything-server, I guess - with enough VMs or containers it could make sense.

Computer viking
May 30, 2011
Now with less breakage.

evil_bunnY posted:

4xNVMe but no 10GBE is some special blend of stupid.

Being able to serve a solid 1gbit with a home-friendly number of spinning drives is surprisingly hard. Long reads or writes, sure, but anything with smaller (or heaven forbid, mixed) reads or writes can be "cheap USB stick" levels of slow.

Of course it would absolutely not hurt to have a 10gbit port - or even a 2.5 - but NVMe-only 1gbit will still feel a lot faster than spinning-disk-only 1gbit for certain loads.

Computer viking fucked around with this message at 17:00 on Apr 26, 2024

Computer viking
May 30, 2011
Now with less breakage.

MadFriarAvelyn posted:

I almost wonder if going with Ubuntu would be the way to go for my build. It's the Linux distro I'm most familiar with and looking up some guides setting up ZFS doesn't sound too terrible to do.

I know when I set up my pool I'll probably want to use one or more of my four drives as parity drives just in case a disk fails. So that means...RAID-Z or RAID-Z2? Any preferences between them?

The ZFS RAID-Z levels don't do dedicated parity drives; they spread the data and parity blocks evenly across all the drives - otherwise the parity drive(s) would see more traffic than the rest. You can think of it as writing data in groups of as many blocks as there are drives, where the level number is how many of those are parity. So a six-drive RAID-Z2 would be writing groups of four data blocks and two parity blocks. (RAID-Z is the same as RAID-Z1.)

In practice with your four disks, this means that RAID-Z1 would be 3+1 and RAID-Z2 would be 2+2. You could get the same 50% of usable space as RAID-Z2 with two mirrors instead, though with different tradeoffs - it would probably be a bit faster and quicker to rebuild, but losing two disks could take out the pool if they're in the same mirror. (If you have multiple vdevs, writes are balanced across them based on their free space percentage - so ideally two mirrors would behave like a RAID10.)

I'd probably do Z1.
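For completeness, the two layouts with your four disks would be created roughly like this (made-up device names; on Linux you'd normally point at /dev/disk/by-id paths instead):

  # RAID-Z1: three data + one parity, ~75% of raw space usable
  zpool create tank raidz1 sda sdb sdc sdd

  # two mirrors (RAID10-ish): 50% usable, faster rebuilds
  zpool create tank mirror sda sdb mirror sdc sdd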

Computer viking fucked around with this message at 23:49 on Apr 26, 2024

Computer viking
May 30, 2011
Now with less breakage.

I may have spent a lot of time today trying to figure out why an old rack server (a Dell R415) wasn't discovering any of the disks I put in it.

Apparently the backplane connector on the motherboard is just passthrough to the RAID controller card, and I've long since repurposed that. Oops. The onboard SATA controller drives ... SATA ports on the motherboard of the 1U rack server with a hotplug backplane and no SATA power cables? Weird.

Computer viking
May 30, 2011
Now with less breakage.

Super annoying: It looks like the $900 Tri-mode adapter I was (ab)using to connect two SAS HDDs suddenly died. It's a Megaraid 9560-16i, and I had it in a desktop tower with a 120mm case fan blowing straight up into the heatsink from a few cm away. Apparently not good enough; any machine I put it in hangs halfway through the early BIOS stages.

And of course, it's not entirely mine - I used it at work, but it was strictly speaking bought by another group who, in the end, didn't need it. I ended up with it since I was the only one involved who showed any interest, but I can't really bother them too much about trying to get it swapped under warranty.

(This is not a question, I'm just complaining.)

Computer viking
May 30, 2011
Now with less breakage.

Harik posted:

took nearly a year because my dog got sick and wiped out my toy fund (he's fine now, good pupper)



I don't remember if anyone answered the question about PCIe -> U.2 adapters? I want to throw in a used enterprise U.2 for my torrent landing directory (nocow, not mirrored because i'm literally in the process of downloading linux isos and can just restart if the drive dies) and dunno, a mirrored pair of small optanes for service databases and other fast, write-heavy stuff. Maybe zfs metadata?

I've got 2 weeks before the last of the main hardware arrives and I can finally do this upgrade.

I've used Startech U.3 to PCIe adapters at work, and they seem to be fine; I guess U.2 would be very similar.

  • Reply