|
VostokProgram posted:Speaking of ashift - what is a good value for an SSD? I have been wondering the same, and concluded that no matter what magic it does inside, the firmware will presumably be written to do ok with 4k-aligned blocks. I really should test this assumption, though.
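For reference, pinning the alignment instead of trusting the drive's reported sector size is just a property at pool creation time; a minimal sketch, with a made-up pool name and device:

```shell
# Force 4 KiB alignment (2^12) rather than trusting autodetection;
# pool name and device path are placeholders.
zpool create -o ashift=12 tank /dev/nvme0n1

# Check what the pool actually got:
zpool get ashift tank
```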
|
# ¿ Dec 19, 2023 03:51 |
|
Wibla posted:Yeah the caveat is that you get to spend a fair chunk more money My boyfriend is currently using an AM4 Ryzen on an ASRock desktop board with ECC RAM; ASRock unofficially supports ECC, and it does seem to be reported correctly to the OS. Though I have no idea where he found the ECC sticks, it's not like Komplett has a selection of them. (Intel, though - hah, no. Maybe on ebay from the great outlands.)
|
# ¿ Dec 21, 2023 00:24 |
|
Wibla posted:Yeah - I was referring to Intel It's not a bad platform, though it's annoying that you have to choose between ECC and an embedded GPU; it's apparently a Ryzen Pro feature to have both. Not an issue for us, since we use it in a gaming PC - but annoying for a server. I guess you could throw in a cheap Intel Arc, I think they do transcoding decently well while being small and low power? (And for the sheer novelty of a reverse AMD/Intel setup.)
|
# ¿ Dec 21, 2023 01:13 |
|
Wibla posted:I have a P400 that I can use, so that's not a problem. But I need a SAS HBA and a 10gbe NIC as well, that might be tough to fit in the more and more gimped PCIe layouts these boards come with... The PCIe layout is one of the things that makes me want a Threadripper, they seem to be overflowing with lanes. That said, the board CV2 is using is actually not an ASRock, but a Gigabyte B550 Vision D-P that also explicitly says it supports ECC. It has three x16 slots with some spacing between them, enough to fit a GPU, HBA, and NIC. You may be able to find a board with a 10Gbit NIC onboard, which would save a slot; this one only has 2x 2.5Gbit. Looking at it, the only Gigabyte board with a 10Gbit NIC seems to be the Aorus Xtreme (a cool 7.490,-) and the ASRock X570 Creator is an even more hair-raising 10.564,- ... and out of stock until March. Computer viking fucked around with this message at 02:23 on Dec 21, 2023 |
# ¿ Dec 21, 2023 02:20 |
|
Comedy option: Lenovo will happily sell you a ThinkStation P620 with a Threadripper Pro, ECC RAM, an integrated 10Gbit NIC, four x16 slots, and I think you can fit five 3.5" drives in there if you pick the option to put one in the optical bay. You can configure it down to 27.500,- (about $2660 including Norwegian 25% sales tax) with 16 GB RAM and no drives or GPU.
|
# ¿ Dec 21, 2023 02:49 |
|
Twerk from Home posted:Illumos is not only alive, there is a buzzy, funded startup building brand-new multi-million dollar computing platforms on it: https://oxide.computer/. This thing is using ZFS for their storage, bhyve for the hypervisor, and illumos for the actual OS. I don't know if they're using ZFS for replication though, they may be doing it at a higher level in their storage application. Kind of surprising that the BSD <-> Solaris code exchange is still going on, but I'm not opposed to people spending money in that area. Hopefully some of their code makes it back to FreeBSD.
|
# ¿ Dec 23, 2023 16:53 |
|
I've been using Kingston DC series disks as OS drives recently, but I can't yet say if they are any better in the long run. They claim endurance way beyond what I need, at least - and the 500 series I used were not that expensive.
|
# ¿ Dec 23, 2023 22:20 |
|
I have had no luck until I plugged it in myself - but may I ask what sort of PC you have without SATA ports? They still seem fairly standard on motherboards. I guess you could find a thunderbolt SATA controller (or put a PCIe one in an eGPU enclosure), but that sounds pointlessly expensive.
|
# ¿ Dec 24, 2023 21:43 |
|
Diametunim posted:Big oof on my part. I really do have the flu. Thanks for telling me to double check. I double checked, and looked at the product documentation. My motherboard (Gigabyte Z590 Aorus Pro) does have SATA ports. They were obscured from view by my massive video card. All sorted now. By far the easiest solution to your problem.
|
# ¿ Dec 27, 2023 22:26 |
|
TrueNAS as a file server with no other bells and whistles is pretty set-and-forget, and has been reliable for many years. As for rack servers - if you just want ~60TB of space, a 5-disk raidz1 of 20TB drives gets you something like 70TB of usable space. It's perfectly possible to fit five disks in a miditower (I've got one), though I don't know what's available new these days. It looks like Seagate Exos X20 drives are $319 on Newegg at the moment, while a 10TB drive (the cheapest being a WD Red) is $240, so going up to 20TB looks sensible unless you really need the extra spindles for performance. Computer viking fucked around with this message at 16:33 on Dec 28, 2023 |
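The ~70TB figure is roughly just decimal-TB raw capacity minus one parity drive, converted to the TiB your tools will report; quick napkin math (ignoring ZFS metadata and slop space, which shave off a few more percent):

```python
# Napkin math for raidz usable space. Drive sizes are in decimal TB
# (as marketed); output is in TiB (as reported by most tools).
def raidz_usable_tib(n_drives: int, parity: int, size_tb: float) -> float:
    data_drives = n_drives - parity          # raidz1 loses 1 drive, raidz2 loses 2
    bytes_usable = data_drives * size_tb * 1e12
    return bytes_usable / 2**40              # decimal bytes -> TiB

print(round(raidz_usable_tib(5, 1, 20), 1))  # 5x 20TB raidz1 -> 72.8 TiB
```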
# ¿ Dec 28, 2023 16:27 |
|
Beve Stuscemi posted:Is there an easy way to DIY a DAS? I have a little stack of 4TB drives hanging around and it would be kinda nice to be able to raid them together and hook them up over USB. Huh, good question. The least plug-and-play solution would be to export them with iSCSI and hook it up with a USB NIC, but bleh. All the parts for what you want to do are actually available, with various degrees of polish:
- A USB port that can be put in device mode. With USB-C I think that's more common?
- The Linux Mass Storage Gadget kernel module does the main work: given a list of devices (or backing files), it exports each as one mass storage device on any and all device-mode ports. I think.
- Some way to bolt those disks together into a single block device, like mdraid or ZFS.
The smoothest solution seems to be something like the Kobol Helios64 (which I had never heard of five minutes ago) - take a look at the "USB under Linux" section of their documentation. e: Ha, they shut down in 2021. At least they kept the documentation up, and it shows that it's possible and not even that hard? Computer viking fucked around with this message at 01:50 on Jan 10, 2024 |
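A sketch of how those parts would fit together, assuming a board whose USB controller actually supports device mode - untested, and the device names are made up:

```shell
# Bolt the drives together into one block device first (mdraid here,
# but a ZFS zvol should work the same way as a backing device).
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]

# g_mass_storage exports the backing device as a USB mass storage
# device on any device-mode port.
modprobe g_mass_storage file=/dev/md0 removable=0
```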
# ¿ Jan 10, 2024 01:46 |
|
The big problem seems to be that all desktop and laptop USB-C controllers appear to only do host mode (except specifically for power delivery to laptops); a controller that can be switched over to device mode only seems to be halfway common on ARM boards. I don't really get how this works with USB-C, since there are vague hints of "this is more of a software thing with USB-C". All I can say is that, playing with the FreeBSD install on my laptop and my Windows desktop, I had zero luck getting the laptop to appear as a USB device. Though that may be me misunderstanding the FreeBSD documentation for this.
|
# ¿ Jan 10, 2024 02:46 |
|
Twerk from Home posted:How are people doing modern flash NASes? I'd assume that parity based raid would be a bottleneck that limits writing to the array, and honestly striping would seem to add extra complexity that you don't really need because read/write speeds are already fast enough to saturate a 25gbit connection. Eeeeh. SSDs die, especially when they get a lot of lifetime IO. I've got four M.2 NVME drives on a 4x card, and landed on raidz. The server only has a 2.5Gbit link anyway, it's more than fast enough. On the other hand, you absolutely have a point - when you get to "2U server stacked full of enterprise NVME drives" that's a lot of parity calculation bandwidth. No idea how it stacks up to a modern server CPU.
|
# ¿ Jan 11, 2024 01:54 |
|
On several related notes, I had one of those days where everything failed at once.

First, a disk failed in our 9 year old fileserver. It did, of course, go in the most annoying possible way, where it hung when you tried to do IO, so just importing the zpool to see what was going on was super tedious. I ended up doing some xargs/smartctl/grep shenanigans to find the deadest-looking disk and pulled that, which immediately made things more pleasant. For good and bad, I configured this pool during the height of the "raid5 is dead" panic, so it's a raid10 style layout - which did at least make it trivial to get it back to a normal state; just zpool attach the new disk to the remaining half of the mirror. I'll try to remember how you remove unavailable disks later. Nevermind that I have run out of disks and had to pull the (new, blank) bulk storage drive from my workstation as an emergency spare.

Of course, the event that apparently pushed the disk over the edge was doing a full backup to tape, as opposed to the incrementals I've been doing since last January. It's 100TB of low-churn data, but I'm still not sure how smart that schedule is. Also, I do not really look forward to trying to remember how job management works in Bacula; it's been a couple of years.

This file server does two things: it's exported with Samba to our department, and with NFS over a dedicated (50 cm long) 10gbit link to a calculation server we use. Since the file server was busy and it's a quiet week, I thought I'd do a version upgrade on the calculation server, too. FreeBSD upgrades from source are trivial, so that part went fine. However, it did not boot afterwards; the EFI firmware just went straight to trying PXE. Looking into it, the EFI partition was apparently 800 kB, which somehow has worked up to today? Shrinking the swap partition and creating a roomy 64 MB one, then copying over the files from the USB stick's EFI partition worked.

Which revealed the next problem: both the disks in the boot mirror have apparently died to the point where there's a torrent of "retry failed" messages drowning out the console, despite everything seeming fine when doing the upgrade. I don't think a modest FreeBSD upgrade (13.1 to 13.2, I think) would massively break support for a ten year old Intel SATA controller, but ... idk, I turned it off and left.

And yes, we run a modern and streamlined operation that's definitely not me fixing things with (sometimes literal) duck tape and baling wire while also trying to do a different job. e: Not mentioned is how the file server is a moody old HPE ProLiant that takes forever to boot and turns all fans to fire alarm style max if you hotplug/hot-pull drives without the cryptographically signed HPE caddies. Computer viking fucked around with this message at 19:39 on Jan 11, 2024 |
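The mirror-repair step mentioned earlier really is pleasantly short; a sketch with made-up pool and device names:

```shell
# Attach the replacement to the surviving half of the mirror;
# ZFS resilvers onto it automatically.
zpool attach tank da4 da9

# Watch the resilver, then drop the dead half (by name or GUID).
zpool status tank
zpool detach tank da3
```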
# ¿ Jan 11, 2024 19:35 |
|
BlankSystemDaemon posted:Sounds like a real lovely day, friend - sorry you had to go through that Given the age of the machine, it was probably installed as 10.0 or 10.1 and continuously upgraded. The boot disks are a gmirror setup, so I suspect I may have done something manual instead of going with whatever the sysinstall defaults were at the time? I really can't remember, it's been a while and it has just quietly worked through upgrades without needing to think about the details before now.
|
# ¿ Jan 11, 2024 19:52 |
|
BlankSystemDaemon posted:Yeah, gmirror is definitely not the default for bsdinstall (sysinstall went away a long time ago, but they look very similar and both use dialog, though nowadays it's bsddialog). Oh yeah, I forgot they changed over at some point. Specifically for 9.0 in 2011, apparently.
|
# ¿ Jan 12, 2024 00:35 |
|
I think it's good practice for large installations to avoid all your drives being made the same day, so they don't all die together if there was a problem with the production line. Using different makes or models should reduce the likelihood of them failing simultaneously even further, I guess?
|
# ¿ Jan 13, 2024 21:55 |
|
Things we have learned today: We have a mix of SR and LR transceivers, they just happened to be split between rooms and switches in a way that worked out fine, and the switches were interlinked with copper. Also, the guys pulling fiber through the department chose singlemode, so all the SR transceivers were unhappy. Neither hard nor expensive to fix (we need a single-digit number of transceivers), but it took us an embarrassingly long time trying to figure out why nothing worked before we noticed that some parts said 1310nm and some said 850nm.
|
# ¿ Feb 2, 2024 17:42 |
|
BlankSystemDaemon posted:This is violence at work. Eh, partially my fault. None of us have any idea of what we're doing with fiber, it's a wonder it worked out at all. On the positive side it's definitely a learning experience.
|
# ¿ Feb 3, 2024 05:53 |
|
The reason we use e.g. TrueNAS is that the only low-level config you need to do is writing a USB stick, booting from it, and picking which drive(s) to install to; the rest is done in the web interface. It's not as smooth as a Synology, but it's also not "27) sudo vim /usr/local/etc/samba4/smb.conf and write your share definitions" like you'd get if you wanted to do it on plain FreeBSD or Debian. That doesn't mean a Synology is the wrong choice; I've used them before and will do so again. But the NAS distros have also come a long way. Computer viking fucked around with this message at 00:11 on Feb 12, 2024 |
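For scale, the hand-written step being joked about is roughly this (share name, path, and users are placeholders):

```ini
; Minimal smb.conf share definition of the sort you'd write by hand
; on a plain FreeBSD or Debian box
[storage]
    path = /tank/storage
    read only = no
    valid users = alice bob
```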
# ¿ Feb 12, 2024 00:08 |
|
Well Played Mauer posted:I’m comfortable enough with docker and poo poo to install things I may want to use alongside the file system and I already janitor some other headless boxes from the command line. Am I missing any special features from TrueNAS or UnRAID that I couldn’t replicate with docker compose and ppvs? At work I use TrueNAS so I don't have to set up Samba, NFS, and users from an ancient Active Directory by hand again. At home I use it in the hope that it will require less management overall - though I think I'll just run a normal FreeBSD install again next time. So nah, not really, unless you have complex file serving needs. I'd suggest using the opportunity to do a bit of real world testing of different tools. Throw them in a ZFS pool and poke that for a bit. See what mdraid and lvm can do. Set up a btrfs raid just so you can say you've done it. After all, it's not that often that you have a stack of large empty drives to play with. Computer viking fucked around with this message at 08:42 on Feb 16, 2024 |
# ¿ Feb 16, 2024 08:35 |
|
Aware posted:I thought 2.5g existed purely as a sop to wifi marketing speeds in excess of 1gbps that is easily pointed out as irrelevant given the 1gbps uplink port on many of them. I guess >1gbps home internet will eventually become more widely spread but it's gotta be a tiny percentage of the market for the next few years. I'd have assumed anyone who really needed more than 1gbps would have either discovered link aggregation or just bit the bullet on 10g gear. In my case, we noticed that both our desktops had 2.5g, so we bought a switch and a 2.5g card for the file server. Much, much cheaper than a 10gbit upgrade, and over double the speed in file transfers and steam peer-to-peer installations. I would have liked 10gbit, but the hardware is still a bit too expensive for the realistic benefits at home.
|
# ¿ Feb 16, 2024 12:35 |
|
Harik posted:Wanted to comment on this because people actually seem to believe this. Businesses operate on a herd mentality so every consumer cloud backup provider will go away within a 2 year span when the "common wisdom" is that it's not a growth market. The risk of both OneDrive and Google Drive disappearing at all, never mind so quickly that I don't have time to download everything, is near zero. Note that he said cloud providers, not specifically cloud backup providers. Sure, trusting either MS or Google to stick to projects is also a folly - but they're not getting bought out, and neither of them run their storage solution as a main income source in the first place.
|
# ¿ Feb 16, 2024 15:11 |
|
Oh goddammit the SAS controller for the external ports in the file server at work seems to have died. The expander box works fine connected to another machine, and everything looks happy here - but it just insists there's nothing connected. (I've tried both ports on the controller and all four on the MD1600, yes.) Oh well I can probably find something compatible for sale somewhere.
|
# ¿ Mar 19, 2024 16:57 |
|
Moey posted:This a powervault expansion shelf? Sorry, typo - it's an MD1400. And yes. BlankSystemDaemon posted:One habit I never grew out of, even after getting rid of hardware RAID controllers and sticking solely to software RAID, was always having a tested-compatible backup controller. It is literally software raid (ZFS on plain FreeBSD), but indeed. Having a spare sitting around would be really nice right now; nobody has anything reasonable in stock. The best case looks like buying a Lenovo-branded card that's probably a rebranded LSI 9300-8e and just hoping it'll work. Alternatively, I have another newer fileserver sitting around connected to a sequencing machine - they won't run out of storage space on the actual instrument for months, so I could probably borrow that one to tide me over until I can get my hands on something proper. e: Keep in mind that this is academia - nothing matters, the stakes are made up, we have no customers. It'll annoy a handful of postdocs a bit until I get it back up.
|
# ¿ Mar 19, 2024 23:04 |
|
Moey posted:I was gonna say, that's a new one to me. I've never used an MD1200, but that sounds about right. Looking at Google, the MD1400 is 12Gbit with SFF-8644 connectors, the MD1200 is 6Gbit with SFF-8088 connectors. I've got it connected to a 6Gbit controller with an 8644 to 8088 cable anyway, so it's kind of academic.
|
# ¿ Mar 19, 2024 23:41 |
|
the spyder posted:I have at least a dozen LSI external HBA's from my decom ZFS server. Full height and Low profile. I believe they are all 9300-8E, but can verify later. I'd happily buy one off you, but I'm not sure how annoying shipping to Norway will be.
|
# ¿ Mar 20, 2024 00:54 |
|
On a positive note, I borrowed a 9300-16e from another machine and it just immediately worked, so that's nice. E: the old card works fine driving another MD1400 in another server. I don't even know anymore but I'll replace it anyway just out of spite. Computer viking fucked around with this message at 14:04 on Mar 20, 2024 |
# ¿ Mar 20, 2024 13:59 |
|
BlankSystemDaemon posted:One trick I was taught early was to truncate a small handful of files, give them GEOM gate devices so they're exposed via devfs the same way memory devices are, and create a testing pool to try commands on. Something about playing with a large stack of real disks (before putting them to their final use) feels good, though. I can't explain why.
|
# ¿ Mar 25, 2024 23:09 |
|
BlankSystemDaemon posted:But truncate can do arbitrary-sized files??? Sure, but they don't make fun disk access noises.
|
# ¿ Mar 26, 2024 13:26 |
|
As for adding another vdev to a pool: it's nice to avoid adding new vdevs to almost-full pools, for performance reasons. The pool will prioritize the new vdev until all vdevs are about equally full, which slows down writes compared to spreading them over the entire pool. Reading that data back in the future will also be slower (since it's spread over fewer disks), but depending on your access pattern that may not matter. For bulk storage it's probably fine, doubly so if it's just connected over a Gbit network.
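A toy model of that behaviour - this is my simplified assumption, not ZFS's actual allocator (which weights by free space per vdev among other factors), but it shows why a fresh vdev in a nearly full pool soaks up almost all new writes:

```python
# Toy model: each write goes to the vdev with the most free space,
# so a new empty vdev takes nearly all writes until fill levels converge.
def route_writes(vdevs, blocks):
    """vdevs: list of [used, capacity]; route `blocks` writes one at a time."""
    for _ in range(blocks):
        target = max(vdevs, key=lambda v: v[1] - v[0])  # most free space wins
        target[0] += 1
    return vdevs

pool = [[90, 100], [90, 100], [0, 100]]  # two nearly full vdevs, one new
route_writes(pool, 60)                   # all 60 writes land on the new vdev
```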
|
# ¿ Mar 28, 2024 13:26 |
|
For the NVME side, do any ITX motherboards support PCIe bifurcation? If so, you can get PCIe cards that split an x16 slot into four proper NVME slots. I have an Asus one, and apart from being the size of an old GPU it works fine in the ASRock motherboard I'm using. I had to fiddle with the BIOS to get bifurcation working; it would only show me the first drive until that was sorted. Otherwise it Just Works, the drives show up as normal and I have a pretty fast zpool on them. IIRC from last time they came up, there are cheap AliExpress cards that work about the same. e: This is a completely different scale from the "four NVME drives on an SBC" systems, of course; you would probably need one of the gaming ITX cases to make this fit. Cute compared to a tower or rackmount, but a big hunk of steel compared to an RPi-style board. Computer viking fucked around with this message at 14:06 on Apr 14, 2024 |
# ¿ Apr 14, 2024 14:03 |
|
ryanrs posted:Several years ago I needed to build a mini-server with bifurcation. I'm pretty sure I couldn't find a mini-itx board that did it, and had to move up to microATX (which is bigger than mini-itx). Oh neat, that's a much more reasonable use than mine, which boils down to "this SATA controller seems to be failing and there's a sale on NVME drives".
|
# ¿ Apr 14, 2024 23:07 |
|
mekyabetsu posted:With ZFS, do mirrored vdevs need to be the same size? Let's say I have three mirrored vdevs setup like so: You're mostly right - you will get 20TB, and you can add any vdevs you want to an existing pool. The debatable part is "not a problem" - ZFS tries to keep all vdevs roughly equally full, so the new mirror will get near enough 100% of the write load until they catch up to the rest. If this is a problem or not depends on your use.
|
# ¿ Apr 16, 2024 17:46 |
|
Anime Schoolgirl posted:I'm not sure the CPU of a NAS is something you'd ever upgrade unless you were doing some madcap "i'm delivering content to 100 users on the LAN" setup. Depends on if you use it as your everything-server, I guess - with enough VMs or containers it could make sense.
|
# ¿ Apr 22, 2024 11:55 |
|
evil_bunnY posted:4xNVMe but no 10GBE is some special blend of stupid. Being able to serve a solid 1gbit with a home-friendly number of spinning drives is surprisingly hard. Long reads or writes, sure, but anything with smaller (or heaven forbid, mixed) reads or writes can be "cheap USB stick" levels of slow. Of course it would absolutely not hurt to have a 10gbit port - or even a 2.5 - but nvme-only 1 gbit will still feel a lot faster than spinning-disk-only 1 gbit for certain loads. Computer viking fucked around with this message at 17:00 on Apr 26, 2024 |
# ¿ Apr 26, 2024 16:57 |
|
MadFriarAvelyn posted:I almost wonder if going with Ubuntu would be the way to go for my build. It's the Linux distro I'm most familiar with and looking up some guides setting up ZFS doesn't sound too terrible to do. The ZFS RAID-Z levels don't do dedicated parity drives; they spread the data and parity blocks evenly across all the drives - otherwise the parity drive(s) would see more traffic than the rest. You can think of it as writing data in groups of as many blocks as there are drives, where the level number is how many of those are parity. So for a six-drive RAID-Z2, it would be writing groups of four data blocks and two parity blocks. (RAID-Z is RAID-Z1.) In practice with your four disks, this means that RAID-Z1 would be 3+1, and RAID-Z2 would be 2+2. You could get the same 50% available space with two mirrors, though with different tradeoffs - it would probably be a bit faster and quicker to rebuild, but losing two disks could take out the pool if they're in the same mirror. (If you have multiple vdevs, writes are balanced across them based on their free space percentage - so ideally two mirrors would behave similarly to a RAID10.) I'd probably do Z1. Computer viking fucked around with this message at 23:49 on Apr 26, 2024 |
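The space math from that explanation, as a quick sanity check (ignoring ZFS metadata overhead and padding):

```python
# Usable fraction of raw capacity for n drives with p drives' worth
# of parity spread across them.
def usable_fraction(n, parity):
    return (n - parity) / n

assert usable_fraction(4, 1) == 0.75   # 4-drive RAID-Z1: 3+1
assert usable_fraction(4, 2) == 0.5    # 4-drive RAID-Z2: 2+2
# Two 2-way mirrors also give 0.5: two drives usable out of four.
```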
# ¿ Apr 26, 2024 19:32 |
|
I may have spent a lot of time today trying to figure out why an old rack server (a Dell R415) wasn't discovering any of the disks I put in it. Apparently the backplane connector on the motherboard is just passthrough to the RAID controller card, and I've long since repurposed that. Oops. The onboard SATA controller drives ... SATA ports on the motherboard of the 1U rack server with a hotplug backplane and no SATA power cables? Weird.
|
# ¿ Apr 27, 2024 01:33 |
|
Super annoying: It looks like the $900 Tri-mode adapter I was (ab)using to connect two SAS HDDs suddenly died. It's a Megaraid 9560-16i, and I had it in a desktop tower with a 120mm case fan blowing straight up into the heatsink from a few cm away. Apparently not good enough; any machine I put it in hangs halfway through the early BIOS stages. And of course, it's not entirely mine - I used it at work, but it was strictly speaking bought by another group who, in the end, didn't need it. I ended up with it since I was the only one involved who showed any interest, but I can't really bother them too much about trying to get it swapped under warranty. (This is not a question, I'm just complaining.)
|
# ¿ May 5, 2024 01:43 |
|
Harik posted:took nearly a year because my dog got sick and wiped out my toy fund (he's fine now, good pupper) I've used Startech U.3 to PCIe adapters at work, and they seem to be fine; I guess U.2 would be very similar.
|
# ¿ May 9, 2024 20:04 |