|
Arishtat posted:WD Reds or Red Pros are still valid, as are the Seagate Ironwolf line and the Toshiba N300 Pros (my personal choice, but I have WD Reds as well). BlankSystemDaemon posted:WD Reds that are 6TB and under have a risk of being SMR, so please be careful to check first. Wibla posted:A more important question would be: what are you doing for backups?
|
# ? Sep 28, 2023 22:53 |
|
|
BlankSystemDaemon posted:Wait, there's an active goon IRC channel I'm not on? Yeah, but all we do is sit around and talk poo poo about ZFS and its maintainers/developers.
|
# ? Sep 29, 2023 01:49 |
Any recommendations for a Cyberpower CP1000AVRLCD replacement battery?
|
|
# ? Sep 29, 2023 06:03 |
|
The vast majority of UPSes use sealed lead-acid batteries, the same type used in wheelchairs and mobility scooters. Looks like the CP1000AVRLCD uses a 12V 9Ah one, which you can buy at any battery store, many auto parts stores (e.g. Canadian Tire), or even Amazon for about $30; the CyberPower-branded replacement is about $50 MSRP. You can also upgrade to a higher-capacity version if it fits the same physical dimensions. I replaced the 2x 12V 7Ah batteries in my APC UPS with 2x 12V 9Ah batteries.
|
# ? Sep 29, 2023 06:13 |
|
I recently got gigabit internet and it has increased the usage of my server a good bit, since I am now opening it up to family/friends/etc. to stream. Additionally, I want to start playing around with VMs a little bit, so I am looking to give the server a little more horsepower. If it matters, this is an Unraid box. The current processor is a Ryzen 5 2400G with 16 gigabytes of RAM; that's a 4 core/8 thread processor from 2018. My current upgrade options are:

Ryzen 7 5700G - an 8 core processor from 2021. It is still an AM4 processor and compatible with my current motherboard, still 65W TDP. I have to have something with built-in graphics since I don't have any available PCI-E slots currently. I know I could run without access to a graphics card, but I have had to hook up a monitor to the server in the past. The CPU costs $170 currently and comes with a heatsink. With this processor I would also like to upgrade to 32 gigabytes of RAM (~$50) or 64 gigabytes (~$100).

The other option is to move to the AM5 platform. The Ryzen 9 7900 is a 12 core processor from this year, still 65W TDP. I would also have to buy a new motherboard. The one I have researched would give me onboard 2.5G ethernet and lots of PCI-E slots for future expansion; the problem is this is going to cost ~$800 to upgrade, as I would need to buy the processor ($400), the motherboard (~$200), and new RAM (a 64 gigabyte 2x32 DDR5 kit is still pretty expensive, ~$200). https://www.cpubenchmark.net/compare/3183vs5167vs4323/AMD-Ryzen-5-2400G-vs-AMD-Ryzen-9-7900-vs-AMD-Ryzen-7-5700G

I don't foresee any other upgrades in the near future, so while the future expandability of the AM5 board is nice, it isn't urgent to have now. I still feel like the 5700G is more of a stopgap solution, but hopefully I can get 2-3 years out of it.
Edit: Or buy an old epyc cpu/mobo w/64 gigs of ram: https://www.ebay.com/itm/1754264436...%3ABFBM0rmlkdxi nerox fucked around with this message at 14:18 on Sep 29, 2023 |
# ? Sep 29, 2023 13:44 |
|
Aloha! So, my 8TB external drive has over 6 years of uptime, and CrystalDiskInfo is giving it a yellow warning light for "current pending sector count" and "uncorrectable sector count", so I am going to be buying a new drive to replace it. The new drive I'm looking at is: Seagate Expansion 10TB External Hard Drive HDD - USB 3.0 So, questions: 1. Good drive? 2. What's the most efficient way to move ~7.5TB of data, mostly media files, from the old drive to the new one?
|
# ? Sep 29, 2023 14:22 |
|
nerox posted:Edit: Or buy an old epyc cpu/mobo w/64 gigs of ram: https://www.ebay.com/itm/1754264436...%3ABFBM0rmlkdxi If the budget to buy that and power/cool it are not problematic, I'd do this just to get IPMI and ECC.
|
# ? Sep 29, 2023 16:01 |
|
|
IOwnCalculus posted:If the budget to buy that and power/cool it are not problematic, I'd do this just to get IPMI and ECC. poo poo, I'd do it just to never have to worry about running out of PCIe lanes. A GPU for each VM? No problem. Multiple HBA cards? No problem. 10gb NIC? No problem. NVMe breakout board? No problem.
|
# ? Sep 29, 2023 17:00 |
|
Our raid1 array died today. It was a simple two disk array built with 18TB Toshiba MG09s in an OWC thunderbolt enclosure. The SCSI layer reported an error on one disk one morning a few days ago and md disabled it. I only noticed some hours later due to the horrible clicking sound when I went in the room. I took the bad drive out and added "RMA bad drive" to my list of poo poo to do. But then of course this morning my lovely wife says it is clicking again, and I told her that certainly wasn't possible because I removed the bad drive. But sure enough, the remaining drive loving failed as well. So no more anything. The first dead disk just immediately begins clicking when connected to a (powered) usb-sata thing. The most recently deceased spun up for a few minutes the one time I tried but then started clicking again. All of the important data is in ✨the cloud✨ but there were a lot of linux isos that I just depended on the raid to keep alive. Kind of pissed about that. I guess I can RMA one at a time and see if I can coax the other to work for ~8 hours to do a copy, but uh, I don't really see that happening. I'm also annoyed because even with two more of these drives I'm not going to trust any of it. Sigh.
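For anyone who hits the same thing: before pulling drives, it's worth checking what md thinks the array and member state is. A sketch (device names are examples and will differ per system):

```shell
# [U_] in the array status line means one mirror member is missing.
cat /proc/mdstat

# Per-member detail: look for members marked "faulty" or "removed".
mdadm --detail /dev/md0

# The SMART attributes that usually climb before the clicking stage.
smartctl -a /dev/sda | grep -i -E 'reallocat|pending|uncorrect'
```

If SMART on the surviving member already shows pending sectors, copying the most important data off first beats attempting any rebuild.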
|
# ? Sep 29, 2023 19:05 |
|
And people scoff at me for having backups of linux isos
|
# ? Sep 29, 2023 19:17 |
|
Wibla posted:And people scoff at me for having backups of linux isos The way I see it, your time is easily worth more than the cost of some drives, so it's worth having backups of anything you might want to use again. No one wants to hunt down an obscure ISO for the second time.
|
# ? Sep 29, 2023 19:20 |
|
IOwnCalculus posted:If the budget to buy that and power/cool it are not problematic, I'd do this just to get IPMI and ECC. The Epyc stuff was really a joke. I would have to buy a lot of poo poo to make that work right beyond the sketchy listing on ebay of stuff coming from china.
|
# ? Sep 29, 2023 19:31 |
|
I suppose a 4 bay enclosure and two more drives would have added another what, €700-800 to the initial cost. Even with it all gone now, that still seems like an excessive expense. I surely wouldn't pay that to have the data magically recovered. I'm in some period of mourning and keep going back and forth about how mad I am about it.
|
# ? Sep 29, 2023 19:36 |
|
Scruff McGruff posted:poo poo, I'd do it just to never have to worry about running out of PCIe lanes. A GPU for me each VM? No problem. Multiple HBA cards? No problem. 10gb NIC? No problem. NVMe breakout board? No problem. I hear you there too. I'm at the point where I'm debating figuring out how to make some custom PCB adapters to use the extra PCIe lanes on my DL380 G9. There's one with a slightly pin-swapped slot that's dedicated to their proprietary FlexLOM NICs - wouldn't be enough physical room for a large card, but a riser with a single m.2 NVMe would be trivial. Even more feisty would be figuring out the pinout for the mezzanine HBA slot that I'm not using, or the CPU2 PCIe riser that's trapped under drive bays - a super-low-profile connector with a ribbon cable could buy me another NVMe.
|
# ? Sep 30, 2023 01:38 |
|
I just want to warn people that AMD SATA controllers suck. I have a NAS on a Gigabyte X570S AERO G with a 5900X. I had TrueNAS in a VM on Proxmox with PCIe passthrough, decided I didn't like TrueNAS, and imported the ZFS pool directly into Proxmox, which just worked. Then I did a scrub and all 6 of my 8TB drives had read errors, which didn't make any sense - then I realized that TrueNAS throttles scrub speed and Proxmox doesn't. The SATA controller couldn't handle 150MB/s+ from all drives at once. So I bought an 8 port LSI card off ebay and it just worked. Glad I figured that out now and not during a rebuild. I'm kinda disappointed in AMD; if I went Intel I wouldn't need a video card or SATA card and would free up 2 slots, but you also still can't get more than 8 performance cores on any consumer chip. I am glad I got this board though. I have it fully populated with 4 PCIe cards: 2 more NVMe drives in a PCIe adapter (bifurcated), a video card, a 25GbE NIC, and a SATA adapter on an m.2-to-PCIe riser, plus 2 NVMe on board.
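The TrueNAS-vs-Proxmox difference described here comes down to OpenZFS scan tunables, which are exposed as module parameters on Linux. A sketch of inspecting and capping scrub I/O pressure (the echoed value is illustrative, not a recommendation):

```shell
# Max concurrent scrub I/Os per vdev (default varies by OpenZFS version):
cat /sys/module/zfs/parameters/zfs_vdev_scrub_max_active

# Soft cap on bytes scanned per txg per top-level vdev:
cat /sys/module/zfs/parameters/zfs_scan_vdev_limit

# Temporarily throttle scrubs harder (does not persist across reboot):
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
```

Throttling a scrub only hides a controller that falls over under full-rate I/O, though - a resilver will generate similar load, so the HBA swap is the real fix.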
|
# ? Sep 30, 2023 02:44 |
|
nerox posted:I recently got gigabit internet and it has increased the usage of my server a good bit since i am now opening it up to family/friends/etc. to stream. Additionally, I want to start playing around with VMs a little bit, so I am looking to give the server a little more horsepower. If, when you say opening up streaming to friends/family, you mean via Plex, then I'd get an Intel CPU with an iGPU, as it will be a champ for hardware transcoding & tone mapping thanks to QSV - and I don't believe AMD iGPUs are supported.
|
# ? Sep 30, 2023 02:49 |
|
Perplx posted:I just want to warn people that amd sata controllers suck. I have a nas on a Gigabyte X570S AERO G with a 5900x. I had truenas in a vm on proxmox with pcie passthrough and decided I didn't like truenas and imported the zfs directly into proxmox which just worked. Then I did a scrub and all 6 of my 8TB drives has read errors which didn't make any sense, then I realized that truenas throttles scrub speed and proxmox doesn't. The sata controller couldn't handle 150MB+ from all drives at once. So I bought an 8 port lsi card off ebay and it just worked. Glad i figured that out now and not during a rebuild. You know, that would explain the weird problems I still have on my pool. It's my old gaming machine: a Ryzen 3600 on a B550 chipset ASRock board - and I get exactly the same number of checksum errors on both SATA disks in my mirror, while the NVMe boot mirror is fine. I borrowed a spare LSI/Broadcom card from work, but it's actually too new; the 9600-16i only just got drivers in FreeBSD 14. The newest I can find used locally is a 9300; I may have to go brave international ebay to find a 9400 or 9500. I also tried a dingy old SATA-II SiS card I had lying around, but it keeps timing out and failing drives; it's probably time to throw it away.
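For anyone wanting to check whether their AMD SATA ports show the same symptom: it tends to appear as matching CKSUM counters across every disk behind the one controller. A sketch (the pool name is an example):

```shell
# Matching non-zero CKSUM counts on all disks behind one controller
# point at the controller or cabling, not the disks themselves.
zpool status -v tank

# After moving the disks to an HBA, reset the counters and re-scrub
# to confirm the errors stop accumulating:
zpool clear tank
zpool scrub tank
```

Since ZFS checksums end-to-end, a clean scrub after the swap is decent evidence the data itself survived the flaky controller.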
|
# ? Sep 30, 2023 16:12 |
|
Any nerd with self respect has at least a couple of m1015 / LSI 9211-8i cards in a drawer somewhere
|
# ? Sep 30, 2023 17:56 |
|
Computer viking posted:You know, that would explain the weird problems I still have on my pool. It's my old gaming machine; a Ryzen 3600 on a B550 chipset ASRock card - and I get exactly the same number of checksum errors on both SATA disks in my mirror, while the NVME boot mirror is fine. Same, I even RMA'd my B550 board thinking the SATA controller was shagged, because it threw up loads of CRC errors while my HBA card worked perfectly. Turns out the SATA controller is just trash for NAS applications.
|
# ? Sep 30, 2023 18:00 |
|
Wibla posted:Any nerd with self respect has at least a couple of m1015 / LSI 9211-8i cards in a drawer somewhere I'm about to, because I'm condensing two into a single 9300-16i.
|
# ? Sep 30, 2023 18:03 |
|
Random question, but has anyone had trouble with GPU passthrough for a GT710 in an HP Z440 running Unraid? It worked fine in my other box with an ASRock mobo.
|
# ? Sep 30, 2023 18:40 |
|
Smashing Link posted:Random question but has anyone had trouble with GPU passthrough for a GT710 in a HP Z440 running Unraid? Worked fine in my other box with a ASRock mobo. All the virtualization settings correct in BIOS?
|
# ? Sep 30, 2023 19:17 |
|
Charles Leclerc posted:All the virtualization settings correct in BIOS? VT-x and VT-d are enabled in BIOS. I double checked that the GPU is in the x16 PCIe slot. The processor is a Xeon E5-2650L v4, so it should support virtualization.
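Beyond the BIOS toggles, it's worth checking what the Z440's firmware actually handed to Linux. A sketch of verifying IOMMU state and grouping from the Unraid shell (PCI addresses will differ per machine):

```shell
# Confirm the kernel actually enabled the IOMMU (Intel reports DMAR).
dmesg | grep -i -e dmar -e iommu | head

# List IOMMU groups; if the GT710 shares a group with other devices,
# passthrough needs an ACS override or a different slot.
for g in /sys/kernel/iommu_groups/*; do
  echo "Group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done
```

Workstation boards like the Z440 often group slots differently than consumer ASRock boards, which would explain the same card behaving differently between the two boxes.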
|
# ? Sep 30, 2023 22:41 |
|
Splinter posted:If when you say opening up streaming to friends/family you mean via Plex, then I'd get an Intel CPU with an iGPU as it will be a champ for hardware transcoding & tone mapping due to QSV and I don't believe AMD iGPUs are supported. Unfortunately, I would have to spend more money on a new motherboard to go Intel, plus I haven't paid attention to Intel since Ryzen came out, so I have no idea what I would even get. I found a 5700G locally for cheap, so I just did that upgrade. It's already in the box and I am just waiting on my RAM upgrade to arrive.
|
# ? Oct 2, 2023 18:06 |
|
Huh. Don't know how (pleasantly) surprised I should be here, but with the same data and same drives/hardware, a scrub on a pool of three 11-drive raidz3 vdevs takes ~20h, while the last scrubs on the old pool with five four-drive raidz1 vdevs were in the three-day range. I suspect having the extra 13 spindles in play helps a lot.
|
# ? Oct 2, 2023 18:09 |
|
My 8x14TB raidz2 array takes about 37 hours to scrub... with some weird IO patterns going on. Periods of 2-3MB/s and then periods of >130MB/s per drive.
|
# ? Oct 2, 2023 18:18 |
|
Wibla posted:Any nerd with self respect has at least a couple of m1015 / LSI 9211-8i cards in a drawer somewhere
|
# ? Oct 2, 2023 21:19 |
Well, mpsutil flash save [bios|firmware] /path/to/file followed by mpsutil flash update [bios|firmware] /path/to/file usually work fine on FreeBSD, as described in the manual page.
|
|
# ? Oct 2, 2023 21:31 |
|
Wibla posted:My 8x14TB raidz2 array takes about 37 hours to scrub... with some weird IO patterns going on. Periods of 2-3MB/s and then periods of >130MB/s per drive. There's tons and tons of reasons it could do that. Check iostat - is anything going on other than the scrub when it goes low? Scrubs are super low priority. How many times have you written to the array? Really large files that are written contiguously are going to scrub fast, vs many small files that are all over the place due to fragmentation. Do you have deduplication or compression on? That'll affect things non-linearly as well. There's probably a dozen other things I'm not even thinking about...
|
# ? Oct 3, 2023 01:17 |
Wibla posted:My 8x14TB raidz2 array takes about 37 hours to scrub... with some weird IO patterns going on. Periods of 2-3MB/s and then periods of >130MB/s per drive. To get a real answer you're going to need to correlate things from zpool iostat using the -w, -r and -l flags. You'll also want to run each command with -Pv to ensure you're getting each individual disk with its full physical path. Hughlander posted:There's tons and tons of reasons it could do that. Scrub is the second-lowest priority I/O; TRIM is the lowest. Scrub will also spend a fair amount of computation time figuring out the most sequential way to do I/O, and this is tracked via the 'scanned' and 'issued' amounts in zpool status during a scrub. Compression is in-line and doesn't really matter, since the default algorithm's decompression speed is faster than anything but the fastest SSDs on the market.
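Concretely, that correlation pass would look something like this (the pool name is an example):

```shell
# Per-disk average latencies at 5-second intervals during the scrub:
zpool iostat -Pv -l tank 5

# Latency histograms per disk - a long disk_wait tail on a single
# drive singles that drive out:
zpool iostat -Pv -w tank

# Request-size histograms - lots of tiny reads suggests the slow
# phases are fragmentation, not a sick drive:
zpool iostat -Pv -r tank
```

Comparing the slow-phase and fast-phase snapshots usually shows whether one drive is dragging the vdev down or the whole pool is seeking through small blocks.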
|
|
# ? Oct 3, 2023 11:52 |
|
BlankSystemDaemon posted:Well, mpsutil flash save [bios|firmware] /path/to/file followed by mpsutil flash update [bios|firmware] /path/to/file usually work fine on FreeBSD, as described in the manual page.
|
# ? Oct 3, 2023 18:53 |
The ones I mentioned are for the Fusion-MPS and -MPR product lines from LSI, but since those're the ones most commonly used, it's at least worth mentioning. There's both mpsutil as I mentioned (which also works with Fusion MPR devices) but also mptutil and even mfiutil/mrsasutil (which depend on whether you're using the mfi(4) or mrsas(4) driver). Each has its own manual page and most seem to include the flash subcommand, though only the Fusion-MPR and -MPS product lines include the ability to flash both the Option ROM as well as the firmware. Maybe newer versions simply include the Option ROM in the firmware? I seem to recall that it was initially developed back in the late 2000s by John Baldwin, when he was working at Yahoo (back when they used FreeBSD for everything), and was upstreamed in the mid-2010s - but LSI were a lot better about documentation than Broadcom have ever been. BlankSystemDaemon fucked around with this message at 19:33 on Oct 3, 2023 |
|
# ? Oct 3, 2023 19:30 |
|
Was hoping to get some feedback on a Synology NAS. Main and most likely only usage will be for storing photos locally from phones and being able to share them with family. We're talking like 4 people. I have Plex running on other hardware and it will stay on that. I'm currently looking at the DS223J. I'm ignoring the truth that going with Google Photos or similar would be cheaper and easier. How does Synology handle backing up to external services for a cold storage backup? Am I kneecapping myself with the 2 bays? If I wanted to increase size in the future, is it just a matter of getting bigger drives, slapping one in, rebuild, slap the other one in, and rebuild again?
|
# ? Oct 5, 2023 15:46 |
|
I would say to avoid the J series devices just because Synology are really tight on the hardware specs even at the high end, so at the low end you're getting something less powerful than a 10 year old desktop PC.
|
# ? Oct 5, 2023 16:53 |
|
Thanks Ants posted:I would say to avoid the J series devices just because Synology are really tight on the hardware specs even at the high end, so at the low end you're getting something less powerful than a 10 year old desktop PC. On the other hand, I keep wondering. Over the last ten years the j series has gone from a 1GHz single core to a 1.7GHz quad core, and from 128MB of ram to 1GB. The same DSM 7 that runs on the current j series technically (if mostly miserably) functions for serving files on the DS115j (single core 800MHz/256MB combo). We've got to be nearing the point where the hardware is decently catching up with the software, right? The DS223 is also just specced weirdly: it's the same cpu as the j version with twice the ram.
|
# ? Oct 5, 2023 19:19 |
Drive 2 keeps reporting I/O errors on my Synology like once a month, but SMART tests of the drive always report it being a-ok. Is there some other test I can run on this thing or should I be concerned about this?
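The scheduled Synology SMART check is typically the short self-test; a long test reads the whole surface and can catch what the short one misses. A sketch, assuming SSH access to the NAS (the device name is an example):

```shell
# Kick off a full-surface self-test (takes hours on large drives):
smartctl -t long /dev/sda

# Afterwards, check the self-test log plus the attributes that matter:
smartctl -a /dev/sda | grep -i -E 'self-test|reallocat|pending|uncorrect|crc'
```

One pattern worth knowing: rising UDMA CRC errors with otherwise clean SMART usually mean the cable, connector, or backplane rather than the drive itself - which would fit recurring I/O errors that SMART self-tests can't reproduce. Reseating the drive in its bay is a cheap first experiment.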
|
|
# ? Oct 6, 2023 13:18 |
|
I guess this is the place to ask this? The thing I'm looking for is basically a Jellyfin or Plex (or whatever is the good one, I use Jellyfin now) server for use on a local network, that is cost effective and quiet and that I can also use to store and serve legally acquired romsets to my various retro junk.
|
# ? Oct 7, 2023 19:03 |
|
Not sure if this is the right thread to talk about hard drives in general, but I think one of my 1TB 2.5" SSDs might be failing. What's a good brand to replace it with? I'm in Canada; 2TB SSDs seem to be between $100-200, so something in that range would be my budget. This would be for stuff I interact with regularly like media. I think my preference is for stability/reliability/endurance over pure speed; typical SSD speeds are fine for me. A quick google suggests this: https://www.amazon.ca/Samsung-MZ-77Q2T0B-AM-Internal-Version/dp/B089C6LZ42/ref which I've bought a couple of times before - is it fine to just buy it again?
|
# ? Oct 7, 2023 20:29 |
|
|
Raenir Salazar posted:Not sure if this is the right thread to talk about hard drives in general but I think one of my 1 TB 2.5" ssds might be failing, what's a good brand to replace it with? I'm in Canada, seems like 2 TB ssds are between 100-200$ so something in that range would be my budget, this would be for stuff I interact with regularly like media, I think my preference is for stability/reliability/endurance over pure speed, typical ssd speeds is fine for me. With those requirements, what about one of Kingston's datacenter SATA drives? The 1.92TB version is $229 on Amazon.ca, which would be reasonable for a fast NVMe disk but is a lot for a SATA model. On the other hand, they claim a lot of endurance and a five year warranty.
|
# ? Oct 7, 2023 23:55 |