TACD
Oct 27, 2000

Arishtat posted:

WD Reds or Red Pros are still valid as are the Seagate Ironwolf line and so are Toshiba N300 Pros (my personal choice, but I have WD Reds as well).

Unless you're dead set on the rackmount form factor you get more bang for your buck from the DS line. I recently replaced an RS818+ and ended up going with a DS1821xs+ because it came with more bays and dedicated NVMe slots for the same or less than an RS822+.

Whatever you do you'll want to start out with at least two disks and go from there. If you format your pool SHR (Synology Hybrid RAID) you'll be able to add disks as you go as long as they are the same size or greater than your existing disks. Note that if you add a larger disk than what you already have (10TB new disk vs 8TB existing disks) you'll only get to use 8TB on the new disk.

RE: use case, # of concurrent users and security setup

Take some time and do it right the first time. Do set up users and permissions because it's a lot more painful to layer that on after the fact.

Also you mentioned saving high res images; you might want to consider getting NVMe cache drives to help speed up saving and processing images.
This is super useful, thanks! I don't think rack mountable kit is a hard requirement, but I'll put together a couple of options and see how people at work feel about them.
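The SHR sizing rule Arishtat describes (a new disk larger than your existing ones only contributes up to the size of the largest disk already in the pool) can be sanity-checked with a quick sketch. This is a simplified model of the quoted behavior, not Synology's actual allocation algorithm:

```python
def shr_usable_per_disk(existing_sizes_tb, new_size_tb):
    """Capacity SHR can use on a newly added disk: capped at the size
    of the largest disk already in the pool (simplified model of the
    rule quoted above, not Synology's exact algorithm)."""
    return min(new_size_tb, max(existing_sizes_tb))

# A 10TB disk added to a pool of 8TB disks: only 8TB of it is usable
# until more large disks join the pool.
print(shr_usable_per_disk([8, 8, 8], 10))   # 8
print(shr_usable_per_disk([8, 8, 12], 10))  # 10 -- a 12TB disk already present
```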

BlankSystemDaemon posted:

WD Reds that are 6TB and under have a risk of being SMR, so please be careful to check first.
Ah yeah I forgot about that whole deal. I didn't know it was only ≤6TB drives though so that's good to know as well.
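For reference, the usual way to check is the model number: the commonly cited SMR WD Red models in that ≤6TB range are the EFAX series, while the EFRX series is CMR. A tiny lookup sketch (these lists are illustrative examples, not exhaustive; always confirm against WD's spec sheets):

```python
# Commonly cited examples only -- not an authoritative list.
# Note the 8TB WD80EFAX is CMR; the SMR issue hit the 2-6TB EFAX drives.
KNOWN_SMR = {"WD20EFAX", "WD30EFAX", "WD40EFAX", "WD60EFAX"}
KNOWN_CMR = {"WD20EFRX", "WD30EFRX", "WD40EFRX", "WD60EFRX", "WD80EFAX"}

def recording_tech(model):
    """Classify a drive model as SMR/CMR from the example lists above."""
    if model in KNOWN_SMR:
        return "SMR"
    if model in KNOWN_CMR:
        return "CMR"
    return "unknown -- check the spec sheet"

print(recording_tech("WD60EFAX"))  # SMR
print(recording_tech("WD80EFAX"))  # CMR
```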

Wibla posted:

A more important question would be: what are you doing for backups?
Part of the reason I'm looking into this is that we can't afford to store everything on AWS… but as I'm writing this it occurs to me the price for archive data we'd only access in an emergency is probably much cheaper, so that might be feasible. Other than that, I think if the lab burns down or something then the whole company is toast anyway.


Moey
Oct 22, 2010

I LIKE TO MOVE IT

BlankSystemDaemon posted:

Wait, there's an active goon IRC channel I'm not on?

Yeah, but all we do is sit around and talk poo poo about ZFS and its maintainers/developers.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Any recommendations for a Cyberpower CP1000AVRLCD replacement battery?

Wild EEPROM
Jul 29, 2011


oh, my, god. Becky, look at her bitrate.
The vast majority of UPSes use sealed lead-acid batteries, the same kind they also use in wheelchairs and scooters.

Looks like the cp1000avrlcd uses a 12v 9ah one, which you can buy at any battery store, many auto parts stores (eg canadian tire), or even amazon for about $30. cyberpower branded replacement is about $50 msrp.

You can also upgrade to a higher amperage version if it fits the same physical dimensions. I replaced the 2x 12v 7ah batteries in my APC ups with 2x 12v 9ah batteries.
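The battery-swap math above is easy to ballpark: runtime scales with total watt-hours. A rough sketch (the inverter efficiency and usable-capacity figures are assumed round numbers, and real lead-acid runtime drops further at high discharge rates, which this ignores):

```python
def ups_runtime_minutes(batteries, volts, amp_hours, load_watts,
                        inverter_eff=0.85, usable_fraction=0.8):
    """Very rough UPS runtime estimate. inverter_eff and usable_fraction
    are assumed ballpark figures; real runtime also depends on battery
    age and the Peukert effect, which this model skips."""
    energy_wh = batteries * volts * amp_hours * usable_fraction
    return energy_wh * inverter_eff / load_watts * 60

# Single 12V 9Ah battery (as in the CP1000AVRLCD) at a 100W load:
print(round(ups_runtime_minutes(1, 12, 9, 100)))  # ~44 minutes, optimistically

# The 7Ah -> 9Ah upgrade mentioned above buys proportionally more runtime:
print(round(ups_runtime_minutes(2, 12, 9, 200)))  # vs 2x 7Ah at the same load
```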

nerox
May 20, 2001
I recently got gigabit internet and it has increased the usage of my server a good bit since i am now opening it up to family/friends/etc. to stream. Additionally, I want to start playing around with VMs a little bit, so I am looking to give the server a little more horsepower.

If it matters, this is an unraid box.

The current processor is a Ryzen 5 2400G with 16 gigabytes of ram. That's a 4 core/8 thread processor from 2018.

My current upgrade options I am looking at are:

Ryzen 7 5700G - which is an 8 core processor from 2021. It is still an AM4 processor and compatible with my current motherboard. Still 65W TDP. I have to have something with a built in graphics card since I don't have any available PCI-E slots currently. I know I could run without having access to a graphics card, but I have had to hook up a monitor to the server in the past. The CPU costs $170 currently and comes with a heatsink.

With this processor I would either like to upgrade to 32 gigabytes of ram (~$50) or 64 gigabytes of ram (~$100).

The other option is to move to a AM5 platform. The Ryzen 9 7900 is a 12 core processor from this year, still 65w TDP. I would also have to buy a new motherboard. The new motherboard I have researched would give me onboard 2.5g ethernet, and it has lots of PCI-E slots for future expansion, the problem is this is going to cost ~$800 to upgrade as I would need to buy the processor ($400), the motherboard (~$200), and new ram (a 64 gigabyte kit that's 2x32 in DDR5 is still pretty high) (~$200).

https://www.cpubenchmark.net/compare/3183vs5167vs4323/AMD-Ryzen-5-2400G-vs-AMD-Ryzen-9-7900-vs-AMD-Ryzen-7-5700G

I don't foresee any other upgrades in the near future, so while the future expandability of the AM5 board is nice it isn't urgent to have now. I still feel like getting the 5700G feels more like a stop gap solution, but hopefully I can get 2-3 years out of it.
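Totaling the two options from the numbers above (dollar figures as quoted in the post, both assuming the 64GB RAM tier):

```python
# Prices taken from the post; "AM4" keeps the current motherboard.
upgrades = {
    "AM4: Ryzen 7 5700G": {"cpu": 170, "ram_64gb": 100},
    "AM5: Ryzen 9 7900":  {"cpu": 400, "motherboard": 200, "ram_64gb": 200},
}

for name, parts in upgrades.items():
    print(f"{name}: ${sum(parts.values())}")
# AM4: Ryzen 7 5700G: $270
# AM5: Ryzen 9 7900: $800
```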

Edit: Or buy an old epyc cpu/mobo w/64 gigs of ram: https://www.ebay.com/itm/1754264436...%3ABFBM0rmlkdxi

nerox fucked around with this message at 14:18 on Sep 29, 2023

Annath
Jan 11, 2009

Batatouille is a great and funny play on words for a video game creature and I love silly words like these
Clever Betty
Aloha!

So, my 8TB external drive has over 6 years of uptime, and CrystalDiskInfo is giving it a yellow warning light for "current pending sector count" and "uncorrectable sector count", so I am going to be buying a new drive to replace it.

The new drive I'm looking at is:

Seagate Expansion 10TB External Hard Drive HDD - USB 3.0

So, questions:

1. Good drive?

2. What's the most efficient way to move ~7.5TB of data, mostly media files, from the old drive to the new one?
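For question 2, a single rsync/robocopy-style pass is usually the practical answer, and the time is dominated by the drives' sustained sequential speed. A rough estimate, assuming an average of 130 MB/s (a plausible figure for large media files over USB 3.0, not a measured one):

```python
def copy_hours(data_tb, sustained_mb_s):
    """Back-of-the-envelope copy time. sustained_mb_s is an assumed
    average; large sequential media files on a USB 3.0 HDD typically
    manage somewhere in the 100-180 MB/s range."""
    data_mb = data_tb * 1_000_000  # decimal TB, as drive vendors count
    return data_mb / sustained_mb_s / 3600

print(f"{copy_hours(7.5, 130):.1f} hours")  # ~16 hours at 130 MB/s
```

Many small files would push the effective rate well below that, so the estimate is a floor, not a promise.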

IOwnCalculus
Apr 2, 2003





nerox posted:

Edit: Or buy an old epyc cpu/mobo w/64 gigs of ram: https://www.ebay.com/itm/1754264436...%3ABFBM0rmlkdxi

If the budget to buy that and power/cool it are not problematic, I'd do this just to get IPMI and ECC.

Aware
Nov 18, 2003

.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

IOwnCalculus posted:

If the budget to buy that and power/cool it are not problematic, I'd do this just to get IPMI and ECC.

poo poo, I'd do it just to never have to worry about running out of PCIe lanes. A GPU for each VM? No problem. Multiple HBA cards? No problem. 10gb NIC? No problem. NVMe breakout board? No problem.

other people
Jun 27, 2004
Associate Christ
Our raid1 array died today :(. It was a simple two disk array built with 18TB toshiba MG09 in an OWC thunderbolt enclosure. The scsi layer reported an error on one disk in the morning a few days ago and md disabled it. I only noticed some hours later due to the horrible clicking sound when I went in the room. I took the bad drive out and added "RMA bad drive" to my list of poo poo to do.

But then of course this morning my lovely wife says it is clicking again and I told her that certainly wasn't possible because I removed the bad drive. But sure enough, the remaining drive loving failed as well. So no more anything.

The first dead disk just immediately begins clicking when connected to a (powered) usb-sata thing. The most recently deceased spun up for a few minutes the one time I tried but then started clicking again.

All of the important data is in ✨the cloud✨ but there were a lot of linux isos that I just depended on the raid to keep alive. Kind of pissed about that.

I guess I can RMA one at a time and see if I can coax the other to work for ~8 hours to do a copy but uh I don't really see that happening.

I'm also annoyed because even with two more of these drives I'm not going to trust any of it. Sigh.

Wibla
Feb 16, 2011

And people scoff at me for having backups of linux isos :haw:

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Wibla posted:

And people scoff at me for having backups of linux isos :haw:

The way I see it, your time is easily worth more than the cost of some drives, so it's worth having backups of anything you might want to use again. No-one wants to hunt down an obscure ISO for the second time.

nerox
May 20, 2001

IOwnCalculus posted:

If the budget to buy that and power/cool it are not problematic, I'd do this just to get IPMI and ECC.

The Epyc stuff was really a joke. I would have to buy a lot of poo poo to make that work right beyond the sketchy listing on ebay of stuff coming from china.

other people
Jun 27, 2004
Associate Christ
:shrug: I suppose a 4 bay enclosure and two more drives would have added another what, €700-800 to the initial cost. Even with it all gone now, that still seems like an excessive expense. I surely wouldn't pay that to have the data magically recovered.

I'm in some period of mourning and keep going back and forth about how mad I am about it.

IOwnCalculus
Apr 2, 2003





Scruff McGruff posted:

poo poo, I'd do it just to never have to worry about running out of PCIe lanes. A GPU for each VM? No problem. Multiple HBA cards? No problem. 10gb NIC? No problem. NVMe breakout board? No problem.

I hear you there too. I'm at the point where I'm debating figuring out how to make some custom PCB adapters to use the extra PCIe lanes on my DL380 G9. There's one with a slightly pin-swapped slot that's dedicated to their proprietary FlexLOM NICs - wouldn't be enough physical room for a large card, but a riser with a single m.2 NVMe would be trivial. Even more feisty would be figuring out the pinout for the mezzanine HBA slot that I'm not using, or the CPU2 PCIe riser that's trapped under drive bays - a super-low-profile connector with a ribbon cable could buy me another NVMe.

Perplx
Jun 26, 2004


Best viewed on Orgasma Plasma
Lipstick Apathy
I just want to warn people that amd sata controllers suck. I have a nas on a Gigabyte X570S AERO G with a 5900x. I had truenas in a vm on proxmox with pcie passthrough, decided I didn't like truenas, and imported the zfs pool directly into proxmox, which just worked. Then I did a scrub and all 6 of my 8TB drives had read errors, which didn't make any sense; then I realized that truenas throttles scrub speed and proxmox doesn't. The sata controller couldn't handle 150MB/s+ from all drives at once. So I bought an 8 port lsi card off ebay and it just worked. Glad I figured that out now and not during a rebuild.

I'm kinda disappointed in amd. If I went intel I wouldn't need a video card or sata card and would free up 2 slots, but you still can't get more than 8 performance cores on any consumer chip. I am glad I got this board though. I have it fully populated with 4 pcie cards: 2 more nvme in a pcie adapter (bifurcated), a video card, a 25GbE nic, and a sata adapter using an m.2-to-pcie adapter, and also 2 nvme on board.

Splinter
Jul 4, 2003
Cowabunga!

nerox posted:

I recently got gigabit internet and it has increased the usage of my server a good bit since i am now opening it up to family/friends/etc. to stream. Additionally, I want to start playing around with VMs a little bit, so I am looking to give the server a little more horsepower.

If it matters, this is an unraid box.

The current processor is a Ryzen 5 2400G with 16 gigabytes of ram. That's a 4 core/8 thread processor from 2018.

My current upgrade options I am looking at are:

Ryzen 7 5700G - which is an 8 core processor from 2021. It is still an AM4 processor and compatible with my current motherboard. Still 65W TDP. I have to have something with a built in graphics card since I don't have any available PCI-E slots currently. I know I could run without having access to a graphics card, but I have had to hook up a monitor to the server in the past. The CPU costs $170 currently and comes with a heatsink.

With this processor I would either like to upgrade to 32 gigabytes of ram (~$50) or 64 gigabytes of ram (~$100).

The other option is to move to a AM5 platform. The Ryzen 9 7900 is a 12 core processor from this year, still 65w TDP. I would also have to buy a new motherboard. The new motherboard I have researched would give me onboard 2.5g ethernet, and it has lots of PCI-E slots for future expansion, the problem is this is going to cost ~$800 to upgrade as I would need to buy the processor ($400), the motherboard (~$200), and new ram (a 64 gigabyte kit that's 2x32 in DDR5 is still pretty high) (~$200).

https://www.cpubenchmark.net/compare/3183vs5167vs4323/AMD-Ryzen-5-2400G-vs-AMD-Ryzen-9-7900-vs-AMD-Ryzen-7-5700G

I don't foresee any other upgrades in the near future, so while the future expandability of the AM5 board is nice it isn't urgent to have now. I still feel like getting the 5700G feels more like a stop gap solution, but hopefully I can get 2-3 years out of it.

Edit: Or buy an old epyc cpu/mobo w/64 gigs of ram: https://www.ebay.com/itm/1754264436...%3ABFBM0rmlkdxi

If by opening up streaming to friends/family you mean via Plex, then I'd get an Intel CPU with an iGPU, as it will be a champ for hardware transcoding & tone mapping thanks to QSV, and I don't believe AMD iGPUs are supported.

Computer viking
May 30, 2011
Now with less breakage.

Perplx posted:

I just want to warn people that amd sata controllers suck. I have a nas on a Gigabyte X570S AERO G with a 5900x. I had truenas in a vm on proxmox with pcie passthrough, decided I didn't like truenas, and imported the zfs pool directly into proxmox, which just worked. Then I did a scrub and all 6 of my 8TB drives had read errors, which didn't make any sense; then I realized that truenas throttles scrub speed and proxmox doesn't. The sata controller couldn't handle 150MB/s+ from all drives at once. So I bought an 8 port lsi card off ebay and it just worked. Glad I figured that out now and not during a rebuild.

I'm kinda disappointed in amd. If I went intel I wouldn't need a video card or sata card and would free up 2 slots, but you still can't get more than 8 performance cores on any consumer chip. I am glad I got this board though. I have it fully populated with 4 pcie cards: 2 more nvme in a pcie adapter (bifurcated), a video card, a 25GbE nic, and a sata adapter using an m.2-to-pcie adapter, and also 2 nvme on board.

You know, that would explain the weird problems I still have on my pool. It's my old gaming machine: a Ryzen 3600 on an ASRock B550 board - and I get exactly the same number of checksum errors on both SATA disks in my mirror, while the NVME boot mirror is fine.

I borrowed an LSI/Broadcom card that was spare from work, but it's actually too new; the 9600-16i only just got drivers in FreeBSD 14. The newest I can find used locally is a 9300; I may have to go brave international ebay to find a 9400 or 9500. I also tried a dingy old SATA-II SiS card I had lying around, but it keeps timing out and failing drives; it's probably time to throw it away.

Wibla
Feb 16, 2011

Any nerd with self respect has at least a couple of m1015 / LSI 9211-8i cards in a drawer somewhere :colbert:

Theophany
Jul 22, 2014

SUCCHIAMI IL MIO CAZZO DA DIETRO, RANA RAGAZZO



2022 FIA Formula 1 WDC

Computer viking posted:

You know, that would explain the weird problems I still have on my pool. It's my old gaming machine: a Ryzen 3600 on an ASRock B550 board - and I get exactly the same number of checksum errors on both SATA disks in my mirror, while the NVME boot mirror is fine.

I borrowed an LSI/Broadcom card that was spare from work, but it's actually too new; the 9600-16i only just got drivers in FreeBSD 14. The newest I can find used locally is a 9300; I may have to go brave international ebay to find a 9400 or 9500. I also tried a dingy old SATA-II SiS card I had lying around, but it keeps timing out and failing drives; it's probably time to throw it away.

Same, I even RMA'd my B550 board thinking the SATA controller was shagged because it threw up loads of CRC errors when my HBA card worked perfectly. Turns out the SATA controller is just trash for NAS applications.

IOwnCalculus
Apr 2, 2003





Wibla posted:

Any nerd with self respect has at least a couple of m1015 / LSI 9211-8i cards in a drawer somewhere :colbert:

I'm about to, because I'm condensing two into a single 9300-16i.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer
Random question but has anyone had trouble with GPU passthrough for a GT710 in a HP Z440 running Unraid? Worked fine in my other box with a ASRock mobo.

Theophany
Jul 22, 2014

SUCCHIAMI IL MIO CAZZO DA DIETRO, RANA RAGAZZO



2022 FIA Formula 1 WDC

Smashing Link posted:

Random question but has anyone had trouble with GPU passthrough for a GT710 in a HP Z440 running Unraid? Worked fine in my other box with a ASRock mobo.

All the virtualization settings correct in BIOS?

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

Charles Leclerc posted:

All the virtualization settings correct in BIOS?

VT-x and VT-d settings are enabled in BIOS. I double checked the GPU is in the x16 PCIE lane. Processor is a Xeon E5-2650L v4, so it should support virtualization.

nerox
May 20, 2001

Splinter posted:

If when you say opening up streaming to friends/family you mean via Plex, then I'd get an Intel CPU with an iGPU as it will be a champ for hardware transcoding & tone mapping due to QSV and I don't believe AMD iGPUs are supported.

Unfortunately, I would have to spend more money on a new motherboard to go Intel, plus I haven't paid attention to Intel since Ryzen came out, so I have no idea what I would even get.

I found a 5700G locally for cheap, so I just did that upgrade. It's already in the box and I am just waiting on my ram upgrade to arrive. :)

IOwnCalculus
Apr 2, 2003





Huh. Don't know how (pleasantly) surprised I should be here, but with the same data and same drives/hardware, a scrub on a pool of three 11-drive raidz3 vdevs takes ~20h, while the last scrubs on the old pool with five four-drive raidz1 vdevs were in the three day range.

I suspect having the extra 13 spindles in play helps a lot.
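The spindle-count intuition checks out on a napkin: a scrub reads every allocated block, so with the data spread over more data disks, each disk has less to read. An idealized sketch (the 150 MB/s per disk and 100TB of data are assumed illustrative figures; an 11-wide raidz3 vdev has 8 data disks, a 4-wide raidz1 has 3):

```python
def scrub_hours(data_tb, data_disks, disk_mb_s=150):
    """Idealized sequential-scrub estimate: data striped evenly, every
    data disk reading flat out. Real scrubs are slower (metadata walks,
    competing pool I/O), so treat this as a lower bound."""
    per_disk_tb = data_tb / data_disks
    return per_disk_tb * 1_000_000 / disk_mb_s / 3600

# Hypothetical 100TB of data:
# 3x 11-wide raidz3 = 24 data disks; 5x 4-wide raidz1 = 15 data disks.
print(round(scrub_hours(100, 24)), round(scrub_hours(100, 15)))  # 8 12
```

The real-world gap (20h vs three days) is bigger than the ideal ratio, which fits the scrub also being less seek-bound with more spindles sharing the load.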

Wibla
Feb 16, 2011

My 8x14TB raidz2 array takes about 37 hours to scrub... with some weird IO patterns going on. Periods of 2-3MB/s and then periods of >130MB/s per drive.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Wibla posted:

Any nerd with self respect has at least a couple of m1015 / LSI 9211-8i cards in a drawer somewhere :colbert:
The question is how many of them have been properly flashed to IT and UEFI firmware by now. I'm evidently sitting on 5 cards now, one of which is an external SAS-8088 port variant. I have spent way too much time sitting in console windows running awkward sas2flsh commands, and I still don't know why most of the ones that used to work don't anymore (I suspect it's a UEFI vs BIOS issue, along with option ROMs not getting loaded when some cards may need them).

BlankSystemDaemon
Mar 13, 2009



Well, mpsutil flash save [bios|firmware] /path/to/file followed by mpsutil flash update [bios|firmware] /path/to/file usually work fine on FreeBSD, as described in the manual page.

Hughlander
May 11, 2005

Wibla posted:

My 8x14TB raidz2 array takes about 37 hours to scrub... with some weird IO patterns going on. Periods of 2-3MB/s and then periods of >130MB/s per drive.

There's tons and tons of reasons it could do that.

check iostat, anything going on other than the scrub when it goes low? Scrubs are super low priority.

How many times have you written to the array? Really large files that are written contiguously are going to scrub fast, vs many small files that are all over the place due to fragmentation.

Do you have deduplication or compression on? That'll affect things non-linearly as well.

There's probably a dozen other things I'm not even thinking about...

BlankSystemDaemon
Mar 13, 2009



Wibla posted:

My 8x14TB raidz2 array takes about 37 hours to scrub... with some weird IO patterns going on. Periods of 2-3MB/s and then periods of >130MB/s per drive.
Periods of 2-3MB/s would suggest to me periods of completely random I/O, but that's just a complete guess and you're gonna need data to confirm it.

To get a real answer you're going to need to correlate things from zpool iostat using the -w, -r and -l flags.
You'll also want to run each command with -Pv to ensure you're getting each individual disk with its full physical path.

Hughlander posted:

There's tons and tons of reasons it could do that.

check iostat, anything going on other than the scrub when it goes low? Scrubs are super low priority.

How many times have you written to the array? Really large files that are written contiguously are going to scrub fast, vs many small files that are all over the place due to fragmentation.

Do you have deduplication or compression on? That'll affect things non-linearly as well.

There's probably a dozen other things I'm not even thinking about...
Number of times the array has been written to isn't a good indicator, as the dirty data buffer will ensure that asynchronous writes are done contiguously.

Scrub is the second lowest priority I/O, TRIM is the lowest priority.

Scrub will also spend a fair amount of computation time figuring out the most sequential way to do I/O, and this is tracked via the 'scanned' and 'issued' amounts in zpool status during a scrub.

Compression is in-line and doesn't really matter, since the default algorithm's decompression speed is faster than anything but the fastest SSDs on the market.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

BlankSystemDaemon posted:

Well, mpsutil flash save [bios|firmware] /path/to/file followed by mpsutil flash update [bios|firmware] /path/to/file usually work fine on FreeBSD, as described in the manual page.
The Linux and UEFI side of things is considerably more complicated, and I find it hard to believe that a single utility could supplant it all, but it's certainly worth trying. I didn't think FreeBSD would do any better, given that both the Windows and Linux options have been rather complicated across 4-5 different tools (sas2flsh, sas2flash, MegaRaidCLI, et al.), with OEM ROMs potentially loaded across different hardware versions.

BlankSystemDaemon
Mar 13, 2009



The ones I mentioned are for the Fusion-MPS and -MPR product lines from LSI, but since those're the ones most commonly used, it's at least worth mentioning.
There's mpsutil, as I mentioned (which also works with Fusion-MPR devices), but also mptutil and even mfiutil/mrsasutil (which depend on whether you're using the mfi(4) or mrsas(4) driver).
Each has its own manual page and most seem to include the flash subcommand, though only the Fusion-MPR and -MPS product lines include the ability to flash both the Option ROM as well as the firmware. Maybe newer versions simply include the Option ROM in the firmware?

I seem to recall that it was initially developed back in the late 2000s by John Baldwin, when he was working at Yahoo (back when they used FreeBSD for everything), and was upstreamed in the mid-2010s - but LSI were a lot better about documentation than Broadcom have ever been.

BlankSystemDaemon fucked around with this message at 19:33 on Oct 3, 2023

c355n4
Jan 3, 2007

Was hoping to get some feedback on a Synology NAS. Main and most likely only usage will be for storing photos locally from phones and being able to share them with family. We're talking like 4 people. I have Plex running on other hardware and it will stay on that. I'm currently looking at the DS223J. I'm ignoring the truth that going with Google Photos or similar would be cheaper and easier.

How does Synology handle backing up to external services for a cold storage backup?
Am I kneecapping myself with the 2 bays? If I wanted to increase size in the future, is it just a matter of getting bigger drives, slapping one in, rebuild, slap the other one in, and rebuild again?
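On the 2-bay question: a two-disk SHR volume behaves like a mirror, so the swap-one-rebuild-swap-the-other dance you describe does work, but you only see the extra space after the second rebuild. A sketch of that (simplified: assumes plain mirror semantics rather than Synology's exact accounting):

```python
def two_bay_usable(drive_a_tb, drive_b_tb):
    """2-bay SHR is effectively a mirror: usable space is the smaller drive."""
    return min(drive_a_tb, drive_b_tb)

# Swap-and-rebuild one drive at a time, going from 2x4TB to 2x8TB:
print(two_bay_usable(4, 4))  # 4 -- starting point
print(two_bay_usable(8, 4))  # 4 -- after the first swap, no gain yet
print(two_bay_usable(8, 8))  # 8 -- after the second swap and rebuild
```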

Thanks Ants
May 21, 2004

#essereFerrari


I would say to avoid the J series devices just because Synology are really tight on the hardware specs even at the high end, so at the low end you're getting something less powerful than a 10 year old desktop PC.

Flipperwaldt
Nov 11, 2011

Won't somebody think of the starving hamsters in China?



Thanks Ants posted:

I would say to avoid the J series devices just because Synology are really tight on the hardware specs even at the high end, so at the low end you're getting something less powerful than a 10 year old desktop PC.
If the money won't be missed otherwise, bumping up to the plus series is the thing to do, imo.

On the other hand, I keep wondering. Over the last ten years the j series has gone from a 1GHz single core to a 1.7GHz quad core, and from 128MB of ram to 1GB. The same DSM 7 that runs on the current j series technically functions, mostly miserably, on the DS115j (single-core 800MHz/256MB combo) for serving files. We've got to be nearing the point where the hardware is decently catching up with the software, right?

The DS223 is also just specced weirdly. It's the same cpu as the j version with twice the ram.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Drive 2 keeps reporting I/O errors on my Synology like once a month, but SMART tests of the drive always report it being a-ok. Is there some other test I can run on this thing or should I be concerned about this?

bone emulator
Nov 3, 2005

Wrrroavr

I guess this is the place to ask this?

The thing I'm looking for is basically a Jellyfin or Plex (or whatever is the good one, I use Jellyfin now) server for use on a local network, that is cost effective and quiet and that I can also use to store and serve legally acquired romsets to my various retro junk.

Raenir Salazar
Nov 5, 2010

College Slice
Not sure if this is the right thread to talk about hard drives in general, but I think one of my 1 TB 2.5" ssds might be failing. What's a good brand to replace it with? I'm in Canada, and 2 TB ssds seem to be between $100-200, so something in that range would be my budget. This would be for stuff I interact with regularly like media; I think my preference is for stability/reliability/endurance over pure speed, and typical ssd speeds are fine for me.

Quick google suggests this: https://www.amazon.ca/Samsung-MZ-77Q2T0B-AM-Internal-Version/dp/B089C6LZ42/ref which I've bought a couple of times before, is it fine to probably just buy again?


Computer viking
May 30, 2011
Now with less breakage.

Raenir Salazar posted:

Not sure if this is the right thread to talk about hard drives in general, but I think one of my 1 TB 2.5" ssds might be failing. What's a good brand to replace it with? I'm in Canada, and 2 TB ssds seem to be between $100-200, so something in that range would be my budget. This would be for stuff I interact with regularly like media; I think my preference is for stability/reliability/endurance over pure speed, and typical ssd speeds are fine for me.

Quick google suggests this: https://www.amazon.ca/Samsung-MZ-77Q2T0B-AM-Internal-Version/dp/B089C6LZ42/ref which I've bought a couple of times before, is it fine to probably just buy again?

With those requirements, what about one of Kingston's datacenter SATA drives? The 1.92TiB version is $229 on Amazon.ca, which would be reasonable for a fast NVME disk but is a lot for a SATA model. On the other hand, they claim a lot of endurance and a five year warranty.
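If you're comparing endurance ratings, datasheet numbers usually reduce to TBW (terabytes written) or DWPD (drive writes per day), and converting between them is simple arithmetic. A sketch with illustrative figures (the 1 DWPD and ~0.2 DWPD values are ballpark examples for a datacenter drive vs a QLC consumer drive, not quotes from any specific datasheet):

```python
def tbw(capacity_tb, dwpd, warranty_years):
    """Total terabytes written over the warranty period, derived from a
    DWPD rating (full drive writes per day the vendor rates it for)."""
    return capacity_tb * dwpd * 365 * warranty_years

# Illustrative comparison only -- check the actual datasheets:
# a 1.92TB datacenter drive at 1 DWPD over 5 years, vs a 2TB consumer
# QLC drive whose rating works out to roughly 0.2 DWPD.
print(tbw(1.92, 1.0, 5), tbw(2.0, 0.2, 5))  # 3504.0 730.0
```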
