SwissArmyDruid posted:2011-ish is the year I switched off spinning rust and onto an SSD and never looked back. I was one of the lucky motherfuckers that got an OCZ Vertex 3s that did NOT exhibit any of the controller problems that others were having.
|
|
# ? Aug 30, 2019 18:34 |
|
|
# ? May 30, 2024 13:09 |
|
I might have had enough money to get a small SSD, but not THAT much money, duder. It was only a 120GB model, after all, and I had to get it on deep discount from Newegg during Black Friday. And then they shipped me $400 of DoA parts, then tried to claim their "Iron Guarantee" didn't count during Black Friday, so I swore a fatwa against ever giving them my business again, but at least I got an SSD out of it. Who knows if I actually had a good sample, though. XP being XP, in retrospect, I'm not sure I would have been able to tell the difference between a malfunctioning controller and one that wasn't. Besides, all my documents folders and crap were mapped to my old boot spinning rust, now relegated to secondary storage, so nuking and paving was relatively painless. SwissArmyDruid fucked around with this message at 18:42 on Aug 30, 2019 |
# ? Aug 30, 2019 18:39 |
|
craig588 posted:This is a derail I didn't mean to cause. I just thought it was funny he went from tapes to lossless and still uses 10 dollar portable headphones. Skipped CDs entirely because they're too fragile. This is the same person that says my 10 GB very slow x265 rips might as well be unwatchable. They have source-quality rips of Blu-rays on their NAS because they say they can instantly see the difference. Am I the weird one for keeping source rips on my NAS? I do it because space is cheap and I don't feel like spending a bunch more time researching and encoding.
|
# ? Aug 30, 2019 18:58 |
|
D. Ebdrup posted:I was one of the people who bought the Intel X25-M that didn't have the SandForce controller which plagued basically every other ODM on the market? I think my subconscious worked very hard to repress memory of SandForce controllers, because I got a pretty strong instinctive revulsion on reading that, and it took me a moment to remember briefly supporting some badly-implemented PCIe SSDs about six or seven years ago with my previous employer.
|
# ? Aug 30, 2019 19:33 |
|
NewFatMike posted:Am I the weird one for keeping source rips on my NAS? I do it because space is cheap and I don't feel like spending a bunch more time researching and encoding. Source quality >50GB BluRay rips? Yeah, you're weird. If you want to encode them yourself, Handbrake et al have a bunch of single-click profiles that are good enough these days that it's real hard to tell the difference between it and source. And if you don't want to encode them yourself, you can take the tactic that many of my friends have: buy the disk to "support the creators" or whatever, then torrent the actual video file for NAS use.
|
# ? Aug 30, 2019 19:58 |
|
DrDork posted:Source quality >50GB BluRay rips? Yeah, you're weird. If you want to encode them yourself, Handbrake et al have a bunch of single-click profiles that are good enough these days that it's real hard to tell the difference between it and source. And if you don't want to encode them yourself, you can take the tactic that many of my friends have: buy the disk to "support the creators" or whatever, then torrent the actual video file for NAS use. I tried doing this, buying an actual bluray and watching it, and the quality on the PS4 was dogshit compared to a torrent copy.
|
# ? Aug 30, 2019 20:09 |
|
Paul MaudDib posted:yeah I offered to help my (gainfully employed software-engineer) cousin pick parts for a gaming rig in 2016 and then he went out and built a bulldozer rig without asking me "That's why I buy Macs, maaaaan. They just *work*, y'know!?!" *exactly one day after the warranty expires* "Halp my computer box won't work anymore and the Geniuses say my data's gone because of this 'Tee-Two' chip."
|
# ? Aug 31, 2019 00:01 |
|
D. Ebdrup posted:I was one of the people who bought the Intel X25-M that didn't have the SandForce controller which plagued basically every other ODM on the market? Me too, and I'm still using it as a games drive. It only holds 1 game (BF4) lol
|
# ? Aug 31, 2019 00:48 |
|
I got lucky: as I was shopping for my first SSD, the one I ultimately settled on was the one of the three I was looking at that didn't develop problems. AFAIK the guy I gave it to is still using it.
|
# ? Aug 31, 2019 00:59 |
|
I still have my original Intel 160 GB SSD (320 series which was the successor to the X25-M but with a more confusing name), these days I use it as a USB drive with one of those USB3 or type-C to SATA adapters. Handy for brute forcing large transfers that would be too slow over the wifi and also makes an excellent source for installing windows because it is way faster and more durable than the average thumb drive.
|
# ? Aug 31, 2019 02:43 |
|
Will this work with DDR4 3200 C16?
|
# ? Aug 31, 2019 02:50 |
|
3,000 nm (or in the language of the time: 3 micron)
|
# ? Aug 31, 2019 02:54 |
|
I got my first SSD to move up from RAID 0 74GB Raptors on a P4 rig, I believe: a 128GB Super Talent, which ended up being more or less the same as a few other Samsung drives in the pre-EVO days. Took me ages to find a flasher that would flash the latest firmware to allow it to support TRIM. Drive still works fine to this day somehow. Upgraded to RAID 0 Plextor M3 Pros, which were the fastest things on the block for a short bit. Those too have continued to work. I also did bite late in the Vertex 3 days on a sale where 120GB ones were like $30 (when they were usually still like $80+ normally). Bought 3 and spread them around where they fit. So far no issues years later, which is strange... Now it's mostly all EVOs, but I grab the occasional PNY or ADATA when the sale is just too drat good. As long as it's not the DRAM-less versions I am happy. I have an Intel 320 or something I won in a raffle that was a refurb but still seems to work and hasn't hit its death clock yet, at least. SSDs have cured a lot of slow PC issues over the years for me. With my friends/family, if they want my help with something on the PC, they either follow my advice, or I will not assist if they ignore me and poo poo goes south with their Costco purchase. The Askholes have learned to trust me: while I am not perfect, 99% of the time I save them a lot of pain and misery in the long run as far as hardware purchases go.
|
# ? Aug 31, 2019 05:04 |
|
am I reading there that you have ssds in consumer software raid 0
|
# ? Aug 31, 2019 12:12 |
|
[desire to pontificate increasing]
|
# ? Aug 31, 2019 12:12 |
|
Potato Salad posted:[desire to pontificate increasing]
|
# ? Aug 31, 2019 15:24 |
|
Please Potato Salad don’t hurt em (Actually do) Even some of the engineers I work with (in an enterprise storage group!!) talk about wanting to use RAID 0 SSDs as backup “cuz it’s so fast” Because I want to keep a friendly workplace I just smile and nod.
|
# ? Aug 31, 2019 16:22 |
|
RAID 0 in an enterprise environment... Doesn't seem like it ever has a place. Especially with drive speeds now... Is there one? And for backups? Really? Hey, I will do stupid crazy stuff on my own hardware for non-critical data. Come on, Potato Salad. I can take it. I do also use Intel SSD caching for my RAID 0 2TB WD Blacks since spinning rust is just too slow on its own. It's only a game library, so again, nothing really important, but man did it work wonders to stop the boot-up thrashing that happens every restart. Now it's pure silence. Just... disable it before you plan to boot anything else that may see those drives or things will start acting really weird. EdEddnEddy fucked around with this message at 16:41 on Aug 31, 2019
# ? Aug 31, 2019 16:37 |
|
My last workplace had like twelve 2 TB SSDs in a RAID 1+0 configuration for a pretty high-throughput system (pushing 6 Gbps of egress per node with 28 cores and 256 GB of RAM). RAID0 is fine if your software systems can easily handle entire nodes' local storage being inaccessible. Google and friends do similar architectures with distributed file systems, so the primary job of each node is just to go as fast and hard as possible, as power-efficiently as possible.
|
# ? Aug 31, 2019 16:53 |
|
Yea, I can understand using it in a 1+0 form scaled up to have multiple levels of redundancy, which makes sense. But nobody would do an actual RAID 0 all by its lonesome in an enterprise environment outside of maybe testing throughput or something, right? For giggles, there is this rig at work that someone was throwing away that had 4 4TB HDDs in it as well as a 970 EVO as the boot drive. Yoinked that right away, but what was funny was the 4 HDDs were in a Windows software RAID 0. No idea what they planned to do with it like that, but I will say the throughput of the 4 drives was pretty drat high. Hit close to 1000MB/s tinkering with it and some large test files.
|
# ? Aug 31, 2019 17:08 |
How all ya'all feel about not using RAID0 is how I feel about using anything other than ZFS.
|
|
# ? Aug 31, 2019 21:13 |
|
I'm sure there are gonna be use cases for RAID 0 forever, but really, it's getting harder and harder to saturate storage these days. Just what kind of consumer workload is going to saturate an NVMe link? At PCIe 4.0?
|
# ? Aug 31, 2019 23:50 |
|
EdEddnEddy posted:Yea, I can understand using it in a 1+0 form scaled up to have multiple levels of redundancy, which makes sense. But nobody would do an actual RAID 0 all by its lonesome in an enterprise environment outside of maybe testing throughput or something, right? It's the opposite. Enterprise is the only place where it makes sense. At that level, you should be able to pick any random system, destroy it completely, and not suffer any permanent setback. At that point, if you get significantly higher throughput almost all of the time at the cost of having to very occasionally re-do a bit of work, it makes perfect sense. Just for instance, take a cache or even a search index. They exist as specialized copies of other data. If they're gone, it's work to rebuild them, but no actual data is missing. There might be a performance hit if one goes offline, but more capacity while they're online usually more than outweighs the inconvenience of spinning up a new one.
|
# ? Sep 1, 2019 00:40 |
|
At work, we have many applications running without redundant disks, because there's redundancy elsewhere in the stack. We don't use RAID0, though. We just configure the disks as JBOD and present them to the application. Why end up needing to redistribute or re-create many drives worth of data because one drive failed?
|
# ? Sep 1, 2019 01:24 |
|
It does kinda suck that there's no easy way to pile a few SSDs together for convenience without either drastically increasing your chance of failure (RAID0), or drastically cutting your capacity (RAID10).
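For what it's worth, that squeeze is easy to put rough numbers on. A back-of-the-envelope sketch in Python — the 3% annual per-drive failure rate and the 4x2TB pool are made-up illustration figures, not measurements, and drive failures are assumed independent:

```python
def raid0_stats(n_drives, drive_tb, p_fail):
    """RAID0: full capacity, but any single dead drive kills the array."""
    capacity = n_drives * drive_tb
    p_array_loss = 1 - (1 - p_fail) ** n_drives
    return capacity, p_array_loss

def raid10_stats(n_drives, drive_tb, p_fail):
    """RAID10: half the capacity; a mirror pair only fails if BOTH copies die."""
    pairs = n_drives // 2
    capacity = pairs * drive_tb
    p_pair_loss = p_fail ** 2
    p_array_loss = 1 - (1 - p_pair_loss) ** pairs
    return capacity, p_array_loss

cap0, loss0 = raid0_stats(4, 2, 0.03)
cap10, loss10 = raid10_stats(4, 2, 0.03)
print(f"RAID0 : {cap0} TB usable, ~{loss0:.1%} chance of losing it all per year")
print(f"RAID10: {cap10} TB usable, ~{loss10:.1%} chance of losing it all per year")
```

With those assumed numbers, RAID0 gives 8 TB at roughly an 11% annual loss risk while RAID10 gives 4 TB at well under 1% — exactly the "more risk or less space" trade being complained about.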
|
# ? Sep 1, 2019 02:47 |
|
AlternateAccount posted:It does kinda suck that there's no easy way to pile a few SSDs together for convenience without either drastically increasing your chance of failure (RAID0), or drastically cutting your capacity (RAID10). On Windows, at least, you can use Storage Spaces to do more or less exactly that: JBOD but addressable as a single drive letter.
|
# ? Sep 1, 2019 03:08 |
I have to say being an early adopter for an SSD allowed me to use my old Q9450 rig comfortably for way longer than I had any right to be. Sure, the difference going to a 1276v3 with a GTX 970 was definitely noticeable, but definitely wasn’t nearly as big as you would expect considering the age gap of the Q9450/GTX 580 3GB it replaced. Now they were a $300 CPU paired with a $600 GPU, so they were top of the line in their day, but still....
|
|
# ? Sep 1, 2019 04:00 |
|
I can't claim to be an early adopter, but I first used an SSD in the computer I built right when Windows 7 came out. It was an X25-M which lives on in someone else's laptop. It's been almost 10 years for that one. On a Samsung EVO now. I recently set up a new (but built out of old parts) computer to do some video capture and storage and used a hard drive for its OS because that's what I had lying around. I didn't expect the difference would be so great just doing basic Windows stuff, but it's really painful.
|
# ? Sep 1, 2019 06:21 |
|
Buy.com is dead, I think. That SSD's still running great in an i7-870 PC I gave to a friend and it turns 10 this week. Funny, I remember all the doom and gloom about SSD lifespans early on. It's amazing you can still buy laptops with spinning disks. Absolutely miserable.
|
# ? Sep 1, 2019 07:03 |
|
DrDork posted:On Windows, at least, you can use Storage Spaces to do more or less exactly that: JBOD but addressable as a single drive letter. I've been wondering what the merits of RAID0 vs a spanned volume are. Obviously in both cases if you lose a drive you lose the entire array, but I guess in principle RAID reads are aligned and every read hits every disk, while spanned could allow it to service multiple requests on different drives depending on how the data scatters out across the filesystem? And spanned can be easily grown by adding disks while RAID stripes can't really be rewritten. It seems like if it's supported, spanned should be preferable in most cases. You can't boot from a spanned volume though, while you could boot from hardware RAID. Paul MaudDib fucked around with this message at 07:47 on Sep 1, 2019 |
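The striped-vs-spanned read pattern being asked about can be seen with a toy block-to-disk mapping — disk and stripe sizes here are arbitrary illustration values, not anything a real volume manager uses:

```python
def striped_disk(block, n_disks, stripe_blocks=1):
    """RAID0: logical blocks rotate across disks, so a big sequential
    read is spread over (and hits) every disk in the set."""
    return (block // stripe_blocks) % n_disks

def spanned_disk(block, disk_sizes):
    """Spanned/concat: disks are glued end to end; a block lives on
    whichever disk its logical offset falls into."""
    for disk, size in enumerate(disk_sizes):
        if block < size:
            return disk
        block -= size
    raise ValueError("block past end of volume")

# A 16-block sequential read on 4 striped disks touches all of them...
striped = {striped_disk(b, 4) for b in range(16)}
# ...while on a spanned volume it can sit entirely on the first disk,
# leaving the other disks free to service unrelated requests.
spanned = {spanned_disk(b, [100, 100, 100, 100]) for b in range(16)}
print(striped, spanned)   # {0, 1, 2, 3} {0}
```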
# ? Sep 1, 2019 07:44 |
DrDork posted:On Windows, at least, you can use Storage Spaces to do more or less exactly that: JBOD but addressable as a single drive letter. Paul MaudDib posted:I've been wondering what the merits of RAID0 vs a spanned volume are. Obviously in both cases if you lose a drive you lose the entire array, but I guess in principle RAID reads are aligned and every read hits every disk, while spanned could allow it to service multiple requests on different drives depending on how the data scatters out across the filesystem? And spanned can be easily grown by adding disks while RAID stripes can't really be rewritten. Also, gconcat(8), a.k.a. GEOM CONCAT in FreeBSD, can be booted from just fine, as long as you place the firmware-compatible boot-block on the firmware's first disk (i.e. what the BIOS calls C: and what UEFI calls disk0) - so it's a question of whether Windows 10 still uses NTLDR62 or has been updated to support Storage Spaces/ReFS. BlankSystemDaemon fucked around with this message at 12:07 on Sep 1, 2019
|
# ? Sep 1, 2019 12:02 |
|
Paul MaudDib posted:I've been wondering what the merits of RAID0 vs a spanned volume are. Obviously in both cases if you lose a drive you lose the entire array, but I guess in principle RAID reads are aligned and every read hits every disk, while spanned could allow it to service multiple requests on different drives depending on how the data scatters out across the filesystem? And spanned can be easily grown by adding disks while RAID stripes can't really be rewritten. It really depends on how the spanning system has been implemented. For Storage Spaces with zero redundancy, for example, files are written to a single disk in the array, so compared to RAID0 you get lower performance, and mostly are just getting the convenience of not having to deal with multiple drive letters. But you explicitly do NOT lose the entire array if you lose a disk--just what was on that disk. Other systems allow different trade-offs between speed, redundancy, and size.
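A toy sketch of that file-granular placement — round-robin purely for illustration, so not the actual Storage Spaces allocator, but the same failure semantics:

```python
def place_files(files, n_disks):
    """Whole files land on one disk each; nothing is striped across disks."""
    return {name: i % n_disks for i, name in enumerate(files)}

files = [f"file{i}" for i in range(12)]
placement = place_files(files, 4)

dead_disk = 2
lost = sorted(f for f, d in placement.items() if d == dead_disk)
kept = sorted(f for f, d in placement.items() if d != dead_disk)
# Only the files that lived on the dead disk are gone; with block-level
# striping (RAID0) the whole pool would be unrecoverable instead.
print(f"lost {len(lost)} files, kept {len(kept)}")   # lost 3 files, kept 9
```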
|
# ? Sep 1, 2019 15:31 |
|
If you want a decent logical RAID, meaning you can read each drive individually and get files, StableBit Drivepool is amazing and I still love it. Running mine with 32TB of Hitachi NAS drives.
|
# ? Sep 1, 2019 16:09 |
|
I switched to unRaid ages ago and never looked back. It's not RAID at all, it's basically JBOD with parity, which is excellent for a shitload of reasons.
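The "JBOD with parity" idea boils down to XOR. A toy byte-level sketch — real parity arrays like unRaid's compute this across whole devices, not 4-byte strings:

```python
def parity_of(blocks):
    """Parity block: byte-wise XOR of one same-sized block from each data drive."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def rebuild(surviving_blocks, parity):
    """XOR parity with every surviving block to recover the one that died."""
    return parity_of(list(surviving_blocks) + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]     # one stripe across three data drives
parity = parity_of(data)               # lives on the dedicated parity drive

# Drive 1 dies; its contents come back from the other drives plus parity.
recovered = rebuild([data[0], data[2]], parity)
print(recovered)   # b'BBBB'
```

This is also why single parity only covers one failed drive at a time: lose two, and the XOR no longer contains enough information to solve for either.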
|
# ? Sep 1, 2019 16:32 |
|
Yeah, for consumer use, where the performance of even a single SSD is more than sufficient for just about anything, a JBOD-based system makes a lot more sense than RAID0.
|
# ? Sep 1, 2019 16:57 |
|
D. Ebdrup posted:I believe they're called CONCAT arrays. Right, you only lose what was on that disk... but files can be scattered across multiple disks at a block level, and likely will be for performance reasons, so you will lose half of every file on average. Sadly Storage Spaces cannot be booted like ZFS spans, or so I've read.
|
# ? Sep 1, 2019 17:02 |
Paul MaudDib posted:Right, you only lose what was on that disk... but files can be scattered across multiple disks at a block level, and likely will be for performance reasons, so you will lose half of every file on average.
|
|
# ? Sep 1, 2019 17:14 |
|
DrDork posted:On Windows, at least, you can use Storage Spaces to do more or less exactly that: JBOD but addressable as a single drive letter. Is the big difference between Storage Spaces and RAID a file vs. block level thing? redeyes posted:If you want a decent logical RAID, meaning you can read each drive individually and get files, StableBit Drivepool is amazing and I still love it. Running mine with 32TB of Hitachi NAS drives. Oh my gosh, I had totally forgotten about this. It's what we used way back when Windows Home Server came out with a new version that didn't support Drive Extender or whatever. I might still have a license lying around...
|
# ? Sep 3, 2019 04:53 |
|
While I was stress testing my old 4790K system with an 850 Evo prior to selling it, I was reminded by just how fast it booted from hitting the power button to the Windows desktop: 14 secs. My 8700K needed the same 14 secs to see the POST screen, and another 13 secs to the desktop despite having an EX920 NVMe OS drive.
|
# ? Sep 3, 2019 12:06 |
|
Palladium posted:While I was stress testing my old 4790K system with a 850 Evo prior to selling it, I was reminded by just how fast it booted from hitting the power button to the Windows desktop: 14 secs. My 8700K needed the same 14 secs to see the POST screen, and another 13 secs to the desktop despite having a EX920 NVMe OS drive.
|
|
# ? Sep 3, 2019 12:43 |