Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

D. Ebdrup posted:

How does that work out against just sharing the files with Samba on the machine that hosts the iSCSI target?
Why pools, though? The whole point of ZFS is pooled storage.

Originally it was because my backup software didn't like old school opensolaris. That and all my poo poo is domain joined, and getting opensolaris to play nice with it via samba was like slowly, lovingly cramming glass in your dick.

All in all the solution works shockingly well. I get bitlocker'd storage pools, easy to manage permissions, and spend 95% of my computer janitor time dinking with stuff in windows vs. trying to remember how to set ACLs in solaris. It's also remarkably fire and forget. Last time I had to do anything to it was the last time there was a power outage, and that just consisted of making sure the boxes were turned back on in the correct order.


BlankSystemDaemon
Mar 13, 2009



Methylethylaldehyde posted:

Originally it was because my backup software didn't like old school opensolaris. That and all my poo poo is domain joined, and getting opensolaris to play nice with it via samba was like slowly, lovingly cramming glass in your dick.

All in all the solution works shockingly well. I get bitlocker'd storage pools, easy to manage permissions, and spend 95% of my computer janitor time dinking with stuff in windows vs. trying to remember how to set ACLs in solaris. It's also remarkably fire and forget. Last time I had to do anything to it was the last time there was a power outage, and that just consisted of making sure the boxes were turned back on in the correct order.
I mean, that's fair - but setfacl and getfacl aren't exactly black magic that requires summoning of the elder gods. That's reserved for systemd. :v:

Crunchy Black
Oct 24, 2017

by Athanatos

Duck and Cover posted:

This isn't really an example of one person doing something worse than others have done, more me getting sick of reading it yet again. I certainly could have ignored it; I chose not to, in an attempt to maybe change the future behavior of not just Atomizer, but others.


No, he didn't. The only ones who did were Paul MaudDib, oh, and Axe-man with the calculator post. People assumed what I was asking and tried to answer that, which happens partly because I gave them the chance to. I know my options; I was just looking to weigh them properly, not have options listed for me or people giving their opinion on what's best.

Whatever, I've decided that the risk is worth the extra space. Well, until I change my mind; the hard drive is still getting its sector check. 8TB is probably enough extra though... hmmmm.

Please back up the poo poo that you care about offsite, so that when you have an issue you don't come in here screaming that we misled you or something. Thanks.

Methylethylaldehyde posted:

getting opensolaris to play nice with it via samba was like slowly, lovingly cramming glass in your dick.


This gave me nightmares on multiple levels but I also laughed at it so carry on you crazy bastard.

BlankSystemDaemon
Mar 13, 2009



Crunchy Black posted:

Please back up the poo poo that you care about offsite, so that when you have an issue you don't come in here screaming that we misled you or something. Thanks.
But his anims!

Duck and Cover
Apr 6, 2007

Crunchy Black posted:

Please back up the poo poo that you care about offsite, so that when you have an issue you don't come in here screaming that we misled you or something. Thanks.


This gave me nightmares on multiple levels but I also laughed at it so carry on you crazy bastard.

Wait, you're saying my onsite copies aren't immune to fire? gently caress, I had no idea! Thanks, you are so smart! Anyway, I went with 1 backup for a 5-hard-drive system. Thanks for your ever so helpful post!

This was only caused by me using the word backup instead of the proper term RAID. Smart (at least relatively) folks realized what I was asking and responded. Dumb folks decided it was time to educate me! Even dumber folks ignored the posts where I made it loving clear I understood, and kept going even though the thread moved on.

D. Ebdrup posted:

But his anims!

If my anims get deleted my waifus die.

(USER WAS PUT ON PROBATION FOR THIS POST)

BlankSystemDaemon
Mar 13, 2009



Duck and Cover posted:

If my anims get deleted my waifus die.
I'm only an old-school weeb so I don't know for sure, but I think at this point someone is supposed to say that your waifu is trash?

I think the reason you got such push-back on using the word backup wrongly when you meant RAID is that all too many people do assume that RAID is backup, and we can't read your mind to know that you know that it isn't and that you just typo'd.
And it's probably also because people in technical fields tend to be specific to avoid exactly such misunderstandings, because the English language even in its technical sub-fields can be incredibly easy to misunderstand.

Not Wolverine
Jul 1, 2007
My current file server is running Xubuntu 18.04 with mdadm and ext4. I know this is probably not the smartest setup, but it is what it is. It seems to be meeting my needs just fine, but I feel like it could be better. Would ZFS actually improve my life much over mdadm and ext4?

I believe I originally chose not to go with ZFS because the previous motherboard in the NAS only had 4GB of RAM, which technically met the 1GB-per-TB recommendation, but 4GB was still often less than the suggested minimum. Would it have been just fine? Maybe, maybe not; I just knew it was a confusing mess with too many different opinions on how much RAM was necessary, so I bailed on ZFS. Now I have 16GB of RAM and 4x 2TB drives, and I think I could go up to 32GB of RAM if necessary, but I don't want to waste money.

If anything, my only concern about going with FreeNAS now is that I've read talk that BSD's ZFS implementation is no longer the most actively developed and that ZFS on Linux (ZoL) is the new hotness, to the point where FreeBSD is now basing their ZFS on ZoL. Does FreeNAS still have the best ZFS implementation? If ZoL is better, is there any real downside to staying with Ubuntu, or should I switch to something like CentOS? I think the FreeNAS web GUI alone is probably why I want to change, but I still kinda like having a full Linux GUI on my file server, since I already keep an old keyboard and monitor connected to it.

IOwnCalculus
Apr 2, 2003





ZFS is worth it over md just for the file integrity alone. I've had mdraid poo poo arrays for a single error during a rebuild, whereas ZFS will flag corrupted files and move on.

1GB/TB is overkill and was probably based on how to make ZFS survive in the enterprise world despite its shortcomings. I'm well under that and my system still has ~20% RAM free at any given time.
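
(For the 4x 2TB setup above, the whole thing is only a handful of commands - a rough sketch, assuming the disks show up as /dev/sdb through /dev/sde and you call the pool "tank"; use /dev/disk/by-id paths for anything you intend to keep:)

code:

# one disk's worth of redundancy, roughly 6TB usable - "tank" is a hypothetical pool name
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# read back and verify every block against its checksum
zpool scrub tank
# per-device error counters; -v also lists any files found to be corrupt
zpool status -v tank

A scrub is roughly the equivalent of an mdadm check, except ZFS can tell which copy is the good one because every block carries a checksum.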

phongn
Oct 21, 2006

IOwnCalculus posted:

ZFS is worth it over md just for the file integrity alone. I've had mdraid poo poo arrays for a single error during a rebuild, whereas ZFS will flag corrupted files and move on.

1GB/TB is overkill and was probably based on how to make ZFS survive in the enterprise world despite its shortcomings. I'm well under that and my system still has ~20% RAM free at any given time.
It was a rule of thumb in an era of relatively small amounts of RAM and for business/enterprise loads that have lots of people hammering the NAS in a fashion that looks like huge amounts of random I/O. The real rule for enterprise loads these days is basically "as much RAM as you can afford and put into the server."

For home use? Streaming media loads don't even get loaded into ARC anyways!
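
(If you want to make that explicit, there's a per-dataset knob for it - a sketch, assuming a media dataset called tank/media:)

code:

# cache only metadata (not file data) for this dataset in the ARC
zfs set primarycache=metadata tank/media
# confirm the setting
zfs get primarycache tank/media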

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry

D. Ebdrup posted:

I'm only an old-school weeb so I don't know for sure, but I think at this point someone is supposed to say that your waifu is trash?

I think the reason you got such push-back on using the word backup wrongly when you meant RAID is that all too many people do assume that RAID is backup, and we can't read your mind to know that you know that it isn't and that you just typo'd.
And it's probably also because people in technical fields tend to be specific to avoid exactly such misunderstandings, because the English language even in its technical sub-fields can be incredibly easy to misunderstand.

Yeah, you have to remember, where I work I hear people say this every day, followed by "but I can't get that data back!"

At our company we have fired people for saying RAID is a backup.

Duck and Cover
Apr 6, 2007

Oh yeah, and when I was talking about RAID I was talking about Synology Hybrid RAID, not that it mattered for what I was asking.

BlankSystemDaemon
Mar 13, 2009



Crotch Fruit posted:

my only concern about going with FreeNAS now is that I've read talk that BSD's ZFS implementation is no longer the most actively developed
I just wanna clear up this misconception, because it's demonstrably untrue: the FreeBSD fork of the Illumos code is still being actively developed, and it's even the subject of benchmarking and 100+-core scalability work that a (former?) Red Hat Linux kernel developer is now doing on FreeBSD.
Even ignoring all that, ZFS in FreeBSD is still scheduled to receive the full 5 years of support from whenever 13-STABLE is branched (unless the new OpenZFS code magically gets imported into HEAD before then, which I don't regard as likely), and that isn't going to happen for at least another year - so there will be at least 6 more years before any action need be taken.

Also, it's worth reiterating that ZoL won't be taking over in any sense but the most Linux-fascistic way (and again, ZFS isn't part of the mainline kernel, so it's up to distributions to get everything properly implemented); rather, it's being repurposed as the repository of record for the OpenZFS project and will at some point actually be the OpenZFS repository in name and everything.

IOwnCalculus posted:

1GB/TB is overkill and was probably based on how to make ZFS survive in the enterprise world despite its shortcomings. I'm well under that and my system still has ~20% RAM free at any given time.
It's not as if there aren't good reasons to observe the 1GB/1TB ratio, but it's not a hard requirement for the typical prosumer workloads we work with in this thread.
On the other end of the scale, I've run ZFS on as little as 512MB RAM (which is less than the official minimum requirement for FreeBSD). The only thing ZFS really cares about is that it's a 64-bit platform, because on a 32-bit platform a lot of the (atomic) operations have to be split across more instructions, rendering them non-atomic and thus breaking one of the core tenets of ZFS - namely that you can always roll back a transaction group (preferably with the pool in read-only mode) and send|receive your way to recovering your data.
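
(For the curious, that recovery path looks roughly like this - a sketch only, assuming a pool named "tank", an existing recursive snapshot called @lastgood, and a second pool to receive into:)

code:

# import read-only; -F is recovery mode, which rewinds to an earlier transaction group if needed
zpool import -o readonly=on -F tank
# replicate the dataset tree to another pool, then rebuild from that copy at your leisure
zfs send -R tank@lastgood | zfs receive -F backup/tank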

BlankSystemDaemon fucked around with this message at 22:57 on Oct 29, 2019

Shaocaholica
Oct 29, 2002

Fig. 5E
Just snagged a few of these for cheap. Probably the nicest direct-attach enclosure I've had. Completely tool-less disassembly with no bits to lose. Built-in PSU. They make newer ones with USB 3/USB-C.

https://www.youtube.com/watch?v=WgqEv7lMH_M

Crunchy Black
Oct 24, 2017

by Athanatos

D. Ebdrup posted:

Even ignoring all that, ZFS in FreeBSD is still scheduled to receive the full 5 years of support from whenever 13-STABLE is branched (unless the new OpenZFS code magically gets imported into HEAD before then, which I don't regard as likely), and that isn't going to happen for at least another year - so there will be at least 6 more years before any action need be taken.

I'm hopeful, given Ubuntu's adoption and support of ZFS firming up, that we'll have widespread adoption, support, and development for a long time.

Duck and Cover posted:

Wait, you're saying my onsite copies aren't immune to fire? gently caress, I had no idea! Thanks, you are so smart! Anyway, I went with 1 backup for a 5-hard-drive system. Thanks for your ever so helpful post!

This was only caused by me using the word backup instead of the proper term RAID. Smart (at least relatively) folks realized what I was asking and responded. Dumb folks decided it was time to educate me! Even dumber folks ignored the posts where I made it loving clear I understood, and kept going even though the thread moved on.


If my anims get deleted my waifus die.

Echoing D. Ebdrup, for someone whose posts in this thread are very focused on pedantry, you tend to not be very concise. Please do with your data as you wish but don't get mad when we tell you that what you're doing isn't scalable or a best practice.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Have you guys had SSDs die? What's expected lifespan for consumer SSDs? This guy kicked the bucket this week.

[attachment: SMART screenshot of the dead SanDisk SSD]

H110Hawk
Dec 28, 2006

Twerk from Home posted:

Have you guys had SSDs die? What's expected lifespan for consumer SSDs? This guy kicked the bucket this week.



2 petabytes of NAND writes against that amount of host writes suggests offset issues, though I haven't looked deeply into it. Yes, I have had thousands die, but in datacenter workloads, and mostly as OS drives, not heavily-hit ones. (The heavy hitters nuked out way faster.)

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Yeah, you've got something wonky going on there to get that sort of write amplification.

That said, for a consumer 240GB drive, TBW specs are usually in the 75-150 TBW range, with SanDisk trending to the lower end of that range. You're at 153TB host and, as said, over 2PB NAND written. So either way you are between "a bit more than specced" and "Jesus, man, wtf?" depending on how you look at it, and should frankly have expected to have it die quite a while ago.

You're clearly using that drive for some business/enterprise workload: shell out a few extra bucks for a larger enterprise-level drive next time and it should last a good bit longer. A 512GB 970 Pro, for example, is warrantied to 600TBW, or probably 6x what that SanDisk was engineered to hit.
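
(For reference, the write amplification factor is just NAND writes divided by host writes - with the rough numbers above:)

code:

# ~2 PB of NAND writes vs ~153 TB of host writes
awk 'BEGIN { printf "write amplification ~= %.1fx\n", 2000 / 153 }'

That comes out to something like 13x, which is far outside what a well-aligned consumer workload should produce.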

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

H110Hawk posted:

2 petabytes of NAND writes against that amount of host writes suggests offset issues, though I haven't looked deeply into it. Yes, I have had thousands die, but in datacenter workloads, and mostly as OS drives, not heavily-hit ones. (The heavy hitters nuked out way faster.)

Yeah, I was curious about that but don't know how to address it in the future. This disk had a couple of Hyper-V VMs, including 1 Win10 and 2 or 3 Ubuntu. It was running a Plex server, a Minecraft server, a UniFi wifi controller, a qBittorrent scratch area, a couple of low-traffic Node.js and PHP websites, and a TICK stack for monitoring & dashboards.

I have no idea which of those was writing enough to hit even 150TBW, because all of them were extremely low utilization. I had configs and database backups and didn't lose anything I will miss, but it was still annoying.

Edit: I didn't need Windows, so I've already got everything set up on Ubuntu 18.04. I'll monitor to see what's writing so much, assuming it wasn't something related to Hyper-V / Windows in the first place.
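
(For the monitoring part, accumulated per-process write counters should point at the culprit - a rough sketch, assuming the iotop and sysstat packages are installed:)

code:

# cumulative I/O per process, showing only processes that have actually done I/O
sudo iotop -ao
# alternatively, per-process disk read/write rates sampled over 60 seconds
pidstat -d 60 1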

DrDork posted:

You're clearly using that drive for some business/enterprise workload: shell out a few extra bucks for a larger enterprise-level drive next time and it should last a good bit longer. A 512GB 970 Pro, for example, is warrantied to 600TBW, or probably 6x what that SanDisk was engineered to hit.

I replaced it with an 840 Evo out of my junk drawer and I am preparing to make the next drive failure even faster to recover from. :colbert:

Twerk from Home fucked around with this message at 17:21 on Oct 30, 2019

H110Hawk
Dec 28, 2006

Twerk from Home posted:

Yeah, I was curious about that but don't know how to address it in the future. This disk had a couple of Hyper-V VMs, including 1 Win10 and 2 or 3 Ubuntu. It was running a Plex server, a Minecraft server, a UniFi wifi controller, a qBittorrent scratch area, a couple of low-traffic Node.js and PHP websites, and a TICK stack for monitoring & dashboards.

I have no idea which of those was writing enough to hit even 150TBW, because all of them were extremely low utilization. I had configs and database backups and didn't lose anything I will miss, but it was still annoying.

Edit: I didn't need Windows, so I've already got everything set up on Ubuntu 18.04. I'll monitor to see what's writing so much, assuming it wasn't something related to Hyper-V / Windows in the first place.


I replaced it with an 840 Evo out of my junk drawer and I am preparing to make the next drive failure even faster to recover from. :colbert:

What caused it is that your host OS (Windows?) formatted the disk such that filesystem block boundaries were offset from the NAND ones. Two examples:

1. Host OS formats 512b blocks on a 4k NAND disk. Every "one" write is amplified by (4096 / 512) = 8.0x.
2. Host OS formats 4k blocks on a 4k NAND disk but starts the partition at byte 1k (out of 240,000,000kb) when it should have started at byte 4k. Now every write costs 2.0x because it crosses a NAND boundary. (The closer you are to 4k increments, the lower this multiplier, but 1 bit changed will still result in 8k written.)

It sorta looks like you had the former. Ubuntu as a host OS should handle it a LOT better, but I would check this out right at the start and make sure everything maps through correctly. gparted has a little helper, as I recall, which can help you align things from the get-go. Make sure your VM OS disks are the correct block size as well; because you're using files, they will self-align. Also, for your butttorrents and Plex storage, consider an HDD.

As the other poster said, go with a higher-quality disk, and keep it under 100% utilization to help with wear leveling. If you were running it at 90+% utilization the whole time, it's going to have a hard time shuffling data to keep wear down on "hotspots." I bet if you looked behind the curtain you'd find you've been remapping blocks for a year or more at this point and the drive finally ran out of spares. High-wear disks keep huge reserves (think 20%) that your OS cannot see, for remapping dead blocks. You can read this out of your disk with a utility, as I recall; it might be disk- and OS-specific.
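
(One such utility on Linux is smartctl from smartmontools - a quick sketch, assuming the SSD shows up as /dev/sda; the attribute names for wear, spare blocks and host writes vary by vendor:)

code:

# vendor SMART attribute table (wear level, reallocated/spare blocks, total writes)
sudo smartctl -A /dev/sda
# fuller report including device statistics, where the drive supports them
sudo smartctl -x /dev/sda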

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
people laugh at me when I tell them to keep an eye on write amplification, because that's one case where the old saw about "consumers will never write enough to wear out a SSD" isn't true.

153TBW is still a lot but you can still hit that with, say, Shadowplay/Relive Instant Replay Capture functionality (where you're continuously running a tape loop of your game sessions at 50 Mbit/s) or that kind of thing. I've always recommended you put that on HDD if you're doing that.

Running full all the time is another thing that can increase write amplification.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

H110Hawk posted:

What caused it is that your host OS (Windows?) formatted the disk such that filesystem block boundaries were offset from the NAND ones. Two examples:

1. Host OS formats 512b blocks on a 4k NAND disk. Every "one" write is amplified by (4096 / 512) = 8.0x.
2. Host OS formats 4k blocks on a 4k NAND disk but starts the partition at byte 1k (out of 240,000,000kb) when it should have started at byte 4k. Now every write costs 2.0x because it crosses a NAND boundary. (The closer you are to 4k increments, the lower this multiplier, but 1 bit changed will still result in 8k written.)

It sorta looks like you had the former. Ubuntu as a host OS should handle it a LOT better, but I would check this out right at the start and make sure everything maps through correctly. gparted has a little helper, as I recall, which can help you align things from the get-go. Make sure your VM OS disks are the correct block size as well; because you're using files, they will self-align. Also, for your butttorrents and Plex storage, consider an HDD.

Thanks, I appreciate this. Hyper-V was the host, with Windows and Ubuntu VMs installed on it. I didn't think about VM OS disk block sizes at all, and there's a separate long-term storage system that's on hard drives. I just wanted to use the SSD as some fast scratch space.

This drive should have been doing barely anything, so 150TBW does not add up with its extremely light usage. It's a mystery to me why it wrote so much. I also kept it under 80% full.

Shaocaholica
Oct 29, 2002

Fig. 5E
Is block alignment something that should most likely be fine when using default settings in most built-in OS partitioning tools? So when it's incorrect, it's usually user error from trying to goof around with advanced overrides, right? I'm talking about modern tools, not something partitioned 15 years ago in Windows XP.
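
(For what it's worth, checking an existing disk only takes a moment - a rough sketch for Linux, assuming the partition in question is /dev/sda1:)

code:

# parted's built-in check reports whether partition 1 meets the optimal alignment
sudo parted /dev/sda align-check optimal 1
# or check by hand: the start sector should be divisible by 2048 (i.e. 1MiB-aligned)
echo $(( $(cat /sys/block/sda/sda1/start) % 2048 ))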

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Paul MaudDib posted:

people laugh at me when I tell them to keep an eye on write amplification, because that's one case where the old saw about "consumers will never write enough to wear out a SSD" isn't true.

153TBW is still a lot but you can still hit that with, say, Shadowplay/Relive Instant Replay Capture functionality (where you're continuously running a tape loop of your game sessions at 50 Mbit/s) or that kind of thing. I've always recommended you put that on HDD if you're doing that.

Running full all the time is another thing that can increase write amplification.

Ehhhh. For a normal consumer doing normal consumer things, it's still super hard to kill a drive. Even Shadowplay at 50Mbps works out to ~4,500hrs to hit 100TBW, and a 512GB 970 Evo is specced at 300TBW with actual endurance several multiples higher than that. Write amplification can happen, but with Win7/8/10 you have to muck with stuff intentionally to cause those issues, at which point you should know better, anyhow.

VMs and such are different issues, but also not something normal consumers ever poke at.

The point, really, is that a normal consumer will not produce 100+ TB/y. Most would struggle to hit 10 TB/yr. Whatever this guy was doing to hit 150+TB is beyond the scope of "normal consumer behavior," though I'd be real interested to see what he can turn up as to the root cause of those writes, since nothing immediately jumps off the page at me.
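
(The ~4,500 hour figure is just rate math, for anyone who wants to check it:)

code:

# 50 Mbit/s is 6.25 MB/s of writes; hours of continuous recording to reach 100 TB
awk 'BEGIN { printf "%.0f hours\n", 100e6 / (50 / 8) / 3600 }'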

H110Hawk
Dec 28, 2006

Paul MaudDib posted:

people laugh at me when I tell them to keep an eye on write amplification, because that's one case where the old saw about "consumers will never write enough to wear out a SSD" isn't true.

Our friend here isn't a consumer, and I still laugh at you. :v:



Which tool is this by the way? I want to check my Win 7 disk.

cage-free egghead
Mar 8, 2004

H110Hawk posted:

Which tool is this by the way? I want to check my Win 7 disk.

Looks to be CrystalDiskInfo.

So I purchased a server to use as a homelab, a DL380 G7, and didn't read about the horrors of trying to use an OS like Unraid or FreeNAS with the P410i controller. Spent about 4 or 5 hours fiddling with the controller and wondering why I couldn't present my disks as JBOD for ZFS. Come to find out that unless you have a cache memory module or a different HBA, you can only do RAID 1 or 0! What joy! Luckily there was a pretty common LSI controller going cheap, and I am eagerly awaiting its arrival.

I guess a workaround to get it working with Unraid is to just make every disk its own single-drive RAID 0 array and then throw it into your array in Unraid. Good luck ever using that disk in another system, then.
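
(For anyone else stuck on a P410i in the meantime, the single-drive RAID 0 dance is done from HP's CLI - roughly this, from memory, so treat it as a sketch; the tool ships as hpacucli, hpssacli or ssacli depending on vintage, and the real drive IDs come from the first command:)

code:

# list the physical drives the controller sees (note the port:box:bay IDs)
hpacucli ctrl slot=0 pd all show
# wrap a single physical drive in its own RAID 0 logical drive
hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0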

H110Hawk
Dec 28, 2006

cage-free egghead posted:

Looks to be CrystalDiskInfo.

... this? https://crystalmark.info/en/software/crystaldiskinfo/ Does it also install animes on my computer? (I recognize the name from back in the day, but oh god the website. Is this the real one or the viruses and cryptolocker one?)

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
I dug up my invoice for that dead drive: $68 from Amazon in the first half of 2015. Not a bad lifespan considering the price, and hopefully Ubuntu as host OS will reduce writes going forward.

How would I have hosed up block sizing or alignment? I just installed Win10 with defaults, added hyper-v role later, and used defaults everywhere possible. This time around, I just installed Ubuntu 18.04 server with defaults.

MagusDraco
Nov 11, 2011

even speedwagon was trolled

H110Hawk posted:

... this? https://crystalmark.info/en/software/crystaldiskinfo/ Does it also install animes on my computer? (I recognize the name from back in the day, but oh god the website. Is this the real one or the viruses and cryptolocker one?)

It looks like the real one to me. Grab the standard edition if you don't want the animes

H110Hawk
Dec 28, 2006

MagusDraco posted:

It looks like the real one to me. Grab the standard edition if you don't want the animes

Thanks. Looks like I don't get R/W/NAND info.

Edit: and this is what I get for using the attachment feature. Sorry about that.

[attachment: CrystalDiskInfo screenshot of the Crucial M4]

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Ha, I had more than 36,000 power on hours and thirty-something power on count.

IOwnCalculus
Apr 2, 2003





Yeah, given how cheap SATA SSDs are at 0.5TB and below, meh. My worst-case scenario drive right now in that regard is a Sandisk 480GB with ~8400 power-on hours. 105TB total writes, 143TB NAND writes, and this thing has spent a lot of time near full.

Not all SSDs report GBW. My Kingston 96GB doesn't.

edit: How the hell do you only have 420 :2bong: hours on a Crucial M4? That thing is ancient.

edit 2: does that thing still have the firmware that has the power-on hour bug

IOwnCalculus fucked around with this message at 20:30 on Oct 30, 2019

Roundboy
Oct 21, 2008
Ugh, after finally completing my HDD migration, I am kicking myself for seemingly easy choices I made long ago.

The WD Easystore drives are taped up and running, but somehow I lost one during the server re-rack. It's already added to the LVM; I just need to extend it when I spin it back up.
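
(The extend itself is the usual LVM sequence - a sketch with hypothetical names: the re-added disk as /dev/sdX1, a volume group vg0, and an ext4 logical volume called data:)

code:

sudo pvcreate /dev/sdX1                    # skip if the disk is already a PV
sudo vgextend vg0 /dev/sdX1                # add it back into the volume group
sudo lvextend -l +100%FREE /dev/vg0/data   # grow the LV over the new free extents
sudo resize2fs /dev/vg0/data               # grow ext4 to match (works online)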

My 2-post 12U rack has less-than-perfect threaded mount holes. This makes the heavy-ish 2U server hard to rack on and off; I have a shelf that helps now, but I should have gone with cage-nut mounts instead.

The case holding my little server has 3x HDD cages, but upon reading the Amazon page more carefully, I see that it only holds 4x HDDs, meaning the middle cage has screw holes for 2 drives but the other cages only have slots for 1, even though they are the same height. VERY annoying moving cages around to keep HDDs and power in check.

Already planning the next upgrade in a couple of years: a proper case, hot-swap HDD bays, and either enough SATA ports or enough PCI space to add cards. My poor little i7 microATX mobo only has 4 ports and 2 DIMM slots. Looking into a big server bump to whatever the 'cheap' chip is at the time, plus grabbing a bunch of new drives to play with RAID-Z2.

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



Friends of mine had an SSD die on them over the weekend. It was only 128GB; I think they pretty much filled it up right away, and it lasted about 9 months. Not 100% sure if it wore out, but it came up blank after a bluescreen, and when I tried to run recovery tools on it, the log was a solid wall of read errors.
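
(If they want another go at it, the usual tool for a drive throwing read errors is GNU ddrescue - a sketch, assuming the dying drive shows up as /dev/sdX and there's somewhere to put a full image:)

code:

# image the drive, skipping bad areas on the first pass and retrying them 3 times;
# the map file lets you stop and resume
sudo ddrescue -d -r3 /dev/sdX ssd.img ssd.map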

MagusDraco
Nov 11, 2011

even speedwagon was trolled

H110Hawk posted:

Thanks. Looks like I don't get R/W/NAND info.

Edit: and this is what I get for using the attachment feature. Sorry about that.



That 98% is your wear % for the nand on the SSD. It's most likely fine still. The M4 hides total writes or something. Hard to tell when most links talking about it go to crucial's old forums. edit: What I've got below is the most I can pull out of the wayback machine

quote:

I think I should correct the post above. After using the M4 for a much longer period of time, I would say the "AD Wear Leveling Count" value multiplied by drive capacity is roughly equal to "Total NAND Writes".
I believe my former opinion was based on data gathered over too short a period of time at the very beginning of drive use. There was no visible write amplification, and I had the impression that AD was related to Writes from Host.

Now I would definitely say it is related to Total NAND Writes :-)


edit 2: your AD raw value is 3F in hex so 63 * 256 gigs so roughly 16 TB of writes but only like 2% of wear on the flash media? I dunno how accurate that is but the drive is rated for at least 72 TB writes.
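
(The arithmetic, for anyone checking - assuming it really is a 256GB drive, as above:)

code:

printf '%d\n' 0x3F        # 63: the raw AD value in decimal
echo $(( 0x3F * 256 ))    # 16128 GB written, i.e. roughly 16 TB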

MagusDraco fucked around with this message at 21:19 on Oct 30, 2019

H110Hawk
Dec 28, 2006

IOwnCalculus posted:

edit: How the hell do you only have 420 :2bong: hours on a Crucial M4? That thing is ancient.

edit 2: does that thing still have the firmware that has the power-on hour bug

Easy: It's wrong. I've been using this computer as a daily driver for work for well over a year now, not to mention the countless hours spent in Path of Exile prior to that, and all the overnight handbrake marathons before nvenc was added, but there was a time where the computer sat off for days to weeks on end. (Moved to the burbs, had a baby, no more time for path of exile but still worked in an office.) Once I saw the :2bong: time I laughed and quickly took a screen shot.

Firmware: No. It's upgraded past the 000x series buggy stuff. It's in fact running the latest (from 2013).

MagusDraco posted:

That 98% is your wear % for the nand on the SSD. It's most likely fine still. The M4 hides total writes or something. Hard to tell when most links talking about it go to crucial's old forums. edit: What I've got below is the most I can pull out of the wayback machine

edit 2: your AD raw value is 3F in hex so 63 * 256 gigs so roughly 16 TB of writes but only like 2% of wear on the flash media? I dunno how accurate that is but the drive is rated for at least 72 TB writes.

And smart values count down not up right?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

H110Hawk posted:

And smart values count down not up right?

Depends. There's no actual standard for the implementation or interpretation of a lot of SMART values, so it's a bit of a poo poo show for stuff like that.

Shaocaholica
Oct 29, 2002

Fig. 5E
Why not use 2 data entries, 'value' and 'threshold'? Are they that cheap?

H110Hawk
Dec 28, 2006

Shaocaholica posted:

Why not use 2 data entries, 'value' and 'threshold'? Are they that cheap?

It was a different time back then. It hasn't improved much. (Newer versions are less dumb, but truly, until SSDs most of the failure conditions were pretty binary. In an HDD a single remapped sector means that the disk is as good as dead. In an SSD it's Tuesday.)

MagusDraco
Nov 11, 2011

even speedwagon was trolled

H110Hawk posted:


And smart values count down not up right?

Hell if I know for that thing. Anyway, I had to use the Wayback Machine to figure out your rough NAND writes on the M4 - old Crucial forums and random other forums (XtremeSystems).

edit: fixed first link


IOwnCalculus
Apr 2, 2003





H110Hawk posted:

Firmware: No. It's upgraded past the 000x series buggy stuff. It's in fact running the latest (from 2013).

Maybe that was how they "fixed" it, by forcing it to wrap back to zero hours. I wonder if I still have my 64GB M4 kicking around somewhere.
