|
Diviance posted:According to this, Windows 8/8.1 does perform defrag on SSDs when System Restore is enabled. Huh. The more you know.
|
# ? Jan 19, 2015 07:43 |
|
wooger posted:I just won a Thinkpad x230 on ebay. It's coming with a 320GB HDD which I might remove: it has an mSATA slot.

The firmware bug should be fixed. To be sure, in case you get an old drive from the warehouse, there's a performance restoration tool for the 840 EVO on Samsung's website that takes a few minutes to run if it detects the old firmware on your drive. It may be easiest to run the Windows version initially when you get the laptop, but there's also a Linux version (considering your drive will be new with no data on it, just installing the new firmware itself would probably be fine in that case anyway). You can get the MX100, but Crucial's support is apparently poo poo if something ends up being wrong with it.
|
# ? Jan 19, 2015 08:47 |
|
Diviance posted:According to this, Windows 8/8.1 does perform defrag on SSDs when System Restore is enabled. Shouldn't this be automatically handled by the wear leveling algorithms? Bits are going to be constantly shuffled no matter what you do, so the wear leveling should be designed to prevent excessive fragmentation.
|
# ? Jan 19, 2015 09:05 |
|
Don Lapre posted:Windows 7 and Windows 8 both do TRIM instead of defrag on SSDs. Is TRIM something you have to do manually like a defrag? I thought it was a background feature?
|
# ? Jan 19, 2015 09:28 |
|
Megasabin posted:Is TRIM something you have to do manually like a defrag? I thought it was a background feature? Defrags are scheduled to run in the background by default in Windows 7/8. If the system detects a drive as an SSD, it will run a TRIM pass instead.
|
# ? Jan 19, 2015 09:34 |
|
isndl posted:Shouldn't this be automatically handled by the wear leveling algorithms? Bits are going to be constantly shuffled no matter what you do, so the wear leveling should be designed to prevent excessive fragmentation.

That's not the issue here. The SSD controller can transparently move data anywhere it likes, for reasons such as compression, security, and wear leveling. However, at the software filesystem level, every fragment of each file needs to have its address (and sequence, and size) recorded.

By design, this number of fragments [at least in NTFS, which is what we care about for a Windows system] has a maximum value. A quick search suggests the maximum fragment count is extremely large: in the neighborhood of 2^32 - 1 (about 4.29 billion) fragments per NTFS directory. Past that point, the filesystem can no longer address additional fragments, which means the directory cannot accept new data until files are removed from it.

Another point the article raises is that access to filesystem metadata is generally quite slow, and excessive fragmentation WILL reduce performance, because the list of fragment pointers must be resolved before the actual file data can be retrieved. This is not likely to be an issue for home users, but it definitely is when scanning directories with hundreds of thousands or millions of files (or more!).

It is also intuitive: to access a file, the system needs both the file data and the metadata. Reducing the amount of metadata reduces the total amount of I/O, which naturally improves read performance.
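The metadata overhead described above can be sketched with a toy model (the function and numbers are invented for illustration, not NTFS internals): before any file data can be read, each extent record has to be resolved, so a heavily fragmented file costs extra I/O even on a drive with zero seek penalty.

```python
# Toy model of fragment (extent) overhead: one metadata read per extent
# record, plus one read per cluster of actual file data. Illustrative only.

def io_ops_to_read(file_size_clusters, fragments):
    """Count logical I/O operations needed to read a file."""
    metadata_reads = fragments       # resolve each extent's (address, length)
    data_reads = file_size_clusters  # the file contents themselves
    return metadata_reads + data_reads

# Same 1,000,000-cluster file, contiguous vs. badly fragmented:
print(io_ops_to_read(1_000_000, fragments=1))        # 1000001
print(io_ops_to_read(1_000_000, fragments=250_000))  # 1250000
```

At four-cluster average fragment size, a quarter of all I/O here is pure metadata, which matches the intuition in the post: less metadata, less total I/O.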
|
# ? Jan 19, 2015 09:42 |
|
Don Lapre posted:Well there ya go I still call bullshit. You can't 'line up' data on an SSD, and that article doesn't even say what it's supposedly doing. SSDs don't have a seek penalty like physical drives. Other things can be done to optimize performance, but fragmentation in the traditional sense is not one of them.
|
# ? Jan 19, 2015 14:04 |
|
No good SSD sales lately...
|
# ? Jan 19, 2015 14:26 |
|
Bob Morales posted:I still call bullshit. You can't 'line up' data on an SSD, and that article doesn't even say what it's supposedly doing. SSDs don't have a seek penalty like physical drives. Other things can be done to optimize performance, but fragmentation in the traditional sense is not one of them. You can call bullshit all you like, but the guy works for Microsoft, and supposedly that info comes straight from developers working on Windows itself. Whether it is actually true or not, I cannot say.
|
# ? Jan 19, 2015 14:58 |
|
Diviance posted:You can call bullshit all you like, but the guy works for Microsoft and supposedly that info comes straight from developers working on Windows itself. "Well it's called an oil change but we're not really changing the oil we're just warming the oil up. But we still call it an oil change because we used to do them."
|
# ? Jan 19, 2015 15:17 |
|
Bob Morales posted:It's true in the sense that a set of NTFS-specific routines is being performed, but it is nowhere near the traditional disk defragmentation people would think it is. It appears the same thing would need to be done on a platter drive. I couldn't honestly say one way or the other. I just posted it because it seemed relevant to what was being asked at the time. I am curious how things will work on ReFS if it actually does become a standard filesystem, bootable and all, in Windows 10.
|
# ? Jan 19, 2015 15:33 |
|
Diviance posted:I couldn't honestly say one way or the other. I just posted it because it seemed relevant to what was being asked at the time. Wasn't WinFS going to come with XP? I just see NTFS lasting for-ev-er.
|
# ? Jan 19, 2015 15:51 |
|
Bob Morales posted:Wasn't WinFS going to come with XP? I just see NTFS lasting for-ev-er. No, Longhorn.
|
# ? Jan 19, 2015 16:18 |
|
HalloKitty posted:No, Longhorn. WinFS worked on XP as well: http://windowsitpro.com/windows-xp/surprise-microsoft-ships-winfs-beta-1
|
# ? Jan 19, 2015 16:20 |
|
Bob Morales posted:Wasn't WinFS going to come with XP? I just see NTFS lasting for-ev-er. Yeah, WinFS didn't go so well. Maybe ReFS will. NTFS needs to die someday and make way for an improved file system.
|
# ? Jan 19, 2015 17:04 |
|
blowfish posted:You can get the MX100, but Crucial's support is apparently poo poo if something ends up being wrong with it. Odd, I've always found them good in the UK with RAM/USB stick replacements. I was considering an M500 mSATA in fact, due to price and the Anandtech guide. There appear to be no deals on 500GB SSDs right now in the UK.
|
# ? Jan 19, 2015 17:05 |
|
isndl posted:Defrags are scheduled to run in the background by default in Windows 7/8. If the system detects a drive as being a SSD, it will run a TRIM pass instead. Where in Windows can you see the options for this? Somewhere in Control Panel?
|
# ? Jan 19, 2015 18:17 |
|
Bob Morales posted:I still call bullshit. You can't 'line up' data on an SSD, and that article doesn't even say what it's supposedly doing. SSDs don't have a seek penalty like physical drives. Other things can be done to optimize performance, but fragmentation in the traditional sense is not one of them.

There are two levels of fragmentation going on: NAND fragmentation (which the controller handles) and LBA/filesystem fragmentation. What's going on here is that the hardware-agnostic way the filesystem tracks fragmentation is itself becoming the bottleneck, adding extra I/O operations and limits on the complexity of the LBA layout regardless of what the hardware could or could not handle.

Megasabin posted:Where in Windows can you see the options for this? Somewhere in Control Panel?

It's part of the standard Defrag tool. I think the TRIM pass is Win8 and Win10 only.
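A toy sketch of those two layers (all class and function names here are invented for illustration): the drive's flash translation layer remaps logical blocks to arbitrary NAND pages, so physical placement is invisible to the OS, while "fragmentation" at the software level is just the number of contiguous LBA runs the filesystem has to record.

```python
# Toy flash translation layer (FTL): the controller scatters writes across
# NAND pages for wear leveling, independent of the LBA layout the OS sees.
# Meanwhile the filesystem's bookkeeping cost is one extent per contiguous
# LBA run, regardless of where the data physically lands.

import random

class ToyFTL:
    def __init__(self, num_pages):
        self.mapping = {}                    # LBA -> NAND page
        self.free_pages = list(range(num_pages))
        random.shuffle(self.free_pages)      # wear leveling scatters writes

    def write(self, lba):
        self.mapping[lba] = self.free_pages.pop()

def count_extents(lbas):
    """Extents the filesystem must record: contiguous LBA runs."""
    runs = 1
    for prev, cur in zip(lbas, lbas[1:]):
        if cur != prev + 1:
            runs += 1
    return runs

ftl = ToyFTL(num_pages=64)
contiguous = [0, 1, 2, 3]   # one extent; NAND pages still end up scattered
scattered = [10, 15, 19, 30]  # four extents; identical NAND behaviour
for lba in contiguous + scattered:
    ftl.write(lba)

print(count_extents(contiguous))  # 1
print(count_extents(scattered))   # 4
```

Both files get the same physically scattered NAND pages, yet one costs four times the extent metadata, which is exactly the hardware-agnostic bottleneck described above.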
|
# ? Jan 19, 2015 18:44 |
|
The original question re: fragmentation was, roughly speaking, whether you should run "defrag everything now" tools on an SSD, and the answer to that question is still no. Also, this extreme fragmentation, where the OS must clean it up to avoid running into filesystem data structure limits, seems likely to be the sort of thing you'll only observe if you're running a giant data center: one with a lot of frequent, random writes, where said writes often extend the length of a file (this is how you get lots of fragments).
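The file-extending-write mechanism can be sketched with a deliberately naive allocator (hypothetical code, not how any real filesystem allocates): interleaved appends to two growing files hand each file alternating clusters, so every append starts a new fragment.

```python
# Sketch of how writes that extend files create fragments. With a naive
# "next free cluster" allocator and two logs being appended to in turn,
# each append to a given file lands one cluster past the other file's
# last write, so the file never gets two adjacent clusters.

def interleaved_appends(num_appends):
    next_free = 0
    files = {"a.log": [], "b.log": []}
    for i in range(num_appends):
        name = "a.log" if i % 2 == 0 else "b.log"
        files[name].append(next_free)  # the file grows by one cluster
        next_free += 1
    return files

files = interleaved_appends(8)
print(files["a.log"])  # [0, 2, 4, 6] -- four clusters, none adjacent,
print(files["b.log"])  # [1, 3, 5, 7]    so four fragments per file
```

Real allocators reserve contiguous runs to soften this, but under heavy concurrent append load the same pattern emerges, which is why the data-center scenario above is where fragment counts explode.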
|
# ? Jan 20, 2015 01:23 |
|
BobHoward posted:The original question re: fragmentation was, roughly speaking, whether you should run "defrag everything now" tools on a SSD, and the answer to that question is still no. If you're running a giant data center and using NTFS, god help you. You're more likely to be using an ext variant, ZFS, or something else that has mostly put this issue to bed in TYOOL 2015.
|
# ? Jan 20, 2015 02:08 |
|
I understand that overprovisioning is no longer necessary, but there were quite a few times I used 95%+ of the space on my HDD before freeing it up again. Recently, I've had it consistently at 80%+ capacity. Should I overprovision by 20% anyway or not?
|
# ? Jan 20, 2015 02:54 |
|
Factory Factory posted:There are two levels of fragmentation going on, NAND fragmentation (which the controller handles) and LBA/filesystem fragmentation. What's going on here is that the hardware-agnostic way the filesystem handles fragmentation is itself becoming the bottleneck, adding extra I/O operations and limitations to the complexity of the LBA layout regardless of what the hardware could or could not handle. If I have Windows 7 how would I regularly run TRIM on my drives then?
|
# ? Jan 20, 2015 05:10 |
|
Last week I found my old Intel X25-M G2 and put it in my even older Lenovo R61i. I had no idea this laptop could be this quick and useful (for basic web browsing and such). I installed the Intel SSD toolkit and was amused that after 23018 power-on hours and 5.58TB of writes it's still at 98% on the media wearout indicator. I think this combination of Lenovo and Intel could last through decades of use.
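As a back-of-envelope check of that longevity claim (assuming the wearout indicator is linear, which is a simplification; real indicators aren't perfectly linear):

```python
# Projecting endurance from the post's numbers: indicator dropped 2 points
# (100 -> 98) after 5.58 TB written over 23018 power-on hours.

hours = 23018
written_tb = 5.58
wear_used = 0.02                  # fraction of rated wear consumed

total_endurance_tb = written_tb / wear_used
tb_per_year = written_tb / (hours / (24 * 365))
years_remaining = (total_endurance_tb - written_tb) / tb_per_year

print(round(total_endurance_tb))  # 279
print(round(years_remaining))     # 129
```

About 279 TB of projected total writes and over a century at this (light) write rate, so "decades of use" is, if anything, conservative for the NAND itself; the controller or some other component will almost certainly fail first.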
|
# ? Jan 20, 2015 13:51 |
|
Megasabin posted:If I have Windows 7 how would I regularly run TRIM on my drives then? Keeping in mind that it shouldn't be necessary, there are a few bare-bones programs made to manually TRIM free space. As an alternative, you could image the drive, secure erase it, and restore the image, if you don't mind the rewriting.
|
# ? Jan 20, 2015 14:30 |
|
Megasabin posted:If I have Windows 7 how would I regularly run TRIM on my drives then? TRIM should run all the time; you shouldn't need to run it manually. You can check whether TRIM is working using this tool: http://www.tweaktown.com/articles/5203/trim-check-overview-of-an-essential-ssd-trim-functionality-tester/index.html source page: https://github.com/CyberShadow/trimcheck compiled exe: http://files.thecybershadow.net/trimcheck/
|
# ? Jan 20, 2015 17:49 |
|
Grabbed a 240GB PNY Optima (sandforce) at Best Buy for $89 with a $15 Best Buy rewards coupon. Swapped it out for the Crucial M500 I had in my work desktop with an mSATA adapter, and then installed the Crucial in a Lenovo laptop. Whee.
|
# ? Jan 20, 2015 20:25 |
|
Diviance posted:Yeah, WinFS didn't go so well. Maybe ReFS will. NTFS needs to die someday and make way for an improved file system. NTFS is perfectly adequate for a large number of cases. Remember that it has decades of bugfixes behind it before jumping onto the file system du jour.
|
# ? Jan 21, 2015 00:39 |
|
Malcolm XML posted:NTFS is perfectly adequate for a large number of cases I don't see NTFS being dropped for a long time yet, it is simply everywhere. But that is no reason not to work on a new file system from scratch for a more modern OS to one day replace it.
|
# ? Jan 21, 2015 07:42 |
|
edit wrong thread
Josh Lyman fucked around with this message at 15:18 on Jan 21, 2015 |
# ? Jan 21, 2015 15:11 |
|
Diviance posted:I don't see NTFS being dropped for a long time yet, it is simply everywhere. But that is no reason not to work on a new file system from scratch for a more modern OS to one day replace it. Sure, but it's more likely than not to be something that's only really useful in high-load scenarios like datacenters. Unless it offers significant power savings or something, I very much doubt it will roll out to consumers.
|
# ? Jan 21, 2015 16:06 |
|
Tagfs still makes sense to me
|
# ? Jan 21, 2015 18:18 |
|
I have a Lenovo ThinkPad X220 that already has an SSD (a 120 GB Renice X3 mSATA) which I installed aftermarket. I also have the 320 GB HGST SATA drive it shipped with in the drive bay. The performance is good, but as I love new toys, I'm tempted to get an 840 EVO mSATA. That said... I probably don't need it. Also, yes, I run a Linux (Mint 17.1) on this machine.

[Performance graph (reads only) for the drive.] Can you guess where my swap partition is? I'll bet you can.

smartctl output:

  1 Raw_Read_Error_Rate     0x000f   117   099   050    Pre-fail  Always   -   0/157325939
  5 Retired_Block_Count     0x0033   100   100   003    Pre-fail  Always   -   0
  9 Power_On_Hours_and_Msec 0x0032   100   100   000    Old_age   Always   -   21449h+22m+02.040s
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always   -   524
171 Program_Fail_Count      0x0032   000   000   000    Old_age   Always   -   0
172 Erase_Fail_Count        0x0032   000   000   000    Old_age   Always   -   0
174 Unexpect_Power_Loss_Ct  0x0030   000   000   000    Old_age   Offline  -   5
177 Wear_Range_Delta        0x0000   000   000   000    Old_age   Offline  -   0
181 Program_Fail_Count      0x0032   000   000   000    Old_age   Always   -   0
182 Erase_Fail_Count        0x0032   000   000   000    Old_age   Always   -   0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always   -   0
194 Temperature_Celsius     0x0022   030   030   000    Old_age   Always   -   30 (Min/Max 30/30)
195 ECC_Uncorr_Error_Count  0x001c   117   099   000    Old_age   Offline  -   0/157325939
196 Reallocated_Event_Count 0x0033   100   100   000    Pre-fail  Always   -   0
231 SSD_Life_Left           0x0013   100   100   010    Pre-fail  Always   -   0
233 SandForce_Internal      0x0000   000   000   000    Old_age   Offline  -   2752
234 SandForce_Internal      0x0032   000   000   000    Old_age   Always   -   1728
241 Lifetime_Writes_GiB     0x0032   000   000   000    Old_age   Always   -   1728
242 Lifetime_Reads_GiB      0x0032   000   000   000    Old_age   Always   -   8576

# df /
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sdb2       98534004 17883104  75622640  20% /

TL;DR: Should I spend money or no? If so, how much?
|
# ? Jan 22, 2015 05:11 |
|
Just a friendly reminder to always back your poo poo up - everything will fail one day. I seem to have lost an old Corsair Nova 128GB that I was using as my primary disk for about 4 years. I can still see it in the BIOS but can't boot to it, and while system recovery can make it to the desktop, it never displays anything but the background and my cursor. When booting off another drive I can also see it in the drive list, but I get an I/O error when I try to access it. I've tried the idling-in-BIOS trick with no success, which leads me to believe that the drive is probably dead and my data is gone. I guess I'll be getting a new Samsung or Intel drive now, but it was still hard to lose that data.
|
# ? Jan 22, 2015 16:02 |
|
I've had drive failures in the past; it sucked. I back up weekly to my file server running Unraid. It used to be really basic, but the latest beta has added a lot of good functionality: email notifications, SMART monitoring, Docker for your apps, VM support, etc. I used to use external drives, but that got old a long time ago, and those can fail too. Unraid has some fault tolerance via a parity drive, so it can completely rebuild a failed disk.
Tanbo fucked around with this message at 16:40 on Jan 22, 2015 |
# ? Jan 22, 2015 16:35 |
|
The Samsung EVO 850 250GB drive is on Amazon for £108; the Samsung EVO 840 250GB is £89. Is there enough of a performance boost in the 850 over the 840 to justify the price difference?
|
# ? Jan 22, 2015 16:37 |
|
WattsvilleBlues posted:The Samsung EVO 850 250GB drive is on Amazon for £108; the Samsung EVO 840 250GB is £89. Is there enough of a performance boost in the 850 over the 840 to justify the price difference? The difference is more in endurance than performance. The 850 EVO is made with V-NAND, which means the individual cells can be larger while maintaining the same capacity, which leads to more resilient storage (in very non-technical terms).
|
# ? Jan 22, 2015 17:10 |
|
Actually, the 850 EVO has much better performance consistency. If you're using the drive with RAPID in a typical Windows environment, this doesn't matter, but on Macs and with heavy random-write use (e.g. memory paging if your system is short on RAM), the 850 provides a better experience.
|
# ? Jan 22, 2015 17:24 |
|
Factory Factory posted:Actually, 850 EVO has much better performance consistency. If you're using the drive with RAPID in a typical Windows environment, this doesn't matter, but on Macs and with heavy random write use (e.g. memory paging if your system has less RAM), the 850 provides a better experience. So the 850 would be good for OS X instead of an Intel drive?
|
# ? Jan 22, 2015 17:39 |
|
WattsvilleBlues posted:So the 850 would be good for OS X instead of an Intel drive? Yup. An Intel drive wouldn't be a bad choice, especially the 730 or a DC S3xxx models, but those are typically more expensive, and the 850 EVO is a better performer than the 33x/530 SandForce-based drives... but it really doesn't matter all that much in the end.
|
# ? Jan 22, 2015 17:59 |
|
Has anyone tried out the Kingston Digital HyperX FURY drives? I know the standard Kingston HyperX 3K models were acceptable, though not preferred, in the last thread, and I've had a few years of solid luck with them. However, my new job is far less flexible about spending, so I can't buy a bunch and see what happens. I just need them for some classroom computers, so there will be lightweight usage: web browsing and PowerPoint presentations. This article indicates it uses the same memory as the Crucial M500, but the problems were with the firmware on the Crucials, right? http://www.hardwarezone.com.sg/review-kingston-hyperx-fury-240gb-sounds-furious-only-name Amazon link: http://www.amazon.com/gp/product/B00KW3MTBS/ Fair bit cheaper than other 120GB drives and I really only need 40-50GB of space.
|
# ? Jan 23, 2015 20:02 |