DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

GRINDCORE MEGGIDO posted:

Is there any point in setting a fixed size pagefile for SSD wear?

For wear, no. For keeping it from eating 32GB off your 240GB drive or whatever, sure.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
Unless you are using Windows 7 (you shouldn't be) with 32GB of RAM, the page file should be fine. Windows 7 likes to make the page file the same size as your system RAM.

Nalin
Sep 29, 2007

Hair Elf
A website did an SSD endurance test. A Samsung 840 256GB SSD logged its first uncorrectable error around 300TB of writes. It failed around 900TB of writes. The Samsung 840 Pro 256GB SSD lasted until around 2500TB.

I don't give any poo poo about writing to my 840 Pro. I have a pagefile enabled, I let Firefox/Chrome write poo poo to my HD all the time. I have 19.96TB written over 1201.25 days worth of power-on hours. That is about 16.6 GB of writes per day. Round that up to 20GB per day. After doing the math, the hard drive should technically out-live me. You should be replacing your SSDs with future-tech before you realistically run out of available writes.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Nalin posted:

A website did an SSD endurance test. A Samsung 840 256GB SSD logged its first uncorrectable error around 300TB of writes. It failed around 900TB of writes. The Samsung 840 Pro 256GB SSD lasted until around 2500TB.

I don't give any poo poo about writing to my 840 Pro. I have a pagefile enabled, I let Firefox/Chrome write poo poo to my HD all the time. I have 19.96TB written over 1201.25 days worth of power-on hours. That is about 16.6 GB of writes per day. Round that up to 20GB per day. After doing the math, the hard drive should technically out-live me. You should be replacing your SSDs with future-tech before you realistically run out of available writes.

Nalin is referring to The Tech Report's SSD Endurance Experiment:

The beginning: http://techreport.com/review/24841/introducing-the-ssd-endurance-experiment

The end: http://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead

TL;DR: In a non-server environment, you will replace your drive with faster/denser/different flash or newer interfaces before you actually exhaust the NAND.

SwissArmyDruid fucked around with this message at 22:42 on Apr 26, 2017

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"
"But no, someone whose name I can't remember told me a long time ago that if I had enough system RAM I could run Windows without a page/swap file and that's been my unshakeable belief since 1998 and I've been maxing out my DIMM slots on every new board I buy ever since!"

redeyes
Sep 14, 2002

by Fluffdaddy

Nalin posted:

A website did an SSD endurance test. A Samsung 840 256GB SSD logged its first uncorrectable error around 300TB of writes. It failed around 900TB of writes. The Samsung 840 Pro 256GB SSD lasted until around 2500TB.

I don't give any poo poo about writing to my 840 Pro. I have a pagefile enabled, I let Firefox/Chrome write poo poo to my HD all the time. I have 19.96TB written over 1201.25 days worth of power-on hours. That is about 16.6 GB of writes per day. Round that up to 20GB per day. After doing the math, the hard drive should technically out-live me. You should be replacing your SSDs with future-tech before you realistically run out of available writes.

Counterpoint: I had an 840 Pro 512GB poo poo its pants in an interesting way. It got super slow. Slower than a spinner by a long shot. A secure erase seems to have fixed it, though.

Stanley Pain
Jun 16, 2001

by Fluffdaddy
Page File Good.

SSD Good.

Now let's move on.

redeyes posted:

Counterpoint: I had an 840 Pro 512GB poo poo its pants in an interesting way. It got super slow. Slower than a spinner by a long shot. A secure erase seems to have fixed it, though.

How exactly is this a counterpoint?

craig588
Nov 19, 2005

by Nyc_Tattoo
The 840s had a controller bug. They tried all sorts of firmware and driver updates to fix it, but at best it was worked around and the slowdown would eventually come back, until you secure-wiped the drive to write something to every sector and "wake" them up again to full performance. The 850 fixed that problem.

In a twist of fate it actually got worse faster the less you wrote to the drive.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

craig588 posted:

In a twist of fate it actually got worse faster the less you wrote to the drive.

The 840's cells just needed to be taken out and exercised regularly. Like doing crosswords so you don't get Alzheimer's.

fatman1683
Jan 8, 2004
.
The pagefile still exists because of the difference between committed memory and allocated memory.

When a process runs, it typically commits (i.e. reserves) a block of memory as large as it thinks it will ever need. This ensures that when it wants to write something to memory, that memory is available and it doesn't have to go through the time-consuming process of requesting more memory from the operating system.

As the process continues running, it writes to memory, which allocates memory to that process. This allocation is called the working set, and it will fluctuate over time as the process reads, writes, and releases memory.

What this means to the user is that even though you may have enough physical memory to run all the processes that are ever running at once, you may not have enough to satisfy all of their commits for all the memory they think they'll need.

Hence, virtual memory and the pagefile. Windows won't let your total commits exceed the total virtual address space (physical + pagefile), so having a pagefile ensures that you'll be able to satisfy a lot more commits, and that if those commits ever end up being allocated over the amount of physical memory in the machine, that they will be able to do so and will continue to run instead of crashing with an 'out of memory' error.

This matters even if you never allocate 100% of your physical memory, because a lot of programs expect to be able to commit a certain amount of memory, and won't function correctly if they can't.

Chrome is a prime example of this. Chrome has its own task manager and memory allocation system, and it commits a shitload of RAM. Chrome tabs will start to crash with 'out of memory' error messages when the system hits the limit of its virtual address space.

This is easiest to see when opening a new tab or refreshing a tab that has been idle for a while, since this causes that tab's process to commit the memory needed to load the contents of that tab. If the virtual address space can't accommodate that commit, the tab will crash with an 'Aw, Snap!' error, even if there's a shitload of physical memory still unallocated and 'available'.

For what it's worth, Task Manager is incredibly deceptive about this whole concept. The 'memory' number in Task Manager only shows the private working set of the process, which is just the amount of memory actually allocated (i.e. in use and containing data) to that process that is not available to be reallocated and shared with any other process. If you want a better picture of your system's actual memory situation, Resource Monitor can show you information about shareable working sets, commit, and a couple of other metrics.

Bottom line, keep your pagefile. Just because you may not ever run out of physical memory doesn't mean a given piece of software won't think you're running out because it can't reserve as much space as it wants to.
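If it helps to make the commit-vs-working-set distinction concrete, here's a minimal Win32 sketch in plain C. It's a toy, not how any real program manages memory, and the 1GB/64MB sizes are just for illustration:

code:
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    SIZE_T size = (SIZE_T)1 << 30;   /* 1 GB, purely for illustration */

    /* Reserving address space costs nothing against the commit limit. */
    void *region = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
    if (region == NULL) {
        printf("Reserve failed.\n");
        return 1;
    }

    /* Committing it makes the "Committed" number in Task Manager jump by
       the full 1 GB immediately, even though no physical RAM is used yet. */
    void *mem = VirtualAlloc(region, size, MEM_COMMIT, PAGE_READWRITE);
    if (mem == NULL) {
        printf("Commit failed - the commit limit couldn't cover it.\n");
        return 1;
    }

    /* Only pages you actually touch enter the working set ("In Use"). */
    memset(mem, 0xAB, (SIZE_T)64 << 20);   /* touch just 64 MB of it */

    printf("Committed 1 GB, working set grew by roughly 64 MB.\n");
    getchar();                              /* pause so you can go look */

    VirtualFree(region, 0, MEM_RELEASE);
    return 0;
}

Run it and watch the Performance tab: 'Committed' rises by the whole gigabyte the moment the MEM_COMMIT call succeeds, but 'In Use' only moves by the amount actually written.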


edit: Whoops, looks like Win10 task manager does show some commit data after all, on the performance tab. Here's my system:

[screenshot: Task Manager > Performance tab, Memory section]
The 'In Use' and 'Available' numbers refer to physical memory. The 'Committed' number shows the amount of committed memory vs the total virtual address space, and the 'Cached' number shows memory that is allocated but is basically idle and could be reallocated if needed. Paged and nonpaged pool are kernel-specific memory allocations that aren't relevant to this discussion.

In the screenshot, you can see that even though my system is currently only 'using' 7.5 GB of RAM, it's actually committed 13GB. If I was to launch Chrome and fire up a bunch of new tabs, the 'In Use' number would barely move, but the 'Committed' number would start to rise rapidly. Incidentally, this memory management technique is one of the things that makes Chrome fast, since it rarely has to request a new commit when loading a webpage, no matter how big the page actually is.

fatman1683 fucked around with this message at 07:52 on Apr 27, 2017

GRINDCORE MEGGIDO
Feb 28, 1985


You rock. There are some really good posts in this thread.

GRINDCORE MEGGIDO fucked around with this message at 07:41 on Apr 27, 2017

Kazinsal
Dec 13, 2011
Memory management and kernel level programming and design is extremely my poo poo. I'm not an expert or a professional in the field but it's been something I've had an interest in for a long time and one of the best ways to learn it if you're a programmer is to dive into it.

It may or may not be surprising that there's a scene for hobby operating systems development. Lots of people in it with all different levels of experience, from noobs to people who do systems research and development for ARM and Google.

..btt
Mar 26, 2008

fatman1683 posted:

Windows won't let your total commits exceed the total virtual address space (physical + pagefile)

People keep stating this as fact. Would love to see an actual source for it. It also contradicts your conclusion of:


fatman1683 posted:

Bottom line, keep your pagefile.

Since from it we can extrapolate that there is no difference between a machine with 16GB of RAM and a machine with 8GB of RAM plus an 8GB fixed-size page file when it comes to allocating memory.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
i had an issue with a program at my old job that would crash with an out of memory error even though it never used more than 1/2-1 GB of memory (out of 8gb total) and task manager said there was still plenty of ram available. So that is what happened here?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Watermelon Daiquiri posted:

i had an issue with a program at my old job that would crash with an out of memory error even though it never used more than 1/2-1 GB of memory (out of 8gb total) and task manager said there was still plenty of ram available. So that is what happened here?

Potentially exactly what fatman1683 is talking about: despite never actually using it, the program went right on over to the OS's memory manager and requested some presumably huge chunk of memory that the memory manager was unable to satisfy. E.g., despite only having a working set of 1GB, it attempted to commit 10GB or whatever.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Watermelon Daiquiri posted:

i had an issue with a program at my old job that would crash with an out of memory error even though it never used more than 1/2-1 GB of memory (out of 8gb total) and task manager said there was still plenty of ram available. So that is what happened here?

Nah, it's most likely some old 32-bit application with a 2GB addressing limit. It can be raised to 4, but the default is 2. As far as I know.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

HalloKitty posted:

Nah, it's most likely some old 32-bit application with a 2GB addressing limit. It can be raised to 4, but the default is 2. As far as I know.

Also possible, especially if it's some legacy program written in the 90's. 2GB is, indeed, the default, and while 4GB was possible it's pretty rare to see that implemented. Thankfully we won't run into the same issue with 64b systems within our lifetime (on desktop systems, at least).
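If anyone wants to watch the 2GB wall happen, here's a rough sketch in plain C, built as a 32-bit binary (the 64MB chunk size is arbitrary). The "raised to 4" part is the /LARGEADDRESSAWARE linker flag, which on 64-bit Windows lets a 32-bit process use up to 4GB of user address space:

code:
/* Build as 32-bit (e.g. cl alloc2gb.c) on a machine with plenty of free RAM:
   allocations still start failing somewhere short of ~2 GB, because that's
   all the user address space a normal 32-bit process gets. Re-link with
   /LARGEADDRESSAWARE on 64-bit Windows and the ceiling roughly doubles. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t chunk = (size_t)64 * 1024 * 1024;   /* 64 MB per allocation */
    size_t total = 0;

    while (malloc(chunk) != NULL) {            /* leak on purpose; it's a demo */
        total += chunk;
    }

    printf("Gave up after about %zu MB\n", total / (1024 * 1024));
    return 0;
}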

..btt
Mar 26, 2008

DrDork posted:

Thankfully we won't run into the same issue with 64b systems within our lifetime (on desktop systems, at least).

I wonder how many people said this about 32-bit when that became the norm in the 90s :)

fatman1683
Jan 8, 2004
.

..btt posted:

People keep stating this as fact. Would love to see an actual source for it. It also contradicts your conclusion of:


Since from it we can extrapolate there is no difference between a machine with 16Gb RAM and a machine with 8 + 8Gb fixed size page file when it comes to allocating memory.

It's kind of buried but here's one example:

https://www.codeproject.com/Articles/29449/Windows-Memory-Management

Mark Russinovich posted:

What would happen if we did not Control-C (terminate) the process? Would it have exceeded the amount of virtual address space allocated? It would, in fact, be stopped sooner than that by reaching an important private bytes limit called the “commit limit”. The system commit limit is the total amount of private virtual memory across all of the processes in the system and also the operating system that the system can keep track of at any one time. It is a function of two sizes: the page file size(s) (you can have more than one) + (most of) physical memory.

edit: better source http://materias.fi.uba.ar/7508/WI6/Windows%20Internals%20Part%202_6th%20Edition.pdf

Still Mark Russinovich posted:


Commit Limit

On Task Manager's Performance tab, there are two numbers following the legend Commit. The memory manager keeps track of private committed memory usage on a global basis, termed commitment or commit charge; this is the first of the two numbers, which represents the total of all committed virtual memory in the system.

There is a systemwide limit, called the system commit limit or simply the commit limit, on the amount of committed virtual memory that can exist at any one time. This limit corresponds to the current total size of all paging files, plus the amount of RAM that is usable by the operating system. This is the second of the two numbers displayed as Commit on Task Manager's Performance tab. The memory manager can increase the commit limit automatically by expanding one or more of the paging files, if they are not already at their configured maximum size.

Commit charge and the system commit limit will be explained in more detail in a later section.

There's a whole chapter in that PDF on the commit limit, virtual address space and the pagefile. It's a great read.

And you're correct that from a virtual address space standpoint, 8+8=16. Depending on your workload and your usage habits, it is entirely possible to have enough physical memory to cover your commits without needing a pagefile.

But this results in memory always going unused, since your commits will always be greater than your allocations, and without a pagefile you'll never be able to commit more than your physical memory.

So, yes, with enough RAM you can hypothetically live without a pagefile, but only if you're ok with that (relatively) expensive RAM never, ever being used for anything except processes' peace of mind. It's much more practical to have a pagefile, and best of all a Windows-managed pagefile size, so that you get the maximum benefit out of your physical memory.
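For anyone who'd rather pull those numbers programmatically than read them off Task Manager, here's a rough C sketch using the Win32 GetPerformanceInfo call (link with psapi.lib). The field names are the documented API; the GB formatting and the pagefile estimate are my own:

code:
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PERFORMANCE_INFORMATION pi;
    pi.cb = sizeof(pi);

    if (!GetPerformanceInfo(&pi, sizeof(pi))) {
        fprintf(stderr, "GetPerformanceInfo failed: %lu\n", GetLastError());
        return 1;
    }

    /* The SIZE_T fields are in pages, so convert using the page size. */
    double gb = (double)pi.PageSize / (1024.0 * 1024.0 * 1024.0);

    printf("Commit charge:  %.1f GB\n", pi.CommitTotal   * gb);  /* first Commit number  */
    printf("Commit limit:   %.1f GB\n", pi.CommitLimit   * gb);  /* second Commit number */
    printf("Physical RAM:   %.1f GB\n", pi.PhysicalTotal * gb);

    /* Commit limit is (usable RAM + paging files), so the difference is
       roughly your current pagefile size - "roughly" because usable RAM
       is a bit less than installed RAM. */
    printf("Implied pagefile: ~%.1f GB\n",
           ((double)pi.CommitLimit - (double)pi.PhysicalTotal) * gb);
    return 0;
}

With a Windows-managed pagefile, if commit charge ever gets close to the limit you can watch Windows expand the pagefile and the commit limit rise, exactly as the quoted chapter describes.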

fatman1683 fucked around with this message at 13:37 on Apr 27, 2017

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

..btt posted:

I wonder how many people said this about 32-bit when that became the norm in the 90s :)

Well, there was the famous "640k ought to be enough for anyone" quote.

Though while the 4GB max of a 32-bit system is ~6,553x the size of 640k, the max addressable 64-bit memory space is 16 exabytes, or 2^32 (about 4.3 billion) times the size of 4GB. So we have quite a bit more headroom, comparatively speaking. I mean, that's some super detailed 3d porn we're talking about.

..btt
Mar 26, 2008

Thanks for this, will have a read!

Obsurveyor
Jan 10, 2003

Run a page file, don't run a page file, no one but you is gonna know or care. Windows isn't going to magically turn to poo poo if you don't have one, as long as you have enough memory. It just starts killing apps when you run out instead of paging. It's not a big deal either way. Can we just move on?

Theris
Oct 9, 2007

..btt posted:

I wonder how many people said this about 32-bit when that became the norm in the 90s :)

No one who knew what they were talking about. There's a reason workstations and "big iron" computers were already 64bit around the time 32bit was becoming mainstream in the consumer space. (32bit OSes, anyway, since 32bit CPUs had been in mass market systems for almost a decade before Windows got around to fully supporting them)

It would have been entirely possible at the time to look at trends and see software hitting the 4GB barrier in the not too distant future. That's not really the case for the 16EB limit.

..btt
Mar 26, 2008
I don't know, in the lifetime of people still alive today, computers have gone from (arguably) not existing to being limited by factors such as the speed of electron flow propagation. I think "not in our lifetime" is a short-sighted thing to say. It was just a flippant comment though.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.

HalloKitty posted:

Nah, it's most likely some old 32-bit application with a 2GB addressing limit. It can be raised to 4, but the default is 2. As far as I know.

yeah, its almost undoubtedly this as well- i was trying to call a loving huge set of data from an oracle db, and trying to get around this issue was one of the most horribly frustrating and tedious workarounds ive ever done, since I wanted to look at EVERYTHING that ran through the facility lol

Truga
May 4, 2014
Lipstick Apathy

..btt posted:

I don't know, in the lifetime of people still alive today, computers have gone from (arguably) not existing to being limited by factors such as the speed of electron flow propagation. I think "not in our lifetime" is a short-sighted thing to say.

I think you posted the reason why yourself. We're hitting the limits of both speeds (electricity too slow) and size (atoms too big). Hell, we might hit the limits of 64bits in our lifetime for all I know, if there's some good breakthroughs, but it's not very likely at this point tbh.

e: also, I run my laptop without a pagefile and I've yet to see any issues with it. My SSD is only 64gb tho, which is why I do it.

Truga fucked around with this message at 17:08 on Apr 27, 2017

Junior Jr.
Oct 4, 2014

by sebmojo
Buglord
I'm looking at Overclockers UK and for some reason they have TWO types of Zotac 1080Ti AMP Extreme cards: one that's the Extreme Core Spectre and one that's the Extreme Spectre.

The only difference I can see is that the Extreme Spectre card has a slightly higher boost and core clock, but that's it. I can't see myself paying an extra £50 just for that.

Anyway, is it worth buying one of these cards (or a Strix) NOW or should I just wait for a sale soon?

craig588
Nov 19, 2005

by Nyc_Tattoo
I'd avoid Zotac because of their extra loud fans. Maybe they fixed them with the 1080 TI, but their 1080 had the worst fans I'd seen on a high end product in a very long time.

I'd go for the Asus because they had one of the best coolers, and though their custom PCB made BIOS editing for overclocking harder in the past, Nvidia shored up the DRM on their Pascal BIOSes so you can't edit them in the first place.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Junior Jr. posted:

Anyway, is it worth buying one of these cards (or a Strix) NOW or should I just wait for a sale soon?

It's your call whether you want to give Zotac a shot or not, but either way I wouldn't expect to see a sale on a 1080Ti any time soon. Cards are still being released--at best some of the unscrupulous retailers might "drop" artificially high launch prices back down to MSRP, but cards that are already there are likely to stay there for a while.

Junior Jr.
Oct 4, 2014

by sebmojo
Buglord

craig588 posted:

I'd avoid Zotac because of their extra loud fans. Maybe they fixed them with the 1080 TI, but their 1080 had the worst fans I'd seen on a high end product in a very long time.

I'd go for the Asus because they had one of the best coolers, and though their custom PCB made BIOS editing for overclocking harder in the past, Nvidia shored up the DRM on their Pascal BIOSes so you can't edit them in the first place.

Honestly, loud fans aren't a big deal for me 'cause I usually wear headphones at max volume anyway. I was thinking of getting a Strix, but the Zotac cards were slightly cheaper, so I'm leaning towards them at the moment. Unless there's another site selling the Strix a bit cheaper, in which case I might go for that one instead.

GWBBQ
Jan 2, 2005


If the newest game I'm playing is Fallout 4 at 1080p, a GTX 1060 6GB should be fine, right?

1gnoirents
Jun 28, 2014

hello :)

GWBBQ posted:

If the newest game I'm playing is Fallout 4 at 1080p, a GTX 1060 6GB should be fine, right?

Yeah

https://www.youtube.com/watch?v=VDZNXRAtXs0&t=465s

Not really GPU thread material but just look at his joy. For GPU content though, Nvidia provided two updated Titan Xp's at the last minute before even Jayz got one to review. Pretty rad



Also "its... its ok.. im an actor.. we can work this out" :laffo:

Moonshine Rhyme
Mar 26, 2010

Hate Hate Hate Hate Hate
Looking at possibly picking up an HTC Vive, in which case I am thinking I might want an upgrade from my GTX 560ti. Was thinking about picking up a refurbished GTX 960 from Newegg because I am cheap and it's on sale. Am I right in thinking that is probably good enough to run most things comfortably at decent settings? My graphics card is the weak link in my rig right now; everything else has been upgraded more recently.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Moonshine Rhyme posted:

HTC Vive, GTX 960, Am I right in thinking that is probably good enough to run most things comfortably at decent settings?

Not even a bit. 970 is the bare minimum. I have an overclocked 290X, and I'm still left wanting. I honestly reckon you should hold out until you can afford at least a 1070.

VR's no joke, and not because it looks especially amazing or is incredibly high resolution (although SteamVR targets a higher resolution than native for image clarity reasons, and cranking that up even more can give you a better image). It's because you need to hit that 90Hz v-sync'd target, and if you don't, the workarounds that are in place will give you an "OK" experience until you drop below 45 FPS, and then you're in for a stuttery, unpleasant time!

HalloKitty fucked around with this message at 19:38 on Apr 27, 2017

Seamonster
Apr 30, 2007

IMMER SIEGREICH

1gnoirents posted:

Yeah

https://www.youtube.com/watch?v=VDZNXRAtXs0&t=465s

Not really GPU thread material but just look at his joy. For GPU content though, Nvidia provided two updated Titan Xp's at the last minute before even Jayz got one to review. Pretty rad



Also "its... its ok.. im an actor.. we can work this out" :laffo:

I understand the sponsors this, sponsors that, but anything short of a 2TB 960 Pro SSD in there is...kinda insulting.

Moonshine Rhyme
Mar 26, 2010

Hate Hate Hate Hate Hate

HalloKitty posted:

Not even a bit. 970 is the bare minimum. I have an overclocked 290X, and I'm still left wanting. I'd honestly reckon you should hold out until you can afford at least a 1070.

Wow! I did not know that the VR stuff was that heavy. Thanks for the advice!

Regrettable
Jan 5, 2010



1gnoirents posted:

Yeah

https://www.youtube.com/watch?v=VDZNXRAtXs0&t=465s

Not really GPU thread material but just look at his joy. For GPU content though, Nvidia provided two updated Titan Xp's at the last minute before even Jayz got one to review. Pretty rad



Also "its... its ok.. im an actor.. we can work this out" :laffo:

This video is great and I really like the build, but I wish he would have sanded and buffed out the bumps from spray painting the case. There are a few close-ups where you can see how uneven the paint is and it irritates the poo poo out of me. It's built for a celebrity and is basically an advertisement for his skill and dedication, yet he couldn't be bothered to spend a couple of extra hours making it look like a professional paint job. I did that for a case that only like five other people have seen, and he didn't do it for a YouTube channel with 1,000,000 subscribers.

PerrineClostermann
Dec 15, 2012

by FactsAreUseless
Jay's not exactly a pro or anything, right? If case modding isn't his thing, maybe he just...doesn't know about how to paint properly?

1gnoirents
Jun 28, 2014

hello :)

PerrineClostermann posted:

Jay's not exactly a pro or anything, right? If case modding isn't his thing, maybe he just...doesn't know about how to paint properly?

Yeah he's just some guy at the end of the day, he does drop that particular caveat many many times before he saw it. Also something along the lines of "I wanted to make a build anybody could do" which is frankly true

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

1gnoirents posted:

Yeah he's just some guy at the end of the day, he does drop that particular caveat many many times before he saw it. Also something along the lines of "I wanted to make a build anybody could do" which is frankly true

"A build anybody could do..."

*uses a limited-edition case*
*uses limited-edition RAM*
*gets a guy to custom-sleeve the power cables in a less-than-overnight job because he's a YouTube celebrity catering to a real celebrity*

Also, he's in LA - that case is made largely of glass and is sitting on a buffed surface. He needs to shock-mount it.
