|
Hmmmm, my 5820K@4GHz w/ DDR4-2400 CL12 does 1781 and 11292.
|
# ¿ Feb 18, 2017 02:30 |
|
eames posted:Asus motherboard leak seems to confirm DDR4 ECC support

Fun, if you have ancillary stuff.
|
# ¿ Feb 20, 2017 20:15 |
|
How much overlap is there between the SATA/USB ports and the PCIe slots? On my current board, using the NVMe slot, for instance, disables the x4 PCIe slot. One x1 PCIe slot on the southbridge also goes away if I use certain SATA ports.
|
# ¿ Feb 20, 2017 21:26 |
|
--edit: ^^^ An urban legend? Really? I thought it was proven that it does happen?

Klyith posted:e: since the memory corruption can possibly flip more than one bit, and ECC can only handle 1-bit errors.

If your system isn't detecting the rowhammer attack, multiple attempts eventually work, but current and near-future hardware has protections against this type of attack without the need for ECC.
|
# ¿ Feb 25, 2017 01:13 |
|
Anime Schoolgirl posted:The scuttlebutt of low bandwidth RAM is that it's much harder to find low latency 2400 than it is to find average latency 3200

I got DDR4-3000 CL15 sticks and run them at 2400, lowered the CL proportionally to 12 (same for the other main timings). Works fine. Could probably do 11.
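Scaling CL proportionally with the transfer rate keeps the absolute latency constant, which is why DDR4-3000 CL15 downclocked to 2400 CL12 behaves the same. A quick back-of-the-envelope check (the formula is standard DDR arithmetic, not anything specific to these sticks):

```python
def cas_latency_ns(transfer_rate_mts, cl):
    """Absolute CAS latency in nanoseconds.

    One clock cycle lasts 2000 / rate(MT/s) ns, because DDR
    transfers twice per clock, so CL cycles take cl * 2000 / rate ns.
    """
    return cl * 2000 / transfer_rate_mts

# DDR4-3000 CL15 and DDR4-2400 CL12 have identical absolute latency:
print(cas_latency_ns(3000, 15))  # 10.0 ns
print(cas_latency_ns(2400, 12))  # 10.0 ns
# CL11 at 2400 would actually be slightly tighter:
print(cas_latency_ns(2400, 11))  # ~9.17 ns
```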
|
# ¿ Feb 27, 2017 04:06 |
|
Cardboard Box A posted:This is not borne out by the OBS tests https://obsproject.com/forum/threads/comparison-of-x264-nvenc-quicksync-vce.57358/
|
# ¿ Mar 5, 2017 04:36 |
|
eames posted:PCGH has some interesting graphs on 720p core scaling, too bad they didn't add min fps. Quadcores aren't looking so hot.
|
# ¿ Mar 5, 2017 23:34 |
|
Anyone care to explain to me why a purely CPU- and memory-bound benchmark like Cinebench shows similar multithreaded results, with only a tiny edge for the Intel in single-thread, yet two of the games show a considerable framerate advantage?
|
# ¿ Mar 7, 2017 03:19 |
|
How exactly can you get NVMe performance wrong? Isn't it just shoveling data back and forth over PCIe?

ufarn posted:Another benchmark showing 30% DX12 gains:

Combat Pretzel fucked around with this message at 22:33 on Apr 1, 2017 |
# ¿ Apr 1, 2017 22:30 |
|
So even if typical IOs are chunkier than 4KB, mix in fragmentation and there's potential for them being split into more actual IOs. The NTFS cluster size is 4KB.
|
# ¿ Apr 2, 2017 17:16 |
|
Truga posted:All the files are 100% "fragmented" all the time on every decent SSD due to internal wear balancing shenanigans.
|
# ¿ Apr 2, 2017 18:29 |
|
Reading, say, 1MB will always end up as 256 requests of 4KB regardless. The problem is when the data is scattered all over the place. As I've just mentioned, the internal block size is rather large. If all these requests happen to be located on a single one of these 2MB slabs (say, a sequential read, or winning the block allocation lottery), it'll get read into RAM once and the SSD can service all the requests from cache. If the 256 blocks are all over the place, it has to keep reading 2MB slabs to get all the data. The absolute worst case is that it'd need to read 512MB of data from the NAND to service that 1MB request from the app (256 requests touching 256 distinct 2MB internal slabs).
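The best/worst-case arithmetic above can be sketched like this; the 2MB internal slab size is the figure assumed in the post, not a universal constant for every drive:

```python
SLAB = 2 * 1024 * 1024      # assumed internal NAND block ("slab") size: 2 MB
REQUEST = 1024 * 1024       # 1 MB app read
BLOCK = 4096                # 4 KB per request
N_REQUESTS = REQUEST // BLOCK  # 256 requests

def nand_bytes_read(slabs_touched, slab_bytes=SLAB):
    """Bytes pulled from NAND if every slab touched is read in full."""
    return slabs_touched * slab_bytes

# Best case: all 256 blocks sit on one slab -> one 2 MB slab read.
best = nand_bytes_read(1)
# Worst case: every 4 KB block lives on a different slab -> 256 slab reads.
worst = nand_bytes_read(N_REQUESTS)

print(best // 2**20, "MB")   # 2 MB
print(worst // 2**20, "MB")  # 512 MB, i.e. 512x read amplification
```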
|
# ¿ Apr 2, 2017 18:36 |
|
That buffering thing would only be valid for someone using the Intel RST drivers. If you're running MSAHCI, for all intents and purposes, the Intel controller just sees the OS communicating with something via PCIe.
|
# ¿ Apr 2, 2017 19:29 |