|
I remember my old Q6700 computer. Bought it when I knew nothing about computers way back when, paid like 2300. The thing also had a 9800GTX+ and an explosive Huntkey V500 power supply that likely ended up being the death of my motherboard. Had a 1tb HDD which BLEW my mind at the time. Ran XP on the bastard until 2011.
|
# ¿ May 2, 2014 13:10 |
|
ShaneB posted:So is there any word on whether or not the new CPUs will work on the previous generation of motherboard? New CPUs won't work in Z87, only old Haswells. Everything LGA1150 will work in Z97, up until and including Broadwell. Skylake is going to be LGA1151 or something I believe.
|
# ¿ May 6, 2014 00:52 |
|
ShaneB posted:Is it possible with a BIOS upgrade? Supposedly no, but people have managed to get overclocking working on b85/h87 before, so I'm guessing people will try to find a way.
|
# ¿ May 6, 2014 06:42 |
|
Broadwell is guaranteed to be at least "unofficially supported" by Z97, similar to the situation with Devil's Canyon/Refresh and Z87. I'm guessing this means we will see a Z107 or similar as the launch platform for Broadwell.
|
# ¿ May 24, 2014 11:26 |
|
If you are going to be using custom water, why not buy an EK Supremacy (The Best Waterblock™) and then use the Precisemount Naked Ivy addon to mount directly? Saves buying a motherboard you don't want just for the guard, and it's easier than trying to buy a resold one.
|
# ¿ Jun 9, 2014 22:34 |
|
Shaocaholica posted:That's not really the same thing. To be fair, if he is using custom water it serves the same purpose and is only 4 dollars for the mounting kit. Won't work for air coolers or whatever, but who uses air coolers with 400 dollar motherboards and delidded CPUs anyway?
|
# ¿ Jun 10, 2014 01:14 |
|
The Lord Bude posted:I sate my itch by posting in the PC part picking thread. The best way to scratch the itch I've found is to build computers for everyone I know (like 20k AUD worth this last year). In lull periods though I usually do build lists for myself and then convince myself I don't need it.
|
# ¿ Aug 15, 2014 04:00 |
|
The biggest thing I got from those benches is how much of a hideous bottleneck the AMD APUs are in every single game. In some tests it is the difference between 60fps and 40fps. And the A10-7850K costs more than an i5.
|
# ¿ Aug 31, 2014 02:26 |
|
To be fair, you can't actually use SLI alongside an M.2 PCIe 3.0 x4 SSD (like the Samsung) on LGA1150 without a PLX chip. Honestly, if you really had to do it, the cheapest option would be a 4690K/4790K alongside an ASRock Extreme9 (the only motherboard I know of with both a PCIe 3.0 x4 M.2 slot and a PLX chip, and also a good 200 dollars cheaper than other PLX boards). But you shouldn't.
|
# ¿ Aug 31, 2014 09:25 |
|
The Lord Bude posted:The Samsung drive is PCIe 2.0 x4, not 3.0, and you can get an adapter for like $20 to plug it into a PCIe 2.0 x4 slot which is fairly common on just about every ATX sized board. A lot of this is wrong. The XP941 is 3.0; it just wasn't marketed as such because until ASRock did 3.0 M.2 it wasn't an option. See Anandtech's review of the Extreme6, which specifically addresses this. Also it isn't possible to SLI using 3.0 x4, Titan Zs or otherwise, as Nvidia disables SLI under x8. You're right about A. the Extreme9 being a huge amount of motherboard for the money, and B. normal SATA SSDs on a cheaper board making way more sense.
|
# ¿ Aug 31, 2014 11:07 |
|
Alereon posted:The Samsung XP941 is not PCI-Express 3.0 quote:"one of the few native PCIe 3.0 x4 drives in OEM circulation, the Samsung XP941" To be clear, I'm not recommending the drive at all here, I just wish to be as factual as possible. (I love your work in the SSD thread)
|
# ¿ Aug 31, 2014 13:47 |
|
A confusingly worded Anandtech review then, especially considering that later on they question the performance of the drive compared to the 3.0 x4 theoretical maximum. (Also I'm sorry for correcting you on that Bude, my mistake). I think the moral of the story is that there is a variety of crazy ways to strap PCI-E devices to a system, and it's all fricking fascinating. Thanks for clearing that up Alereon.
|
# ¿ Aug 31, 2014 14:08 |
|
I don't think we should stop recommending ASRock based entirely on speculation and anecdotes, ignoring the multitude of objective reviews showing them to be good, excellent even. The only spec in that review where the ASRock is not fantastic is a single audio benchmark, where it is middle of the pack instead of the absolute best (like it is for DPC latency). The added connector for PCI-E power is not an indicator of PCI-E trace quality, it's a feature. It's useful for stability when you have a whole pile of add-in cards each requesting 75W from the slot. It's entirely optional. You know what else has one? The ~Asus~ Rampage V Extreme super ultra gamer edition.
|
# ¿ Sep 1, 2014 07:13 |
|
Alereon posted:Like I said above, the point is that the quality of isolation of the audio traces tells you a lot about the quality of the board in general. That's a board loaded with typically higher-end features, but with a mid-range price corners have been cut to get there. Some of those, like the limited number of VRM phases, are pretty reasonable. Poor quality trace layout and isolation, which you see as that poor THD+N result, is not. The difference between "good" and "crap" is making the right tradeoffs, and lower quality for more features is almost never the right trade. You are making some interesting jumps in logic here. While the audio results aren't fantastic, they certainly aren't terrible. The isolation of audio components doesn't depend just on the quality of the traces, but also on how they are routed, nearby components, shielding, etc. By the logic of inferring poor tracing solely from that result, we shouldn't recommend boards from MSI, EVGA, Gigabyte and ASUS either, as they all sell motherboards that post worse distortion figures than the Extreme6. I also find your comment about low end boards and molex connectors amusing, considering that the aforementioned Rampage V Extreme uses a molex connector. Anecdotally, I remember it being said of Asus and ASRock that they both choose molex for auxiliary power because it is more commonly a spare connector on modern power supplies, compared to PCI-E connectors which are commonly either all used, or on long dual connectors for graphics cards. Like I said earlier, the additional power connector is a feature (ignoring lovely outlying Gigabyte boards) that is popular amongst enthusiasts. It is definitely not needed for normal operation on the ASRock boards, as noted by reviewers and also from my own anecdote (my friend runs an Extreme4 with two 780s on it without the connector, merely because he did not want to route it).
It is commonly used in one of three situations: 3 or 4 card setups, insane GPU overclocking/volting situations, and mining setups with a large number of cards without PCI-E connectors that stress the PCI-E slot power draw immensely (think 6 or 7 750 Tis). Even in 99% of those situations it is merely placebo. It certainly is popular in crowds such as [H]ardForum, overclock.net etc, and I'd be willing to bet that ASRock gets more sales because of it. EDIT: I forgot to address your comment on power phases. If you could expand on this, that would be good. It is an interesting comment considering that the Extreme6 has a 12 phase design with good Nippon Chemi-Con caps rated at 12k hours. The equivalent Asus board, the Z97-A (which is more expensive), has an 8 phase design with 10k hour rated caps. BurritoJustice fucked around with this message at 10:05 on Sep 1, 2014 |
# ¿ Sep 1, 2014 09:46 |
|
GokieKS posted:Ugh, seems like Dragon Age: Inquisition just flat out refuses to work on a dual-core machine regardless of how fast those cores may be. So my plan of just using this G3258 to tide me over until ASUS releases a mATX X99 board (which doesn't seem like it's going to happen until next year at the earliest) is coming back to bite me. Even if you are dead set on X99 (no reason to be for gaming), why not just get an X99 mATX board from any other manufacturer? Bonus points because you won't have to pay the ASUS tax. Don't buy a Rampage V Extreme, there is no justification for that ever. The logical solution is to just grab a 4690K if you need an upgrade; it will swap right in and be absolutely enough for every game out. edit: Dunno what Gravitas is saying though. 5xxx CPUs aren't worse for gaming than 4xxx, and it certainly isn't Intel being misleading. The base clocks are lower, sure, but even an average 5960X will hit 4.5GHz with proper cooling. The extra cores might not help in most games, but they won't hinder, and you'll hit the same clocks as a 4790K within margin of error. BurritoJustice fucked around with this message at 06:20 on Nov 19, 2014 |
# ¿ Nov 19, 2014 06:18 |
|
GokieKS posted:While I don't necessarily *need* it, I do, in fact, want Haswell-E. This machine will be doing a decent amount of video editing and encoding in addition to gaming. Killer NIC on that one, forget it.
|
# ¿ Nov 19, 2014 06:32 |
|
Welmu posted:It's also possible to analyze this noise and use it to extract RSA keys. Can anyone explain to me why the "top" (at least model number wise) part is a 4 core 140W part? Seems a bit insane; the frequency difference is relatively minor compared to the 18 core part.
|
# ¿ Feb 13, 2015 13:02 |
|
mayodreams posted:A 22% increase in clock speed is not a minor difference. Some workloads are not heavily threaded, so clock speed becomes king, and a 140W TDP 4 core proc will get you better results than an 18 core with 25W more TDP that will throttle cores to maintain that TDP max. But Intel ships 4.4GHz quad cores at 88W TDP, and the only differences I can see are the extra cache and QPI links, which can't make up the gap. Looking at the clock speeds again, I should've compared to the 10 core: 6 more cores and 12.5% lower clocks for only a 25W delta?
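As a back-of-envelope sketch of why that delta looks odd — a naive cores × clock ceiling, not a real benchmark, and the quad's clock is just the 4.4GHz figure thrown around above:

```python
# Naive aggregate throughput = cores x clock, ignoring IPC, memory and
# power limits entirely. Figures are illustrative, from the discussion above.
def naive_throughput(cores, clock_ghz):
    """Total core-GHz: a very rough ceiling on multithreaded throughput."""
    return cores * clock_ghz

quad = naive_throughput(4, 4.4)               # a 4.4GHz quad core
ten_core = naive_throughput(10, 4.4 * 0.875)  # 10 cores at 12.5% lower clocks
print(quad, ten_core)
```

Even this crude math puts the 10-core at over double the aggregate throughput for 25W more, which is why a 4-core flagship at 140W seems so strange.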
|
# ¿ Feb 14, 2015 00:23 |
|
r0ck0 posted:The Asus Maximus VII Hero just got a BIOS update: It would have to be USB 3.1 over a PCIe card or similar, as USB 3.1 requires a whole new physical controller which is only just coming out on some MSI and ASRock boards.
|
# ¿ Feb 19, 2015 01:05 |
|
Darkpriest667 posted:If USB 3.1 has a different plug and controller it will take forever for adoption to occur. Everyone remember firewire? USB C (the new plug everyone is hyping) is completely optional and in fact independent of USB 3.1. There will be USB A (the old rectangle hole) 3.1 ports, and they'll likely be around for a long time to come. The controller is only different in the sense that a USB 3.0 controller can't run things at 3.1 speeds, just as a USB 2.0 controller can't run things at 3.0 speeds. Sure it is a burden in the meantime, but the industry is used to periodically adding a new USB standard. I have a feeling that USB 3.1 will replace 3.0 while USB 2.0 sticks around in parallel with 3.1, instead of the usual pattern where each generation coexists with the one before it.
|
# ¿ Feb 19, 2015 23:00 |
|
Panty Saluter posted:My 4670K (at 4.2 gHz) will tickle 80C but only if I'm stretching it (benchmarking or video encoding). Games only hit 60ish. I have a Noctua D15 (something like that) with one fan. Would the optional second fan help temperatures noticeably if I start pushing it a lot? That sounds awfully high for what is the most overkill air cooler you can buy.
|
# ¿ May 30, 2015 06:50 |
|
All those years AMD worked on APUs, and Intel just up and doubles their graphics performance in one release. Impressive, especially the power consumption figures.
|
# ¿ Jun 2, 2015 13:36 |
|
Skylake is out in Aus and the CPUs and motherboards are hilariously expensive. $529 for a 6700K, and the motherboards are X99 level prices. I mean, an Asus Z170-Deluxe for 529 loving dollars, or the comedy option Gigabyte Z170X-Gaming for $800.
|
# ¿ Aug 5, 2015 13:20 |
|
Anime Schoolgirl posted:The bins must be incredibly inconsistent then since a lot of people have actual trouble getting past 4.0-4.1. Every data point I've seen has put 4.5 as an average chip, 4.6 as a good one and 4.4 as a crappy chip. People not getting past 4.0 are doing something wrong.
|
# ¿ Aug 22, 2015 01:27 |
|
Palladium posted:Preliminary analysis from a value hunter perspective: Turbo bins require much less thought once you realise that every motherboard worth buying includes some form of MCE.
|
# ¿ Sep 2, 2015 14:06 |
|
Twerk from Home posted:What does this acronym mean? I'm assuming it means "can run at full turbo bins all the time as thermals allow", basically ignoring the TDP if you have sufficient cooling. Can this be done with a non-K CPU and non-Z motherboard? If that's a switch I can flip on an H170 board with a 6600, then there's a value king. MCE = Multi Core Enhancement. Basically it is an alternate turbo behaviour that runs every core at the max turbo bin. And yes, it is on H series motherboards and can be used with non-K SKUs.
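A sketch of what that alternate behaviour means, using a made-up bin table (the multipliers are hypothetical, not any real SKU's):

```python
# Hypothetical per-active-core turbo multipliers for some 4-core CPU:
# 1 active core -> 42x, 2 -> 41x, 3 or 4 -> 40x.
STANDARD_TURBO = {1: 42, 2: 41, 3: 40, 4: 40}
BCLK = 100  # base clock in MHz

def clock_mhz(active_cores, mce=False):
    """Effective clock under standard turbo vs Multi Core Enhancement."""
    # MCE pins every core at the single-core (highest) turbo bin.
    mult = STANDARD_TURBO[1] if mce else STANDARD_TURBO[active_cores]
    return mult * BCLK

print(clock_mhz(4))            # stock all-core turbo
print(clock_mhz(4, mce=True))  # MCE: all cores at the 1-core bin
```

So MCE is a free all-core clock bump as long as cooling (and the VRMs) can keep up, which is why it ignores TDP in practice.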
|
# ¿ Sep 2, 2015 23:25 |
|
necrobobsledder posted:Wasn't there a guy either in this thread or another one in SH/SC that wound up buying a Xeon Phi and running some benchmarks for his workloads at home? That was "No Gravitas" if I remember correctly. It was a cool series of posts.
|
# ¿ Oct 16, 2015 14:37 |
|
Malloc Voidstar posted:Would the rumored i7-6850k (6 cores @ 3.6GHz) work well in gaming vs an i7-2600k @ 4.0GHz? The increase in IPC between Sandy Bridge and Broadwell is greater than the difference in clock speed, so even if you don't overclock the Broadwell-E CPU (you should) you won't lose per-core performance (which is what matters in games).
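Rough arithmetic, assuming a cumulative Sandy Bridge → Broadwell IPC uplift of around 15-20% (the exact figure is an assumption and varies by workload):

```python
# "Effective GHz" = clock x relative IPC, normalised to Sandy Bridge = 1.0.
IPC_VS_SANDY = 1.17  # assumed cumulative Broadwell uplift (workload dependent)

sandy_2600k = 4.0 * 1.00          # 2600K overclocked to 4.0GHz
broadwell_e = 3.6 * IPC_VS_SANDY  # rumoured 6850K at stock 3.6GHz
print(sandy_2600k, round(broadwell_e, 2))
```

Under that assumption the Broadwell-E part comes out slightly ahead per core even at stock, before any overclocking.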
|
# ¿ Nov 17, 2015 09:11 |
|
Twerk from Home posted:Woo. Anybody who wants >4 cores right now should still be putting Haswell-E on one of the nice new X99 boards, right? BW-E really looks more expensive for the same performance. Broadwell-E is actually basically worse than Haswell-E anyway, because the IPC increase isn't enough to offset the significantly worse overclocking. The E chips are overclocking about as well as the 1150 socket Broadwells, which is to say the worst overclocking headroom of any architecture in the last 5 years. If someone offered me a 5960X or a 6900K I'd take the Haswell all day long.
|
# ¿ May 31, 2016 15:00 |
|
PerrineClostermann posted:What exactly is NVMe, anyway? It's a storage protocol (think AHCI or IDE) designed to handle both the increased throughput of PCI-E and the random access speeds of SSDs. It's distinct from PCIe/SATA (interfaces) and M.2/SATA (connectors). Yes, it's all a bit of alphabet soup.
|
# ¿ Jul 21, 2016 05:10 |
|
BIG HEADLINE posted:With ~25% of 7600/7700Ks making 5GHz, I really wonder what Kaby-X's 'secret sauce' is going to be - 5GHz native plus Sky-X's PCIe lanes? Where did you get 25% from? SiliconLottery is quoting 62% for 7700Ks.
|
# ¿ Jan 17, 2017 08:02 |
|
Anime Schoolgirl posted:things can go up to 5ghz in prime but crash and burn in variable workloads such as "running photoshop" and playing video games Would it not be in their better interest to quote a lower figure, making their "guaranteed 5GHz+" CPUs rarer and therefore more desirable compared to rolling the dice on a retail one? If it was 1/4 they wouldn't be selling 5GHz 7600Ks for only $10 above retail. Also, the idea of something being stable in Prime95 and unstable otherwise is completely counter to the usual goon groupthink of "don't bother with Prime95 stability as it is a way higher load than your CPU will see in literally any other situation". Not to mention the shitloads of reviews and anecdotal reports you can find all over the internet of people getting 5GHz easily in the 1.3-1.4V range. E: Also, why would we put more weight on "I heard this on the internet unsourced" over a company that exists solely to buy CPUs in bulk and bin them, getting objective data on average overclockability? BurritoJustice fucked around with this message at 22:17 on Jan 17, 2017 |
# ¿ Jan 17, 2017 22:14 |
|
Multi-rail designs are perfectly fine as long as the rails are big enough for your usage. Single-rail PSUs are simpler and easier, but there is nothing inherently wrong with a multi-rail PSU and you shouldn't throw yours out just for that. OklahomaWolf of JonnyGuru fame famously prefers multi-rail designs, even if they have all but disappeared from the market.
|
# ¿ Mar 5, 2017 02:52 |
|
Jago posted:It's interesting, the consoles have 8 cores, but even the new scorpio in the xbox is otherwise anemic in the horsepower department in terms of single threaded performance. 2.3GHz Jaguar, not Ryzen, isn't impressive. (Though it is running GDDR5 RAM) Consoles having lots of hot garbage cores means that even more effort will have to be put into multithreaded optimisation. If they were eight even average cores, developers could coast on mostly using the first four, but they are working with what are basically ULV laptop chips so they really need to squeeze everything they can.
|
# ¿ Apr 18, 2017 02:36 |
|
Scorpio is looking to be neat and I can't wait for all the "no but see if you buy this $30 Xeon and this old bitcoin Fury with a dollar store PSU and a cardboard box you can smash the Scorpio on performance and price!" builds on r/PCMR. They always get whipped up to prove consoles are never worth buying, ever.
|
# ¿ Apr 18, 2017 02:45 |
|
To be fair, a Skylake-E 6 core and a Coffee Lake 6 core wouldn't conflict as much as you'd think. LGA2066 gets you quad channel memory and massive amounts of PCI-E lanes, while LGA1151 gets you better clocks and an iGPU. They'll probably be roughly the same price, as with the 7700K/6800K, ignoring platform cost differences, which lines them up as meaningful sidegrades depending on use case. The platform features are almost all dependent on chipset, and I'm sure Intel will line them up so there won't be a big delta like there is with X99/Z270 right now. drat I'm excited for Coffee Lake to hopefully hit the best of both worlds of the 7700K and Ryzen/HEDT Intels.
|
# ¿ Apr 23, 2017 04:40 |
|
If it carries over the better DDR4 controller of mainstream Skylake we'll probably see DDR4-3866+. Haswell-E got to 3466 with quad channel on the highest end boards so it wouldn't be too unreasonable of a jump.
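For scale, peak theoretical bandwidth is transfer rate × 8 bytes per transfer per channel × channel count (speeds are the ones in the post):

```python
def peak_gb_s(mt_per_s, channels):
    """Peak DDR4 bandwidth in GB/s: 64-bit (8-byte) transfers per channel."""
    return mt_per_s * 8 * channels / 1000

print(peak_gb_s(3466, 4))  # Haswell-E quad channel at DDR4-3466
print(peak_gb_s(3866, 4))  # speculated DDR4-3866 in quad channel
```

So the jump from 3466 to 3866 in quad channel would be worth roughly another 13 GB/s of theoretical headroom.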
|
# ¿ May 15, 2017 06:40 |
|
Actuarial Fables posted:Who are the people buying an m-ITX motherboard then sticking it in a case large enough to fit 2+ GPUs? Founders 1080Ti x2 + EK waterblocks with single slot brackets + two slot bridge to fit two cards into the two slots that all ITX cases have? Of course there is the issue of aligning the graphics cards with the standard slots with the addition of the riser, so case modification would still be required for most cases. Should work with the ones with a separate GPU mount connected by a riser though. Palladium posted:Meanwhile in the Asrock exec meeting room: AM4 ITX? Who cares?
|
# ¿ May 29, 2017 15:42 |
|
MaxxBot posted:I guess in one Intel slide at Computex they were touting the i9 for "12k gaming." I mean come the gently caress on you can't even do 8k properly with SLI 1080 Tis, even 5k is pushing the limits of practicality in modern games. "12K" as is commonly marketed is an incorrect way of naming 3x4K multi-monitor (11,520x2,160), so it is actually less than "8K" as properly marketed (7,680x4,320). But then again "4K" is actually 4096x2160 not 3840x2160, so whatever, hail satan
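The pixel math bears this out (a quick sanity check using the resolutions in the post):

```python
# Marketing "12K" = three 4K panels side by side, vs a single true 8K panel.
triple_4k = 3 * (3840 * 2160)  # 11,520 x 2,160 surround
true_8k = 7680 * 4320          # 8K UHD
dci_4k = 4096 * 2160           # actual DCI "4K", vs 3840x2160 UHD

print(triple_4k, true_8k)
print(triple_4k < true_8k)     # "12K" is fewer pixels than 8K
```

Three 4K panels come to about 24.9 million pixels against 8K's 33.2 million, so "12K gaming" is a lighter load than real 8K.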
|
# ¿ Jun 10, 2017 05:52 |
|
|
Just a comment on the der8auer Skylake-X overclocking video, because a few people commented saying they distrust him since Intel is providing him with so many expensive CPUs to bin. He works for Caseking.de, which bins Intel CPUs and sells them for more than retail (think SiliconLottery.com). That's where he got the piles of Skylake-X CPUs. As with SiliconLottery, assuming they were being malicious, it would be in their best interest to understate the average overclockability of retail processors so that their own binning service looks more appealing. He even states in the video that 5GHz is a binned chip and standard chips are closer to 4.8GHz. As with SiliconLottery last time this was discussed, I think it is logical not to immediately distrust his stats, even if no weight is given to his reputation in the overclocking community. E: My personal guess is that with the 14nm+ process of Kaby and the IVR, the chips will have fantastic voltage scaling up to 5GHz but will be heavily temperature limited, making it unfeasible for most. The higher core count chips will cut a few hundred MHz off that, as with past HEDT platforms. BurritoJustice fucked around with this message at 18:56 on Jun 14, 2017 |
# ¿ Jun 14, 2017 18:45 |