|
fishmech posted: and then 10% "people might use this to rip blu ray movies".

dang that would have been cool to do tbh. having linux on a ps3 just sounds overall like a ton of fun and im sad i can't do that
|
# ? Apr 18, 2019 01:56 |
|
|
it wasn't all that good in practice because linux couldn't use the gpu. software rendering is no fun
|
# ? Apr 18, 2019 02:03 |
|
Statutory Ape posted: dang that would have been cool to do

bluray crypto was compromised for years before this happened. using linux on the ps3 sucked because you only had access to 192 megs of system ram, the main cpu and io were really slow, and there was no gpu access, probably to avoid people making unlicensed games for the system. someone did find a way to use the gpu under linux at some point, but it was patched out in the next firmware revision.
|
# ? Apr 18, 2019 02:07 |
|
Wasn't the whole point of the linux thing to use it as a bargaining chip to keep homebrew people from widely releasing copyright cracks? Like "we know you can crack this system, but we gave you linux, so don't release the crack to Johnny P. Fucko and we will let you keep it"
|
# ? Apr 18, 2019 02:23 |
|
back in the mid-naughts, ibm and sony seemed to really believe that the cell would be this amazing new processor that would take over the world, and wanted to get it out there for people to use. ibm's cell-equipped blade servers cost somewhere in the neighborhood of $20k each, so letting people use linux on the ps3 was seen as an ideal way for students and others of lower means to get experience with the processor of the future. except it only took a few years for ibm to go "whelp, this thing has no future" and give up on it.

i think the linux thing was also a tax dodge in certain countries, since it got the ps3 classified as a computer instead of a video game machine
|
# ? Apr 18, 2019 02:41 |
|
The_Franz posted: back in the mid-naughts ibm and sony seemed to really believe that the cell would be this amazing new processor that would take over the world and wanted to get it out there for people to use. ibm's cell-equipped blade servers cost somewhere in the neighborhood of $20k each, so letting people use linux on the ps3 was seen as an ideal way for students and others of lower means to get experience with the processor of the future. except it only took a few years for ibm to go "whelp, this thing has no future" and gave up on it

Yeah, I have a Cell CPU powered blade for my IBM Bladecenter. It's nothing great
|
# ? Apr 18, 2019 02:59 |
|
The wikipedia page on Cell still has a lot of that mid-naughts hype. lol at how all the talk about possible future applications just ends around 2007-2008 though
|
# ? Apr 18, 2019 19:41 |
|
repiv posted: it wasn't all that good in practice because linux couldn't use the gpu

I can confirm that, having had the PS3 as my main system for a period (it was the reason I bought it at launch, actually... I had no computer and that seemed a decent compromise). I could tolerate it for a few months because linux was good enough for browsing, barely. I am glad they removed it, frankly; it was piss poor. if I was in the USA I might've gone for the class action to get some money back
|
# ? Apr 18, 2019 22:04 |
|
TorakFade posted: I confirm that, having had the PS3 as my main system for a period (it was the reason I bought it at launch, actually.. I had no computer and that seemed a decent compromise) and could tolerate it for a few months because linux was good enough for browsing, barely

I wasn't being hyperbolic when i used the smilie for the settlement amount. It was exactly $10.07 USD.
|
# ? Apr 18, 2019 23:23 |
|
TheFluff posted: The wikipedia page on Cell still has a lot of that mid-naughts hype

It wasn't like they were totally on an island by themselves with the Cell; remember that bulldozer had the weird off-balance 1 ALU / 2 FPU design as well. Even though bulldozer FPUs were not the restricted and difficult Cell SPEs, there was still the idea that the multimedia future was going to call for lots of FLOPS. Turned out not to be the case: magic compilers that extract parallelism still don't exist, and getting programmers to write code for your gimmicks is a tough sell.

edit: oh welp, I totally got bulldozer backwards. this was a dumb post

Klyith fucked around with this message at 16:13 on Apr 19, 2019 |
# ? Apr 19, 2019 05:08 |
|
Klyith posted: It wasn't like they were totally on an island by themselves with the Cell, remember that bulldozer had the weird off-balance 1 ALU / 2 FPU design as well. Even though bulldozer FPUs were not the restricted and difficult Cell SPEs, there was still the idea that the multimedia future was going to call for lots of FLOPS.

Other way around: 1 FPU per 2 integer clusters. Itanium and Larrabee are much better comparisons
|
# ? Apr 19, 2019 09:11 |
|
Klyith posted: It wasn't like they were totally on an island by themselves with the Cell, remember that bulldozer had the weird off-balance 1 ALU / 2 FPU design as well. Even though bulldozer FPUs were not the restricted and difficult Cell SPEs, there was still the idea that the multimedia future was going to call for lots of FLOPS.

Sure, it's just funny to me how the article still has this very optimistic tone about the future of the Cell despite the fact that it was pretty much discontinued ten years ago.
|
# ? Apr 19, 2019 12:06 |
|
The_Franz posted: I wasn't being hyperbolic when i used the smilie for the settlement amount. It was exactly $10.07 USD.

Still better than the big fat $0.00 I got, and that thing also YLOD'd two months after the warranty expired (2 years). Luckily by then I could get a slim for relatively cheap, but still... worst console purchase I ever made.

Speaking of which, Ryzen 3600x 8 core @ 4.5GHz when? I kind of want to upgrade. now that we know next gen consoles will have 8 cores at least, I'd love to get 8 myself in order to "future-proof", and I "only" have a 6 core 2600x
|
# ? Apr 19, 2019 13:42 |
|
Computex at the earliest.
|
# ? Apr 19, 2019 22:23 |
|
If we don't get any new info on Zen 2 (specs/price/release date) at Computex I will start to worry.
|
# ? Apr 19, 2019 22:50 |
|
Would it be safe to say that DDR5 won't be supported until Zen 2's successor?
|
# ? Apr 20, 2019 00:06 |
|
ConanTheLibrarian posted:Would it be safe to say that DDR5 won't be supported until Zen 2's successor?
|
# ? Apr 20, 2019 00:13 |
|
Wouldn't DDR5 require a different socket? Or at a bare minimum a different motherboard, because DDR5 is probably going to be keyed differently even if it has the same number of contacts?
|
# ? Apr 20, 2019 01:05 |
|
Indiana_Krom posted: Wouldn't DDR5 require a different socket? Or at a bare minimum a different motherboard, because DDR5 is probably going to be keyed differently even if it has the same number of contacts?

It'll require a different motherboard because the RAM itself will likely have a different socket and pinout. It'll also require a different memory controller on the CPU. It won't necessarily require a different CPU socket, though, and if AMD includes both a DDR4 and a DDR5 memory controller on Zen 3 chips, they could still be backward-compatible with previous AM4 motherboards. There's some precedent - I believe Skylake included DDR3 and DDR4 controllers?
|
# ? Apr 20, 2019 01:32 |
|
spasticColon posted: If we don't get any new info on Zen 2 (specs/price/release date) at Computex I will start to worry.

Same, my goddamn computer is almost 7 years old and I'm eyeballing a 9900k real hard. It will be disappointing if it's only 8C/16T though, and I fear not competitive at the high end either.
|
# ? Apr 20, 2019 03:45 |
|
ItBurns posted: Same, my goddamn computer is almost 7 years old and I'm eyeballing a 9900k real hard. It will be disappointing if it's only 8C/16T though and I fear not competitive at the high end either.

My system is almost 8 years old, so if Zen 2 is a delayed wet fart I'll just build a system around a 2700X, because I want 8C/16T for parity with the next-gen consoles.
|
# ? Apr 20, 2019 04:39 |
|
Stickman posted: It'll require a different motherboard because the RAM itself will likely have a different socket and pinout. It'll also require a different memory controller on the CPU. It won't necessarily require a different CPU socket, though, and if AMD includes both a DDR4 and a DDR5 memory controller on Zen 3 chips, they could still be backward-compatible with previous AM4 motherboards. There's some precedent - I believe Skylake included DDR3 and DDR4 controllers?

Indeed. But it wouldn't even be the first time that a motherboard supported multiple flavors of DDR. From Intel, no less:

https://www.gigabyte.com/Motherboard/GA-P35C-DS3R-rev-21#ov
https://www.asus.com/us/Motherboards/P5G41CM_LX/

And then of course, you know that there are all manner of monstrous creations in ASRock's history, so of COURSE they have motherboards that supported multiple types of RAM. But the key here is that they will only support one or the other at any given time, and you will not be able to mix types.
|
# ? Apr 20, 2019 06:18 |
|
Khorne posted: No one knows if it will be DDR5 or not yet. If it isn't, there's a decent chance it will be AM4.

AM4 flat out can't support DDR5, AMD has already said as much. AMD also tends to lag the industry a bit when supporting new memory standards. I doubt they're gonna rush to ditch AM4, with its cheap and "good enough" DDR4, for extremely expensive DDR5.

The DRAM OEMs are saying DDR5 isn't really expected to be a thing for consumers until 2021 at the earliest, and more realistically won't get mainstream acceptance/volume until well into (Q2 or Q3) 2022. If you look around you can find plenty of 1-2yr old articles saying DDR5 is coming in early to late 2019 and that the chips are done and demo'd and all that, but a completed design and a demo is very, very different from shipping a high volume of finished parts.

Part of the problem is the expected high price of DDR5 at launch (which will happen well before 2021, but about the only people who will be able to pay for it at those prices will be server guys), another is the expected technical challenges, but also continued slumping sales of PCs in general, which will slow adoption rates. Looks like the DRAM OEMs are gonna end up pushing DDR4-3200 into the mainstream to try and deal with the expected slow rollout of DDR5 for now.
|
# ? Apr 20, 2019 10:20 |
|
GPD is making a Ryzen-based mini gaming laptop: https://liliputing.com/2019/04/gpd-win-max-will-be-an-amd-ryzen-powered-handheld-gaming-pc.html
|
# ? Apr 20, 2019 15:41 |
|
RAM is finally affordable... Long live ddr4
|
# ? Apr 20, 2019 15:44 |
|
CommieGIR posted: GPD is making a Ryzen based mini gaming laptop

I would say Smach Z, but like that was ever going to actually ship, so instead I'll say Smach Z backers.
|
# ? Apr 20, 2019 19:42 |
|
Khorne posted: This hasn't been an issue since the IHS became a thing. Modern motherboards are also increasingly thick.

There were people cracking Skylake's PCB after it released. It's a thinner PCB than previous generations, and those people were probably also applying incorrect amounts of pressure and letting the cooler torque on the ILM, but it has actually been a thing recently.

Paul MaudDib fucked around with this message at 09:10 on Apr 23, 2019 |
# ? Apr 23, 2019 07:29 |
|
BangersInMyKnickers posted: Yeah, if you're okay with your servers being essentially disposable commodities or doing a lot of work to build your own automation tools then Supermicro can be the right choice but Dell and HP do a fair bit of work to make sure things are validated and you have the right management/recovery tools. I wouldn't be touching them unless I was running some kind of large-scale standardized infrastructure.

Even my NAS server, I had to go into BIOS to enable a custom "BMC DMA fixup" setting to get FreeBSD to boot. Took me a while to figure out that it might be a DMA issue, then to hunt down the option that fixed that. And that's on Intel.

That said, if no one has commented on the AM4 IPMI board that SuperMicro just released... it's basic but it's a start. No 10 GbE, no quad NICs, but it's a place to start.

Lambert posted: Optane as it relates to caching is nothing special (and pretty much obsolete), AMD has had a similar solution forever.

Yup, Optane is just NVMe unless you do DIMM-level support. It could potentially be a thing to provide a large, fast cache for stuff like open world games. You would have to specifically code your engine around the idea of a multi-level cache. Sony does exclusives and they could pull that off. Not highly likely, but there is a concept there.

Mister Facetious posted: What were the 6/7 stream processors of the Cell even supposed to be good for?

They have very high bandwidth and computational intensity if the data processing you need is a pipeline-like arrangement. There is a reason that PS3 takes a lot of horsepower to emulate. It actually is fast compared to a general-purpose uarch of its time.
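The multi-level cache idea above (a small, fast RAM tier in front of a bigger fast-storage tier, for streaming open-world assets) can be sketched in a few lines of C. This is purely illustrative: the names, tier sizes, and round-robin eviction are made up for the example, not any real console or Optane API.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define FAST_SLOTS 2 /* tiny RAM tier */
#define SLOW_SLOTS 8 /* larger fast-storage tier */

typedef struct {
    int id;        /* asset id, -1 = empty */
    char data[16]; /* stand-in for asset bytes */
} asset_t;

typedef struct {
    asset_t fast[FAST_SLOTS];
    asset_t slow[SLOW_SLOTS];
    int next_evict; /* round-robin eviction cursor for the fast tier */
} tiered_cache_t;

void cache_init(tiered_cache_t *c) {
    memset(c, 0, sizeof *c);
    for (int i = 0; i < FAST_SLOTS; i++) c->fast[i].id = -1;
    for (int i = 0; i < SLOW_SLOTS; i++) c->slow[i].id = -1;
}

/* Put an asset into the storage tier (as if it were preloaded there). */
void cache_store_slow(tiered_cache_t *c, int slot, int id, const char *data) {
    c->slow[slot].id = id;
    strncpy(c->slow[slot].data, data, sizeof c->slow[slot].data - 1);
    c->slow[slot].data[sizeof c->slow[slot].data - 1] = '\0';
}

/* Returns 0 on fast-tier hit, 1 on slow-tier hit (asset promoted), -1 on miss. */
int cache_fetch(tiered_cache_t *c, int id, char *out, size_t len) {
    for (int i = 0; i < FAST_SLOTS; i++)
        if (c->fast[i].id == id) { /* hit in RAM: cheapest path */
            strncpy(out, c->fast[i].data, len);
            return 0;
        }
    for (int i = 0; i < SLOW_SLOTS; i++)
        if (c->slow[i].id == id) { /* hit in storage tier: promote to RAM */
            int v = c->next_evict;
            c->next_evict = (v + 1) % FAST_SLOTS;
            c->fast[v] = c->slow[i];
            strncpy(out, c->slow[i].data, len);
            return 1;
        }
    return -1; /* full miss: would fall back to a slow disc/HDD load */
}
```

The point the post makes is the engine-design one: the game code has to know about the tiers and decide what to promote, which is why it would realistically only happen in an exclusive built around it.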
|
# ? Apr 23, 2019 09:11 |
|
https://www.youtube.com/watch?v=qU0V5OcHE_4 DF offers some speculation on next-gen console APUs.
|
# ? Apr 23, 2019 09:14 |
|
Does anyone know what the gently caress Gonzalo is? Another semicustom part? Or is that the -G parts that Adored was talking about? Seems too early for the PS5/XB2 but also too powerful for a regular APU. (20 CUs on 2ch DDR4 doesn't make sense to me but neither does Gonzalo, either in timing or configuration.) edit: just saw the above
|
# ? Apr 23, 2019 09:15 |
|
I've noticed a whole bunch of Zen+ laptop chips just came out; I wonder how they are vs Intel. Like Ryzen 3750H vs 9300H (also new from Intel, but it's just an 8300H+100MHz). Haven't seen any reviews/benchmarks yet.
|
# ? Apr 25, 2019 14:26 |
|
Are they still single stick slow speed RAM?
|
# ? Apr 25, 2019 14:31 |
|
If anyone has a threadripper with an Enermax Liqtech TR4 AIO, your AIO is probably dying as we speak. I was having thermal issues with my 280mm version and Steve just did a video on this as well. https://www.youtube.com/watch?v=HC1kzO_gIp4
|
# ? Apr 26, 2019 03:36 |
|
Mister Facetious posted: What were the 6/7 stream processors of the Cell even supposed to be good for?

Games had access to six, but they could really only count on the capacity of five and a half. They were good for quite a few things. The local memory wasn't that large at a glance, but it was blazing fast. It was like each SPU had 256KB of L1 cache. There were a number of workloads that suited them natively, things where a lot of math was needed (animation, audio, graphics setup). They were used to make up for deficiencies in other parts of the system (patching values in shader programs, because the RSX was from between GPU generations and didn't have things the later ones did, like constant registers). They could also be used to run general purpose code to offload the main processor -- decompressing assets at load time, etc. -- though they weren't as efficient at it.
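The "fast local memory plus streaming math" workload described above is usually written as a double-buffered loop: while a DMA transfer fills one local-store buffer, the SPU crunches the other. Here's a minimal sketch in plain C; `fake_dma_get` and the buffer size are stand-ins for the real Cell MFC DMA intrinsics, so treat everything as illustrative rather than actual SDK API.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define CHUNK 64 /* elements per local-store buffer */

/* Stand-in for an async mfc_get + completion wait on real hardware. */
static void fake_dma_get(float *local, const float *main_mem, size_t n) {
    memcpy(local, main_mem, n * sizeof *local);
}

/* The math kernel; here it just doubles each element. */
static void process_chunk(float *buf, size_t n) {
    for (size_t i = 0; i < n; i++)
        buf[i] *= 2.0f;
}

/* Streams src through two small "local store" buffers into dst.
 * On a real SPU, the next chunk's DMA would overlap with process_chunk. */
void stream_process(float *dst, const float *src, size_t n) {
    float local[2][CHUNK];
    size_t done = 0;
    int cur = 0;
    size_t cur_n = n < CHUNK ? n : CHUNK;
    fake_dma_get(local[cur], src, cur_n); /* prime the first buffer */
    while (done < n) {
        size_t next_off = done + cur_n;
        size_t next_n = 0;
        if (next_off < n) { /* kick off the "DMA" for the next chunk */
            next_n = (n - next_off) < CHUNK ? (n - next_off) : CHUNK;
            fake_dma_get(local[cur ^ 1], src + next_off, next_n);
        }
        process_chunk(local[cur], cur_n); /* compute on the current buffer */
        memcpy(dst + done, local[cur], cur_n * sizeof *dst); /* "DMA put" */
        done += cur_n;
        cur ^= 1;
        cur_n = next_n;
    }
}
```

The awkward part the thread keeps coming back to is that the programmer, not a cache, decides what lives in those 256KB and when: every transfer in this loop is explicit.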
|
# ? Apr 26, 2019 04:22 |
|
ehnus posted: The local memory wasn't that large at a glance but it was blazing fast. It was like each SPU had 256kb of L1 cache.

That's not just a semantics issue. From what I recall of some of the comments from the guys over at B3D that actually had to make games on the thing, that meant that pretty much all the memory management on the LSUs had to be done by the programmer by "hand" (compilers were supposed to help, since the SPUs supported branch hints, and did, but never well enough to make up for the deficiencies of the LSU or the lack of a proper branch predictor in the SPUs themselves), which was apparently quite difficult to do and was a major limiting factor in getting performance out of the SPUs, particularly for the first few years the PS3 was out.

Also, the LSUs' latency wasn't all that great (neither was the EIB's if you couldn't sustain a high level of concurrency with your workload, which made things worse), so any sort of cache misses or branch mispredicts were massively penalizing to performance (made even worse by the deeply pipelined and in-order nature of the SPUs). Performance on general code on the SPUs wasn't just inefficient, it was abysmal, and as a result pretty much anything that required general (read: has branches and/or not highly parallel in nature) performance was run on the PPE and not the SPUs by developers for the entire life of the PS3.

ehnus posted: There were a number of workloads that suited them natively, things where a lot of math was needed (animation, audio, graphics setup).

While it was cool that it could do stuff like graphics or animations pretty quickly, ultimately those tasks could've been more efficiently handled by the GPU or perhaps a task-dedicated processor like a DSP. It was Cell's inability to perform as well as it was advertised on generalized workloads, short of heroic feats of programming effort, that caused it to be seen as largely a failure as a CPU.

Old but pretty cool, short and sweet commentary from a developer who worked on all kinds of consoles that seems relevant here:

quote: PS3: A 95 pound box shows up on your desk with a printout of the 24-step instructions for how to turn it on for the first time. Everyone tries, most people fail to turn it on. Eventually, one guy goes around and sets up everyone else's machine. There's only one CPU. It seems like it might be able to do everything, but it can't. The SPUs seem like they should be really awesome, but not for anything you or anyone else is doing. The CPU debugger works pretty OK. There is no SPU debugger. There was nothing like PIX at first. Eventually some Sony 1st-party devs got fed up and made their own PIX-like GPU debugger. The GPU is very, very disappointing... Most people try to stick to working with the CPU, but it can't handle the workload. A few people dig deep into the SPUs and, Dear God, they are fast! Unfortunately, they eventually figure out that the SPUs need to be devoted almost full time to making up for the weaknesses of the GPU.

edit: another cool quote from a guy who seems to have worked at Naughty Dog who commented on the above developer's thoughts:

quote: agavin:

PC LOAD LETTER fucked around with this message at 06:42 on Apr 26, 2019 |
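The branch-mispredict penalty mentioned above is why SPU code favored branchless "select" idioms: instead of a data-dependent branch that a deeply pipelined, in-order core with no branch predictor would stall on, you compute a mask and blend. A toy scalar illustration in plain C (real SPU code would use the select-bits intrinsics, e.g. spu_sel; this is just the idea):

```c
#include <assert.h>
#include <stdint.h>

/* Branchy version: two data-dependent branches the hardware can't predict. */
int32_t clamp_branchy(int32_t x, int32_t lo, int32_t hi) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

/* Branchless version: build an all-ones/all-zeros mask from each comparison
 * and blend the two candidates with it. No control flow at all, so nothing
 * to mispredict -- the scalar analogue of the SPU's select-bits operation. */
int32_t clamp_branchless(int32_t x, int32_t lo, int32_t hi) {
    int32_t lt = -(int32_t)(x < lo); /* 0xFFFFFFFF if x < lo, else 0 */
    x = (lo & lt) | (x & ~lt);
    int32_t gt = -(int32_t)(x > hi); /* 0xFFFFFFFF if x > hi, else 0 */
    x = (hi & gt) | (x & ~gt);
    return x;
}
```

On an out-of-order desktop core the branchy version is usually fine; on the SPUs, rewriting hot loops into this mask-and-blend style was routine.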
# ? Apr 26, 2019 05:57 |
|
i had to dig up this old post because lmao

Suspicious Dish posted: The Sony answer: "everything". You were supposed to do all your work on these SPUs, and they tried to ship a lot of middleware. First problem: no common task scheduler, so middleware had no good way of being run on an SPU. Sony tried to ship a task scheduler, but then realized that no game would use it because engines already have their own task schedulers and asking game developers to rip out their own stuff (one of the most core parts of the engine) in favor of required middleware is probably the easiest way to get nobody to use your thing.

it's a miracle that anything got shipped on the ps3
|
# ? Apr 26, 2019 13:20 |
|
Biostar confirmed in a roundabout way that the X570 boards are going to launch at Computex. Weirdly, AMD is also launching a "50th Anniversary Edition" 2700X next month (no clock speed bumps or anything, it's just a 2700X). Seems like mixed signals about Zen 2. Like, if they had Zen 2 availability planned the same month, I imagine one of those would be branded the "Anniversary Edition". Unless there is more than one? Computer part naming is stupid.
|
# ? Apr 27, 2019 00:32 |
|
repiv posted: i had to dig up this old post because lmao

Well now we know why there's like three engines in the world. Pity the Square and Konami people who didn't just adopt Unreal or whatever and had to deal with that mess.
|
# ? Apr 27, 2019 01:15 |
|
SlayVus posted: If anyone has a threadripper with an Enermax Liqtech TR4 AIO, your AIO is probably dying as we speak. I was having thermal issues with my 280mm version and Steve just did a video on this as well.

I had an Enermax on my Threadripper 1950X and just took apart my cooler. Seriously, if you have one of these, your system is in danger. I switched to a ThermalRight Silver Arrow TR4 with an additional two Noctua NF-A12s two weeks ago. I just opened my Enermax cooler up to find this. This is all crusted into the microfins; it's not liquid.

https://i.imgur.com/ZGvhiAU.jpg

The white glob is hard to the touch.

https://i.imgur.com/tV237Wx.jpg

Basically, the bio-growth blocked, I would say, more than 60 or 70% of the cooler's cooling capacity. The white glob is on the outlet side of the microfins, so the majority of the water was never getting through.

SlayVus fucked around with this message at 02:10 on Apr 27, 2019 |
# ? Apr 27, 2019 02:07 |
|
|
SlayVus posted: I had an Enermax on my Threadripper 1950x and just took apart my cooler. Seriously, if you have one of these your system is in danger. I switch to a ThermalRight Silver Arrow TR4 with an additional two Noctua NF-A12s two weeks ago. I just opened my Enermax cooler up to find this.

Stuff like this makes me never want to sway from air cooling. An NH-D15 may be huge, but I can go years without having to even think about it.
|
# ? Apr 27, 2019 02:14 |