BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

hackbunny posted:

thank you, I try :tipshat:

current progress: just learned how to use enable_if and how to use it with copy and move constructors. slurps mad rips this is your cue to start laughing your rear end off

speaking of which i keep waiting for the groundswell of people laughing their rear end off at you for missing the True Way To Use C++ that doesn't run into all these insane things

and then i remember, oh yeah. c++

BobHoward
Feb 13, 2012

Gazpacho posted:

no major numerical library relies on C++ for performance nor will they, seeing as they don't even rely on the C preprocessor

numerical library hackers are your ultimate "we live the hard life and we like it" crowd

i have no direct experience but ive been told that nvidia cuda relies heavily on c++ features inside compute kernels

specifically, they use templates everywhere. you write a templated compute kernel to do some number crunching thing (matrix mult, FFT, whatever), and now you can use it with any numeric type the gpu supports (fp16, fp32, fp64, int32, and more). lots of gpu optimization work is, apparently, tweaking precision down to the minimum you can get away with, because the narrower you go the more parallelism and/or effective fetch bandwidth you get. nvidia chose c++ templates as the standard way of writing compute code once for reuse with any data type

also, in gpu land, unlike people who target avx etc., almost nobody wants to go close to the metal by writing assembly or intrinsics or whatever. gpu vendors have heard of the concept of a well defined isa which lasts across many generations of hardware, and have thoroughly rejected it

BobHoward
Feb 13, 2012

Dongslayer. posted:

what is the current hipster reference to learn python with or does it even matter

googling for poo poo has worked pretty well for me tbh

python irritates me. i've been learning it, and despite idiotic things like semantic whitespace, overall it started feeling like a nice perl replacement. was even finding that despite all the ludicrous compactness tricks in perl, i was using fewer lines of code in python, and it was like 1000x more readable.

then [needle drop] i port a simple perl script that reads files and regexes each line and discover that python only manages about 1/10th the throughput

how do they do that. i refuse to believe perl's implementation is anything other than a garbage fire to match the externally visible plang i've been using for years, so how hard can it be to match its performance

BobHoward
Feb 13, 2012

rjmccall posted:

perl is generally extremely good at string processing and specifically has a fantastic regexp implementation

iirc perl's implementation is overall quite good, especially for something that hasn't had the same effort put into it as javascript. javascript regexp implementations are also generally superior, btw

huh. well count me surprised (not at js, though)

Symbolic Butt posted:

try compiling the regex objects, it used to help a lot with the performance

and use the ASCII flag if you're not dealing with unicode

comedyblissoption posted:

python 3 used utf-32 encoding instead of utf-8 so everything is going to be unnecessarily slower

edit: apparently it's a lot more complicated than that with later versions of python 3
http://stackoverflow.com/a/9079985

will look into these things. annoying if it turns out you have to jump through extra hoops to get on the performance path
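for anyone following along, a minimal sketch of the two tips above (compile the pattern once, add re.ASCII when you don't need unicode) — pattern and input are made up for illustration, not the actual script from the post:

```python
import re

# compile once, outside the per-line loop; re.ASCII keeps \w and \d
# byte-oriented instead of consulting unicode property tables
LINE_RE = re.compile(r"(\w+)=(\d+)", re.ASCII)

def scan(lines):
    hits = []
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            hits.append((m.group(1), int(m.group(2))))
    return hits
```

calling re.search() with a string pattern re-does a cache lookup on every line, so hoisting the compiled object out of the loop is the cheapest win available.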

BobHoward
Feb 13, 2012

Slurps Mad Rips posted:

i was introduced to python and shown how to make basic pygame things by Michael Dawson (yes, the guy who wrote and starred in the video game Darkseed)

i mean i knew mike dawson taught bideo jame programming thanks to slowbeef's various darkseed things but i am :vince: that a goon actually took that class

did he talk about darkseed at all

BobHoward
Feb 13, 2012

Arcsech posted:

haven't looked but guessing theyre really dumb and never left the 80s

looks rly dumb but it's a new dumb I think? he invented a new universal assembly language so you don't have to learn manufacturer specific asm syntax. it doesn't sound like your program becomes portable because the universal syntax isn't universal enough to actually hide all processor/abi details. also unclear to me whether it might not actually behave more like a macro assembler in practice, i.e. no guarantee one loc doesn't expand to multiple machine instructions, but don't quote me on that because i closed the browser tab before completing the slide deck to save my sanity

in summary he is boldly blazing a new trail into the uncanny valley between asm and high level languages which nobody ever asked for because lol it's loving pointless. hopefully no one will ever walk with him on this path.

BobHoward
Feb 13, 2012

JawnV6 posted:

i still disagree that ucode is meaningfully called "risc" but it's one of those things idk how to argue such a fine point when everyone uses that label anyway

same

BobHoward
Feb 13, 2012

i'm the one weird trick for building muscle fast ad popping up over the ui because lol web

BobHoward
Feb 13, 2012

yeah I was surprised recently to find out that bf is kinda neat, had always assumed it was purely a "hurr durr look how awful a language can be" thing and it turns out it's a simple utm. the brainfuckery is that simple utms aren't fun to program to do anything useful
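a full bf interpreter fits in a screenful, which is kind of the point: the machine is trivial, programming it to do anything useful isn't. sketch below uses the usual conventions (30k-cell tape, cells wrap at 256):

```python
def run_bf(program, input_bytes=b""):
    tape = [0] * 30000
    out = []
    inp = list(input_bytes)
    # precompute matching-bracket positions so loops jump in O(1)
    stack, jumps = [], {}
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    pc = ptr = 0
    while pc < len(program):
        c = program[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(tape[ptr])
        elif c == ",": tape[ptr] = inp.pop(0) if inp else 0
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return bytes(out)
```

eight instructions total, and you still get loops, conditionals, and i/o — hence "simple utm."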

BobHoward
Feb 13, 2012

Arcsech posted:

haven't read the article yet but Dan Luu is a heckin smart guy so it's probably not total garbage at least

BobHoward
Feb 13, 2012

Shinku ABOOKEN posted:

quote:

The Unexpected Results From A Hardware Design Contest; Cooley, J

...
During the experiment, there were a number of issues that made things easier or harder for some subjects. Overall, Verilog users were affected more negatively than VHDL users. The license server for the Verilog simulator crashed. Also, four of the five VHDL subjects were accidentally given six extra minutes. The author had manuals for the wrong logic family available, and one Verilog user spent 10 minutes reading the wrong manual before giving up and using his intuition. One of the Verilog users noted that they passed the wrong version of their code along to be tested and failed because of that. One of the VHDL users hit a bug in the VHDL simulator.
...
hardware programmers: is this normal?

eda (electronic design automation) tools are shoggothic monstrosities from alien dimensions, byzantine software implementing poo poo languages which were inflicted upon the world by an unholy combination of accident and committee. licenses for these tools can cost hundreds of thousands of dollars per seat per year. despite the expense, bugs and jankiness are common because one of the ways you wring bugs out of complex software is sheer scale -- the more users you have the faster they hit edge cases and report them to you. this is a niche market with a very limited user base so bugs are distressingly common.

also note that the task described in that contest included the hardware equivalent of optimizing the output of your c compiler by looking at the assembly it emits, because that's just what you do. that's what the "logic family manuals" refers to: documents analogous to a super detailed cpu manual complete with detailed timing information for every instruction, except the 'instructions' are far more primitive and the number of variations on a theme can be immense. page after page of slightly different AND gates, each optimized for different tradeoffs between propagation delay, leakage current, and output drive strength. this isn't like a cpu where basically all you need is one integer add instruction

so, yes

BobHoward
Feb 13, 2012

Suspicious Dish posted:

really what i want is to burn the berkeley sockets api to the ground. replace it with something that doesn't suck.

have you heard of sysv STREAMS my friend

i don't actually know anything about streams or sockets (other than i remember a minor mac graybeard revolt when mac os x did away with streams api support because osx is a berkeley) so yospos pls dont murder me if streams is actually rly bad

BobHoward
Feb 13, 2012

rjmccall posted:

there's no reason for that thing about div/mod to be in the spec at all. it's just didactically informing you, the sort of ignorant and inferior coder who uses go, that certain cases of integer division can be implemented without using a full divide instruction. (incidentally this is true of i believe all constant divisors, it's just not always as cheap as a shift.)

it's often 1 multiply by a precalculated magic constant followed by a shift, with cases where more ops are required

http://ridiculousfish.com/blog/posts/labor-of-division-episode-i.html

(lol that anything like this made it into the spec. a competently written spec would be pared down to the minimal amount of language needed to clearly & unambiguously define the language. bad lectures on peephole optimization tricks that the normative part of the spec already allows a compliant go compiler to use are completely out of place)

(i really just wrote this reply to link to ridiculous fish's blog post, which is a p. great read if you're interested in this topic)
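the multiply-by-magic-constant trick sketched in python (python bignums dodge the "more ops required" cases, which in real codegen are about keeping the magic constant within the machine word size):

```python
def magic(d, bits=32):
    # round 2**p / d up; picking p = bits + ceil(log2(d)) makes the
    # rounding error small enough that the shifted product is exact
    # for every x < 2**bits
    p = bits + (d - 1).bit_length()
    m = ((1 << p) + d - 1) // d
    return m, p

def divide_by_constant(x, d, bits=32):
    # one multiply and one shift, no divide instruction
    m, p = magic(d, bits)
    return (x * m) >> p
```

a compiler precomputes (m, p) at compile time since d is a constant, so only the multiply+shift survives into the emitted code.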

BobHoward
Feb 13, 2012

eh i think he's just a shithead, as this rebuttal to another of his "learn X the hard way" books makes clear: http://hentenaar.com/dont-learn-c-the-wrong-way

(at least there's truth in advertising in his book titles - you'd have a lovely time learning c from a book that does such an inferior job of explaining it)

BobHoward
Feb 13, 2012

applescript's english dialect is an abomination, they should’ve finished the programmer dialect and not shipped any of the pseudo natural language bullshit

BobHoward
Feb 13, 2012

rjmccall posted:

also you can just use preview and it’s faster, quicker to load, and much more secure

why does preview keep getting slower and less useful though

i specifically remember when it went off the rails too. it was the rewrite when you guys used it as the dogfooding app for apple's then-new sandbox apis. it has never been as fast and good as it was in the last pre-sandbox release, and i despair of it ever getting back there again because it just keeps getting new useless features added rather than optimized back to its original gloriously fast performance. (useless features = fancier search highlights, too many editing tools that are enabled by default, the lovely autosave thing that nobody likes, especially when comboed with editing tools that do not understand that clicking in a read-only document shouldn't turn editing on and create a text box that modifies the document and then it autosaves a new version even if no actual change has been made, jesus christ 99% of the time i just want a viewer not a clumsy and bad editing tool.)

in recent times that team added some kind of new feature that seems like it's supposed to be about performance but it's actually terrible. it renders a page that's only partially visible at half resolution (so it's all blurry), so you have to scroll the document up to get that page to look normal, and then the page that's now going off the top goes blurry.

BobHoward
Feb 13, 2012

why is that guy so convinced the c++ stdlib needs a graphics api that won't be any good on any platform

BobHoward
Feb 13, 2012

Plorkyeran posted:

cognitive dissonance plus sunk cost fallacy? he's put a bunch of work into it therefore it must be something valuable that the world needs

i read some more of his blog and it sounds like (a) he is self-aware of being a somewhat sheltered c++ nerd with little experience outside that sphere and (b) is envious of other languages with broad libraries that get new programmers into them because they have the kitchen sink at their fingertips, hence his enthusiasm for adding something like this to c++

my friend, i have bad news for you re: the suitability and desirability of c++ as a language for newbie programmers

(incoming comical emptyquote with newbie struck out)

BobHoward
Feb 13, 2012

Suspicious Dish posted:

Maybe I overestimate JavaScript developers though....

:thunk:

BobHoward
Feb 13, 2012

Notorious b.s.d. posted:

wearing jeans to the office because the important decision makers wore jeans was like pulling teeth for the first few days

lol

if the decision makers wear jeans they probably dgaf what you wear

BobHoward
Feb 13, 2012

rjmccall posted:

speaking of strings and swift, i don’t know if we talked about it here, but swift switched to utf-8 for its string storage

what was it before and why'd it switch?

BobHoward
Feb 13, 2012

JawnV6 posted:

https://www.anandtech.com/show/13678/western-digital-reveals-swerv-risc-v-core-and-omnixtend-coherency-tech

hot news from risc-V, western digital is claiming they can beat a cortex-A15 with a 2-way in-order design they've named... "SweRV"

what's not to love? "Cache Coherency over Ethernet"? :2bong: the CPU doesn't like... OWN memory maaaaaaan....

ccoenet? ahahahaha so that management-high-on-own-supply stuff about how they were gonna revolutionize the cloud by moving compute into storage back at the original announcement is being taken seriously by said management

also lmao at announcing simulated performance

BobHoward
Feb 13, 2012

Deep Dish Fuckfest posted:

more like coolest. that poo poo is legit impressive and chip designers make me feel really inadequate considering the ridiculous gap in quality and complexity between their output and that of a "software engineer" like me

in current $job i wrangle fpgas (and also c/python/perl/tcl/shell)

the way I would put it is that even when you don’t have the pressure to deliver a 99.99% good design the first time it’s tried for real (which I don’t, because fpga), you are working with bad languages and utterly wretched debugging tools, the modify-test-debug cycle is several orders of magnitude slower than software, and if your design has to run at a decently high clock speed you are going to be forced to do the equivalent of writing super low level c code where you pay close attention to how coding style affects the compiler’s output, maybe even resorting to inline asm to get best results, and even then you may find yourself trying several full rewrites of a block to get performance where it needs to be

and that’s fpga, which is kinda easy mode. cutting edge high perf gpu/cpu cores are insanely difficult and expensive to design

all that said you may be surprised to know that you folks up in the sky routinely and effortlessly do much more complex things than those of us in the deep mines. we don’t have high level abstractions and efficient debugging tools. we’re typically narrowly focused on small optimizations rather than big complicated ideas.

a few years ago i had to correct the mistakes of someone with a software background who wrote a bunch of verilog which I inherited. most of these could be described as “did something in a hardware state machine that should’ve been pushed up to the driver”. good hardware design is about keeping it stupid simple. you only put complicated algorithms into hw when there’s no other way to achieve the desired result

this is so true that, with modern transistor densities, when there’s a call for a state machine that’s complex but also needs hard real-time guarantees, chip designers often throw in a whole embedded CPU core and have a software engineer implement the state machine instead. every modern gpu or cpu or cellphone chip is sprinkled with dozens of cortex-M0 class microcontrollers. these are often not even touted in marketing materials since they typically cannot run user supplied code. they’re just testament to the difficulty of using hardware design techniques to solve complicated problems.

BobHoward
Feb 13, 2012

DELETE CASCADE posted:

does the simulator run fast enough for those arm chips to test comprehensively? i remember a computer architecture student in grad school saying when they eventually build a test chip and turn it on for the first time, it runs more instructions in a second than the entirety of testing beforehand

test is very much a problem, yes, especially in academia

most commercial chip projects these days use at least one highly accelerated alternative to conventional simulation. there are multimillion dollar boxes based on massively parallel arrays of custom cpu cores designed specifically to accelerate HDL simulation. a cheaper approach which I have been involved with is to implement your asic in a multi-fpga board (or a stack of them if the chip is too large for one board).

both of these won’t run at a clock rate anywhere near the final product’s, tend to have reduced visibility of internal signals compared to a classic sim (especially fpga), and require living with the fact that you’re not directly testing your real design. however, the speed makes it all worthwhile as a supplement to conventional sim. 100 Hz would be a super nice sim speed iirc; by way of comparison I’ve worked on one fpga asic emulator which used some tricks to get a major data path up to ~160 MHz, not much slower than it had to run in the real chip. ~1 MHz is much more typical but even that’s so so much better than sim.

the other big win is that you can give fpga systems to software devs and have them start driver bringup work before your chip even tapes out

BobHoward
Feb 13, 2012

JawnV6 posted:

check this slide out

im stunned someone put a bloom filter into HW

that’s nuts
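for anyone who hasn't met one: a bloom filter is just a bit vector plus k hash functions, which is exactly why it drops into hw so nicely — the bit vector is a register file and the k lookups can happen in parallel. toy python sketch, with sha256 standing in for whatever cheap parallel hashes real hardware would use:

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits=256, k=3):
        self.m = m_bits
        self.k = k
        self.bits = 0  # an m-bit vector; the part that maps onto hw directly

    def _positions(self, item):
        # derive k bit positions from one digest; real hw would compute
        # k independent cheap hashes in parallel instead
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            chunk = int.from_bytes(digest[4 * i:4 * i + 4], "little")
            yield chunk % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # may return a false positive, but never a false negative
        return all((self.bits >> pos) & 1 for pos in self._positions(item))
```

the appeal in hardware is that membership testing is a handful of AND gates over the bit vector, no pointer chasing, no memory traffic.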

BobHoward
Feb 13, 2012

JawnV6 posted:

you don't appear to be standing on a solid understanding of the layer we're discussing and seem content to haughtily piss down from your tower of abstraction, an exercise in frustration for me

i dont think these issues have anything to do with the class of memory access issues that rust is particular about

i get the impression they don't quite understand how typical it is to pack a bunch of non byte sized, non byte aligned fields in a single hw register, where the register access path often does not even support byte-at-a-time access semantics, much less bit-at-a-time (which the cpu doesn't support anyways)

i do this kind of layout when designing registers all the time. i also often end up writing the software that manipulates them. it's fine. i don't want to chew up enormous amounts of address space and bloat out the read muxes to give every 1 bit field its own 32 bit word. fite me haters

also like someone mentioned above, if you're in any place where there's minimal competence you have some kind of simple source code - csv file, json, custom minilang, whatever - that the hardware engineers write to describe registers, and there's scripts to 'compile' that source to C headers and Verilog source code for use on both sides of the hardware/software divide. maybe documentation files too, or documentation embedded in the C headers

the output of these 'compilers' looks ugly to humans when you dig underneath the surface API, but it's a problem no HLL i'm aware of has ever attempted to address in a clean way, so you sweep all the ugliness into machine generated code and it's all fine
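the software half of that usually bottoms out in helpers like these (field names and layout made up; real ones fall out of the register-description 'compiler'):

```python
def get_field(reg, lsb, width):
    # extract a packed, non-byte-aligned field from a register value
    return (reg >> lsb) & ((1 << width) - 1)

def set_field(reg, lsb, width, value):
    # rewrite one field, leaving its neighbors in the word untouched
    mask = ((1 << width) - 1) << lsb
    return (reg & ~mask) | ((value << lsb) & mask)

# hypothetical CTRL register: ENABLE at [0], MODE at [2:1], PRESCALE at [15:8]
ctrl = 0
ctrl = set_field(ctrl, 0, 1, 1)
ctrl = set_field(ctrl, 1, 2, 0b10)
ctrl = set_field(ctrl, 8, 8, 0x2C)
```

in c the generated headers typically express the same thing as shift/mask macros or accessor functions per field; the arithmetic is identical.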

BobHoward
Feb 13, 2012

Sapozhnik posted:

"stable kernel driver interface" oh goody i love IHV-maintained drivers that have some mandatory garbage winamp skin interface that marketing says has to show up as much as possible

and its own updater don't forget that

why not add a little light spyware while we're at it

os-level protection is very important particularly to protect against the user not paying a monthly subscription fee for absolutely loving everything

i mean yeah linux and the whole posix model in general have a somewhat discredited design at this point but fuchsia is a land grab and nothing more. it is in nobody's business interest other than google's, qualcomm's, and cell carriers' for it to succeed.

what does a stable kernel driver interface have to do with the rest of your rant

for example, afaik those garbage winamp skin uis and updaters you get when installing a wandows gpu driver are separate user space binaries, not part of the module loaded into the kernel

im sure fuchsia is a land grab but lol if you think a stable kernel driver api/abi is anything more than goog wanting to fix one of the perpetual dumb broken by design linux problems at the same time. it is possible for a thing you hate to not be pure evil in every last detail, you know?

BobHoward
Feb 13, 2012

Suspicious Dish posted:

it doesn't work well with threads at all, it's racy, it's a bad way to launch subprocesses because there's no way to control memory behavior, leading to things like the oom killer

like so many other parts of unix, fork was good in its original context (the early 1970s, hackers having fun making an os for themselves) but isn't aging well

BobHoward
Feb 13, 2012

pseudorandom name posted:

didn't icc get in trouble for generating the worst possible code for AMD processors?

i.e. it wasn't just naive or generic or whatever, Intel developed an instruction scheduling model that maximally pessimized execution on AMD CPUs

no, as rjmccall said they got in trouble for generating detection code which failed to select the vectorized codepath on amd cpus which could have supported it

there were loony amd fan web forums which probably believed intel was maximally pessimizing or w/e though, maybe you caught some of that noise?

BobHoward
Feb 13, 2012

has anyone pointed out to the v guy that .v is already in use for verilog

BobHoward
Feb 13, 2012

JawnV6 posted:

no, but over the weekend i learned that spectre/meltdown are verilog's fault and a better HDL wouldn't allow them

:allears: I need links

JawnV6 posted:

but no vhdl is european, you think that thrice-reheated cruft goes anywhere near our glorious american cores???

layers of irony for those unfamiliar: vhdl was created by the United States DOD as its official HDL, complete with syntax directly lifted from Ada, because DOD. then us industry (outside of defense contractors) basically ignored it and went with verilog instead, while for reasons unclear to me european private industry went the opposite way

BobHoward
Feb 13, 2012

JawnV6 posted:

to something on the internet several days ago? i didn't comment, so i have no way to get back to it. doesn't appear to be the MIPS R3000 article fwiw

at some point it's not that wrong, you could write some checker that tags data to threads and blows up when something touches another. but that's nowhere near a good enough reason to swap out verilog

did some googling, found nothing. I did find this academic idea though, you might find it interesting http://www.cs.cornell.edu/projects/secverilog/

not exactly what your idea is i think? and I’m a little skeptical that such an extension can comprehensively find timing channels and other information leakage, but i haven’t actually read any of the papers :effort:

agreed that there is a little truthiness in that it should be possible to design an hdl which helps identify these issues in the design stage. but it sounds like your poster was all “it’s because verilog!” rather than the correct “whole industry was blindsided by unforeseen security holes left wide open in virtually all cpu isa specs”. everyone thought it would be enough to strive for correct implementation of their respective isa specs, but the specs p much all say your job’s done if you prevent privilege violations from modifying architecturally visible state. turns out that’s not nearly enough, so here we are

quote:

the first HDL i used at intel was iHDL - an in-house abomination that mixed all kinds of concerns. domino logic expressed in the source continues to haunt me. like imagine specifying the ADD opcode you wanted to be used under the hood in a python program

i have heard of ihdl before but was not aware of such horrors. i know Intel was all about hand tweaked logic and layout for a long time, guess that mentality must have leaked into iHDL

BobHoward
Feb 13, 2012

Suspicious Dish posted:

mostly a reference to the blue/pink disaster from Taligent/Apple, named after the colors of the index cards they wrote the OS designs on. why they would want to harken back to that disaster, nobody knows

apparently it really is an intentional reference too

wikipedia posted:

Some of Apple's personnel and design concepts from Pink and from Purple (the original iPhone's codename)[5][6] would resurface in the late 2010s and blend into Google's Fuchsia operating system. Intended to envelop and succeed Android, its open-source code repository was launched in 2016 with the phrase "Pink + Purple == Fuchsia".[7]

even accounting for the purple part it is not possible to pick a more cursed name, imo. pink (taligent) was, in hindsight, incredibly doomed in every possible way

(blue turned out relatively all right, but that's because the blue cards were just unambitious updates to old school macos. blue became system 7.)

BobHoward
Feb 13, 2012

a truly threaded python with the same attention to performance as perl and use of {} or similar for blocks instead of significant whitespace would rule

BobHoward
Feb 13, 2012

pokeyman posted:

why?

(background in case it matters: I read the post about posits and mostly just shrugged, but I don’t know anything)

can’t speak for op and i haven’t followed recent developments (posits is a new term for me) but gustafson’s original propaganda for “unums” etc was way over the top. he made extremely dubious proclamations about ending error forever, hand waved away implementation cost concerns, and generally seemed to pitch it all towards less informed audiences who could be wowed by a slick presentation from a plausible authority figure rather than doing the legwork to get it taken seriously by the sort of people whom he’d need to create actual hardware implementations, build software tools, etc

he also took lots of juvenile potshots at William Kahan, aka the father of ieee 754 floating point arithmetic, which was pretty uncool imo

BobHoward
Feb 13, 2012

ratbert90 posted:

I'm writing I2C drivers in PYTHON. :v:

pro redtext / post combo

BobHoward
Feb 13, 2012

don’t forget Motorola, they sucked a whole lot too

and of course the third leg of the PowerPC triumvirate was early 1990s Apple, not exactly a well run company

consider PowerPC’s origin story. it was very loosely Apple going “hey Motorola your 88000 RISC is kinda ok and we’ve even built prototype macs complete with working 88k MacOS, but we don’t trust you at all after your shameful failure to keep 68k at least even with x86 so we need a second partner. how’s about you get in bed with IBM and us and we’ll take almost all of the ISA from IBM POWER, not 88k, because we’re real tight with ibm right now. we’ll throw you the bone of PowerPC using the 88000 bus, since that’ll save us from having to redesign our RISC Mac chipsets from scratch. yeah you have to eat the loss of designing 88k, is that ok???”

yeah that was doomed

tbh ive never seen reason to believe PowerPC the ISA was fatally flawed. there were issues but you could’ve made good implementations of it. it’s just that the actual implementations got off to a rocky start due to the bad companies and politics involved, and they never recovered from getting dumpstered by 1990s Intel, which shocked the world by proving you could build a superscalar pipelined x86 that was pretty awesome despite the ugliness of the ISA (all hail Bob Colwell and his team)

BobHoward
Feb 13, 2012

as far as i can tell from spec.org power9 isn't very impressive tbqh

it looks very good in terms of specintrate 2017 per "core", but that's because of ibm's extremely aggressive 8 threads per core thing. they don't seem to have submitted any specint results, so it's a fair guess that they'd look bad on that metric. so it's one of those machines where you're trying to make up for lackluster ST performance by running lots of threads, which is not great. and even then, intel is considerably ahead of power9 on 2-socket specintrate

i don't doubt there are workloads where power9 is better (specintrate isn't the best thing in the world, ibm is also exotic in the sheer amount of cache they've got and that will help some things), but it is not the clear leader you're making it out to be

BobHoward
Feb 13, 2012

eschaton posted:

this is not in any way true though, up to the 68040 they were even

it was in the 486 and then seriously the Pentium where Intel really started to race ahead and even then, 88000 and PowerPC were pretty even at the same clock

i might be misremembering the sequence of events since i was a teen at the time, but i remember the 486 being widely available well before the 040

agreed that the lack of an immediate answer to the pentium was the nail in the coffin. motorola did eventually ship the 060, but it wasn't great

BobHoward
Feb 13, 2012

xilinx has several radiation qualified fpga models too, wouldn’t be surprised if lots of modern space hardware uses them instead of a rad750 or w/e

basically, if you needed the fpga for other reasons (eg to do dsp for software defined radio), and your control software doesn’t need high performance, you can just throw a soft core or two into your fpga instead of designing in a discrete extra part. xilinx’s own microblaze soft core gives you a ~200 MHz 32b risc without using much fpga fabric

it should be noted that despite the hardening and certification for space, designers still have to pay a lot of attention to poo poo like secded ecc for state machine state words, watchdogs which reset the whole shebang if anything goes too far wrong, and so forth. radiation: not great for reliability, who knew
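secded on a state word is the classic hamming-code-plus-overall-parity arrangement: one flipped bit gets corrected silently, two flipped bits get detected so the watchdog can step in. toy sketch on a 4-bit word (real state words are wider, same idea):

```python
def secded_encode(nibble):
    # hamming(7,4) with an overall parity bit: data bits sit in code
    # positions 3,5,6,7; check bits in positions 1,2,4; parity in bit 0
    bits = [0] * 8
    bits[3], bits[5], bits[6], bits[7] = [(nibble >> i) & 1 for i in range(4)]
    bits[1] = bits[3] ^ bits[5] ^ bits[7]
    bits[2] = bits[3] ^ bits[6] ^ bits[7]
    bits[4] = bits[5] ^ bits[6] ^ bits[7]
    bits[0] = sum(bits[1:]) % 2  # overall parity over the 7 code bits
    return sum(b << i for i, b in enumerate(bits))

def secded_decode(word):
    bits = [(word >> i) & 1 for i in range(8)]
    syndrome = 0
    for i in range(1, 8):
        if bits[i]:
            syndrome ^= i  # xor of set positions points at a single flip
    parity_ok = sum(bits) % 2 == 0
    if syndrome and parity_ok:
        return None, "double-error"  # two flips: detectable, not correctable
    status = "ok"
    if syndrome and not parity_ok:
        bits[syndrome] ^= 1  # single flip in the code bits: correct it
        status = "corrected"
    elif not syndrome and not parity_ok:
        status = "corrected"  # the overall parity bit itself flipped
    nibble = bits[3] | (bits[5] << 1) | (bits[6] << 2) | (bits[7] << 3)
    return nibble, status
```

in hardware the encode/decode xor trees sit right on the state register's write and read paths, which is why the area cost is mostly just the extra flops for the check bits.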
