hobbesmaster
Jan 28, 2008

Perhaps there is a better thread for this, but is there any news on general availability of Knight's Corner/Intel MIC?

hobbesmaster
Jan 28, 2008

Star War Sex Parrot posted:

Probably not any time in the near future, if I had to guess. It's dependent on Intel getting their 22nm 3D tri-gate process perfected, which Ivy Bridge seems to be the first commercial guinea pig for.

That's unfortunate; I was hoping for a 75W supercomputer. Video cards take so much power...

Edit: upon looking that up again, did I imagine 75W?

hobbesmaster
Jan 28, 2008

theclaw posted:

Late 2012. Expect power consumption and theoretical floating-point performance similar to Kepler.

If nVidia's roadmaps are to be believed, then Maxwell should blow that out of the water a few months later?

IPP/MKL integration would be much easier to deal with than CUDA, however.

hobbesmaster
Jan 28, 2008

Standish posted:

This is really cool, more of a big deal for servers than desktops though.

FMA will also be pretty cool for some applications.

hobbesmaster
Jan 28, 2008

calcio posted:

And really, a ps2 port still!?

No PS/2 port means USB has to be enabled, which means no secure computers, which means no DoD sales.

hobbesmaster
Jan 28, 2008

dpbjinc posted:

I would like to point out that Thunderbolt does suffer from the same security problems as FireWire and ExpressCard, in that any computer that has the ports enabled and automatically loads drivers for new hardware can have its memory read and/or altered by a malicious device. It's not a major issue, since it only affects non-server systems that need full disk encryption, which are a vast minority of systems out there, but it is something to consider for organizations where that would be an issue (i.e. DoD and friends).

In other news, if you open up a computer and plug in a PCIe card you also get DMA access. And if you solder in a different BIOS flash chip you can bypass a boot password.

DMA and guaranteed bandwidth are the entire point of those protocols.

hobbesmaster
Jan 28, 2008

Install Gentoo posted:

Firewire did this too. Also I'm pretty sure this could be done with PCMCIA and ExpressCard on laptops!

Maybe you should ask yourself why you'd need to worry about your friend giving you a malicious thunderbolt device?

Just imagine the security vulnerabilities of my currently set up MBP with a thunderbolt to expresscard bridge and two firewire expresscards installed!

(all this to run 3 machine vision cameras off a laptop - someone needs to release a thunderbolt to PCIe bridge yesterday)

hobbesmaster
Jan 28, 2008

Alereon posted:

-E series CPUs exist solely for niche applications that are heavily multi-threaded or dependent on memory bandwidth, which really isn't a lot of things.

Those are the applications that make the world go round, though. Intel probably figures Sandy Bridge EN/EP will be fine until Haswell, so there won't be anything to downgrade to the regular E part. Or something.

hobbesmaster
Jan 28, 2008

Alereon posted:

Almost nobody runs those applications on their desktop computer though, which is why it's a pretty tiny niche.

Which is why I mentioned the EN/EP Xeons. They won't hit until around the release of Haswell, right? That'd be around when we would see Ivy Bridge-E, if ever.

hobbesmaster
Jan 28, 2008

Doesn't seem unreasonable that each tick-tock set would have a new socket.

hobbesmaster
Jan 28, 2008

HalloKitty posted:

Isn't that tock-tick?
(new-shrink)

Clocks go tick-tock, so I will write it that way. :colbert:

(you're of course right)

hobbesmaster
Jan 28, 2008

movax posted:

The consumer SKUs fall off relatively quickly, but they guarantee availability for products coming out of their embedded group, and I imagine the server SKUs may enjoy a little longer longetivity as well. For example, Intel told us we can buy embedded SKUs of say the i7-620 for at least the next eight years.

And SNB-E is still the highest-end consumer hardware. As discussed earlier, IVB-E and related Xeons won't be out for quite a while, so there are plenty of SNB-related processors still being sold.

As for embedded, Intel will still sell you Pentium IIIs if you want them.

hobbesmaster
Jan 28, 2008

movax posted:

Look at this Lexus of Z77 mobos. Tons of add-on goodies and a PCIe switch for x8/x8/x8/x8 graphics :stare:

That little SSD is the most useless thing...

hobbesmaster
Jan 28, 2008

HalloKitty posted:

Surely DirectCompute and OpenCL can provide? I guess you mean a physics-specific framework based open one

If there's anything the world needs, it's more GPU frameworks.

hobbesmaster
Jan 28, 2008

ohgodwhat posted:

Have you ever considered it's possibly faster to incorporate those things on one chip and your concerns are really outmoded?

The biggest issue these days really is memory bandwidth, and moving everything as close together as possible helps a lot.

hobbesmaster
Jan 28, 2008

Shimrra Jamaane posted:

So anyone who desires to build/upgrade a gaming PC is all set.

I don't think that many gamer machines are Sandy Bridge E.

hobbesmaster
Jan 28, 2008

Factory Factory posted:

They'll also probably win a Nobel prize for physics, what with figuring out room-temperature superconductors to cool a 47W chip in a form factor that normally struggles with a third of that.

The 13" rMBP can have a 35W TDP processor in it. That's probably about as close as you'll get.

hobbesmaster
Jan 28, 2008

cstine posted:

In 2007, we were talking junk like windows mobile 6.5, blackberry os 4, symbian, and still ancient junk like palmos.

They were absolute garbage, and iOS was the first "smartphone" that wasn't a buggy junked up pile of poo poo covered in crapware in the US.

I'm not even sure I'd call the original iPhone a 'smartphone' - it just had a browser that wasn't garbage, and it was shiny.

But that's all smartphones were back then: a phone with a terrible browser and terrible email support.

hobbesmaster
Jan 28, 2008

Is that where Intel landed on that one?

hobbesmaster
Jan 28, 2008

InstantInfidel posted:

You're right, there's nothing wrong with it. They just chose some very esoteric units for that and a couple other charts they had, and I definitely believe that they could have gotten across the same point much more succinctly and equally as effectively by linking to some of those charts as sources rather than dropping one in every couple of sentences. Regardless, it's a better article than I gave it credit for.

Watt-hours are not an esoteric unit; have you ever looked at a power bill?
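To show how un-esoteric the unit is, here's a quick back-of-the-envelope sketch of how a power bill uses it. The device wattage, hours, and the $0.12/kWh rate are made-up illustrative numbers, not anything from the thread:

```python
# Watt-hours on a power bill: energy = power x time, billed per kWh.
# All numbers below are assumed examples for illustration.
power_w = 60          # a 60 W device
hours_per_day = 5     # running 5 hours a day
rate_per_kwh = 0.12   # assumed rate in dollars per kWh

energy_wh = power_w * hours_per_day            # 300 Wh per day
energy_kwh = energy_wh / 1000                  # 0.3 kWh per day
monthly_cost = energy_kwh * 30 * rate_per_kwh  # roughly $1.08/month

print(f"{energy_wh} Wh/day = {energy_kwh} kWh/day, ~${monthly_cost:.2f}/month")
```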

hobbesmaster
Jan 28, 2008

WHERE MY HAT IS AT posted:

Where's a good starting point to learning about various CPU architectures and how they work? My program is more focused on the software side, and we learn the basics of CPUs and how they work (like I know about logic gates and binary adders and APUs and such), but I find it all very interesting and would love to learn more. Any good resources besides signing up for summer courses at school?

What's your major? If you have a computer engineering program, there should be a junior-ish-level computer architecture class that requires you to write your own CPU in Verilog or something along those lines. If you're CS, you might get to count it as an elective.
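The quoted post mentions logic gates and binary adders; the ripple-carry adder is the classic first exercise in those architecture classes. Here's a minimal sketch of the idea in Python (in the actual class you'd write it in gate-level Verilog, but the logic is identical):

```python
def full_adder(a, b, cin):
    """One-bit full adder expressed as gate-level logic (XOR, AND, OR)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(x, y, width=8):
    """Add two integers bit by bit, propagating the carry like hardware does."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result  # wraps modulo 2**width, like a fixed-width register

print(ripple_carry_add(100, 55))   # 155
```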

hobbesmaster
Jan 28, 2008

Is that thing the target for Star Citizen?

hobbesmaster
Jan 28, 2008


Is it time to reverse Intel/AMD on the cores/cores comic?

hobbesmaster
Jan 28, 2008

mewse posted:

I was watching old youtubes and apparently David Letterman not only banged one of his staff, when a dude tried to blackmail him he spoke about it on his show and the blackmailer was convicted

https://www.youtube.com/watch?v=f7f9D4KclJw

Well yeah, asking them for money is illegal; asking a paper for money, however, is perfectly legal.

It's clear what the law wants you to do!

hobbesmaster
Jan 28, 2008

LRADIKAL posted:

Believe it or not 90 mv is .09 volts for everyone.

He didn't give units, so clearly he meant decivolts :v:
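The joke is just SI-prefix bookkeeping; a tiny sketch of why the units matter:

```python
# The same number "90" means very different voltages depending on the prefix:
# 90 mV is 0.09 V, while 90 dV (decivolts) would be 9 V.
prefixes = {"mV": 1e-3, "dV": 1e-1, "V": 1.0}
readings = {unit: 90 * scale for unit, scale in prefixes.items()}
for unit, volts in readings.items():
    print(f"90 {unit} = {volts} V")
```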

hobbesmaster
Jan 28, 2008

SourKraut posted:

Nokia possibly?

Ericsson is the other possibility. Nokia's market cap appears to be around $31B, Ericsson's $25B.

Siemens, Qualcomm, and Samsung are significantly larger, so they're right out. (In addition to more obvious reasons why it wouldn't be them.)

hobbesmaster fucked around with this message at 21:40 on Jul 4, 2018

hobbesmaster
Jan 28, 2008

suck my woke dick posted:

"don't you think it's a bit risky betting the entire business on intel getting 10nm out roughly in time"

"nah it's not like they're amd lol"

What's curious is that they're not concerned about Intel 5G chipsets in general. https://newsroom.intel.com/news/intel-introduces-portfolio-new-commercial-5g-new-radio-modem-family/

hobbesmaster
Jan 28, 2008

evilweasel posted:

that is, uh, not a statement that fills me to bursting with confidence "we managed to successfully complete one telephone call without the modem breaking"

It's not an uncommon thing to see in cell phone industry publications. It means they have working phones and base stations for the new technology, so it's ready for pilot deployments.

The real question is what Intel's yield is...

hobbesmaster
Jan 28, 2008

wargames posted:

these things require silicon right? So probably using their 14nm+++++++++++ node so might not be terrible.

Yeah, current LTE-A modems use 14nm: https://www.intel.com/content/www/us/en/wireless-products/mobile-communications/xmm-7560-brief.html

Cellular modems usually use older nodes, so I'm not sure why delays in 10nm screw them over. Unless it was designed for 14nm capacity that is not going to be available because the 10nm stuff isn't online?

hobbesmaster
Jan 28, 2008

Methylethylaldehyde posted:

5g stuff needs 10nm or better because of the hilarious power requirements needed to drive the system. A 5g SOC is like twice as power hungry as current SOCs, and almost all of it is in the actual modem. Same with the base station stuff, a LOT of the gear is so close to cutting edge they desperately need the extra 40% power savings in order to avoid having the telco racks sound like the mid 90s 1U pizza box servers.

It's infrastructure stuff, so a mid-'90s 1U pizza box server should be OK, right? :v:

I didn't realize the power density was that much higher for 5G, but I work with the slower side of the cell industry.

hobbesmaster
Jan 28, 2008

Combat Pretzel posted:

Intel prohibiting benchmarks and comparisons on their newest microcode release fixing various of the security issues?

https://perens.com/2018/08/22/new-intel-microcode-license-restriction-is-not-acceptable/

This is in the real release, not just engineering samples?

hobbesmaster
Jan 28, 2008

Paul MaudDib posted:

It's kind of annoying that people talk about resolutions being easy or difficult for a CPU to drive, when it's really the framerate that matters. 120 Hz 3440x1440 is not any easier to drive than 120 Hz 1920x1080, what matters is the refresh rate you're trying to drive.

3440*1440*120 > 1920*1080*120; I'm not sure what you're getting at?
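Spelling out the pixel-throughput arithmetic behind that comparison (the resolutions and refresh rate come straight from the post above):

```python
def pixel_rate(width, height, hz):
    """Pixels pushed per second for a given display mode."""
    return width * height * hz

ultrawide = pixel_rate(3440, 1440, 120)   # 594,432,000 px/s
full_hd   = pixel_rate(1920, 1080, 120)   # 248,832,000 px/s

print(f"{ultrawide / full_hd:.2f}x the pixel throughput")  # ~2.39x
```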

hobbesmaster
Jan 28, 2008

YOY revenue growth is bigger, but esports are "only" a billion-dollar industry.

That's a couple of NFL teams.

hobbesmaster
Jan 28, 2008

VulgarandStupid posted:

Who buys an SA account in 2017?

People that were banned.

Maybe AMD could hit that 30% a year from now if Intel continues doing terribly. Maybe.

hobbesmaster
Jan 28, 2008

jisforjosh posted:

Mesh on low is actually the "try hard" help you see people thing apparently. Mesh affects rendering distance of certain objects but they supposedly changed the way the setting works from BF1->BFV. I think on Low, certain buildable coverage is not rendered but players and vehicle models are.

Ultra settings for PvE, lowest usable settings for PvP.

For another example: until a month or two ago in Overwatch, running below 100% render scale and on low settings would increase the size of the team-color outline on players, which can help a lot with target recognition. Overwatch also renders a lot of extra "stuff" on higher settings that isn't actually there; usually it's just in spawns, but sometimes you can see through a plant on low settings but not on high.

Tournaments often (and LANs almost always) enforce uniform graphics settings for this reason. For example, Epic requires max settings for Fortnite tournaments.

hobbesmaster
Jan 28, 2008

priznat posted:

The newest Jetson (Xavier) has a Cortex A55 with 8 cores and it is pretty sweet. Have a couple dev kits at work, Gen4 x8 PCIe slot on it too!

Runs a full on desktop linux on it no sweat.

Are there any RISC-V devices with similar power to a Cortex-5x to be able to run a full Linux (ie not busybox)? Would like to make a board controller/logger etc in the near-ish future.

A BusyBox-based userland is "full" Linux. You're free to waste flash and RAM on a "real" userland if you want. All you need for full Linux is an MMU.

hobbesmaster
Jan 28, 2008

KKKLIP ART posted:

I remember buying a Soyo Dragon for my socket a machine and thinking It was the hottest poo poo around. I am actually surprised that Soyo is still around today

That iteration was liquidated in 2009 apparently. https://en.wikipedia.org/wiki/Soyo_Group

hobbesmaster
Jan 28, 2008

craig588 posted:

It might be caching or something. The Windows file transfer box regularly reads around 100MB/s so it's at least tricking whatever that measures.

How are you connecting these drives to the RPi? This doesn't make much sense.

hobbesmaster
Jan 28, 2008

craig588 posted:

I have some generic USB adapters for 2 WD Reds and 1 Seagate Ironwolf. Nothing name brand, but it works for me with the type of consumer workloads I have. Of course if you have something specific in mind a dedicated box will be better, but to just appear as one big drive that doesn't know anything that I move single multi gigabyte files to it works great.

Edit: I just did a file transfer to make sure I wasn't remembering wrong and it was 160MB/s which seemed high so I went to my hallway and at some point I bought a Drobo for my house and forgot about it. Sorry for the confusion.

A rpi doesn’t even have gigabit Ethernet so uh yeah that’s a pretty massive mistake.

hobbesmaster fucked around with this message at 03:17 on Mar 24, 2019

hobbesmaster
Jan 28, 2008

Sidesaddle Cavalry posted:

Hidden takeaway from this is that 6-core gets cheaper since it's no longer top of the consumer stack. That gets people into the multitasking game a lot easier

AMD must be really excited after seeing the contents of these leaks.

Unless Zen 2 is a complete flop, but if Intel is standing still and the current Ryzens are beating Intel's midrange...
