priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I forget if I posted this already, but a friend/ex-coworker sent me this:

https://m.youtube.com/watch?t=254&v=IFwFDZDQWAA&feature=youtu.be

Watching him puzzle out how a PCIe switch works is amusing to me (I worked on that exact one).

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

movax posted:

With the sheer number of PCIe 5.0 lanes on Zen 4, I feel like it's a decent idea to use a PCIe 5.0 switch as a bandwidth bridge to fan out to PCIe 4.0 U.2 drives. The real guys are of course just buying EPYC, but I wish there was a market for home users who want:

* Lower TDP CPUs (e.g. Ryzen 7600)
* Lots of PCIe lanes for used enterprise U.2 NVMe to have large amounts of flash JBOD storage

It would be really cool to have an external enclosure with an edge card to 0.5m cable to a box with a backplane for 8 gen4 x4 drives.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

movax posted:

Are people still doing PCIe over fiber? QSFP28 modules would probably work for a few lanes...

Yes! It's something we should actually start seeing more of. Samtec has really been trying to get that going, with individual modules that can do Gen4 x4, clip onto the board near a package, and then fly-wire (fibre?) over to a backplane at lengths up to 100m.

I would not be surprised to see more coming with Gen5 support for rack-based CXL memory pooling applications as well, though latency could be the sticking point.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

movax posted:

I am actually an idiot and should just go buy SAS SSDs if I truly just want HDD replacement flash.

How are the prices on those? I wonder if they're dropping as enterprise moves to all NVMe solutions.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

necrobobsledder posted:

They're roughly within 10-15% of mainstream consumer pricing. A vendor accidentally shipped me an 18TB SAS drive rather than SATA, and the price difference was maybe $10-20 USD, so the spinning-rust side of the market seems to be following similar price curves to the mainstream. The 2.5" SAS SSDs (namely the Nitro series) I ordered last year were pretty close to consumer pricing at the time (read: not that great). Looking around now, they haven't followed suit with the NVMe price drops that have been going on for about half a year; a Nitro 3750 400GB drive is still $307 on CDW. We bought, I think, 24 3.84TB drives at about $800 each from CDW, which was pretty nice but is roughly the cost of a brand new car these days and probably not appropriate for a home lab.

I'm not quite sure how these will work out in terms of pricing as enterprises rotate out of SAS over the coming few years, but I don't think it'll be a slam-dunk win for dirt-cheap home lab pricing. The first-generation Xeon D systems are still rather pricey for being about 7-8 years old now, for example.

Yeah, agreed they'll taper off SAS manufacturing, but if there were a supply glut there could be some sweet bargains... sounds like that was not the case though.

I wonder if manufacturers were even taking inventory and pulling the flash to repurpose on NVMe or other more in-demand items (I had heard of similar things for other products, but the NAND probably isn't in as tight supply as things like clock buffers and certain FPGAs).

Also, those Xeon D 1500s are really sweet chips. They're such perfect machines for home server/NAS applications that it's really baffling Intel didn't try to get into, or even create, that market with partners who could make them into user-friendly/quiet/low-power appliances.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Yup, PCIe 4.0 drives will work in a PCIe 3.0 motherboard no problem! The other way around works too. PCIe is very flexible like that; the link just negotiates down to the highest speed both ends support.
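On Linux you can actually see what a link negotiated versus what the device is capable of, via standard sysfs attributes. A minimal sketch in Python; the PCI address is a placeholder you'd swap for your own drive's:

```python
# Compare a PCIe device's negotiated link speed/width against its maximum,
# using standard Linux sysfs attributes. The address below is a placeholder.
from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:01:00.0")  # substitute your drive's address
for attr in ("current_link_speed", "max_link_speed",
             "current_link_width", "max_link_width"):
    print(attr, "=", (dev / attr).read_text().strip())

# A Gen4 drive in a Gen3 slot typically shows current_link_speed "8.0 GT/s PCIe"
# against a max_link_speed of "16.0 GT/s PCIe": negotiated down, still working.
```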

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
M.2 screws are a ridiculous pain in the rear end, from being tiny and easily lost to not sharing a common thread with anything else in a PC build, like you mentioned.

Some Supermicro boards had great little plastic clip things for retention, and I wish those were adopted more widely. They were all on a plastic standoff fixed to the board, so impossible to lose!

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Maybe not the fastest, but those P4500-series drives will basically live forever and exceed their rated write lifespan by an impressive amount.
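For a sense of scale, endurance ratings are usually quoted as drive-writes-per-day (DWPD) or total terabytes written (TBW), and converting between them is simple arithmetic. A quick sketch; the capacity and DWPD figures below are hypothetical placeholders, not actual P4500 specs:

```python
# Back-of-envelope endurance math: turn a DWPD rating into total bytes written
# and a survival time under sustained load. All numbers are hypothetical.
capacity_tb = 4.0      # drive capacity in TB (placeholder)
dwpd = 1.0             # rated drive-writes-per-day (placeholder)
warranty_years = 5

tbw = dwpd * capacity_tb * 365 * warranty_years
print(f"rated endurance: {tbw:.0f} TB written ({tbw / 1000:.2f} PB)")

# How long that rating lasts at a constant 100 MB/s of writes:
years = tbw * 1e12 / (100e6 * 86400 * 365)
print(f"{years:.1f} years at a sustained 100 MB/s")
```

Drives blowing well past numbers like these is what "live forever" means in practice.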

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

WhyteRyce posted:

hug your nearest solidigm friend because they are doing mass layoffs right now

drat what kind of %?

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

WhyteRyce posted:

Wow, Solidigm is letting everyone who gets fired keep their laptops, what a swell gesture

How do they do that, just remote wipe?

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Honestly I would just ship my work laptop back, who needs a lovely mid-tier Dell or whatever.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

HalloKitty posted:

my work HP ZBook Firefly 15 G8 has the shittiest screen I've seen in anything, ever. It has absolutely unreal image retention: leave a dialogue box on screen for maybe just 30 seconds, then lock the screen or move a blank window there instead, and you can still make out the text from the window that was just there
that said, I'd still take it for free

I have a Dell Latitude and it's pretty decent, but if they said I could have it, it would probably either just sit unused or I'd give it to my kids. I keep a hard separation between work and personal devices, so I don't have any use for it outside of work, really!

I’d rather have the cash equivalent :haw:

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
WD Blacks are great quality, and they'll still be supported even with SanDisk taking over the SSD stuff, if that happens.

Starfield is meh though :haw:

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I wonder what is up with Silicon Motion after MaxLinear walked away from buying them. That deal seemed like a horrible plan to me; MaxLinear doesn't seem the type to get into sub-consumer-grade SSD controllers... they're more kinda crappy enterprise stuff!

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I think the biggest are still 2TB, but iirc Micron put out 2TB NAND parts last year and two of those would fit on a 2230... nothing from Crucial yet though.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I wonder if the failing drives all use the same controller

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

WhyteRyce posted:

it doesn't sound like he's harvesting and verifying the queues, which is absolutely needed to say the drive is losing data due to power loss. But it wouldn't surprise me if there is a miss by some controllers, since power-loss verification is hard, and I suspect vendors focus more on the "kill power and make sure the drive comes back alive" part and less on the "verify the accepted transactions are flushed" part. The former is easy to blast away at with fio and a Quarch; the latter is more difficult to set up.

not sure if this applies to client drives that have no caps, but technically if the controller can guarantee that what's in the cache will be written to media during power loss (as on all enterprise drives), then the cache is considered non-volatile and the flush command is a no-op

I wonder if a lot of the M.2 vendors are just hoping/assuming the majority of drives will go into laptops anyway, where a power cut is less likely thanks to the internal battery.

Surprised the SK hynix one was a problem, but I think that's a pretty bargain-level drive, so it's gotta compete with the Phison junk.
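For what it's worth, the "verify the accepted transactions are flushed" test from the quote above can be sketched pretty simply: write sequence-numbered records, fsync each one before counting it as accepted, yank power, then check that everything acknowledged actually survived. A rough illustration, with the file path and record layout entirely made up:

```python
# Sketch of a power-loss flush-verification loop. Write fixed-size records,
# each carrying its sequence number, and fsync before counting it "accepted".
# After the power cut, rerun in verify mode with the last acknowledged number.
import os
import struct
import sys

PATH = "/mnt/testdrive/plp_log.bin"  # hypothetical file on the drive under test
RECORD = struct.Struct("<Q56x")      # 8-byte sequence number padded to 64 bytes

def write_until_power_cut():
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o644)
    seq = 0
    while True:
        os.pwrite(fd, RECORD.pack(seq), seq * RECORD.size)
        os.fsync(fd)               # the drive has now "accepted" record seq
        print(seq, flush=True)     # log to another box, then yank the power
        seq += 1

def verify(last_acked: int):
    with open(PATH, "rb") as f:
        for seq in range(last_acked + 1):
            (stored,) = RECORD.unpack(f.read(RECORD.size))
            assert stored == seq, f"record {seq} lost despite being flushed!"
    print(f"all {last_acked + 1} acknowledged records survived")

if __name__ == "__main__":
    verify(int(sys.argv[1])) if len(sys.argv) > 1 else write_until_power_cut()
```

The real setups hammer many queues in parallel and harvest them all, which is why it's harder to stand up than the simple power-cycle test.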

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Ah yeah the P41 was the el cheapo. But still gold! Confusing branding imo.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Klyith posted:

And if you are still worried, I'd suggest that what you want is an enterprise drive with true power loss protection, not a slightly different consumer drive.

:agreed: there’s a lot that has to go wrong there in a specific timed sequence for you to lose important data. And if you slap the system on a UPS that’s a big mitigation right there.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I really miss Intel SSDs tbh. The 750s were such a ridiculously good product for prosumers. They were expensive but they were practically enterprise products at way less cost.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Subjunctive posted:

Lots of discounted Optane floating around these days!

some day I will install the one I grabbed last month

The Optanes are pretty nice too, but the 750 is still my fav. I have no need to buy any SSDs, I just like to reminisce about them lol

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Perplx posted:

At my old job they were the first SSDs we had in production; over 5+ years I saw dozens of other SSDs go bad, but the 750s were as good as new.

Yeah, I did system validation on PCIe switches at a previous job and built a system with 750s alongside systems using other manufacturers' SSDs. We would run all sorts of torture throughput and latency tests, and the 750s would only die after giving you a TON of warning, far, far beyond their PB-written limits. And then they would fail into read-only mode (not that we cared, we were just writing raw test patterns), so you would be able to recover the data. Really impressive drives that were nigh bulletproof! The 3500s were similar (apart from that one high-profile bug whose details I forget now).

Anyway, I thought it was a bummer Intel got out of the game; they had some great products in the SSD space.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

WhyteRyce posted:

if you're buying third-party turnkey solutions you probably can save engineering headcount by sticking to the reference firmware except in "value add" areas (i.e. random NVMe and security features, nothing performance-related). To a bean counter it really doesn't make sense to invest in a full in-house firmware team if you're already buying a complete solution. Also, funny things happen when you decide to do that anyway and the firmware team brazenly ignores the reference design and recommendations

Client is a poo poo, razor-thin market even by NAND product standards, and with the exception of a few standout cases I don't think their client strategy was anything other than a numbers game to keep their fab utilization high, or at least to soak up the NAND that wasn't good enough quality for the enterprise market they actually cared about

It's true, stuff like the 750 and 900p were too good to justify when you could charge the same for way shittier products. AND those would have much better margins, since you barely change the firmware from a mid-tier supplier's reference design.

Stuff like that is why I’m thankful I have managed to avoid ever working on consumer products :haw:

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

WhyteRyce posted:

The 900p was fun because it was mostly the exact same as the enterprise 4800X with some firmware features disabled (namely manageability and logging stuff). But that was probably just a byproduct of the "JUST BUILD SOMETHING AND FLOOD THE MARKET" mindset of early Optane, because the follow-on product had no client/gamer/enthusiast variant.

Yeah, slap a sleek-looking heatsink on there and ship it with a free Star Citizen ship! lol (we had so many of those coupons and no one ever bothered using them; probably could have sold them to Star Citizen weebs)

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

poe meater posted:

Uhh can someone explain to me if adding another M.2 drive will work properly on my motherboard?

I'm using a Prime X570-P motherboard which already has one M.2 slot occupied. There may or may not be issues with sharing lanes/bandwidth/SATA ports or something?

Can anyone clarify if there will be any issues? I don't really understand the manual lol. I'm running a 5800x3D + 6700XT.

https://www.asus.com/us/motherboards-components/motherboards/prime/prime-x570-p/techspec/

Looks fine. From a look at the manual, the 2nd (lower) M.2 hangs off the chipset, which is connected to the CPU by a high-speed link that's basically PCIe. There will be a tiny latency hit from the extra hop, but nothing you would ever notice outside of synthetic benchmarks.

It doesn't seem to disable any of the onboard SATA ports either; the slot just has the option of taking a SATA-keyed M.2 device as well. If you want to confirm which path a drive actually ends up on, see the sketch below.
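On Linux you can see the chain of PCIe bridges between an NVMe drive and the root port, which makes it obvious whether it's on CPU lanes or behind the chipset (an extra hop or two shows up). A small sketch using standard sysfs paths; nothing here is board-specific:

```python
# Print the chain of PCI bus addresses between each NVMe controller and the
# root. A drive behind the chipset shows extra bridge hops versus CPU lanes.
import glob
import os

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    path = os.path.realpath(ctrl)  # resolves through every bridge in the tree
    hops = [p for p in path.split("/") if p.count(":") == 2]  # PCI addresses only
    print(os.path.basename(ctrl), "->", " -> ".join(hops))
```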

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Interestingly, I'm hearing from friends at an ex-ex-company that they are not going to do a tri-mode (SAS/SATA/NVMe) RAID controller in the future and are dropping NVMe.

This makes sense to me; I always thought it was silly to put a controller with all its extra protocol-layer headaches in the path instead of just routing from the host, with possibly some transparent switches along the way.

Sounds like almost no customers used the NVMe side of the Gen4 controller.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Seagate Nytros come in 2280 for sure, but a lot of the big fellas with the capacitors for flushing writes to NAND on power failure would be 22110.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Honestly, it was always a bit of a shocker if an NVMe drive could hit its power state transitions properly. We had a list of drives we used to test that with, because most drives would gently caress it up and require a bunch of debugging to prove it wasn't our problem. Yet again something Intel drives did amazingly well; both the 750 and the 3500/3700s rocked it.
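If you want to see what power states a drive even advertises before blaming your own hardware, nvme-cli can dump the descriptors from the controller's identify data. A rough sketch, assuming nvme-cli is installed, the drive is at /dev/nvme0, and its JSON output carries the usual psds fields:

```python
# Dump the power state descriptors an NVMe controller advertises, by parsing
# nvme-cli's JSON output. Assumes nvme-cli is installed and /dev/nvme0 exists.
import json
import subprocess

out = subprocess.run(
    ["nvme", "id-ctrl", "/dev/nvme0", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout

for i, ps in enumerate(json.loads(out)["psds"]):
    # max_power is typically in centiwatts per the NVMe spec's MP field;
    # entry/exit latencies are in microseconds
    print(f"PS{i}: max_power={ps['max_power']} entry_lat={ps['entry_lat']}us"
          f" exit_lat={ps['exit_lat']}us")
```

Whether the drive actually transitions through those states cleanly is a separate question, and that's the part most drives flubbed.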

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
We used a ton of those 16GB Optane drives as PCIe endpoints, but I forget what it was they did wrong; maybe it was problems with repeated link disables. Anyway, it shouldn't be an issue for legit use when you're not torture-testing PCIe links.

I still think a disk-on-module is probably a better option for a small industrial drive though.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Shumagorath posted:

I guess that’s nice? I’ve never even come close to burning out an SSD, though maybe QLC will start to make me sweat that.

Why did 3D XPoint lose out to NAND? Cost?

Only Intel and Micron were interested in producing it, and frankly NAND lifespan is still plenty good, especially with all the redundancy in enterprise products.

3D XPoint's big feature was supposed to be much, much lower latency and higher transfer speeds, so it could be its own memory tier between DRAM and NVMe SSDs, but it never got to that performance level, so it ended up as more of an also-ran to PCIe NVMe.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
It even kind of seems like CXL is a bit DOA, with no really great use cases coming up; additional memory tiers that require a poo poo ton of software support and system planning are probably not where it's at.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

WhyteRyce posted:

I used to think nothing of stuff like dual sockets being an issue for anyone because I'm a brain-dead HW guy, but then I talked to application teams that absolutely hate NUMA, and lol, CXL is NUMA on steroids: get in losers, we are going over an interconnect

Yeah, software folks are gonna dictate this for a while at least, and I think they hate it.

Even adding memory expansion is a big headache, and that is the simplest of CXL applications. Once you get into cache coherency and whatnot, they're gonna go bananas.
