SamDabbers
May 26, 2003



redeyes posted:

I wish I could find a case big enough for 2 x (3x5.25" iStar hotswap bays) plus a full size ATX mobo with full size RX480 GFX card.

Help

Rackmount is fine, almost anything is fine except OMG GAMERZZZ BLING type cases.

G-Prime posted:

Finding a case with 6x5.25" externals is hard, honestly, but Newegg shows that they have 4 different ones available. They're all "gamer" cases, but the Rosewill one doesn't have a window and has TONS of cooling capabilities. And if you remove the front fans and replace them with unlighted ones, it's just a chunky, black case.

Edit: Link: https://www.newegg.com/Product/Product.aspx?Item=N82E16811147053

Edit2: The fan up front has an on/off switch, so you don't even need to replace it. Also, holy poo poo, it's 230mm. That's MASSIVE.

Alternative:
Rosewill RSV-L4412
+
Tower stand w/casters

Similar cost if you include hot swap bays, of which there are 12, no LED bullshit, and currently the case comes with a 500W power supply.

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

evol262 posted:

use ECC if you care about your data

Yes. Is this now controversial?

I admit I came off too strong at the start by saying anyone who cares is already using ECC, sure, but for the data I care about any source of preventable corruption is equivalent to me. I also think you're overselling how much ZFS protects you from memory errors; data still gets held and handled in RAM by your backup program, network stack, and every other program that wants to read or modify it. ZFS may protect you from some memory errors, but the protections from checksumming and ECC do not overlap.
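
To make the non-overlap concrete, here's a toy Python sketch (obviously not ZFS itself, just the idea): a checksum computed after a bit has already flipped in RAM will happily verify the corrupted data, while a flip that happens after the checksum was taken gets caught on read.

code:
import hashlib

def flip_bit(buf, bit):
    buf[bit // 8] ^= 1 << (bit % 8)

data = bytearray(b"the bytes you actually care about")

# Flip in RAM *before* any checksum exists (the case ECC guards against):
# the checksum gets computed over already-corrupt data and verifies "fine".
early = bytearray(data)
flip_bit(early, 5)
early_sum = hashlib.sha256(early).digest()
print(hashlib.sha256(early).digest() == early_sum)   # True -- silent corruption

# Flip *after* the checksum exists (the case ZFS-style checksumming catches):
good_sum = hashlib.sha256(data).digest()
late = bytearray(data)
flip_bit(late, 5)
print(hashlib.sha256(late).digest() == good_sum)     # False -- a scrub/read would flag it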

I think this thread overstates the dangers of bad reads from HDDs, or at least seriously understates the dangers of RAM, just because disk errors can be mitigated by software. Both are very rare events and data isn't more or less corrupt from either one. If there were some piece of software we could run that could mitigate memory errors like ZFS mitigates disk errors we'd all be running it.

evol262 posted:

You're gonna spend 2k on a NAS anyway

Yeah, if you're building a high performance Xeon/Threadripper/Ryzen 7 server with tons of storage, which that was in direct response to. You brought up spending hundreds on a "Xeon and Xeon board (or Threadripper, or Ryzen, or whatever)," which makes it more than just a simple NAS to serve files. $2000 for a high end home server with tons of storage isn't unreasonable and at that point ECC ends up a small part of it. You can go with Ryzen and ECC instead of an i5/i7 and probably end up saving a bit of money.

If you don't need high end CPU performance (i.e. $60 pentium + mobo combo tier) and just want to saturate one or two gigabit links (or to pair with a 10GbE PCIe NIC) you can get ECC for the same price as the cheapest new consumer level stuff. It's probably cheaper, actually, since old server ECC RAM is substantially cheaper than new DDR4 RAM, plus you'll get IPMI and multiple onboard NICs with teaming out of the deal.

evol262 posted:

My position is that you can build a performant, capable NAS with 24tb usable for $1000, which doesn't include ECC or ECC-capable chipsets.

Okay. But you could also do it with ECC for the same price ($65 for a cpu + mobo) and same definition of performant - in the absence of the hard requirement for the matx form factor. Once you need a small form factor you do face a higher price floor for ECC (~200ish for a pentium and ECC capable matx mobo, though that would have IPMI too). But you're also paying $60+ more for a Node 804 or equivalent compared to a bare bones ATX case, which could have been another 3TB drive or could knock the price down to $600.

Running a file system with high overhead like ZFS (particularly with AF drives since each tiny piece of metadata eats 4KB instead of 512-1024B) also ends up costing way more than ECC in the form of unusable space, on the order of multiple terabytes with large raidz1-3 pools. All my unimportant data would do just as well on mdadm or LVM, except that administering multiple tiers of storage like that would not be worth the hassle for ~$80 of HDD space and ZFS has other, unrelated, features like CoW snapshots.
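
Rough numbers, for anyone who wants to play with them (nominal parity only; actual ZFS allocation on ashift=12 / 4K-sector drives loses more to padding and metadata, so treat these as upper bounds on usable space):

code:
def raidz_usable_tb(drives, drive_tb, parity):
    # Usable capacity before ZFS padding/metadata overhead: total minus parity drives.
    return (drives - parity) * drive_tb

for parity in (1, 2, 3):
    print(f"raidz{parity}: {raidz_usable_tb(8, 3, parity)} TB nominal from 8x 3TB drives")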

evol262 posted:

"can't build a NAS for less than 2k"

What a strange quote.

Desuwa fucked around with this message at 20:13 on Aug 13, 2017

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I get the impression that Intel and Nvidia both have to deal with a lot of patents when it comes to the hardware-accelerated pieces of their hardware. But like I said before, the industry's given up on hardware acceleration at scale for encode/decode outside of end-user devices, so the few who seem to care about this segment of hardware acceleration on non-mobile GPUs are basically home users trying to play pirated content, and streamers who will probably wind up with 8-core monstrosities that can easily transcode at least h.264 in realtime anyway.

BlankSystemDaemon
Mar 13, 2009



redeyes posted:

I wish I could find a case big enough for 2 x (3x5.25" iStar hotswap bays) plus a full size ATX mobo with full size RX480 GFX card.

Help

Rackmount is fine, almost anything is fine except OMG GAMERZZZ BLING type cases.
Lian-Li has one with 12x 5.25" external bays; combined with this, this, and this accessory, it will give you up to 12 hotswap bays with locking doors that you can expand as you see fit. 4x of these IcyDock trayless things will give you 20 drives.

BlankSystemDaemon fucked around with this message at 21:27 on Aug 13, 2017

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Nullsmack posted:

I'm looking to build a new fileserver system. My current one is an older system running an AMD E450 chip. I liked that since it is low power and doesn't require a fan. Any modern equivalents without dropping nearly $1000 on a xeon-d processor and board?

I got started on an Intel Bay Trail motherboard and then upgraded to an Intel Braswell board. Both were very low power and featured a CPU soldered onto the motherboard with a large heatsink. Neither had a fan. Both made excellent file servers if you just want to run FreeNAS or Ubuntu.

I think there's a modern equivalent. It might be Apollo Lake, but I'm not sure since I've moved up another level to a mainstream desktop processor.

evol262
Nov 30, 2010
#!/usr/bin/perl

Desuwa posted:

Yes. Is this now controversial?
Yes, in this thread, and from ZFS developers. Please stop flogging this horse.

Desuwa posted:

I admit I came off too strong at the start by saying anyone who cares is already using ECC, sure, but for the data I care about any source of preventable corruption is equivalent to me. I also think you're overselling how much ZFS protects you from memory errors; data still gets held and handled in RAM by your backup program, network stack, and every other program that wants to read or modify it. ZFS may protect you from some memory errors, but the protections from checksumming and ECC do not overlap.
I'm not arguing that ZFS protects you from memory errors. ZFS protects you from gradual disk failure. Unless the data is rewritten, a bit flipped in read isn't significant to your data integrity.

More to the point, though, is that disks tend to fail gradually until you get the click of death. Cascading memory failures are immediately observable as core dumps, crashes, and other 'WTF is happening' behavior. This is why disk corruption is 'silent', but nobody speaks about memory failure that way.

If you've never seen a system repeatedly choke network transfers because TX/RX checksumming fails because a stick of memory is dying, I'm not sure what to say. Failing memory shows up all over the system.
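
If you want to actually go looking on a Linux box, this is roughly the kind of quick check I mean (a sketch; the EDAC counters only exist on platforms with the right driver support, and the log strings vary by kernel):

code:
import glob, subprocess

# EDAC exposes per-memory-controller corrected/uncorrected error counters when the
# platform supports it; on anything else you're mostly watching the kernel log for
# machine checks, crashes, and checksum complaints.
for counter in glob.glob("/sys/devices/system/edac/mc/mc*/[cu]e_count"):
    with open(counter) as f:
        print(counter, f.read().strip())

log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
for line in log.splitlines():
    if any(s in line for s in ("Machine check", "EDAC", "mce:")):
        print(line)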

Desuwa posted:

I think this thread overstates the dangers of bad reads from HDDs, or at least seriously understates the dangers of RAM, just because disk errors can be mitigated by software. Both are very rare events and data isn't more or less corrupt from either one. If there were some piece of software we could run that could mitigate memory errors like ZFS mitigates disk errors we'd all be running it.
And the dangers of bad memory to data which is already present on disk are...?

Desuwa posted:

Yeah, if you're building a high performance Xeon/Threadripper/Ryzen 7 server with tons of storage, which that was in direct response to. You brought up spending hundreds on a "Xeon and Xeon board (or Threadripper, or Ryzen, or whatever)," which makes it more than just a simple NAS to serve files. $2000 for a high end home server with tons of storage isn't unreasonable and at that point ECC ends up a small part of it. You can go with Ryzen and ECC instead of an i5/i7 and probably end up saving a bit of money.
:shobon:

I was replying to you saying "you're already gonna spend 2k, so may as well ...". Why spend that much?

I get if you're building an 'all in one' virt/storage/whatever machine, but that's an interesting single point of failure for someone who seems very concerned with uptime and integrity. Yeah, if you're spending 2k on a NAS, why not just get ECC? But why are you spending 2k at all?

Desuwa posted:

If you don't need high end CPU performance (i.e. $60 pentium + mobo combo tier) and just want to saturate one or two gigabit links (or to pair with a 10GbE PCIe NIC) you can get ECC for the same price as the cheapest new consumer level stuff. It's probably cheaper, actually, since old server ECC RAM is substantially cheaper than new DDR4 RAM, plus you'll get IPMI and multiple onboard NICs with teaming out of the deal.
People don't dump old server stuff cheap because it's efficient, which is one of my concerns. I can get a 4-port PCIe Intel gigE NIC for next to nothing, and vPro is literally free. Yes, there are nice parts about server gear. No, power, noise, and space are not any of them, and 'if you buy this 2 year old equipment, it's cheaper than new stuff!' is tautological.

Desuwa posted:

Okay. But you could also do it with ECC for the same price ($65 for a cpu + mobo) and same definition of performant - in the absence of the hard requirement for the matx form factor. Once you need a small form factor you do face a higher price floor for ECC (~200ish for a pentium and ECC capable matx mobo, though that would have IPMI too). But you're also paying $60+ more for a Node 804 or equivalent compared to a bare bones ATX case, which could have been another 3TB drive or could knock the price down to $600.
This is a facile argument, since nobody in this thread is buying bare bones cases (not enough drive mounting capacity). But I don't need a smaller form factor. I want one, like you want a single home server to do it all. We're not gonna come to a consensus here.

However, this is the consumer NAS thread, not the home lab thread. While a lot of people probably are using their NAS as an 'all in one' machine, some of us aren't.

Desuwa posted:

Running a file system with high overhead like ZFS (particularly with AF drives since each tiny piece of metadata eats 4KB instead of 512-1024B) also ends up costing way more than ECC in the form of unusable space, on the order of multiple terabytes with large raidz1-3 pools. All my unimportant data would do just as well on mdadm or LVM, except that administering multiple tiers of storage like that would not be worth the hassle for ~$80 of HDD space and ZFS has other, unrelated, features like CoW snapshots.
ECC and a checksumming filesystem are not orthogonal. ECC doesn't do a drat thing to protect you from bit rot.

Desuwa posted:

What a strange quote.

Desuwa posted:

If a person needs a high performance NAS with tons of storage, they're already dropping 2000+ dollars, then they can build a Ryzen system with ECC without shelling out for Xeons.

Look, let's just stop. We're clearly not getting anywhere.

I can post a part list from my NAS if you want, but a $1k system in mATX can easily get to 24TB of storage with raidz2/raid6, quietly, saturate 4x multipathed gige iscsi (and probably a lot more than that -- I haven't actually run a disk benchmark), and have enough juice left over to run a couple of small VMs for :filez:

It won't do whatever extra stuff your system is doing (SQL VM?), but that's ok -- I want to provide storage, and not much else. This way, I can get a 1U switch, PDU, 4 compute nodes, and a storage server inside a half-depth 9U enclosure that sits next to my desk, and I can't hear it. Those are my needs, since the whole thing performs well, transports easily when I move, and is unobtrusive enough that my wife won't kill me if we're in some 1-bedroom temporary corporate housing for a month. This comprises a lot of my working environment, so it's important to me to have it available (and I can't be bothered with a colo)

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Frankly the real argument is that you shouldn't build a NAS without IPMI because IPMI is the loving tits and once you get spoiled by it you'll never want to go back. And if you get a board with IPMI, it's very probably gonna also come with ECC support. And at that point you might as well get ECC RAM because RAM prices are all hosed up right now anyhow and Intel chips have all but ceased to meaningfully advance over the last few generations.
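
For the uninitiated, this is the sort of thing IPMI lets you do from the couch (a rough sketch wrapping ipmitool over the network; the BMC address and credentials here are made up):

code:
import subprocess

# Talk to the board's BMC over the LAN; host/user/password are placeholders for
# whatever your IPMI interface is actually configured with.
BMC = ["ipmitool", "-I", "lanplus", "-H", "192.168.1.50", "-U", "admin", "-P", "changeme"]

def power_status():
    return subprocess.run(BMC + ["chassis", "power", "status"],
                          capture_output=True, text=True).stdout.strip()

def power_cycle():
    # Hard power-cycle a hung box without walking over to it.
    subprocess.run(BMC + ["chassis", "power", "cycle"], check=True)

print(power_status())  # e.g. "Chassis Power is on"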

IOwnCalculus
Apr 2, 2003





DrDork posted:

Frankly the real argument is that you shouldn't build a NAS without IPMI because IPMI is the loving tits and once you get spoiled by it you'll never want to go back.

This, so loving hard.

evol262
Nov 30, 2010
#!/usr/bin/perl

DrDork posted:

Frankly the real argument is that you shouldn't build a NAS without IPMI because IPMI is the loving tits and once you get spoiled by it you'll never want to go back. And if you get a board with IPMI, it's very probably gonna also come with ECC support. And at that point you might as well get ECC RAM because RAM prices are all hosed up right now anyhow and Intel chips have all but ceased to meaningfully advance over the last few generations.

vPro-capable boards are $100. It's perfectly fine OOB. I'm not gonna argue that Intel is doing a hell of a lot for performance since Haswell, but that's a different discussion.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Sure, you're welcome to go with your OOB management platform of choice. Just get something that will let you remotely manage / boot / install an OS. Intel AMT has had a whole host of problems of late, though, so I might look elsewhere unless I had a compelling reason to select them.

IOwnCalculus
Apr 2, 2003





Intel AMT is better than nothing (ignoring the security issues), but sucks compared to Supermicro IPMI.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

All things being equal you should get IPMI or whatever, but most people not tinkering all the time shouldn't worry about it too much. In fact, if you do a VM-centric server, even most tinkerers don't need it because they'll be installing in VMs, not on the hardware.

I would have benefited from having that on my server like twice in 7 years.

(don't get me wrong, that poo poo is cool)

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Is there an add-in card of choice that would give me IPMI/lights-out/whatever you call it? I just bought a GT 710 from EVGA B-stock for $25 for a similar purpose, i.e. something I can plug into headless boxes for those rare occasions I need to tweak BIOS settings. But if there's something that would give me full-on remote management in a similar price bracket (let's say <$50) I'd be open to that; I'm not wedded to a direct video output instead of something I could connect to over the network.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Paul MaudDib posted:

Is there an add-in card of choice that would give me IPMI/lights-out/whatever you call it?

I think there are some cheapish solutions for just the remote restart ability, but if you want the full capabilities of IPMI (remote power management, remote console, remote mounting of .ISO/IMG files, etc), you'd need something like the IP8000 which tends to go for well over $200.

You might be able to find a used remote KVM setup for <$100, but that'd be as close as you'd get.

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

evol262 posted:

Yes, in this thread, and from ZFS developers. Please stop flogging this horse.

They even said you should have ECC. ZFS has no special requirement for ECC, but it's not ZFS that "wants" ECC, it's you or your data that should want ECC.

evol262 posted:

I was replying to you saying "you're already gonna spend 2k, so may as well ...". Why spend that much?

This is just false. Go check the posts, especially the one where I mention 2000 for the first time.

I've only built basic NAS boxes to serve files myself, with both new and used hardware, but if someone was doing a high end server with threadripper or a Xeon it's not unreasonable, and ECC ends up costing a relatively small amount.

evol262 posted:

This is a facile argument, since nobody in this thread is buying bare bones cases (not enough drive mounting capacity). But I don't need a smaller form factor. I want one, like you want a single home server to do it all. We're not gonna come to a consensus here.

I have a $50 ATX case that handles 10 drives. Maybe $55 once I factor in a 5.25 -> 3.5 adapter. It's not a great case but for just holding drives it works.

evol262 posted:

However, this is the consumer NAS thread, not the home lab thread. While a lot of people probably are using their NAS as an 'all in one' machine, some of us aren't.

I've not said that. I've said whether you're buying something just to serve files or some monster home server, ECC isn't some unobtainable halo tier product. I think anyone purpose-buying NAS hardware for any data they care about should get ECC.

In fact the quote you brought up showing me saying you can't build a NAS for under 2000 doesn't show anything of the sort. You're not building "a high performance NAS with tons of storage" so I'm not sure what you're getting at by constantly bringing it up.

evol262 posted:

ECC and a checksumming filesystem are not orthogonal. ECC doesn't do a drat thing to protect you from bit rot.

I'm not sure what you mean by orthogonal because that sounds pretty orthogonal? Two things that act on different, unrelated factors, which was my point?

Desuwa fucked around with this message at 17:31 on Aug 14, 2017

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
There's really nothing that cheap that will add in the capabilities. The usual box recommended on ServeTheHome that's among the cheapest is the Lantronix Spider, and it's $300+ on Amazon. The V portion of IP KVM is the part that gets expensive; otherwise you're looking at Serial-over-USB type nonsense to get a terminal to the machine. This alone makes IPMI support extremely cost-effective if you want a lower-maintenance home NAS. Otherwise, you're looking at the SOHO NAS options, which are a little pricey compared to frankenstein-boxing it.

I'm considering the possibility of using a chassis like the Phanteks Enthoo Mini XL that can house two motherboards, since I'm still going to have a desktop of some sort no matter what.

evol262
Nov 30, 2010
#!/usr/bin/perl

DrDork posted:

I think there are some cheapish solutions for just the remote restart ability, but if you want the full capabilities of IPMI (remote power management, remote console, remote mounting of .ISO/IMG files, etc), you'd need something like the IP8000 which tends to go for well over $200.

You might be able to find a used remote KVM setup for <$100, but that'd be as close as you'd get.
IPMI is great, and I love it everywhere I have it. For the home, setting the firmware to auto-power-on, a managed PDU, and a reasonable ipxe config (ideally with something like Foreman to let you provision) gets you 90% there. IPMI is still clearly better, but PDUs and a provisioner make fencing and spinning up VMs nicer anyway...


Desuwa posted:

They even said you should have ECC. ZFS has no special requirement for ECC, but it's not ZFS that "wants" ECC, it's you or your data that should want ECC.
Some do, some don't. Oracle/Sun explicitly say that you shouldn't care. Can you just drop this? The risk of data corruption in a file that's in memory without other symptoms is really down to cosmic rays, and almost every file format can deal with one flipped bit anyway.

Desuwa posted:

This is just false. Go check the posts, especially the one where I mention 2000 for the first time.
Please stop. This is just derailing. Nobody cares.

Desuwa posted:

I've only built basic NAS boxes to serve files myself, with both new and used hardware, but if someone was doing a high end server with threadripper or a Xeon it's not unreasonable, and ECC ends up costing a relatively small amount.
Server != NAS for some of us. Super powered CPUs don't do much for fileserver performance.

Desuwa posted:

I have a $50 ATX case that handles 10 drives. Maybe $55 once I factor in a 5.25 -> 3.5 adapter. It's not a great case but for just holding drives it works.
Yes, but what percentage of people in here are running plain-jane ATX cases without hotswap, or trying to wedge as many drives as possible into as small a case as possible? 15 years ago, I had a full height tower with 18 drives in it. But there are better ways now.

Desuwa posted:

I've not said that. I've said whether you're buying something just to serve files or some monster home server, ECC isn't some unobtainable halo tier product. I think anyone purpose-buying NAS hardware for any data they care about should get ECC.
Again, nobody cares, and this is your opinion, not a fact. That ECC isn't that important is my opinion, not a fact.

Desuwa posted:

In fact the quote you brought up showing me saying you can't build a NAS for under 2000 doesn't show anything of the sort. You're not building "a high performance NAS with tons of storage" so I'm not sure what you're getting at by constantly bringing it up.
Please explain how 4TB drives are faster than 3TB, or how a Ryzen will make iSCSI faster. Or don't. I don't know or care what you consider "high performance", but as many spindles as possible and as much flash/SSD as possible is/was the gold standard for performance, adding controllers where necessary.

Could I get more performance with more money? Yes. But in more memory and more flash. Would it improve my workloads? Not noticeably. This is why I made the SLI GTX 99999 analogy. There are rapidly diminishing returns.

Desuwa posted:

I'm not sure what you mean by orthogonal because that sounds pretty orthogonal? Two things that act on different, unrelated factors, which was my point?

When you say "ZFS is more of a hit in space, and ECC..."

Not orthogonal means "it's not an either/or choice". ECC does not protect you from bit rot on whatever you'd run on top of mdadm/lvm (ext4 checksumming or btrfs would, but you wouldn't use mdadm or lvm with btrfs). Checksumming filesystems do not protect you from bit flipping before write. You can have both. Or either. ECC and filesystems are complementary, but one saves data you already have, and one saves data you're acquiring or modifying.

Again, though (and again, and again), cosmic rays are amazingly rare, TX/RX checksums will protect from some of it, and bad DIMMs have enough symptoms that you'd notice.

Use ECC if you want. This is not a holy war. Just please stop overblowing the importance of it, and going down slippery slopes like "you're already spending X" or "already gonna have a Xeon/Ryzen", which have no merit to a lot of us, and which don't noticeably increase workload performance for a pure fileserver.

In general, please just let this drop.

redeyes
Sep 14, 2002

by Fluffdaddy

SamDabbers posted:

Alternative:
Rosewill RSV-L4412
+
Tower stand w/casters

Similar cost if you include hot swap bays, of which there are 12, no LED bullshit, and currently the case comes with a 500W power supply.

I already have 2 iStar 4x hotswap backplanes, but Rosewill makes that same rackmount case with no hotswap, so that works fine. I was kind of hoping to not have to build an entire computer around the hotswap backplanes... but I have no idea what I would use. eSATA somehow? I can certainly run that case alongside my workstation, but I need a fast connection between the 2 boxes (Infiniband?)

[edit] I didn't realize that Rosewill has a 25" deep model. Perfect! I can move all the stuff I need into one case which is the best solution.

redeyes fucked around with this message at 19:31 on Aug 14, 2017

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!
E: yeah this post was written with a bit too much anger.

Desuwa fucked around with this message at 22:15 on Aug 14, 2017

Steakandchips
Apr 30, 2009

Pull up thread, pull up!

Nullsmack
Dec 7, 2001
Digital apocalypse

evol262 posted:



I can post a part list from my NAS if you want, but a $1k system in mATX can easily get to 24TB of storage with raidz2/raid6, quietly, saturate 4x multipathed gige iscsi (and probably a lot more than that -- I haven't actually run a disk benchmark), and have enough juice left over to run a couple of small VMs for :filez:

It won't do whatever extra stuff your system is doing (SQL VM?), but that's ok -- I want to provide storage, and not much else. This way, I can get a 1U switch, PDU, 4 compute nodes, and a storage server inside a half-depth 9U enclosure that sits next to my desk, and I can't hear it. Those are my needs, since the whole thing performs well, transports easily when I move, and is unobtrusive enough that my wife won't kill me if we're in some 1-bedroom temporary corporate housing for a month. This comprises a lot of my working environment, so it's important to me to have it available (and I can't be bothered with a colo)

I'd like to know more about your system.

BlankSystemDaemon
Mar 13, 2009



Steakandchips posted:

Pull up thread, pull up!
Come on, you and I both know you should've gone with: Dive, dive, dive! Hit your burners, pilot!


In storage relevant news, Gigabyte Server has finally made the MS10-ST0 available to interested parties with local representatives (read: cloud customers, we're ready to sell to you - consumers, get hosed a little while longer).
Turns out the rumors were true: we're getting a 16 non-SMT-core 2GHz CPU with 1MB L2 cache per core, up to 64GB UDIMM ECC (128GB RDIMM ECC), 32GB eMMC, and 16 drives provided you don't use a daughter board.

Makes me wonder if there isn't something to be said for a memory-resident FreeBSD installation which loads itself from the eMMC and stores data on the zpool.

BlankSystemDaemon fucked around with this message at 13:49 on Aug 15, 2017

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Nullsmack posted:

I'd like to know more about your system.

Me too, it sounds like an awesome setup.

Djarum
Apr 1, 2004

by vyelkin

D. Ebdrup posted:

Come on, you and I both know you should've gone with: Dive, dive, dive! Hit your burners, pilot!


In storage relevant news, Gigabyte Server has finally made the MS10-ST0 available to interested parties with local representatives (read: cloud customers, we're ready to sell to you - consumers, get hosed a little while longer).
Turns out the rumors were true: we're getting a 16 non-SMT-core 2GHz CPU with 1MB L2 cache per core, up to 64GB UDIMM ECC (128GB RDIMM ECC), 32GB eMMC, and 16 drives provided you don't use a daughter board.

Makes me wonder if there isn't something to be said for a memory-resident FreeBSD installation which loads itself from the eMMC and stores data on the zpool.

That looks really nice. I wonder what pricing would be for that?

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

D. Ebdrup posted:

Come on, you and I both know you should've gone with: Dive, dive, dive! Hit your burners, pilot!


In storage relevant news, Gigabyte Server has finally made the MS10-ST0 available to interested parties with local representatives (read: cloud customers, we're ready to sell to you - consumers, get hosed a little while longer).
Turns out the rumors were true: we're getting a 16 non-SMT-core 2GHz CPU with 1MB L2 cache per core, up to 64GB UDIMM ECC (128GB RDIMM ECC), 32GB eMMC, and 16 drives provided you don't use a daughter board.

Makes me wonder if there isn't something to be said for a memory-resident FreeBSD installation which loads itself from the eMMC and stores data on the zpool.

What's the deal with Intel releases now? It's weird to see OEM hardware released that has unannounced Intel CPUs in it, but Google Cloud has been selling Xeon Platinums for 6 months, and they only got announced 3 weeks ago.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
That looks like a $400+ board given the mini-SAS controller and SFP+ ports. The primary difference between that and the Xeon-D boards from a couple of years ago, besides the processor, is the fact that it's a mini-ITX board too. There don't appear to be any M.2 slots either, but the 32GB of eMMC onboard will more than make up the price difference.

BlankSystemDaemon
Mar 13, 2009



Twerk from Home posted:

What's the deal with Intel releases now? It's weird to see OEM hardware released that has unannounced Intel CPUs in it, but Google Cloud has been selling Xeon Platinums for 6 months, and they only got announced 3 weeks ago.
It's worth noting that IDF was cancelled this year, so any announcements that would've been made there have likely poofed.

people posted:

Pricing on the Apollo Lake/Denverton SoCs
About $400 is what I'd expect too - but as necrobobsledder pointed out, the main distinction is the CPU. Specifically, the Xeon-D has 256kB L2 cache per core and 24MB shared L3 cache, whereas the equivalent CPU in terms of GHz and core count has 1MB L2 cache per core; the Xeon-D is a 4-wide instruction path whereas Goldmont is only 3-wide (but at least Goldmont can do store and load in the same instruction, unlike Silvermont); and Goldmont also introduces SHA extensions, which mean hardware acceleration for both AES and SHA-1 operations (outside of what QuickAssist offers).


EDIT: Supermicro has apparently also released a whole set of boards powered by the Denverton SoC, and some of them feature QuickAssist.

BlankSystemDaemon fucked around with this message at 19:31 on Aug 15, 2017

100% Dundee
Oct 11, 2004
It seems like most of the posting in this thread is for homebrew/custom-built type stuff, so I'm not sure if this is the right place, but I'll post anyway and see what y'all have to say.

Essentially what I'm looking for is a super simple, easy-to-use, low-maintenance, plug-and-play type solution for like 20-30TB worth of storage. What I have now is a bunch of random internal/external hard drives ranging from 3-8TB, and I just have all my stuff spread out among them without much rhyme or reason, and most of them are getting pretty full. I don't really have any kind of budget in mind, but ideally I'd like to stay around $2k at the higher end and hopefully have some kind of expandability in the future if I need more storage.

I've been looking at the Synology DS1517 5-bay unit with 5x 8TB WD Red drives in it. I like the idea that I can easily add the 5-bay expansion units in the future to increase capacity, and I've heard Synology stuff is very easy to use as well. Feel free to steer me away if there are other brands or solutions that I should be considering.

BlankSystemDaemon
Mar 13, 2009



Courtesy of ServeTheHome, we also have the full SKU list with MSRP and full feature lists including which chips support QuickAssist:


EDIT: Going by this, I'm most interested in the C3758 on the Supermicro A2SDi-8C+-HLN4F.

BlankSystemDaemon fucked around with this message at 23:31 on Aug 15, 2017

Thanks Ants
May 21, 2004

#essereFerrari


C3708 looking sweet

Internet Explorer
Jun 1, 2005





100% Dundee posted:

It seems like most of the posting in this thread is for homebrew/custom-built type stuff, so I'm not sure if this is the right place, but I'll post anyway and see what y'all have to say.

Essentially what I'm looking for is a super simple, easy-to-use, low-maintenance, plug-and-play type solution for like 20-30TB worth of storage. What I have now is a bunch of random internal/external hard drives ranging from 3-8TB, and I just have all my stuff spread out among them without much rhyme or reason, and most of them are getting pretty full. I don't really have any kind of budget in mind, but ideally I'd like to stay around $2k at the higher end and hopefully have some kind of expandability in the future if I need more storage.

I've been looking at the Synology DS1517 5-bay unit with 5x 8TB WD Red drives in it. I like the idea that I can easily add the 5-bay expansion units in the future to increase capacity, and I've heard Synology stuff is very easy to use as well. Feel free to steer me away if there are other brands or solutions that I should be considering.

You'll be perfectly happy with a Synology DS1517. I just bought a Synology DS1517+, actually. I do IT for a living and could have gone the homebrew/custom route, but I've enjoyed my little Synology 2 bay NAS and it meets my needs without requiring babysitting. Just be sure to have backups, as RAID is not backup. Also, 5x 8 TB drives will give you closer to 21 TB usable with SHR2, despite what Synology's RAID calculator states.

100% Dundee
Oct 11, 2004

Internet Explorer posted:

Also, 5x 8 TB drives will give you closer to 21 TB usable with SHR2, despite what Synology's RAID calculator states.

Yikes, are you serious? 5x 8TB drives should be about ~36TB of storage; you think it'll need 15TB for its RAID/protection magic voodoo? If so, you're super right, since the calculator says it should be about 30TB with SHR/RAID 5.

Mr Shiny Pants
Nov 12, 2012
What would I need to run Infiniband over 30 meters? My NAS is in the spare bedroom, as is my switch and I run CAT5 to my workstation which is about 30 meters give or take.

Let's say I want to run 40Gbit QDR over fibre, what would I need?

Internet Explorer
Jun 1, 2005





100% Dundee posted:

Yikes, are you serious? 5x 8TB drives should be about ~36TB of storage; you think it'll need 15TB for its RAID/protection magic voodoo? If so, you're super right, since the calculator says it should be about 30TB with SHR/RAID 5.

SHR2/RAID-6. I wouldn't recommend running SHR/RAID-5 with large drives. Some might disagree, saying that the issue with RAID-5 is overblown, but I've dealt with enough failed arrays in my life.
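
If you want to sanity-check the numbers yourself, the "magic voodoo" is just parity drives plus the TB-to-TiB conversion (quick back-of-the-envelope, assuming equal-size drives so SHR matches plain RAID5/6 math):

code:
def usable_tib(drives, drive_tb, parity):
    # Subtract parity drives, then convert decimal TB (on the box) to binary TiB (what the OS shows).
    return (drives - parity) * drive_tb * 1e12 / 2**40

print(round(usable_tib(5, 8, 1), 1))  # SHR / RAID-5  -> ~29.1, the "about 30TB" figure
print(round(usable_tib(5, 8, 2), 1))  # SHR2 / RAID-6 -> ~21.8, the "closer to 21" figure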

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Mr Shiny Pants posted:

What would I need to run Infiniband over 30 meters? My NAS is in the spare bedroom, as is my switch and I run CAT5 to my workstation which is about 30 meters give or take.

Let's say I want to run 40Gbit QDR over fibre, what would I need?
Infiniband cards and appropriate transceivers. Most of these that you can get on Ebay use cables with four pairs of fiber (for 40/56GBit) and MPO connectors. There's also transceivers that work over a single pair without these annoying MPO connectors, but they're rare on Ebay and expensive as hell.

Mr Shiny Pants
Nov 12, 2012

Combat Pretzel posted:

Infiniband cards and appropriate transceivers. Most of these that you can get on Ebay use cables with four pairs of fiber (for 40/56GBit) and MPO connectors. There's also transceivers that work over a single pair without these annoying MPO connectors, but they're rare on Ebay and expensive as hell.

Would those be the active connectors?

Photex
Apr 6, 2009




I wonder how those new Atom chips will handle transcoding; my AMD 5350 crumbles once one MPEG2 stream starts getting decoded (I have multiple tuners on the network, so sometimes people watch using Emby).

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Mr Shiny Pants posted:

Would those be the active connectors?
Yeah. The fiber transceivers make it active.

Red Dad Redemption
Sep 29, 2007

Has anyone here used Open Media Vault? How does it compare in terms of ease of use / feature set to something like Diskstation Manager? I'm trying to decide whether to pick up a preconfigured NAS (Qnap / Synology) for convenience or go for a homebrew, and one factor is whether there's a clear winner in terms of babysitting / administration.

Thanks Ants
May 21, 2004

#essereFerrari


I think you're going to struggle to beat Synology DSM for lack of having to administer anything
