 
Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Do we have any timeline for Denverton boards actually being purchasable? I don't really need the beef / price of a Xeon-D board, except maybe a 2-core, but low TDP and good connectivity are appealing.


Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

thebigcow posted:

People have wanted to run everything under the sun on their NAS, and FreeNAS finally integrated Docker and bhyve support in the GUI. The new GUI also in theory supports multiple operations from multiple logins, and a CLI for their front end.

These all sound like great selling points for TrueNAS appliances, why do people think they've lost their way?

I think it's because, surprise surprise, the greenfield complete rewrite of their UI feels less mature than the solution they've spent the last decade polishing. Corral will be great in 3 years' time.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

DrDork posted:

If only there were cheap rj45 10GbE options. Fiber cable prices really kill my plans to shove the NAS on the other side of the house.

Even those are getting more affordable, if you look at Xeon-D boards with multiple onboard 10GBase-T ports and the Ubiquiti 10GbE switch.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

n.. posted:

Fiber cable is cheap, and CAT6A is a huge pain in the rear end.

You can get 50 meter OM3 fiber cables on amazon for like 50 bucks.

Including 2 transceivers? Those are the expensive parts, right?

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Moey posted:

Look on FS.com, they have dirt cheap compatible optics. Other companies have cheap compatible optics as well.

Huh, thanks for opening my eyes. 10G home network here I come.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Methylethylaldehyde posted:

Intel, HP and Cisco all have transceiver brand lock-in, you have to use their lovely branded ones or it flat out won't work, which made it loving hellish getting my HP switch to talk to my Intel NIC using a copper cable. I think Cisco still has the IOS command that lets you use the non-branded ones, but Intel and HP's response was more or less 'eat a dick' when I asked them how to disable it.

Wait, so what transceivers should I be looking at to work with the onboard 10Gb SFP+ NICs on Xeon-D boards?

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

caberham posted:

I don't want to buy Unraid. What else can I use in the meantime?

Ubuntu + ZFS, or your Linux distro of choice + mdadm with LVM, or Windows + Storage Spaces. Honestly, anything that's been around for a while will work; people get all picky over minutiae here.

Edit: Don't use btrfs in RAID 5 or 6.
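For anyone who wants to see what the DIY route actually looks like, here's a minimal sketch of both options. Assumptions on my part: four spare disks at placeholder paths, root access, and zfsutils-linux or mdadm already installed. It wipes those disks, so treat it as an outline rather than a turnkey script.

code:
#!/usr/bin/env python3
"""Sketch of the two DIY routes above. Device names are placeholders, it is
destructive to those disks, and it assumes root plus ZFS or mdadm installed."""
import subprocess

DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # placeholders!

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Route 1: ZFS. One RAIDZ2 pool (survives two dead disks), auto-mounted at /tank.
run(["zpool", "create", "tank", "raidz2", *DISKS])
run(["zfs", "create", "tank/media"])

# Route 2: mdadm RAID 6 with ext4 on top (instead of the ZFS commands above).
# run(["mdadm", "--create", "/dev/md0", "--level=6", "--raid-devices=4", *DISKS])
# run(["mkfs.ext4", "/dev/md0"])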

Twerk from Home fucked around with this message at 06:25 on Apr 14, 2017

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

caberham posted:

Alright, attempt #4 to make things work.

This time trying Ubuntu ZFS in VirtualBox to set up Plex, domain login, and other stuff.

While you certainly can run your NAS OS inside a VM, the model I've seen people happiest with is having the NAS OS be the VM host.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Stutes posted:

:psyduck: I was ready to upgrade my DS1512+ on day one if they had launched with the new Atom processors. Why would they launch a model with a 4-year-old known defective processor?

These are appliances, not general-purpose servers. Intel promises that the defect is fixed on currently produced Avoton Atoms, and the NAS vendor doesn't want you to care about what CPU is in your NAS any more than you care what CPU is in your network switch.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
How strong is the suggestion that you shouldn't use ext4 on volumes larger than 16TB? I can't tell if it's just a minor performance thing, or if I shouldn't even consider ext4 on large volumes and should look at XFS instead.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

DrDork posted:

Same with the other NAS drives--most of them are 5400/5900 RPM and tuned for low power use. If the drives you already have are 5400's, though, it probably won't be a night and day difference and you should look at trying to improve airflow. Admittedly, there have been a few reports out that suggest that moderately high temperatures like 50C aren't actually all that bad for drives, but YMMV.

Toshiba and Seagate didn't get the memo; most of their new NAS drives are 7200 RPM. Worst of all, basically no specs are constant across the Seagate IronWolf line.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

DrDork posted:

I haven't looked at the new Toshibas, but the normal Seagate IronWolfs are 5900 up to at least 4TB. The IronWolf Pros are 7200, but I don't think too many consumers are buying those anyhow--same with the Red Pros, which are also all 7200.

I was looking specifically at the 6-10TB Ironwolf non-Pros, which are all 7200RPM and seem to be where mainstream "storage drives" are right now. I just had a friend buy 5 8TB ones to put in a NAS.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Paul MaudDib posted:

For a basic ZFS NAS build (just NAS, not hosting a bunch of other poo poo): G4400T or G4560? The T-series low-power is kinda compelling but the hyperthreading makes the 4560 quite a bit faster when needed.

It's not like the thing will be loaded all the time, right? And the idle powers should be mostly the same?

Both support ECC according to the Ark, so I should be good there.

edit: I read on another forum that they're the same apart from the clocks - so the T-series aren't binned?

I'd go with a G4560 for sure.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

I disagree with this take. Used enterprise SSDs are certainly a good deal, but current DDR4 and flash prices are not permanent; they will keep falling in the long run once fabs have finished retooling for newer processes and are able to catch up with pent-up demand.

Optane is new, expensive, and not mainstream yet. I'm cautiously optimistic about it.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

ddogflex posted:

My system actually has Intel AMT, but I've read it's a huge security hole and have it disabled due to that.

I'll just plug in a Monitor and keyboard later tonight and see what's up.

What do you use for a client for Intel AMT anyway?

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

evol262 posted:

I'm gonna assume you mean decoding here. Accel has been there for a desktop, and it's practically required for a number of DEs.

GPUs are actually pretty bad at encoding. It's useful for offload, but GPUs have no branch prediction, which is incredibly important for modern codecs. There's a use case for it, but it's not nearly as extreme as you'd think.

GPUs are really good at encoding 1080p streams in real time with very low CPU usage. Sure, I could get better quality by burning 2 whole cores of my quad-core CPU, but I don't have those to spare.
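For reference, this is the sort of GPU-offloaded encode I mean, wrapped in Python. Assumptions on my part: an NVIDIA card, an ffmpeg build with NVENC support, and placeholder file names and bitrate.

code:
#!/usr/bin/env python3
"""Sketch of a GPU-offloaded 1080p encode using ffmpeg's NVENC encoder.

Assumes an NVIDIA GPU and an ffmpeg build with NVENC support; the file
names and bitrate are placeholders."""
import subprocess

cmd = [
    "ffmpeg",
    "-i", "input_1080p.mkv",   # placeholder source file
    "-c:v", "h264_nvenc",      # hand the video encode to the GPU
    "-preset", "fast",
    "-b:v", "8M",              # placeholder target bitrate for 1080p
    "-c:a", "copy",            # leave the audio untouched
    "output_1080p.mp4",
]
subprocess.run(cmd, check=True)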

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

ddogflex posted:

I'm sure this is a stupid question, but is there anywhere to get 8GB of unregistered DDR4 ECC cheaper than crucial.com ($100)? Newegg doesn't even sell any that I can find, and on eBay everything is $130+. Which seems pretty crazy to me.

DDR4 prices have been going up dramatically. $100 doesn't seem out of line for an 8GB ECC UDIMM at current pricing, and big RDIMMs are more like $15+/GB if you can find them in stock.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

D. Ebdrup posted:

Come on, you and I both know you should've gone with: Dive, dive, dive! Hit your burners, pilot!


In storage relevant news, Gigabyte Server has finally made the MS10-ST0 available to interested parties with local representatives (read: cloud customers, we're ready to sell to you - consumers, get hosed a little while longer).
Turns out the rumors were true: we're getting a 16-core non-SMT 2GHz CPU with 1MB of L2 cache per core, up to 64GB UDIMM ECC (128GB RDIMM ECC), 32GB of eMMC, and 16 drives provided you don't use a daughterboard.

Makes me wonder if there isn't something to be said for a memory-resident FreeBSD installation which loads itself from the eMMC and stores data on the zpool.

What's the deal with Intel releases now? It's weird to see OEM hardware released that has unannounced Intel CPUs in it, but Google Cloud has been selling Xeon Platinums for 6 months, and they only got announced 3 weeks ago.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Thanks Ants posted:

Azure Cool Blob is priced close to Glacier but without the retrieval caveats.

I had to look up if that's the actual product name. "Cool Blob" is a lot worse than Glacier or Coldline.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
If there's no need for HA and storage needs are about 100TB, there is absolutely zero reason to look at something like Ceph instead of just a 12+ drive ZFS (or mdadm) box, right? It looks like you have to throw huge amounts of resources at Ceph before it begins to approach the performance of a pretty simple ZFS setup.
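Back-of-the-envelope on why Ceph looks so expensive at this scale (my own arithmetic, assuming 10TB drives and Ceph left at its default 3x replication; erasure coding would narrow the gap):

code:
# Back-of-the-envelope usable capacity at the ~100TB scale (assumptions: 10TB
# drives, Ceph at its default 3x replication; erasure coding changes this).
drive_tb = 10
n_drives = 12

raidz2_usable = (n_drives - 2) * drive_tb      # two drives' worth of parity
ceph_3x_usable = n_drives * drive_tb / 3       # three copies of everything

print(f"12x{drive_tb}TB RAIDZ2/RAID 6 usable: {raidz2_usable} TB")      # 100 TB
print(f"12x{drive_tb}TB Ceph, 3x replicated:  {ceph_3x_usable:.0f} TB") # 40 TB

drives_for_100tb_ceph = -(-100 * 3 // drive_tb)   # ceiling division
print(f"Drives needed for 100 TB usable with 3x replication: {drives_for_100tb_ceph}")  # 30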

Twerk from Home fucked around with this message at 16:32 on Feb 21, 2018

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Zorak of Michigan posted:

12 drives would be awfully small for Ceph, but I'd want to hear more about your environment to make a solid recommendation about what you ought to do.

I'm currently using FreeNAS with an 8x8TB RAIDz2 setup, and while I'm very happy with it, my dream is to find the money to build a Ceph cluster, so that I can have a good level of data protection but still have the flexibility to buy drives as needed.

I want to try Ceph at home one way or another, and am considering recommending it to a group I work with.

An academic lab that I consult with (not as IT, but writing software for them) currently has 1 storage box with 12x10TB hard drives in RAID 6 on an LSI raid card. Write performance is really lovely on it, which makes me think that the RAID card is likely the bottleneck and mdadm or ZFS might actually be faster, given that the box has a lot of CPU power. They need another 100TB of storage soon-ish, and their IT guy mentioned that he wants to look at options other than just using a RAID card.

They don't need HA, and they don't need the storage to appear as one unified pool. What I'm getting at is: for the same money, Ceph is just going to be way slower than their current solution, right? Their IT guy and I are also curious whether ZFS on Linux on the new storage server would outperform the 12-drive-wide RAID 6 on a MegaRAID card, in which case they could move everything over to the new box and redo the original box's RAID in software instead of hardware.

I also don't know a ton about Ceph; maybe I'm misunderstanding and it can be cost- and performance-competitive with Linux software RAID at the 100TB-200TB scale.
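If it helps, this is the kind of crude sequential-write test I'd run on the existing RAID 6 volume and then on a test mdadm/ZFS volume before recommending anything. Pure Python, placeholder path, and only a rough number since writes ride the page cache until the final fsync:

code:
#!/usr/bin/env python3
"""Crude sequential-write throughput test. The path is a placeholder and the
result is rough, since writes ride the page cache until the final fsync."""
import os
import time

PATH = "/mnt/array/throughput_test.bin"   # placeholder: somewhere on the array
CHUNK = 1024 * 1024                       # 1 MiB per write
TOTAL = 8 * 1024**3                       # 8 GiB total

buf = os.urandom(CHUNK)   # random-ish data so compression can't cheat
start = time.monotonic()
with open(PATH, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
elapsed = time.monotonic() - start

print(f"{TOTAL / 1e6 / elapsed:.0f} MB/s sequential write")
os.remove(PATH)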

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Zorak of Michigan posted:

Which performance metric is it lagging on?

Sustained write is poorer than expected. I don't have specifics off the top of my head, but the workload deals more with long sustained reads/writes than with lots of small I/O.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Careful Drums posted:

has anyone here set up a NAS w/ a raspberry pi and some USB hard drives? There's a lot of guides to them but I'm wondering about performance. Could I, say, rip a bunch of blu-rays/dvds onto it and stream them to my TV or would I need something beefier to handle that?

It'll work fine as a toy, but terribly as an actual NAS solution. I'd buy a cheap commercial NAS if you don't want to tinker, or look at more conventional ZFS or mdadm Linux NAS setups if you want to fiddle and learn.

Raspberry Pis have terrible network performance and won't even saturate their 100Mbit Ethernet interface. Loading stuff onto and off of them is a bad time, and if you have things like logs writing to the SD card, eventually you'll wear it out and have to replace it.
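To put rough numbers on that (the ~10 MB/s effective throughput is my assumption; real numbers vary by model and how loaded the USB bus is):

code:
# Rough arithmetic on why a 100Mbit Pi is painful as a NAS. Effective
# throughput is an assumption; real numbers vary by model and USB load.
effective_mb_s = 10        # ~100Mbit line rate minus protocol overhead
bluray_rip_gb = 25         # a typical single-layer Blu-ray remux

minutes = bluray_rip_gb * 1000 / effective_mb_s / 60
print(f"Copying one {bluray_rip_gb}GB rip onto the Pi: ~{minutes:.0f} minutes")  # ~42

# Streaming a single ~40Mbit/s remux only needs ~5 MB/s, so playback can work,
# but there's almost no headroom left for anything else happening at once.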

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Greatest Living Man posted:

Plex doesn't deal with DVD ISOs or VIDEO_TS folders. So you're gonna spend a lot of time transcoding them and dealing with weird conversion issues. I think Kodi can work with VIDEO_TS folders.

You can also just rip the discs straight to an MKV with MakeMKV, and then Plex (or any media player at all) can handle it just fine.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

wargames posted:

There are 10G NAS boxes out there, but the only ones I could find use SFP, not Cat6a or Cat7.

How about this one? 10GBaseT, 2 ports: https://www.synology.com/en-us/products/DS1817 https://www.amazon.com/Synology-bay-DiskStation-DS1817-Diskless/dp/B071K9J4MS/

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

DrDork posted:

They are, indeed, though at $900 their price puts them solidly out of range for most, especially as you can get better performance for much less unless you really need 4x10Gig NICs.

What's the "better for less" if you need >64GB of RAM and at least 1 10Gig NIC? Just spitballing here because Intel's product stack has gotten too complex for me to follow.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Paul MaudDib posted:

x264 was patched for AVX-512 about a year ago, and x265 can make very good use of it as well. But I was talking about lower AVX levels - the 7100 only supports up to AVX2 but the G4560 does not support AVX at all.

You made me do a search about this, and it looks like Skylake Xeons are fifty percent faster per core than E5 v4 Broadwell Xeons at x265: http://x265.org/x265-receives-significant-boost-intel-xeon-scalable-processor-family/

That's unbelievable. That's a bigger single-generation CPU jump than we've seen since the Pentium II days.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Paul MaudDib posted:

It smacks the poo poo out of high-end off-the-shelf solutions unless they're something absurd like X99/X299 platforms, which you will pay dearly for in an off-the-shelf product. If you are willing to splash out $800 for a mobo+CPU, it's arguably a better buy than anything you can do on the LGA1151 platform, and it's way more power efficient than an X99/X299 platform.

I hear this, but I can't help but feel that a $180 6-core i5, a reasonable motherboard, and a simple HBA in IT mode is still far better bang for the buck. Just like the Xeon-D platforms, I think Denverton is really cool for home enthusiast usage, but it's a splurge for someone who just wants an excuse to tinker with less-mainstream products.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

HalloKitty posted:

I built two Supermicro Denverton machines in Silverstone DS380s with 8 drives in each :v:
ECC RAM.. Unraid.. I could throw up some photos

How far up the Denverton product stack did you go? Did you have 2 cores or 16 cores in each?

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Schadenboner posted:

Are WD Red Pros ok or are they :ohdear:? It seems like 7200 is a lot of RPMs to keep going for years at a time?

7200RPM disks will also use more power and make more heat. If you've got deep pockets, make sure you're getting helium drives (look for lower power draw). It looks like the 10TB Red Pros are helium, which makes them use half the wattage at idle (while spinning) vs. the 8TB ones.
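Rough sense of scale on the power difference, with illustrative wattages pulled from typical spec sheets rather than measured, and a placeholder electricity price:

code:
# Illustrative power math for a box full of drives. The wattages are my
# assumptions (roughly spec-sheet idle numbers), not measurements.
idle_w_air = 7.0       # air-filled 7200RPM drive, idling but spinning
idle_w_helium = 3.5    # about half of that for helium, per the point above
n_drives = 8
usd_per_kwh = 0.13     # placeholder electricity price

hours = 24 * 365
saved_kwh = n_drives * (idle_w_air - idle_w_helium) * hours / 1000
print(f"~{saved_kwh:.0f} kWh/year saved, roughly ${saved_kwh * usd_per_kwh:.0f}/year")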

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Is anybody using an HP Microserver Gen10? Are they an awful value for money? I've been using an i5-4250u NUC that I got on eBay used for $110 with a 5TB external shingled Seagate hooked up to it as a VM host and NAS for almost 5 years now, and I finally want redundancy and more storage.

I'm willing to have a somewhat weird, low-performance setup in exchange for better power efficiency than mainstream parts could get me, but I don't want to be like that dude a few posts up who spent $1400 for 16TB of raw storage. My current plan is shucking 4x10TB Easystores into a Microserver Gen10, but I could just as easily put an i3 into a Fractal Design Node 304 or something.

I want more than 16GB RAM, low power usage, and more CPU performance than a 15W Haswell. That's about all my requirements.

Edit with one more question: I have found near-zero info about the Opteron X3421 online. My current understanding is that it's two Excavator modules on 28nm at 35W TDP, right? It looks to perform favorably compared to Atoms, and the C3xxx Atoms look like a miserable value for money overall.

Twerk from Home fucked around with this message at 14:50 on May 13, 2019

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Sheep posted:

I have a Gen10 and it is awesome. The CPU is not some super stripped down or mobile line, but yeah information is a little sparse.

My only complaint is that the SSD mount on the top of the chassis is a $20 extra part that isn't included. All in with extra RAM and the SSD tray was only like $400ish off Amazon which is way better than anything else I could build that supports ECC in a small form factor.

The other option I'm looking at also has ECC and is cheap-ish: putting an i3 in a mini-ITX Supermicro board. That would have IPMI and way more CPU power, but I'm not sure about overall power consumption or the niceness of the case.

I'll probably just do the Microserver. Do I need the SSD tray or can I just velcro that sucker in there? I've always just taped SSDs to the sides of cases.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Thermopyle posted:

You can get the 8TB Easystore on Amazon for $167.78.

TBH, I'm not really sure if they're worth the extra 30 bucks over the WD 8TB Elements. The Elements seem to mostly have EMAZ helium drives, with a scattering of air drives. Same warranty length as the Easystore.

Are people really returning reassembled drives after shucking for warranty coverage? I assumed that shucking the drive voided the warranty, and that we were self-insuring with how much cheaper the drives are.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Do you have to put it back together, or at least return all the pieces? Or can you just ship back the drive without all the plastic enclosure?

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
I know that shingled drives should never be used in arrays, and are garbage at workloads that have a mix of reads and writes. What about for a single sustained write? What sort of write throughput would I see on a shingled hard drive with a write-once workload?

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Bob Morales posted:

Dropbox uses servers full of 100 SMR drives, but they're basically doing WORM, and by using SSDs to buffer data and controlling the whole drat stack, they manage to push 40GB/s worth of writes :science:

Good to know. They've gotta be running some type of replication-based solution that's similar to an in-house Ceph, right?

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Does anybody have case recommendations for something with at least four 2.5" bays that can hold 15mm-tall drives? Is this something that exists? Almost all of the cases I've found specify 9.5mm-high 2.5" bays.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Have you guys had SSDs die? What's the expected lifespan for consumer SSDs? This guy kicked the bucket this week.


Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

H110Hawk posted:

2 petabytes of writes against that amount of host writes suggests offset issues, though I haven't looked deeply into it. Yes, I have had thousands die, but in datacenter workloads, and most as OS drives, not heavily hit ones. (The heavy hitters nuked out way faster.)

Yeah, I was curious about that but don't know how to address it in the future. This disk had a couple of Hyper-V VMs, including 1 Win10 and 2 or 3 Ubuntu. It was running a Plex server, a Minecraft server, a UniFi wifi controller, a qBittorrent scratch area, a couple of low-traffic Node.js and PHP websites, and a TICK stack for monitoring & dashboards.

I have no idea which of those was writing enough to hit even 150TBW, because all of them were extremely low utilization. I had configs and database backups and didn't lose anything that I will miss, but it was still annoying.

Edit: I didn't need Windows, so I've already got everything set up on Ubuntu 18.04. I'll monitor to see what's writing so much, assuming it wasn't something related to Hyper-V / Windows in the first place.
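This is roughly how I plan to watch it, in case anyone wants the same thing: a dumb little /proc/diskstats poller. Linux only, the device name is a placeholder, and the kernel reports these counters in 512-byte sectors regardless of the drive's actual sector size.

code:
#!/usr/bin/env python3
"""Dumb per-device write monitor using /proc/diskstats (Linux only).

The device name is a placeholder; the kernel reports these counters in
512-byte sectors regardless of the drive's actual sector size."""
import time

DEVICE = "sda"     # placeholder: whichever disk the SSD shows up as
INTERVAL = 60      # seconds between samples
SECTOR = 512

def sectors_written(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                return int(fields[9])   # 10th field: sectors written
    raise ValueError(f"device {dev} not found in /proc/diskstats")

prev = sectors_written(DEVICE)
while True:
    time.sleep(INTERVAL)
    cur = sectors_written(DEVICE)
    print(f"{DEVICE}: {(cur - prev) * SECTOR / 1e6:.1f} MB written in the last {INTERVAL}s")
    prev = cur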

DrDork posted:

You're clearly using that drive for some business/enterprise workload: shell out a few extra bucks for a larger enterprise-level drive next time and it should last a good bit longer. A 512GB 970 Pro, for example, is warrantied to 600TBW, or probably 6x what that SanDisk was engineered to hit.

I replaced it with an 840 Evo out of my junk drawer and I am preparing to make the next drive failure even faster to recover from. :colbert:

Twerk from Home fucked around with this message at 17:21 on Oct 30, 2019


Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

H110Hawk posted:

What caused it is that your host OS (Windows?) formatted the disk such that FS block boundaries were offset from the NAND ones. Two examples:

1. Host OS formats 512b blocks on a 4k NAND disk. Every "one" write is amplified by (4096 / 512) = 8.0x.
2. Host OS formats 4k blocks on a 4k NAND disk but starts at byte 1k (out of 240,000,000kb); it should have started at byte 4k. Now every write is amplified 2.0x as it crosses a NAND boundary. (The closer to 4k increments, the lower this multiplier, but 1 bit changed will still result in 8k written.)

It sorta looks like you had the former. Ubuntu as a host OS should handle it a LOT better, but I would check this out right at the start and make sure everything maps through correctly. gparted has a little helper, as I recall, which can help you align things from the get-go. Make sure your VM OS disks are the correct block size as well; because you're using files, they will self-align. Also, for your butttorrents and Plex storage, consider a HDD.

Thanks, I appreciate this. Hyper-V was the host, with Windows and Ubuntus installed on it. I didn't think about VM OS disk block sizes at all, and there's a separate long-term storage system that's on hard drives. I just wanted to use the SSD as some fast scratch space.

This drive should have been doing barely anything, so 150TBW does not add up with its extremely light usage. It's a mystery to me why it wrote so much. I also kept it under 80% full.
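Just to sanity-check the alignment theory against my own numbers (toy arithmetic using the figures from this thread, nothing measured):

code:
# Toy numbers for the two misalignment cases described in the quote above.
NAND_PAGE = 4096   # bytes per NAND page (assuming 4K flash)

# Case 1: 512-byte filesystem blocks, so each tiny write dirties a whole 4K page.
print(f"512B blocks on 4K NAND: {NAND_PAGE / 512:.1f}x amplification")      # 8.0x

# Case 2: 4K blocks, but the partition starts 1KiB off a page boundary,
# so every 4K write straddles two NAND pages.
print(f"Misaligned 4K blocks:   {2 * NAND_PAGE / 4096:.1f}x amplification") # 2.0x

# At 8x, the ~150TB of host writes mentioned earlier already lands over a
# petabyte of NAND writes, the same ballpark as the 2PB figure in the thread.
print(f"150 TB host writes x 8 = {150 * 8} TB of NAND writes")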
