BlankSystemDaemon
Mar 13, 2009



And that's exactly why I've never understood why FreeNAS should be installed on a separate USB disk instead of on the same pool as the storage.

For a new page, Denverton is looking a lot more interesting than the new HP Microserver Gen10 - up to 16 disks, up to 64/128GB (U/R)DIMM ECC, up to 16 threads, 10G SFP+ and 1G I210 NICs, all available on an mITX-format board (the GIGABYTE MA10-ST0) in a System-on-Chip solution, meaning no ICH, PCH or any other nonsense - and most importantly, no lovely Marvell controllers that die under high sustained load.
The Goldmont microarchitecture also brings updates that make it possible to do a memory read and a memory write in the same cycle (meaning much better IPC in memory-heavy workloads like ZFS), as well as improvements to AES-NI and the addition of QuickAssist, which means both encryption and hashing are hardware-accelerated and can run at several GB per second.
It seems like they're well-positioned, too - throw a bare-metal hypervisor on the included eMMC, and throw a bunch of them in a blade server, and you've got yourself a nice cloud.
It's just too bad that Intel are apparently not going to tell us anything about availability anytime soon, since IDF'17 was shitcanned and there's nothing on the horizon until November.

Speaking of the HP Microserver Gen10, though, I cannot comprehend why they added an APU - who needs a GPU on a server?

EDIT: Switched to a non-NDA'd image from Gigabyte's own brochure, which is publicly available, just to be safe.

BlankSystemDaemon fucked around with this message at 21:05 on Jul 3, 2017

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
People who want their server to do realtime transcoding to x264.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

D. Ebdrup posted:

And that's exactly why I've never understood why FreeNAS should be installed on a separate USB disk instead of on the same pool as the storage.

It's not too crazy, really. As long as you remember to not have log files or whatever being written to a USB stick, they're generally pretty reliable (and cheap enough you can have a backup one sitting in a desk somewhere). They simplify some bits of installation, and are pretty nifty for when you want to upgrade the OS: if the update fucks up, just pull the stick and throw the backup in, and you're done. Not as easy if the OS lives on the storage pool.
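
If you go the USB-stick route on a roll-your-own setup (stock FreeNAS already lets you keep its system dataset on the storage pool), the usual trick for keeping writes off the stick is pointing the chatty directories at a RAM disk. A minimal sketch of the idea, assuming a FreeBSD-style /etc/fstab and arbitrary sizes - logs kept this way vanish on reboot, so ship anything you care about to the pool or a syslog server:
code:
# keep frequently-written directories in RAM so the boot stick sees almost no writes
tmpfs   /tmp       tmpfs   rw,mode=1777,size=256m   0   0
tmpfs   /var/log   tmpfs   rw,mode=755,size=128m    0   0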

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
People run FreeNAS off USB drives for the same reasons people run ESXi off of USB drives - easier maintenance. If you need to upgrade a fleet of machines, you can flash a single USB drive and do rolling reboots. You could do similar things with booting from SAN-based LUNs, but several hundred LUNs with basically identical content are management overhead and would get backed up alongside actually important things like databases. If you want to roll back an installation, it's also a lot easier to just swap the original USB flash drive back in.
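
The "flash one, clone many" part is about as simple as it gets. A rough sketch, with hypothetical device names (double-check them first, dd has no safety net):
code:
# capture a known-good boot stick as an image, then write it to the rest of the fleet
dd if=/dev/da0 of=freenas-boot.img bs=1M
dd if=freenas-boot.img of=/dev/da1 bs=1M conv=sync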

TTerrible
Jul 15, 2005

priznat posted:

HP Gen10 microserver with AMD APU is coming out:

https://www.servethehome.com/new-hpe-proliant-microserver-gen10-powered-amd-opteron-x3000-apus/

I guess they skipped gen9?

This looks awesome. Finally time to retire my Gen1 Microserver, I think.

IOwnCalculus
Apr 2, 2003





One last update on the Ironwolf while I'm packing them up - Seagate's packaging allows a lot of end to end movement (along the long axis of the drive). Like half an inch. The drives I got were from two different manufacturing batches, 3/17 and 6/17.

I can't believe they ship drives out in these boxes. Newegg's bulk packaging was better.

JBark
Jun 27, 2000
Good passwords are a good idea.

priznat posted:

HP Gen10 microserver with AMD APU is coming out:

https://www.servethehome.com/new-hpe-proliant-microserver-gen10-powered-amd-opteron-x3000-apus/

I guess they skipped gen9?

Note that the Gen10 doesn't have any form of iLO, and the CPU is now soldered, so no upgrading later like we can with the Gen8. But it does support double the memory at 32GB, and it has 2 PCIe slots instead of 1. Plus the APU means it should be fairly useful as an HTPC.

The lack of iLO made it a non-starter for me, so I picked up an E3-1265L v2 on ebay for $190AUD for my Gen8, should last me for a few more years at least. It's nuts how much faster this CPU is than the stock G1610T.

Ziploc
Sep 19, 2006
MX-5

G-Prime posted:

This. Hopefully you've got a backup of your config, or can roll back to a previous boot environment on the same drive. If you have a backup, wipe the drive and reinstall and then restore the config (or replace the drive and do the same). You can try the rollback without doing anything crazy if it's available from the boot menu when you power on.

Previous boot enviro didn't seem to work. Don't think I have a config backup. Oh boy!

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
I got a 10-pack of the 8TB IronWolf drives, and they are working fine for me. They came in a big 20-drive box with decent padding, and they've been SMART-tested and burned in.
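
For anyone wondering what "SMART-tested and burned in" usually boils down to, a hedged sketch (device name hypothetical, and badblocks -w is destructive, so only run it on empty drives):
code:
smartctl -t short /dev/sdb        # quick self-test before spending hours on a dud
smartctl -t long /dev/sdb         # full surface self-test (many hours on an 8TB drive)
smartctl -a /dev/sdb              # then check for reallocated/pending sectors
badblocks -wsv -b 4096 /dev/sdb   # destructive four-pattern write/verify burn-in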

KOTEX GOD OF BLOOD
Jul 7, 2012

I want to access my Synology DiskStation remotely but to do so as securely as possible: I understand a lot of these units are getting turned into Bitcoin farms these days, or just an easy way for whomever to get access to my network. What is the best way to accomplish this; using it as a VPN server perhaps? I don't trust Synology's quick-connect thing.

EpicCodeMonkey
Feb 19, 2011

KOTEX GOD OF BLOOD posted:

I want to access my Synology DiskStation remotely but to do so as securely as possible: I understand a lot of these units are getting turned into Bitcoin farms these days, or just an easy way for whomever to get access to my network. What is the best way to accomplish this; using it as a VPN server perhaps? I don't trust Synology's quick-connect thing.

VPN is probably the right answer (and the one I use). Unfortunately Synology's built-in VPN package doesn't allow you to configure a private key for the VPN as far as I can tell, so your security is effectively only as good as the password on your account. There's ways around it (creating a new user with a very long password, specifically for VPN, using a Dockerized OpenVPN with better config options, monkeying with the inbuilt package via SSH) but they all have their cons.
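
For the Dockerized OpenVPN route, the point is that you control the server config directly, so the tunnel can require key material on top of a password. A rough sketch of the relevant server-side directives - paths are made up and this is generic OpenVPN, not anything Synology-specific:
code:
port 1194
proto udp
dev tun
ca   /etc/openvpn/pki/ca.crt         # your own CA, independent of any DSM account
cert /etc/openvpn/pki/server.crt
key  /etc/openvpn/pki/server.key
dh   /etc/openvpn/pki/dh.pem
tls-auth /etc/openvpn/pki/ta.key 0   # shared static key: clients without it can't even start a handshake
remote-cert-tls client               # require a signed client certificate, not just credentials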

OnceIWasAnOstrich
Jul 22, 2006

KOTEX GOD OF BLOOD posted:

I want to access my Synology DiskStation remotely but to do so as securely as possible: I understand a lot of these units are getting turned into Bitcoin farms these days, or just an easy way for whomever to get access to my network. What is the best way to accomplish this; using it as a VPN server perhaps? I don't trust Synology's quick-connect thing.

Port forwarding over public key SSH on a non-standard port.
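
In practice that looks something like the sketch below - port, key path, and addresses are all hypothetical:
code:
# on the DiskStation (sshd_config): key-only auth on a non-standard port
#   Port 22000
#   PasswordAuthentication no
#   PubkeyAuthentication yes

# from the remote machine: tunnel the DSM web UI (port 5000) over SSH instead of exposing it
ssh -p 22000 -i ~/.ssh/id_ed25519 -N -L 5000:localhost:5000 youruser@your.home.ip
# then browse to http://localhost:5000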

PerrineClostermann
Dec 15, 2012

by FactsAreUseless

OnceIWasAnOstrich posted:

Port forwarding over public key SSH on a non-standard port.

A good go-to in my experience

Catatron Prime
Aug 23, 2010

IT ME



Toilet Rascal
Just picked up a Synology DS1717+, and was wondering what the new hotness in drives is these days.

Would one of these Deskstar NAS drives be hunky-dory, or has Seagate cleaned up their act at all in terms of quality? Because I also found these IronWolf 4TB drives which advertise similar MTBF, but I've had poo poo luck with Seagate 1.5TB drives in the past, so I'm not sure if it's worth risking a hundred bucks in savings across five drives.

eames
May 9, 2009

I'm not sure what capacities you have in mind, but check out helium-filled drives. They're fast because of high platter density, run cool, have relatively low power consumption and are generally awesome, but also expensive. The big question mark is long-term reliability because of helium leakage. I'm really happy with my 8TB WD Reds.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

OSU_Matthew posted:

Just picked up a Synology DS1717+, and was wondering what the new hotness in drives is these days.

Would one of these Deskstar NAS drives be hunky-dory, or has Seagate cleaned up their act at all in terms of quality? Because I also found these IronWolf 4TB drives which advertise similar MTBF, but I've had poo poo luck with Seagate 1.5TB drives in the past, so I'm not sure if it's worth risking a hundred bucks in savings across five drives.

No one's ever put out any sort of reliability study on any of the NAS drives, so we really don't know which (if any) of them are particularly more or less reliable than average. That said, other than some DOA batches here and there, no one in this thread seems to have had much of an issue with any of the WD/Seagate/Hitachi NAS drives.

Note that the Hitachi drive you linked is a 7200RPM/128MB cache, while the IronWolf is a 5900RPM/64MB cache drive. This equates to higher performance for the Hitachi, at the cost of higher heat and higher price. Not sure if a DS1717+ would actually benefit from higher disk speeds, though.

Internet Explorer
Jun 1, 2005





Was wondering if I could get a sanity check on this plan:

I have a 2-bay Synology NAS that's in RAID-0. I understand the ramifications of RAID-0, have backups that run several times a day, etc. It's just what worked best for me at the time. It's time to upgrade to a 5-bay, likely a DS1717+. I'd like to move off of RAID-0 and onto SHR-2. I'd also like to keep my DownloadStation tasks and have them continue to work correctly. My plan was to...

1. Move both disks from 2-bay to the 5-bay. From my understanding this should move the config/OS and keep the DownloadStation tasks.
2. Install 3 new drives in a SHR-1 array, create a new volume, move the volume from the RAID-0 array to the SHR-1 array. Once the volume is moved the Shared Folders and the path on the DownloadStation tasks should still be correct. Anyone know otherwise?
3. Remove the 2 disks from the RAID-0 array. My understanding is that all of the disks get the OS and config as RAID-1 and that things should continue running correctly.
4. Install 2 new disks. Upgrade the SHR-1 array to SHR-2 using 1 of the new disks and expand it using the other.

Does any of that sound incorrect? Thanks thread.

[Edit: It looks like you can't just move drives to a new NAS if it is a different model, so this plan likely won't work. It looks like you have to get into the weeds to backup and restore Download Station tasks. Has anyone done this before? Any advice?]

Internet Explorer fucked around with this message at 21:06 on Jul 5, 2017

EssOEss
Oct 23, 2006
128-bit approved

Combat Pretzel posted:

Too bad the SSD cache layer in Storage Spaces doesn't work* like the L2ARC, otherwise I could section like 32GB off my SSD and use it as cache and bullshit my way around by sticking the iSCSI extents into Storage Spaces instead of running NTFS on it directly.

--edit2: *At least that's how I understand it, that it's offline balanced, altho some random Technet info suggests otherwise. Given it's all closed source, poo poo is kinda muddy.

Oh but it does! At least if I understand the L2ARC description correctly.

There are actually two different systems in there. Tiered storage spaces are indeed reshuffled based on a scheduled task that runs whenever. However, there is an entirely separate SSD cache system.

First, you need a tiered storage pool with an SSD tier and an HDD tier. After sticking your drives in the pool with Add-PhysicalDisk, just do New-StorageTier in PowerShell for both tiers - no need to configure anything yet besides telling it that yes you have two tiers.

Then when creating your storage space with New-VirtualDisk, you specify which storage tiers to use via -StorageTiers and -StorageTierSizes. This creates the normal tiered storage space. Use Get-StorageTierSupportedSize to get the numbers you need, then subtract your desired cache size from the SSD tier and around 10 GB from the HDD tier, because the "max" it reports doesn't seem to account for the housekeeping space the Storage Spaces system itself needs.

By default, a tiered storage space uses 1 GB of SSD write cache - all writes go there first and are then offloaded to the HDD in the background. You can use -WriteCacheSize on New-VirtualDisk to specify how large the cache should be.

And just to confirm, yes, you can use the same SSD for both a tiered storage space while also using it for the cache.
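
Putting those cmdlets together, a rough end-to-end PowerShell sketch - pool/tier names and sizes are made up, so plug in your own numbers from Get-StorageTierSupportedSize:
code:
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool" -FriendlyName "HDDTier" -MediaType HDD
Get-StorageTierSupportedSize -FriendlyName "SSDTier"   # leave room for the write cache plus ~10 GB of slack
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Tiered" `
    -StorageTiers $ssd,$hdd -StorageTierSizes 90GB,1800GB `
    -ResiliencySettingName Simple -WriteCacheSize 8GB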

EssOEss fucked around with this message at 21:46 on Jul 5, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Eh, I don't really care so much about a small write cache. The L2ARC stores data that the ARC drops via LRU. If you enable streaming data on it, it's pretty much a full live read cache of the size of the SSD device/partition you've assigned to it. When I'm playing games, they currently all run from L2ARC, ZFS only touches disks to update atime. I don't see how to do that with Storage Spaces without enabling S2D, which somehow turns the SSD tier into a similar live cache, which doesn't work on the client version of Windows.
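
For reference, the ZFS side of that is only a couple of knobs - device and dataset names below are hypothetical, and the sysctl is the FreeBSD spelling:
code:
zpool add tank cache gpt/l2arc0         # hang an SSD partition off the pool as L2ARC
sysctl vfs.zfs.l2arc_noprefetch=0       # let streaming/prefetched data into L2ARC too
zfs set secondarycache=all tank/games   # cache both data and metadata for this dataset
zfs set atime=off tank/games            # and stop the atime-update writes entirely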

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Do people use cheap second-hand consumer SSD's as L2ARC, since it's only used for caching and not essential for data integrity?

Or could something like an old, eBay sourced, Kingston SSDNow cause problems?

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

apropos man posted:

Do people use cheap second-hand consumer SSD's as L2ARC, since it's only used for caching and not essential for data integrity?

Or could something like an old, eBay sourced, Kingston SSDNow cause problems?

Initial google results suggest you're a horrible person with horrible ideas and deserve to burn in a pit of filth.

My gut says it couldn't hurt anything to try playing around with it.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
L2ARC is checksummed. If the SSD starts failing, it'll defer to the spinning rust.
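
Which also makes it a low-risk experiment - the cache device's health shows up like any other vdev, and it can be dropped from a live pool (pool and device names hypothetical):
code:
zpool status tank        # the cache device gets its own READ/WRITE/CKSUM error columns
zpool iostat -v tank 5   # watch how much traffic it's actually absorbing
zpool remove tank ada3   # if it starts dying, pull it at any time with no data loss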

IOwnCalculus
Apr 2, 2003





Much better results with the Toshibas so far:

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

ILikeVoltron posted:

Initial google results suggest you're a horrible person with horrible ideas and deserve to burn in a pit of filth.

My gut says it couldn't hurt anything to try playing around with it.

The idea was one of those 'join the dots' pieces of linear logic that you think is good, then quickly realise that many others have come to the same conclusion :-)

Combat Pretzel posted:

L2ARC is checksummed. If the SSD starts failing, it'll defer to the spinning rust.

I take it that the checksumming is part of everyday ZFS checksumming, so that it really doesn't matter in terms of performance whether you use a lovely 5 year-old eBay SSD or something new. If the r/w speeds are roughly the same on both drives then it's gonna work the same, with the caveat that the lovely drive is gonna run out of reallocated sectors much earlier than a new drive.

Catatron Prime
Aug 23, 2010

IT ME



Toilet Rascal

DrDork posted:

No one's ever put out any sort of reliability study on any of the NAS drives, so we really don't know which (if any) of them are particularly more or less reliable than average. That said, other than some DOA batches here and there, no one in this thread seems to have had much of an issue with any of the WD/Seagate/Hitachi NAS drives.

Note that the Hitachi drive you linked is a 7200RPM/128MB cache, while the IronWolf is a 5900RPM/64MB cache drive. This equates to higher performance for the Hitachi, at the cost of higher heat and higher price. Not sure if a DS1717+ would actually benefit from higher disk speeds, though.

Whoops, thanks, I missed that with the link... I thought I was looking at apples to apples in terms of cache and RPM, but I guess not. I might have accidentally grabbed the wrong link, but that at least answers my brand question. I was trying to extrapolate from Backblaze's HDD failure data, but I guess that in my use case I probably wouldn't notice a difference between WD, Seagate, and Hitachi drives in my dinky NAS.

Sigh... I just can't bring myself to spend another six hundred bucks when I have a bunch of new terabyte platter drives laying around I could fill it with today...

Anyone ever upgraded their NAS capacity by replacing hard drives one at a time and then rebuilding the RAID and expanding the volume when they're all done? Or am I just asking for random errors and trouble, and should I buy and set it up with the larger drives right from the get-go? 3-4TB should get my important non-media stuff backed up and started, and I can get RADIUS/LDAP, cloud backup, and VPN configured on my home network.

What extra steps are you guys taking to prevent malware and hacking when you set up your NAS for WAN access? I know Synology has the Antivirus Essentials app... but are you guys also using a dedicated firewall appliance with IDS like pfSense, or just relying on port blocking with your router? Are Synology NAS units vulnerable to ransomware through SMB vulnerabilities, like Samba or Windows with EternalBlue?

Sorry, lots of questions for one post... any thoughts are appreciated!

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
There's Mellanox ConnectX-3 IB cards on eBay Ireland for 80€ apiece, and they come with low-profile brackets. Fuuuuuuuuuuuuuuck! :(

Mr Shiny Pants
Nov 12, 2012

Combat Pretzel posted:

There's Mellanox ConnectX-3 IB cards on eBay Ireland for 80€ apiece, and they come with low-profile brackets. Fuuuuuuuuuuuuuuck! :(

You know you want to ;)

Got a link? I might be interested also. Saw the new Gen 10 Microserver.........

IOwnCalculus
Apr 2, 2003





Methylethylaldehyde posted:

I got a 10-pack of the 8TB IronWolf drives, and they are working fine for me. They came in a big 20-drive box with decent padding, and they've been SMART-tested and burned in.

I suspect those would've been packed better than the individually packed drives I've got (and still need to return).

For what it's worth I popped those Toshibas into the exact same drive sleds the Seagates had been in, and it's humming away on copying data from the old zpool to the new one.

Part of me wonders - but I'm not going to be assed to unbox and check - if perhaps the control boards on the Seagates were shorting out on the bottoms of the Norco trays? Seems like that would result in more than just ECC correctable errors, though.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Mr Shiny Pants posted:

You know you want to ;)

Got a link? I might be interested also. Saw the new Gen 10 Microserver.........
Eh, I need large brackets :(

http://www.ebay.co.uk/itm/112467584000

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I have no space in my townhouse and I've been thinking small (U-NAS NSC 810a). Is there going to be a way to get server hardware quiet enough in this case that we could stash them in plain sight?

My fiance is a professional seamstress and might be able to come up with something to dress that up a bit if it doesn't gently caress up airflow too much, but I think that's going to be the tradeoff there. Loud, and retro styling with blinky lights on the front.

Is dust-protection going to be possible with rackmount servers in a household environment?

Paul MaudDib fucked around with this message at 08:39 on Jul 7, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

IOwnCalculus posted:

I suspect those would've been packed better than the individually packed drives I've got (and still need to return).

For what it's worth I popped those Toshibas into the exact same drive sleds the Seagates had been in, and it's humming away on copying data from the old zpool to the new one.

Part of me wonders - but I'm not going to be assed to unbox and check - if perhaps the control boards on the Seagates were shorting out on the bottoms of the Norco trays? Seems like that would result in more than just ECC correctable errors, though.

Seagate has a pretty sordid history with some weird drive failures. After a string of premature failures in 2010-2012 I decided no more and haven't had a drive fail since. Literally. Out of maybe 20 drives I've had no failures. I'm running drives that are 5 years old in some places (with backups).

My 4x3TB WD Red drives are pushing 5 years old now. I have a couple cheapo 1-2 TB HDDs in various machines that are ~3 years old, and I have been really enamored with the Toshiba X300s lately (reportedly they are made on the same equipment as Toshiba's enterprise drives, I have three of them and they're huge and fast).

Paul MaudDib fucked around with this message at 17:13 on Jul 7, 2017

IOwnCalculus
Apr 2, 2003





My only issue with the X300s is they run quite a bit hotter than reds. Had to stick a fan in front of the server in my garage.

Thankfully (for them) they'll be going to live somewhere much cooler since Cox is going to enforce bullshit overage fees.

hifi
Jul 25, 2012

Paul MaudDib posted:

I have no space in my townhouse and I've been thinking small (U-NAS NSC 810). Is there going to be a way to get server hardware quiet enough in this case that we could stash them in plain sight?

My fiance is a professional seamstress and might be able to come up with something to dress that up a bit if it doesn't gently caress up airflow too much, but I think that's going to be the tradeoff there. Loud, and retro styling with blinky lights on the front.

Is dust-protection going to be possible with rackmount servers in a household environment?

4U chassis have 120mm fans, and there's nothing stopping you from using quieter ones with fan filters; sometimes there's also a bit of foam between the fan mount and the front of the chassis. Although if you fill an entire case up with hard drives, that's a different story.

The NAS you listed has a 1U power supply, though, which means 40mm fans. Maybe it runs passive 99% of the time and has great fan speed management, but it would make me wary if your goal is the quietest box possible.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
A 4U chassis with 120mm fans, fan filters and isolators, a regular-size power supply, and a CPU cooler that isn't super loud will end up being pretty quiet. The downside is that the thing is huge as hell and basically can't be left out; put it in a closet or under your bed.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Yeah, as others have said, unless you really jam-pack that case, you can get quiet fans that should still be OK for airflow (if you're looking at a $500+ case, you can shell out for some Gentle Typhoons or quiet-model Noctuas).

That said, even in a townhouse there's probably a better option for noise abatement than sitting it out somewhere with a frilly skirt on it or whatever. Run a Cat6 drop to a closet you don't use much, or the drain space under the stairs, or whatever. Then close it up and forget about it--you are using IPMI or similar management hardware, right?

ddogflex
Sep 19, 2004

blahblahblah

Paul MaudDib posted:

Seagate has a pretty sordid history with some weird drive failures. After a string of failures in 2010-2012 I decided no more and haven't had a drive fail since. Literally. Out of maybe 20 drives I've had no failures. I'm running drives that are 5 years old in some places (with backups).

My 4x3TB WD Red drives are pushing 5 years old now. I have a couple cheapo 1-2 TB HDDs in various machines that are ~3 years old, and I have been really enamored with the Toshiba X300s lately (reportedly they are made on the same equipment as Toshiba's enterprise drives, I have three of them and they're huge and fast).

The only drives I've had die literally ever were Seagate. And they've died within a year or two. I have a WD Black 1TB that's still in my desktop and has been for EIGHT YEARS, no signs of problems. I also have a seven year old 1.5TB WD Green in the same PC. I totally expect them to die eventually, and anything important on them is backed up to my NAS, but they haven't yet. This is a computer that's been powered on pretty much 24/7 the whole time. So yeah. WD only for me. I've used them since back when I built my first PC with a 40GB WD and that was the largest size.

redeyes
Sep 14, 2002

by Fluffdaddy
I did this kind of thing with a 4U InWin chassis. I used 2x 80mm Noctuas on the back and a 120mm Noctua on the front left (right in front of the video card), then a 100mm fan in the center front and an 80mm on the left front (behind a 4x SATA hot-swap bay). The case has an RX480, an NVMe 750 SSD, an 840 Pro, 5x 4TB Hitachis, and a 2TB WD Green (for security footage). It weighs a loving ton and isn't easy to work on, but it's cold steel and beefy as hell. I like it a lot.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

ddogflex posted:

The only drives I've had die literally ever were Seagate.

I've had a smattering of Seagates, WD's, and random others (remember when Maxtor was a company?) all die on me in the past. I've been quite happy with all the NAS drives (I have Seagates, WD Reds, and two HGST's) I've played with so far.

redeyes
Sep 14, 2002

by Fluffdaddy
In my box of 'broken, erase all data' drives I have mostly Seagate, Toshiba, Fujitsu. A few WDs and Hitachis. Sample size around 50

Internet Explorer
Jun 1, 2005





I'm not too sure I'd be worried about anecdotes when good data exists.

https://www.backblaze.com/blog/hard-drive-reliability-stats-q1-2016/
https://www.backblaze.com/blog/hard-drive-failure-rates-q3-2016/
https://www.backblaze.com/blog/hard-drive-benchmark-stats-2016/
https://www.backblaze.com/blog/hard-drive-failure-rates-q1-2017/

If you've never had a WD fail on you, then you have good luck. I generally buy WD and have had quite a few fail on me over the years. I think the takeaway from Backblaze's data is that there are specific models that seem to fail more than others and it is not generally linked to one manufacturer or another. Except for HGST, they seem to be consistently pretty good.
