KOTEX GOD OF BLOOD
Jul 7, 2012

I just got a 10tb easystore to shuck, to replace the first 8tb drive in my array, but now I'm not so sure. It's only two years old. I do have a spare NAS lying around and I could just add to my storage solution (using three NASes in one house, lol.) What do you goons vote – should I preemptively replace the first 8tb drive or just balls to the walls add another 10tb to my setup? The data isn't irreplaceable or anything.

CopperHound
Feb 14, 2012

KOTEX GOD OF BLOOD posted:

The data isn't irreplaceable or anything.

Rube Goldberg together one giant JBOD array spanned over all three!

H110Hawk
Dec 28, 2006

CopperHound posted:

Rube Goldberg together one giant JBOD array spanned over all three!

But enough about ZFS.

Corb3t
Jun 7, 2003

KOTEX GOD OF BLOOD posted:

I just got a 10tb easystore to shuck, to replace the first 8tb drive in my array, but now I'm not so sure. It's only two years old. I do have a spare NAS lying around and I could just add to my storage solution (using three NASes in one house, lol.) What do you goons vote – should I preemptively replace the first 8tb drive or just balls to the walls add another 10tb to my setup? The data isn't irreplaceable or anything.

I'd probably consolidate everything into an Unraid box. Definitely wouldn't preemptively replace anything.



Unraid's 6.8 features are shaping up to be pretty nice - I'll probably dump OpenVPN for WireGuard (less resource intensive on iOS, so I can stay connected to my home network the whole time I'm away). Remote access to the server webGUI is really great too.
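
For anyone curious what the WireGuard side looks like, here's a rough road-warrior client sketch - the endpoint, keys, and addresses are all placeholders, and your LAN subnet will differ:
code:
# minimal WireGuard client config for phoning home -- vpn.example.com,
# the keys, and the 10.10.10.x / 192.168.1.x addresses are placeholders
umask 077
wg genkey | tee client.key | wg pubkey > client.pub

cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
PrivateKey = <contents of client.key>
Address = 10.10.10.2/32

[Peer]
PublicKey = <server public key>
Endpoint = vpn.example.com:51820
AllowedIPs = 192.168.1.0/24    # only route the home LAN, not all traffic
PersistentKeepalive = 25
EOF

wg-quick up wg0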

KOTEX GOD OF BLOOD
Jul 7, 2012

Thanks but I already have these three synology units and they work great.

H110Hawk
Dec 28, 2006

AgentCow007 posted:

I've had the 1500 version of this for years and it's trusty af. I haven't configured it for my Syno yet but I should do that. Used to have it working like that on my FreeNAS back in the day, though. I've long since shut off the power outage beeps; it only beeps when you gently caress with the power button.

Just about to buy one and realized I should read the manual and make sure it will actually shut down my Synology. Glad I did, because gently caress this noise:
13. MUTE: This icon appears whenever the UPS is in silent mode. The alarm does not beep during silent mode until the battery reaches low capacity.

The search continues for a UPS which will never beep ever.

Raymond T. Racing
Jun 11, 2019

H110Hawk posted:

Just about to buy one and realized I should read the manual and make sure it will actually shut down my Synology. Glad I did, because gently caress this noise:
13. MUTE: This icon appears whenever the UPS is in silent mode. The alarm does not beep during silent mode until the battery reaches low capacity.

The search continues for a UPS which will never beep ever.

According to the manual (and confirmed by me never having heard it happen), the CyberPower BRG1500AVRLCD will only beep if there's a problem, and it doesn't consider low battery a problem. Does that fit the bill?

H110Hawk
Dec 28, 2006

Buff Hardback posted:

According to the manual (and confirmed by me never having heard it happen), the CyberPower BRG1500AVRLCD will only beep if there's a problem, and it doesn't consider low battery a problem. Does that fit the bill?

If it's not my smoke alarm and it's beeping then no. I would accept it beeping for overcurrent, but not battery failure. It should just signal a shutdown.

KS
Jun 10, 2003
Outrageous Lumpwad
Have it signal the shutdown above the level that it beeps, problem solved. That's usually configurable.
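
With NUT (which is what Synology uses under the hood for UPS support), that's roughly the following - assuming your UPS actually exposes the threshold as a writable variable, which not all of them do. Names and credentials here are placeholders.
code:
# see what the UPS reports and which variables are writable
upsc myups@localhost battery.charge
upsrw myups@localhost

# option 1: raise the low-battery trigger on the UPS itself, if it allows it
# (needs a upsd user with actions = SET; "admin"/"secret" are placeholders)
upsrw -s battery.charge.low=50 -u admin -p secret myups@localhost

# option 2: have NUT declare "low battery" early regardless of the device,
# via an override in the driver section of ups.conf:
#   [myups]
#       driver = usbhid-ups
#       port = auto
#       override.battery.charge.low = 50
Either way the clean shutdown fires well before the UPS gets anywhere near the point where it wants to start yelling.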

H110Hawk
Dec 28, 2006

KS posted:

Have it signal the shutdown above the level that it beeps, problem solved. That's usually configurable.

I think he means it beeps when the battery is failing (no capacity, cannot charge, etc) vs when the battery is low (depleted due to use).

I will likely wind up swallowing my pride and buying it regardless, which makes me mad, but it's not going anywhere people sleep. The last three UPSes I owned all decided to poo poo themselves in the single-digit hours of the morning and woke me up. Only one of them had a permanent mute - via physical DIP switches on the back, back in like 2000.

IOwnCalculus
Apr 2, 2003

Open it up and rip the speaker out. If you're lucky it's on wires instead of soldered directly to the PCB.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

I have been poking at my backup fileserver a little more, trying to understand what's going on with that ZFS infinite loop when receiving. I destroyed the pool and recreated it (needed to move from a simple vdev to raidz1 anyway) and sent it back over... and it's still infinite-looping. So probably not the pool.

Somewhat odd - the dataset shows up, but not the snapshot itself. So I have raidpool/myset but not raidpool/myset@010 or whatever. Not sure if that's significant, like it's getting stuck somewhere between doing the bookkeeping for the dataset and doing the bookkeeping for the snapshot?

Also, during this spin, there's no disk load at all. It's on a USB RAID enclosure (configured for JBOD) and the drive lights aren't flashing.

And it worked previously... not sure what exactly has changed. I'm thinking about taking the X5650 back out and putting the W3565 back in; that's the only thing I can think of at this point. I've tried disabling the Meltdown/Spectre mitigations, removing the intel-microcode package, updating the system, etc.

Is there a debug log I can check or enable or whatever?

edit: looks like the stock ZoL build doesn't have debug logging enabled, but dmesg is showing a shitload of corrected memory errors, so I'm guessing I either have a dead stick, bent a pin changing CPUs, or have a dead memory channel. Pulled that stick and we'll see if it hangs again when the send finishes in a couple days.
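
For anyone else chasing something similar, this is roughly what I was checking - the snapshot listing plus the corrected-error counters (the sysfs paths assume the kernel's EDAC support is loaded):
code:
# does the received dataset actually have the snapshot?
zfs list -t snapshot -r raidpool/myset

# any machine-check / ECC events in the kernel log?
dmesg | grep -iE 'edac|mce|corrected'

# per-memory-controller error counters, if EDAC is loaded
grep . /sys/devices/system/edac/mc/mc*/ce_count 2>/dev/null
grep . /sys/devices/system/edac/mc/mc*/ue_count 2>/dev/null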

Paul MaudDib fucked around with this message at 07:15 on Sep 7, 2019

BlankSystemDaemon
Mar 13, 2009



The only thing I can think to suggest is to dtrace the process, or simply give up and try to report the failure somewhere, with steps to reproduce if you can work them out - because it's definitely something that needs to be fixed before the new OpenZFS lands (the repo has been renamed, so there is no longer ZoL, ZoF, and such).
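
On Linux, where dtrace usually isn't an option, something along these lines gets you the kernel stack of the stuck receive, which is the useful bit for a bug report (this assumes the process command line matches 'zfs rec'):
code:
# find the spinning receive and see where it's stuck in the kernel
pid=$(pgrep -f 'zfs rec' | head -n1)
top -H -b -n1 -p "$pid"        # which thread is burning CPU
sudo cat /proc/"$pid"/stack    # kernel stack of the main thread

# or sample it for a while with perf, if it's installed
sudo perf top -p "$pid"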

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe
I know a lot of you are low on storage space, so some good news: 18 and 20TB drives go into volume production next year.

https://www.servethehome.com/western-digital-volume-production-of-18tb-and-20tb-drives-in-2020/

Corb3t
Jun 7, 2003

Devian666 posted:

I know a lot of you are low on storage space, so some good news: 18 and 20TB drives go into volume production next year.

https://www.servethehome.com/western-digital-volume-production-of-18tb-and-20tb-drives-in-2020/

Western Digital needs to finally release some 12 TB and larger EasyStores.

IOwnCalculus
Apr 2, 2003

I'm finally in the unusual position of not being constrained on either drive counts or capacity at the moment. I'm probably going to buy four more Easystores on Black Friday, but mostly to get some of my oldest drives out of the array before they cause major problems.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

IOwnCalculus posted:

I'm finally in the unusual position of not being constrained on either drive counts or capacity at the moment. I'm probably going to buy four more Easystores on Black Friday, but mostly to get some of my oldest drives out of the array before they cause major problems.

you buy one of those Netapp DS4243 disk shelves that were kicking around for $60 a few months back?

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire
I still have a ProLiant N40L that's running fine for my needs - just using it as a NAS and not for anything else server-wise.

It has 5x 2TB drives in it, but I am thinking of finally upgrading to 10TB drives or something to get some more life out of it.

Will I run into that voltage problem someone mentioned before?

IOwnCalculus
Apr 2, 2003

Paul MaudDib posted:

you buy one of those Netapp DS4243 disk shelves that were kicking around for $60 a few months back?

Yup, and it's still only about half full of disks.

I'm not even going to have to pull the disks that I'm replacing out - I'll leave them as spares for their similarly-old same-size drives and get another year or two out of them.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

IOwnCalculus posted:

Yup, and it's still only about half full of disks.

I'm not even going to have to pull the disks that I'm replacing out - I'll leave them as spares for their similarly-old same-size drives and get another year or two out of them.

What did you do for the interface card? Looks like you can buy the NetApp card, buy the OEM card that NetApp sourced it from, or possibly go some other route entirely if you replace the IOM3 module.

I bought one and one of the mounting ears is a little bent, but it looks reasonably OK other than that; I haven't connected it to anything yet.

I'm thinking semi-seriously about setting up a half-depth rack, getting some chassis and white-boxing a mini cluster with my half-dozen-odd X99 boards and maybe some 1600s.

Paul MaudDib fucked around with this message at 06:44 on Sep 8, 2019

IOwnCalculus
Apr 2, 2003

I went the other route and replaced the IOM6 modules mine had with some Dell / Compellent HB-SBB2-E601-COMP 0952913-07 modules. Technically, one of them isn't doing anything more than blocking airflow, but surprisingly the blockoff modules cost even more. I have it hooked up to an internal LSI2008 card via a bracket with an 8088/8087 adapter.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
What would nominally be the best way to configure one of those 24-drive shelves with ZFS, for reasonable but not super-overkill redundancy, with 8TB drives?

BlankSystemDaemon
Mar 13, 2009



I would love to get myself a 24-disk JBOD chassis with a 16e initiator-target HBA, but they're so scarce here. :denmark:
Already have a perfectly functional windtunnel server in the form of an IBM x3650 M3 with 2x quad-core Westmere CPUs, 96GB of memory, and an initiator-target HBA for the internal disks, which are just running a couple of mirrored disks in a zpool for the OS.

Paul MaudDib posted:

What would nominally be the best way to configure one of those 24-drive shelves with ZFS, for reasonable but not super-overkill redundancy, with 8TB drives?
Three raidz2 vdevs of eight disks each, unless you wanna save some for hot spares.
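
In zpool terms that layout is just the following - device names are placeholders, and on a real build you'd want /dev/disk/by-id/ paths so the pool survives drives being reordered:
code:
# three 8-wide raidz2 vdevs in a single pool; sda..sdx are placeholders
zpool create -o ashift=12 tank \
  raidz2 sda sdb sdc sdd sde sdf sdg sdh \
  raidz2 sdi sdj sdk sdl sdm sdn sdo sdp \
  raidz2 sdq sdr sds sdt sdu sdv sdw sdx
That nets you 18 data disks' worth of space (roughly 144TB raw with 8TB drives, before overhead) and any two disks per vdev can die.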

BeastOfExmoor
Aug 19, 2003

I will be gone, but not forever.
I asked this question in the Hardware Q&A thread, but didn't get a great response. I figured there might be someone here with some experience doing this.

Can I use those PCIe risers that use one of your 1x slots and allow you to plug in a 16x card to run a very minimal GPU for what's essentially a headless server? I'm trying to free up some larger slots for other PCIe cards (HBA cards, a GPU passthrough, etc.) for a VM server I'm building. I really just need a GPU to get the server to boot and for occasional troubleshooting at the command line if I can't hit it via network.

More related to the main topic of this thread: for a self-built NAS, is there a way to share data with another adjacent PC that's faster than gigabit Ethernet? Presumably I could just buy 10Gb NICs for each PC and direct-connect them, but is there a better solution than that?

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

BeastOfExmoor posted:

I asked this question in the Hardware Q&A thread, but didn't get a great response. I figured there might be someone here with some experience doing this.

Can I use those PCIe risers that use one of your 1x slots and allow you to plug in a 16x card to run a very minimal GPU for what's essentially a headless server? I'm trying to free up some larger slots for other PCIe cards (HBA cards, a GPU passthrough, etc.) for a VM server I'm building. I really just need a GPU to get the server to boot and for occasional troubleshooting at the command line if I can't hit it via network.

More related to the main topic of this thread: for a self-built NAS, is there a way to share data with another adjacent PC that's faster than gigabit Ethernet? Presumably I could just buy 10Gb NICs for each PC and direct-connect them, but is there a better solution than that?

For the PCIe question, it depends on your motherboard and what else you have plugged in; sometimes the x1 slots get disabled if all the lanes are being used by another slot or an M.2. As long as the slot is active, though, it should be fine to use a lane reducer to plug an x16 card in there. Just note the card will sit up a bit higher, so there could be clearance issues with the video connectors.

Also, some motherboards have x1 slots that are open at the back edge, so you can plug a regular x16 card straight in, which means the lane reducer isn't required. Cutting a closed slot open yourself is not an easy mod, though, so I wouldn't recommend it!

Another option is something like this:

https://www.memoryexpress.com/Products/MX67227

The USB cable just carries the PCIe lane directly to the x16 slot board. Fine for running a video card at gen1/2.
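
Once a card is in one of those, you can sanity-check what link it actually negotiated - the slot address below is a placeholder, grab the real one from plain lspci first:
code:
# find the GPU's PCI address, then compare link capability vs. status
lspci | grep -i vga
sudo lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'
# LnkSta showing "Speed 2.5GT/s, Width x1" means gen1 over a single lane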

vanilla slimfast
Dec 6, 2006

If anyone needs me, I'll be in the Angry Dome



Incessant Excess posted:

Having recently gotten into torrents, I'm looking to set up a torrent client on my (Synology DS918+) NAS as a Docker container, ideally with a VPN connection.

Looking around I've come across this tutorial: http://tomthegreat.com/2018/03/11/setting-up-deluge-with-vpn-on-synology-using-docker/

I'm wondering if anyone has experience doing something similar and if that guide would be a good idea to follow.

 

Can't speak for the guide, but I've got all my apps running in Docker on my Synology, and after a bit of initial trial and error I have it working great. The thing that drove me to doing it this way was that it was the only way to get rtorrent/ruTorrent on my model, and I absolutely hated Deluge.

Right now I have the following running in containers:
- plex media server
- nzbget
- rtorrent/rutorrent
- sonarr
- unifi controller (for managing my APs)
- pihole
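
If it helps, the torrent-client-behind-a-VPN container boils down to something like this - the image name and environment variables are placeholders, so check whichever image you land on for its actual settings:
code:
# rough shape of a VPN'd torrent container on a Synology --
# "someimage/torrent-vpn" is a stand-in, not a real image;
# NET_ADMIN and /dev/net/tun are what the in-container VPN needs
docker run -d --name=torrent-vpn \
  --cap-add=NET_ADMIN \
  --device=/dev/net/tun \
  -e VPN_USER=me -e VPN_PASS=secret \
  -v /volume1/docker/torrent-vpn:/config \
  -v /volume1/downloads:/downloads \
  -p 9080:9080 \
  someimage/torrent-vpn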

H110Hawk
Dec 28, 2006

BeastOfExmoor posted:

More related to the main topic of this thread: for a self-built NAS, is there a way to share data with another adjacent PC that's faster than gigabit Ethernet? Presumably I could just buy 10Gb NICs for each PC and direct-connect them, but is there a better solution than that?

Can you elaborate on your goal here? Because Ethernet is really drat fast, and 10G gets you lower latency than 1G even if you only ever use 1G of it. There's also 25, 40, and 100G, plus InfiniBand, Fibre Channel, and in theory "lightning"/DisplayPort-style links. You probably don't need as much juice as you think you do.

For your video card question, can you just link the board you're looking at?

H110Hawk fucked around with this message at 00:54 on Sep 9, 2019

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry
I used to say no one but an ISP needed a 100Gbps connection, until I started working with NAS professionally. :negative:

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
InfiniBand is a more cost-effective option if you only care about storage. There's also Ethernet over InfiniBand, and you can get 100Gb InfiniBand switches and cabling for so much less than 100GbE gear.
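
In practice the cheap route is usually IPoIB on secondhand Mellanox cards - roughly this, with interface names and addresses as placeholders:
code:
# bring up IP-over-InfiniBand on a Mellanox card and check the link
sudo modprobe ib_ipoib
ibstat                         # from infiniband-diags: port state and rate
sudo ip addr add 10.9.9.1/24 dev ib0
sudo ip link set ib0 up
# back-to-back with no IB switch also needs a subnet manager on one host,
# e.g. run opensm (as a daemon or systemd service) on one end
# repeat on the other host with 10.9.9.2, then benchmark with iperf3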

H110Hawk
Dec 28, 2006

Axe-man posted:

I used to say no one but an ISP needed a 100Gbps connection, until I started working with NAS professionally. :negative:

Given that you can get 32x 100G ports in 1U for under $20k, it's almost silly not to just say gently caress it and put it everywhere.

Crunchy Black
Oct 24, 2017

by Athanatos

Yikes

Crunchy Black fucked around with this message at 04:32 on Sep 9, 2019

BeastOfExmoor
Aug 19, 2003

I will be gone, but not forever.

priznat posted:

For the PCIe question, it depends on your motherboard and what else you have plugged in; sometimes the x1 slots get disabled if all the lanes are being used by another slot or an M.2. As long as the slot is active, though, it should be fine to use a lane reducer to plug an x16 card in there. Just note the card will sit up a bit higher, so there could be clearance issues with the video connectors.

Also, some motherboards have x1 slots that are open at the back edge, so you can plug a regular x16 card straight in, which means the lane reducer isn't required. Cutting a closed slot open yourself is not an easy mod, though, so I wouldn't recommend it!

Another option is something like this:

https://www.memoryexpress.com/Products/MX67227

The USB cable just carries the PCIe lane directly to the x16 slot board. Fine for running a video card at gen1/2.

The riser that uses a USB3 cable is essentially what I had in mind. Currently planning on using this motherboard, which I picked partially because it has three x16 slots (although they don't operate in x16 mode) as well as three x1 slots.

H110Hawk posted:

Can you elaborate on your goal here? Because Ethernet is really drat fast, and 10G gets you lower latency than 1G even if you only ever use 1G of it. There's also 25, 40, and 100G, plus InfiniBand, Fibre Channel, and in theory "lightning"/DisplayPort-style links. You probably don't need as much juice as you think you do.

Basically the server I'm planning to host my NAS in a VM on sits right next to my main desktop under my desk. 1Gbps Ethernet is going to max out at about 125MB per second minus overhead, and I would imagine that a NAS with many drives should be able to push a good deal more data than that. The only PC on my network that would really see any benefit from the increased bandwidth is the one immediately adjacent, so I was just curious whether there was an option (ideally cheaper than 10Gb NICs in each machine) to directly connect the desktop to the server.
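
(For reference, what I had in mind for the direct-connect option is basically this - addresses and interface names are made up:)
code:
# on the desktop (assuming the 10G NIC shows up as enp3s0)
sudo ip addr add 10.0.0.1/30 dev enp3s0
sudo ip link set enp3s0 up

# on the server (10G NIC as enp4s0)
sudo ip addr add 10.0.0.2/30 dev enp4s0
sudo ip link set enp4s0 up
iperf3 -s

# back on the desktop: see what the link actually does
iperf3 -c 10.0.0.2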

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

BeastOfExmoor posted:

The riser that uses a USB3 cable is essentially what I had in mind. Currently planning on using this motherboard, which I picked partially because it has three X16 slots (although they don't operate in X16 mode) as well as 3 X1 slots.

Gotcha, I had it in my head you were thinking about something like this. I use those a lot and they're usually pretty solid; the USB ones I haven't played with.

Phone
Jul 30, 2005

I want oyakodon.
Why does half of Synology's lineup have Intel Atoms (but expandable up to 32GB of RAM!) and 8 bays, while the ones with a Celeron or Xeon are limited to 8GB and only have half the bays?

H110Hawk
Dec 28, 2006

BeastOfExmoor posted:

Basically the server I'm planning to host my NAS in a VM on sits right next to my main desktop under my desk. 1Gbps Ethernet is going to max out at about 125MB per second minus overhead, and I would imagine that a NAS with many drives should be able to push a good deal more data than that. The only PC on my network that would really see any benefit from the increased bandwidth is the one immediately adjacent, so I was just curious whether there was an option (ideally cheaper than 10Gb NICs in each machine) to directly connect the desktop to the server.

To be pedantic: you haven't specified a need, merely that you believe the NIC would be a bottleneck. That's fine. If I may be so bold as to suggest it, set your system up now with 1-gig NICs (dollars) and see if you actually wind up running out of performance. In the meantime, keep an eye on eBay for cheap cards to accomplish your pipe dream. Unless you're running a VM off the NAS that is I/O-intense, you might not even notice it. (Windows you will notice; Linux is super lightweight for basic stuff. Yes, even torrents and Plex and music.)
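
Watching the counters during normal use will tell you pretty quickly whether you're anywhere near the ceiling (interface name is a placeholder):
code:
# per-second throughput; watch the rxkB/s / txkB/s columns for the
# NAS-facing interface -- if you're not regularly pinned near ~118MB/s,
# the 1 gig link isn't the bottleneck
sar -n DEV 1

# or, without sysstat installed, diff the raw byte counters by hand
cat /sys/class/net/eth0/statistics/rx_bytes; sleep 1; \
cat /sys/class/net/eth0/statistics/rx_bytes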

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

Phone posted:

Why does half of Synology's lineup have Intel Atoms (but expandable up to 32GB of RAM!) and 8 bays, while the ones with a Celeron or Xeon are limited to 8GB and only have half the bays?

Yeah there are some holes in their lineup. I have money to spend but none of them are attractive at the moment.

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry
They are coming out with some nice ones, but yeah, the rack-mounted ones have the real oomph.

Sniep
Mar 28, 2004

All I needed was that fatty blunt...



King of Breakfast

Smashing Link posted:

Yeah there are some holes in their lineup. I have money to spend but none of them are attractive at the moment.

I just have a NUC 8 (NUC8i5BEK) in front of my Synology DS1817+, and the combo works pretty great: the Syno just focuses on the disks and spitting out bits, and I do all my work externally. It was the best price/performance I could come up with that wasn't a rackmount/full computer server in the living room.

Phone
Jul 30, 2005

I want oyakodon.

Sniep posted:

I just have a NUC 8 (NUC8i5BEK) in front of my Synology DS1817+, and the combo works pretty great: the Syno just focuses on the disks and spitting out bits, and I do all my work externally. It was the best price/performance I could come up with that wasn't a rackmount/full computer server in the living room.

I’m coming from a NAS4Free box that IOwnCalculus somehow managed to convince me was a good move for a ZFS RAIDZ2 array that hasn’t been booted on in like 3 years. I couldn’t be bothered to figure out how to get the NAS with training wheels to work out, so I’m looking for an appliance to set and forget.

I'm sure that the NUC is miles easier as an interface and whatnot; however, I'm trying to get this to the point where my own sloth isn't the determining factor in setting the thing up... because if I'm gonna try to fill a gap in the Synology product line, I technically could just spin up the thing I have downstairs, and that should be plenty beefy.

Computers. :(

BlankSystemDaemon
Mar 13, 2009



One of the things I have considered vis-a-vis a NUC is that if they had ECC memory and anywhere from 1-3 Thunderbolt 3 interfaces, they'd be perfect for storage, because each Thunderbolt interface can easily support 6 daisy-chained 8-disk JBOD chassis, for a total of 48 disks connected to a single machine.

  • Reply