Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

I think it's possible that I'm missing something, but didn't Computer Viking ask about SAS, as in Serial Attached SCSI - rather than Network Attached Storage?

The short answer is that, as far as I know, if it's quiet, it's probably not cooling the SAS expander sufficiently.
I wouldn't mind being wrong on that, though.

Indeed.

The reason I ask is that I can stuff eight disks in a tower server and it will keep the disks, SAS controller, and SAS expander/backplane cool enough while making about as much noise as a gaming PC, or I can put 12 disks in a 2U enclosure that sounds like a vacuum cleaner fighting a deep carpet.

Logically, it should be possible to build a similar tower without $2k of computing hardware inside, with an SFF-whatever plug at the rear, that could host those same eight disks for a comparable cost and noise level - or less.

E: Consider the Synology DX1222. That's a 12-bay SAS expander that I know from experience is reasonably quiet and compact, and it seems to work fine - but from what I can read, it only works with a Synology controller, despite using SFF-8644 cables. Does anyone make a generic version of that?

Computer viking fucked around with this message at 22:21 on Mar 9, 2023


IOwnCalculus
Apr 2, 2003





You can, but you're going to have to roll your own. Serve The Home has guides that are a bit dated, but the general idea is still the same. You need a SAS expander, something to handle the power supply, and the appropriate cables and brackets to connect the downstream ports to the drives and the upstream port to an external SAS port.

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
gen10+ is perfect in every way except:

no nvme slot
only one pcie slot
no quicksync support

a second pcie slot would make it the absolute perfect home NAS

Wibla
Feb 16, 2011

I built a 4U with dual X5676 CPUs, a bunch of 4TB drives, SAS HBA + SAS expander + 10gbe and it was about as quiet as my regular PC, but I used Noctua coolers and fans.

A used R6 will do fine for 8 drives etc, and it won't make much noise.

That 5.25" thing with a bunch of mSATA drives is not my idea of fun; it's going to be both expensive (for what you get) and noisy. A single PCIe 4.0 NVMe drive will outperform a bunch of mSATA drives anyway, and for bulk storage you want spinning rust.

E: Here's an old build from 2009 that got some incremental upgrades along the way moving from a 3ware 9500-12 SATA raid controller to two LSI 9211 cards. Note the obligatory fan - and the IDE system drive :v:

Wibla fucked around with this message at 22:19 on Mar 9, 2023

Computer viking
May 30, 2011
Now with less breakage.

IOwnCalculus posted:

You can, but you're going to have to roll your own. Serve The Home has guides that are a bit dated, but the general idea is still the same. You need a SAS expander, something to handle the power supply, and the appropriate cables and brackets to connect the downstream ports to the drives and the upstream port to an external SAS port.

Right. It just feels like something someone would sell, and for work I'd like less of my own mediocre handiwork in there. Oh well, it sounds perfectly manageable, and I should have most of the year before they start running low on space.

Brain Issues
Dec 16, 2004

lol

e.pilot posted:

gen10+ is perfect in every way except:
no quicksync support

so, like, this should be a dealbreaker for most then

I am just sour that there is no great OOTB solution for a home plex server with 100TB+ capacity that is affordable

Klyith
Aug 3, 2007

GBS Pledge Week

Computer viking posted:

Right. It just feels like something someone would sell, and for work I'd like less of my own mediocre handiwork in there. Oh well, it sounds perfectly manageable, and I should have most of the year before they start running low on space.

Someone does: you get a Fractal Define 7 XL, another 3 of the add-on HDD cages & sleds, and you have a full-tower case that can fit 18 3.5" drives.

You spend $350 for the privilege, but that's a shitload of drives and a low-noise case for you.



VVV edit: umm yeah seems I don't math so good. it has 8 drive slots ootb and space for up to 18, so 5 packs of HDD sleds. Though I was also double-counting how many accessories you need to buy -- the extra sleds don't go into cages, so I think you don't need to buy extra cages? So my price tag was actually high. Anyways it's a lot of money, look at the manuals first to figure out what you need.

Klyith fucked around with this message at 01:01 on Mar 10, 2023

Less Fat Luke
May 23, 2003

Exciting Lemon
I did this and it's awesome (well, Meshify 2 XL which is the same frame). Also I needed 5 of the 2-pack drive trays to get up to 16 drives so if you go this route check your math :)

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Ihmemies posted:

It means the Software is absolute pure trash, so I give huge shits. Surely there must be other software too?


I've had a love/hate relationship with Veeam for over a decade now.

It works great, until it decides to start leaving orphaned snapshots and just murdering VMs.

Good times.

BlankSystemDaemon
Mar 13, 2009



I'm not sure I understand how that's different from filling a Lian Li PC343B with those 5x3.5" in 3x5.25" bays (which nets you 30 drives in total).
That's a case that dates back to when PATA was still the most common interface in consumer devices - and you'd still be operating it basically the same way today as you did the set of PATA disks back then.

A modern SAS setup involves all sorts of fancy things like enclosure identification and fault notification (so that if a drive fails, the LED on the outside of the bay automatically lights up), auto-expansion and auto-replacement (so that when you pull a drive that's failing/dead and insert a bigger drive, it automatically starts the resilver process, and once every drive in that part of the array has been replaced, it automatically expands to fit the available amount of space), and using multiple controllers and multiple data cables to each disk to ensure that there's full datapath redundancy, as well as being able to daisy-chain multiple enclosures.
All of that, mind you, is possible with ZFS (at least on FreeBSD, though I don't see any reason Linux shouldn't be capable of it as all of the software necessary for it is either expected of any modern OS or is cross-platform enough that it runs on any modern OS).
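The auto-replacement and auto-expansion behaviour described above corresponds to two pool-level properties in ZFS. A minimal sketch, assuming a pool named `tank` (a placeholder) and, on Linux, a running `zed` daemon to act on the hotplug events:

```shell
# Auto-replace: a new disk inserted into the same physical slot as one
# that previously belonged to the pool is resilvered in automatically.
zpool set autoreplace=on tank

# Auto-expand: once every disk in a vdev has been swapped for a larger
# one, the pool grows to use the newly available space.
zpool set autoexpand=on tank

# Verify both settings:
zpool get autoreplace,autoexpand tank
```

The enclosure LED part (fault notification) is handled separately, by `sesutil locate` on FreeBSD or zed's SES integration on Linux, not by pool properties.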

BlankSystemDaemon fucked around with this message at 01:39 on Mar 10, 2023

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
My back hurts just thinking of that many drives in a box.

IOwnCalculus
Apr 2, 2003





Matt Zerella posted:

My back hurts just thinking of that many drives in a box.

You're not wrong. Step one of deracking either my server or the DS4246 attached to it is "pull all the spindles".

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

IOwnCalculus posted:

You're not wrong. Step one of deracking either my server or the DS4246 attached to it is "pull all the spindles".

I have 5 drives in my unraid server and almost died trying to move it without a dolly when we got our new place.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
My Supermicro CSE-836 fully loaded with drives and PSUs is excellent for deadlift training. Just remove some drives to de-load on a tough set

BlankSystemDaemon
Mar 13, 2009



I once watched someone slide out a SAS enclosure on its rails, with some 40 drives loaded, and the rack subsequently tipped forward and landed face-down.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.
The real reason to have a UPS in your rack is just to anchor the thing when you're working on a server, lol.

BlankSystemDaemon
Mar 13, 2009



Scruff McGruff posted:

The real reason to have a UPS in your rack is just to anchor the thing when you're working on a server, lol.
Yep!

I will say though, I've never seen a PFY move that far in such a short amount of time.

Computer viking
May 30, 2011
Now with less breakage.

In good news: I've finally found the space and time to do a full "restore everything" test of the tape backup instead of just picking a few random folders. I'm getting about 133MB/s back from LTO-8 tape, which feels slightly low - but on the other hand it does seem to be working perfectly, so I'll just leave it alone (apart from the tape swap in the middle).
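For scale: LTO-8 is specced at about 360 MB/s native, so 133 MB/s does leave throughput on the table, but even at that rate a full 12 TB (native capacity) tape comes back in roughly a day:

```shell
# Rough restore-time estimate at the observed rate.
bytes=$((12 * 1000 * 1000 * 1000 * 1000))   # 12 TB, LTO-8 native capacity
rate=$((133 * 1000 * 1000))                  # 133 MB/s in bytes per second
hours=$(( bytes / rate / 3600 ))
echo "~${hours} hours per full tape at 133 MB/s"
```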

As a bonus, I'm doing this to move from an old to a new fileserver, so I actually get to use the mythical "set different properties and restore everything from backups" approach to changing the compression method for files in a dataset. I'm moving from lz4 to zstd, and I'm curious to see what sort of difference it makes.
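The reason the full restore is the non-mythical part: changing the compression property only affects newly written blocks, so rewriting everything from backup is what actually converts the existing data. A sketch, with `tank/data` as a placeholder dataset:

```shell
# New writes use zstd from here on; existing blocks stay lz4 until rewritten.
zfs set compression=zstd tank/data

# After the restore has rewritten everything, compare the achieved ratio:
zfs get compression,compressratio tank/data
```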

Computer viking fucked around with this message at 13:57 on Mar 13, 2023

Vaporware
May 22, 2004

Still not here yet.
I'm still poking thru installing ZFS. I finally got everything mounted properly and the kernel headers sorted. I immediately found a conflict between the tutorial I was following and another one for a very similar SBC.

The hardware-specific tutorial recommends using /dev/sdX and uses USB disks extensively because of the RockPro64's single PCIe slot.
https://github.com/nathmo/makeZFS_armbian/tree/c59583bc03b884be15822ec93040b9fec970784b

Another easy-to-read ZFS tutorial I found says to use UUIDs so you can replace a disk later:
https://wiki.kobol.io/helios64/software/zfs/install-zfs/

This feels like a bigger design question I was completely unaware of. No big deal, or is the UUID thing something I should look up?

I know the solution to this is to RTFM, but I am still struggling with Linux basics: which partition options are correct or useful, how to check what services are running, and how to debug when I get error messages in logs (even finding things like boot logs), as most of my knowledge is either Windows-specific or circa early 2000s. Is there a better document to follow? I tried the TLDR man pages and they help because of the included examples. Anything else to help get updated? There seem to be a lot of SEO-garbage Linux tutorial sites wasting my time.
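For the services-and-logs part of that question, on a systemd-based distro like Armbian a handful of standard commands cover most of it (the unit names below are just examples):

```shell
systemctl list-units --type=service --state=running  # what's running right now
systemctl status ssh.service   # detail + recent log lines for one unit
journalctl -b                  # everything logged since the current boot
journalctl -b -p err           # only errors from the current boot
journalctl -u ssh -f           # follow one service's log live
```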

Things like not partitioning a full SSD because some sort of firmware error-correcting utility eats the free space? That's a thing?

Also, I'm planning on booting off the eMMC for now because it's easy, but I'm also interested in making the install robust enough to survive boot disk failure, and I have no idea what that involves on Linux. Back it up to a USB stick? lol, make an Acronis disk?
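For the boot-disk-failure worry, the low-tech answer on an SBC is a raw image of the eMMC taken with dd while booted from something else. The device name below is an example - check `lsblk` first, because dd to the wrong device is unrecoverable:

```shell
# Identify the eMMC first (often /dev/mmcblk1 or /dev/mmcblk2 on Rockchip boards):
lsblk

# Image the whole eMMC to a compressed file on a mounted USB disk:
dd if=/dev/mmcblk2 bs=4M status=progress | gzip > /mnt/usb/emmc-backup.img.gz

# Restore later by reversing the pipe:
gunzip -c /mnt/usb/emmc-backup.img.gz | dd of=/dev/mmcblk2 bs=4M status=progress
```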

code:
user@host:~$ hostnamectl
 Static hostname: host
       Icon name: computer
      Machine ID: long ID1
         Boot ID: long ID2
Operating System: Armbian 23.02.2 Jammy           
          Kernel: Linux 5.15.93-rockchip64
    Architecture: arm64

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

Vaporware posted:

This feels like a bigger design question I was completely unaware of. No big deal or is the UUID thing something I should look up?

The FAQ had a good explanation on that.

Selecting /dev/ names when creating a pool (Linux)
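The short version of that FAQ entry: `/dev/sdX` names are assigned in enumeration order and can shuffle between boots, while `/dev/disk/by-id/` names are derived from the drive model and serial number, so they follow the physical disk. A sketch (the id strings are made-up examples, not real drives):

```shell
# See the stable names for your disks:
ls -l /dev/disk/by-id/

# Create the pool against those instead of sdX:
zpool create tank mirror \
    /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1111111 \
    /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K2222222
```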

BlankSystemDaemon
Mar 13, 2009



It will never not be funny to me that in current year, Linux still has inconsistent device naming because of its floppy disk support.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

BlankSystemDaemon posted:

It will never not be funny to me that in current year, Linux still has inconsistent device naming because of its floppy disk support.

I was vividly reminded of that two weeks ago, when we updated a SAP HANA server and during boot-up it was dropped to a recovery console, with the only line shown on the screen being an error message about the PS/2 UART controller. When you're doing diagnostics in a rush, your Google searches may not immediately turn up the information that it's a standard error message because the server doesn't have PS/2, and can be ignored.

As a professional Linux server admin, I hope I'm involved enough with it that I'm entitled to hate Linux. To balance things out, my coworker hates Windows more, and I probably less.

BlankSystemDaemon
Mar 13, 2009



Saukkis posted:

As a professional Linux server admin I hope I'm enough involved with it, that I'm entitled to hate Linux. To balance things out my coworker hates Windows more and I probably less.
It's okay, every OS sucks in different ways.

Vaporware
May 22, 2004

Still not here yet.

Thanks! I will read up

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
I have a problem

I got this for $100:

It's a NetApp DE6600

Wibla
Feb 16, 2011

Woah. How big are those drives? And I'm sorry for your power bill...

Thanks Ants
May 21, 2004

#essereFerrari


That's the sort of thing you run when you have acres of land and a solar farm

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Wibla posted:

Woah. How big are those drives? And I'm sorry for your power bill...

60 x 3TB SAS.

1,100 Watts or so to run fully loaded.
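That figure is plausible from first principles: a 3.5" SAS drive idles around 8-12 W, so 60 spindles alone land near 600 W before controllers, expanders, fans, and PSU losses. A rough sanity check - the per-component wattages here are assumptions, not measurements:

```shell
drives=60; watts_per_drive=10   # ~10 W idle per 3.5" SAS drive (assumed)
overhead=300                    # controllers, expanders, fans (assumed)
efficiency=85                   # PSU efficiency in percent (assumed)
dc_load=$(( drives * watts_per_drive + overhead ))
wall=$(( dc_load * 100 / efficiency ))
echo "~${wall} W at the wall"   # in the right ballpark of the quoted 1,100 W
```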

Wibla posted:

Woah. How big are those drives? And I'm sorry for your power bill...

Yeah I run bladeservers :)

Rap Game Goku
Apr 2, 2008

Word to your moms, I came to drop spirit bombs


Wibla posted:

Woah. How big are those drives? And I'm sorry for your power bill...

Maximum Power: 1,222W :getin:
https://www.ibm.com/docs/en/ess-p8/2.0?topic=enclosures-netapp-de6600

BlankSystemDaemon
Mar 13, 2009



And that's before AC-DC conversion losses.

There's a reason why hyperscalers use per-rack rectifiers with two bus-bars that the blades blind-mate into.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
And here I thought I was being stupid running my SAS expander, NAS, and NAS 3.0 at 220w for a while. I also literally have solar power at home although I'm not able to get off the grid due to capacity restriction ordinances by the city.

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

necrobobsledder posted:

I'm not able to get off the grid due to capacity restriction ordinances by the city.

same I hate it

Boner Wad
Nov 16, 2003

e.pilot posted:

gen10+ is perfect in every way except:

no nvme slot
only one pcie slot
no quicksync support

a second pcie slot would make it the absolute perfect home NAS

This is exactly what I want... why doesn't this exist?

e: if I was to purchase one, I'm guessing I'd want to use the one PCIe slot for a graphics card to help with the lack of QuickSync?

Boner Wad fucked around with this message at 04:28 on Mar 14, 2023

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

Boner Wad posted:

This is exactly what I want... why doesn't this exist?

e: if I was to purchase one, I'm guessing I'd want to use the one PCIe slot for a graphics card to help with the lack of QuickSync?

yeah, I did that with a USB 3.whatever external SSD for a while. It at least has 10Gbps USB, so it worked fine, just a little ungainly

Now I have Plex running on the ML30 I got a bit ago, and the bizarre QNAP 10GbE + dual-NVMe card in the gen10+

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
In TrueNAS I've got a user account "fletcher" that was created with the "Microsoft Account" and "Samba Authentication" checkboxes ticked. I have a dataset with this user set as the owner, which I use exclusively as a mapped drive on my Windows machine. I've been using WinSCP and a scheduled job on my Windows machine to sync some files between the NAS and a remote server. I'd like to move this scheduled sync job to a linux machine and started going down the route of using NFS & rsync (as I'm writing this I realize this could probably just be an Rsync Task in TrueNAS itself). However, I'm wondering if mixing linux & windows permissions like this is going to cause any issues? Or if I create the Rsync Task in TrueNAS with my "fletcher" as the user, everything will work just fine?
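On the permissions question: rsync over SSH sidesteps the NFS/SMB mixing entirely, since it runs as a single Unix user on each end and never touches the Windows-side ACLs. A sketch of what a TrueNAS Rsync Task (or a cron job on the Linux box) effectively runs - the host and path names are made up:

```shell
# Pull files from the remote server into the dataset as the "fletcher"
# user, preserving times and permissions:
rsync -avz \
    fletcher@remote.example.com:/srv/shared/ \
    /mnt/tank/fletcher/synced/
```

Add `--delete` only if the remote side should be authoritative, since it removes local files that no longer exist remotely.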

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
This is intriguing.

https://youtu.be/E_an5heI1BU

Cantide
Jun 13, 2001
Pillbug

I was for the first 30 seconds of the video then I stopped and decided to look for the price:
https://www.newegg.com/thinkstation...&quicklink=true

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

Cantide posted:

I was for the first 30 seconds of the video then I stopped and decided to look for the price:
https://www.newegg.com/thinkstation...&quicklink=true

Lol, this is the tagline for every ServeTheHome video

Thanks Ants
May 21, 2004

#essereFerrari


"Serve the Home! For all your domestic requirements! Here's a Fibre Channel SAN!"


CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Thanks Ants posted:

"Serve the Home! For all your domestic requirements! Here's a Fibre Channel SAN!"

It's the way of the future!
