Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Well, that card's mini-PCIe, which won't go into the ASRock board I linked even though, like the M.2 WiFi slot, it only uses a single PCIe lane. If you can find an M.2 2230 SATA controller then you'll be in business, but when I searched, the only ones I saw were 60+mm.

Eletriarnation fucked around with this message at 06:49 on Mar 28, 2023

Corin Tucker's Stalker
May 27, 2001


One bullet. One gun. Six Chambers. These are my friends.
After lurking this thread for a while, I decided to take my old PC components and try setting up Ubuntu Server for the first time. This is just for a few SMB shares, SABnzbd/Sonarr/Radarr, and Jellyfin, so nothing too complex.

The install was easy. I set up SSH and was able to do stuff from my MacBook's Terminal, which felt surprisingly rewarding. Today I'm going to dig into basic security stuff and remove the video card to make sure it works as a true headless setup.

I just thought I'd check in and say this thread is a good resource and encouraging for a beginner.

ChineseBuffet
Mar 7, 2003

Kangra posted:

Except I can't tell for certain what the boot devices are, since I'm stuck with only the tiny IPMI 'preview' screen and no actual console. I also tried using this, but it's kind of a rabbit hole of new things to get working.

What version of the IPMI firmware are you on? That generation of boards has an HTML5 KVM that just works, in addition to the Java one that doesn't, but it's possible that it's not there in the older firmwares. The good news is that updating is very easy and can be done from the web UI.

Motronic
Nov 6, 2009

Corin Tucker's Stalker posted:

After lurking this thread for a while, I decided to take my old PC components and try setting up Ubuntu Server for the first time. This is just for a few SMB shares, SABnzbd/Sonarr/Radarr, and Jellyfin, so nothing too complex.

The install was easy. I set up SSH and was able to do stuff from my MacBook's Terminal, which felt surprisingly rewarding. Today I'm going to dig into basic security stuff and remove the video card to make sure it works as a true headless setup.

I just thought I'd check in and say this thread is a good resource and encouraging for a beginner.

Nice.

Grab iTerm - it's a much nicer client to SSH from.

And what video card? You may be able to use it for transcoding so you might want to leave it in.

wolrah
May 8, 2006
what?

ChineseBuffet posted:

What version of the IPMI firmware are you on? That generation of boards has an HTML5 KVM that just works, in addition to the Java one that doesn't, but it's possible that it's not there in the older firmwares. The good news is that updating is very easy and can be done from the web UI.

Can confirm, I have a bunch of A1SRi-2558F boards out in the world which use the same AST2400 BMC, and the most recent major revision of the IPMI firmware adds an HTML5 iKVM option alongside the Java one (it also stops it prompting that your Java is out of date every time you open it).

Supermicro seems to have broken the IPMI firmware links on the product page, but you can go here and enter the first few characters of the model in the search box to get it: https://www.supermicro.com/support/resources/bios_ipmi.php?type=BMC

Corin Tucker's Stalker
May 27, 2001


One bullet. One gun. Six Chambers. These are my friends.

Motronic posted:

Nice.

Grab iTerm - it's a much nicer client to SSH from.

And what video card? You may be able to use it for transcoding so you might want to leave it in.
Thanks, I'll check that out.

The video card is a 6650 XT, which seems like overkill so I planned to sell it. I built the PC last year only to realize I'm playing everything on the Steam Deck.

Kangra
May 7, 2012

Thanks for the help! I was able to install the firmware and IPMI is working now. It appears to be stable (I swapped the SATA cable to the SSD and had to disable the watchdog timer in the BIOS) and is running okay for now. I will probably keep it around for a few months until I can update the system.


Eletriarnation posted:

Well, that card's mini-PCIe, which won't go into the ASRock board I linked even though, like the M.2 WiFi slot, it only uses a single PCIe lane. If you can find an M.2 2230 SATA controller then you'll be in business, but when I searched, the only ones I saw were 60+mm.

Something like this should work, right? The case does have a few slots in the back, I found, and if I had to I could probably even remove the bracket on the card and just let it sit there, since nothing will be plugged into or out of it once it's set up.

e: I just realized I mistakenly said this was a 1U case. It's 2U, sorry for any confusion this might have caused. It can actually hold low-profile PCI cards.

Kangra fucked around with this message at 06:03 on Mar 29, 2023

Tiny Timbs
Sep 6, 2008

Question about basic NAS stuff with UnRAID:

Is there a specific way I should set up a 1 TB SSD cache to work with a 17 TB array used primarily as a media server? Right now I have it set to move files over to the array every 8 hours, but that's all I've done so far.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Tiny Timbs posted:

Question about basic NAS stuff with UnRAID:

Is there a specific way I should set up a 1 TB SSD cache to work with a 17 TB array used primarily as a media server? Right now I have it set to move files over to the array every 8 hours, but that's all I've done so far.

Set any VM or Docker shares to "Prefer" and run them strictly off the SSD. If you set the share to Prefer, stop the Docker/VM services, and then run the mover, it will move them off the array and onto the SSD. Do the same for appdata.

Make sure you're backing up your appdata and VMs!

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.
I would run it every 24 hours, at like 4am; it does have a noticeable impact on the system. With 1TB I doubt it'll fill up that fast under normal usage.

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

Tiny Timbs posted:

Question about basic NAS stuff with UnRAID:

Is there a specific way I should set up a 1 TB SSD cache to work with a 17 TB array used primarily as a media server? Right now I have it set to move files over to the array every 8 hours, but that's all I've done so far.

unless you’re moving an absurd amount of files you should be able to go way longer than every 8 hours. I have a 1tb cache myself and run the mover once a month; occasionally I’ll get the nearing-full warning towards the last week, but I can always just run it manually when that happens.

Corb3t
Jun 7, 2003

I automatically move data off my 1 TB cache weekly.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Shouldn't the cache automatically be populated with frequently used files and purged of files that are unused?

CopperHound
Feb 14, 2012

Cache is a misnomer. It is just another mirror or single drive that people usually use SSDs for.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Nitrousoxide posted:

Shouldn't the cache automatically be populated with frequently used files and purged of files that are unused?

Kind of. It’s more for staging and fast storage.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

Kangra posted:

Thanks for the help! I was able to install the firmware and IPMI is working now. It appears to be stable (I swapped the SATA cable to the SSD and had to disable the watchdog timer in the BIOS) and is running okay for now. I will probably keep it around for a few months until I can update the system.

Something like this should work, right? The case does have a few slots in the back, I found, and if I had to I could probably even remove the bracket on the card and just let it sit there, since nothing will be plugged into or out of it once it's set up.

e: I just realized I mistakenly said this was a 1U case. It's 2U, sorry for any confusion this might have caused. It can actually hold low-profile PCI cards.

Oh! Yeah, if it's 2U that changes rather a lot - I suspect you could keep a normal desktop processor cool with a 2U cooler without it being obtrusive. In fact, my HTPC is a 2U desktop with a "65W" i5-4440, an Intel stock cooler, and a couple 80mm Noctua fans and I never hear it at all. I figured you just had one of the 1U cases that uses a little riser to provide a sideways card slot.

Regardless, that card should be fine. You might be able to get a cheaper one too if you look around, I bought this just a couple weeks ago: https://www.amazon.com/10Gtek-Profile-Bracket-Controller-Expansion/dp/B09Y1NRHX3/?th=1

Eletriarnation fucked around with this message at 14:03 on Mar 30, 2023

Syenite
Jun 21, 2011
Grimey Drawer
Are there any particular multiple-slot external drive bays folks recommend? I don't necessarily need RAID support, or a full-fledged NAS setup, just something I can plug a bunch of 2.5/3.5 drives into without it being a pain to fiddle with or slow to read/write.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
When you say "external", do you mean via USB, or just external bays in a desktop case? If it's the latter, I have this 2x 5.25" to 3x 3.5" enclosure in my NAS: https://www.amazon.com/gp/product/B07YQKHL8N/

I've been using it for my torrent mirror for a year and change and haven't noticed any difference from having the drives plugged straight into the controller from a data perspective. The fan is not too loud but can be easily replaced if you want. I actually just popped in another disk earlier today because one of the drives is starting to go after >7 years spinning, so I can also vouch for the hotswap working fine.

Syenite
Jun 21, 2011
Grimey Drawer

Eletriarnation posted:

When you say "external" you mean via USB, or just using external bays in a desktop case? If it's the second, I have this 2x 5.25" to 3x 3.5" enclosure in my NAS: https://www.amazon.com/gp/product/B07YQKHL8N/

I've been using it for my torrent mirror for a year and change and haven't noticed any difference from having the drives plugged straight into the controller from a data perspective. The fan is not too loud but can be easily replaced if you want. I actually just popped in another disk earlier today because one of the drives is starting to go after >7 years spinning, so I can also vouch for the hotswap working fine.

A standalone enclosure would be best. I do have an older (small) case I could probably also repurpose into an actual NAS... but I don't particularly need those extra features in my apartment, haha.

E: JBOD DAS is apparently the correct term for what I'm looking for.

Syenite fucked around with this message at 10:28 on Mar 31, 2023

Computer viking
May 30, 2011
Now with less breakage.

Just yesterday I tested a QNAP 8-disk SAS enclosure with the bundled controller card, and it appears to be a bog-standard device despite their claims that it only works with their hardware. I put their cute little SAS controller card in a desktop PC and a few SATA drives in the box, and they just showed up and worked. At least in the BIOS and in Linux; I haven't tested the Windows support yet.

The one downside is that the controller card is intended for one of their boxes and only comes with a half-height bracket; I tested it with the bracket removed entirely, but that's not a long-term solution.

Still, I think I've more or less solved my long-standing "where can I get a quiet desktop JBOD SAS box" question. I'll find the model number when I get to work today.

Computer viking fucked around with this message at 11:09 on Mar 31, 2023

Thanks Ants
May 21, 2004

#essereFerrari


Is it this?

https://www.qnap.com/en-uk/product/tl-d400s

Computer viking
May 30, 2011
Now with less breakage.

Oh, it's SATA only? Good thing I bought SATA drives, I guess. (I assume it's effectively eSATA, and just uses those cables to bundle a bunch of SATA lanes from a fairly normal SATA controller?)

I got the big brother, the 800:




Still, it does fill a niche. :)

E: I bought it to expand a QNAP NAS and didn't really look at it beyond "yeah, this is compatible" at the time. When it arrived I took the opportunity to poke at it before putting it into use.

Computer viking fucked around with this message at 12:59 on Mar 31, 2023

Beve Stuscemi
Jun 6, 2001




Is there a NAS distribution that's built around disk speed? I currently use unraid, which tends to store entire files on a single disk, regardless of size. I have a 10GB link from it to my video editing PC, but I feel like I can't really take advantage of it since everything is streaming off one disk. The unraid box is currently all spinning disks, but I am looking to rebuild with SATA SSDs very soon.

Is there a distro that's built more for fast file movement?

BlankSystemDaemon
Mar 13, 2009



A 100GB link? Nice.

The real answer is that you're always going to be limited by either smbd or nfsd.
Strictly speaking, nfsd scales better, but at the speeds you're talking about, any appliance software should do fine provided you aren't expecting it to run off an underpowered CPU with a tiny amount of memory.

EDIT: Fast file movement also depends heavily on the files you're moving.
Since you're talking about video editing, I'm assuming it's a lot of big files involving a lot of sequential access, with bouts of random I/O when you're scrubbing the timeline of your non-linear video editing software.
In theory, you can even build an array of spinning rust that's fast enough for that, but it's a lot easier to achieve with SSDs - just be aware that SATA SSDs still only use AHCI, which means it's NCQ with one queue.

BlankSystemDaemon fucked around with this message at 16:15 on Mar 31, 2023

Motronic
Nov 6, 2009

BlankSystemDaemon posted:

The real answer is that you're always going to be limited by either smbd or nfsd.

I think you completely missed the mark here, especially as the thread ZFS evangelist.

The actual answer is in fact to use a real filesystem that doesn't store each file on one disk, like ZFS. So TrueNAS fits that bill as well as most other actual NAS distros.

This isn't a network or protocol bottleneck. It's disk I/O because everything is going to/from a single piece of spinning rust. A pool of 4+ spinning rust will make an enormous difference.

Wibla
Feb 16, 2011

Slap a 2TB NVMe drive in your unraid box and use that as your scratch drive.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Motronic posted:

I think you completely missed the mark here, especially as the thread ZFS evangelist.

The actual answer is in fact to use a real filesystem that doesn't store each file on one disk, like ZFS. So TrueNAS fits that bill as well as most other actual NAS distros.

This isn't a network or protocol bottleneck. It's disk I/O because everything is going to/from a single piece of spinning rust. A pool of 4+ spinning rust will make an enormous difference.

Also re: ZFS. It will automatically build up a read cache (the ARC) of frequently accessed data and hold it in RAM, which vastly speeds up recall. You can also add explicit SSD cache drives (L2ARC) to a pool to further expand the fast-access space.

BlankSystemDaemon
Mar 13, 2009



Motronic posted:

I think you completely missed the mark here, especially as the thread ZFS evangelist.

The actual answer is in fact to use a real filesystem that doesn't store each file on one disk, like ZFS. So TrueNAS fits that bill as well as most other actual NAS distros.

This isn't a network or protocol bottleneck. It's disk I/O because everything is going to/from a single piece of spinning rust. A pool of 4+ spinning rust will make an enormous difference.
The reason to use ZFS is if you're looking to ensure your data isn't subject to silent corruption.
ZFS can go fast if you use the right hardware (as Wibla mentioned, a pair of NVMe disks will work wonders), but that's true for any filesystem.

Even then, you're still going to be limited by nfsd and smbd; they're the daemons that will be using the most CPU time.

Nitrousoxide posted:

Also re: ZFS. It will automatically build up a read cache (the ARC) of frequently accessed data and hold it in RAM, which vastly speeds up recall. You can also add explicit SSD cache drives (L2ARC) to a pool to further expand the fast-access space.
It depends on the data-set and how much memory can fit into the system.
ARC uses the MFU+MRU+shadowlists algorithm whereas L2ARC is simple FIFO, and each LBA on the L2ARC device needs to be mapped in memory, so it reduces the amount of memory that's available for ARC.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
L2ARC maps directly to ZFS filesystem blocks. It's 70 bytes of header per block.

512MB of RAM lets you map either 3.6GB of 512-byte blocks, 29GB of 4KB blocks, 117GB of 16KB blocks, or 937GB of 128KB blocks. The latter is ZFS's default record size.

My L2ARC stats, which is caching just metadata of ZFS filesystems and two ZVOLs at 16KB block size in their entirety, hosting a Steam library and MS Flight Simulator respectively:

code:
L2ARC size (adaptive):                                         259.7 GiB
        Compressed:                                    82.1 %  213.1 GiB
        Header size:                                    0.3 %  704.7 MiB
        MFU allocated size:                             9.2 %   19.6 GiB
        MRU allocated size:                            90.4 %  192.6 GiB
        Prefetch allocated size:                        0.4 %  893.9 MiB
        Data (buffer content) allocated size:          99.0 %  210.9 GiB
        Metadata (buffer content) allocated size:       1.0 %    2.2 GiB
I can live with losing 700MB of RAM to keep like 250GB of data warm.

--edit:

BlankSystemDaemon posted:

The real answer is that you're always going to be limited by either smbd or nfsd.
Maybe if these dipshits at Samba would actually implement SMB Direct instead of just talking about it for 15 years (or whatever it's been), that'd be nice. It needs RDMA-capable cards at both ends, tho.

Combat Pretzel fucked around with this message at 16:46 on Mar 31, 2023

Beve Stuscemi
Jun 6, 2001




BlankSystemDaemon posted:

A 100GB link? Nice.

It's a 10GB SFP direct attach cable

Yaoi Gagarin
Feb 20, 2014

Jim Silly-Balls posted:

It's a 10GB SFP direct attach cable

Just to be extremely clear: gigabits or gigabytes?

Beve Stuscemi
Jun 6, 2001




VostokProgram posted:

Just to be extremely clear: gigabits or gigabytes?

I had to check the hardware profile, but it looks like bits

quote:

Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Yeah, that's an X520.

You can definitely get a lot more than a single drive's bandwidth with a different topology. I'm using 6x16TB in RAIDZ2 on my NAS, and I'd expect a single disk to top out around 250MB/s. When I temporarily had my desktop and NAS connected to the same switch over 10G, I was able to get about twice that with sequential reads over SMB. It probably could have gone even faster, but unfortunately the NIC/OS combo I was using on the desktop (also an X520, on Windows 11) was unsupported and locked up after about a minute of testing.

Yaoi Gagarin
Feb 20, 2014

I thought each vdev only gets the bandwidth of its slowest drive?

I was going to recommend ZFS with striped mirrors. You'll only get half the space, but if you really want to saturate the network it might be worth it.

Wibla
Feb 16, 2011

I get 600+MB/s sustained read/write with my 8x14TB RAIDZ2 array over 10GbE.

Motronic
Nov 6, 2009

BlankSystemDaemon posted:

The reason to use ZFS is if you're looking to ensure your data isn't subject to silent corruption.

I know this is your schtick, but in this case the suggested reason to use ZFS (as you very well know, but you don't want to say "oh, you were right, I missed it" so you can keep on well-ackshuallying instead) is: because the OP needs a pool of drives, not a single drive. Yes, other filesystems can ackshually do that too, but I chose ZFS as the example because of your response.

Wibla
Feb 16, 2011

Can we not have this stupid slapfight again?

Also: I routinely see >500MB/s over SMB via 10GbE; it's not as slow as some people claim.

Quoting OP again because I think we've missed a few things:

Jim Silly-Balls posted:

Is there a NAS distribution that's built around disk speed? I currently use unraid, which tends to store entire files on a single disk, regardless of size. I have a 10GB link from it to my video editing PC, but I feel like I can't really take advantage of it since everything is streaming off one disk. The unraid box is currently all spinning disks, but I am looking to rebuild with SATA SSDs very soon.

Is there a distro that's built more for fast file movement?

TrueNAS Scale or Core will do what you need; Unraid is literally the opposite of what you need in this instance, for the reasons you've already stated. While you're figuring out how to get from what you have today to a fully functioning NAS, you can add an NVMe drive to your unraid server to use as a scratch drive for editing, but that means you need to manually copy files to and from that drive.

I would NOT buy a bunch of SATA SSDs in 2023. At least not without a clearly defined goal :haw:

Please tell us what your goals for a new NAS are, and we'll help you get there, hopefully without too much bickering.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

VostokProgram posted:

I thought each vdev only gets the bandwidth of its slowest drive?

Maybe in a mirror, but as far as I understand it, the performance characteristics of distributed-parity topologies like RAID-5/6/Z* are closer to striped arrays. You of course have a lot more CPU overhead, and at any given time some subset of your disks is reading/writing parity blocks that don't contribute to your final application-available bandwidth. Still, modern CPUs are fast, so that's not much of a bottleneck with HDDs, and you can absolutely get very fast numbers for sustained sequential transfers.

Beve Stuscemi
Jun 6, 2001




Wibla posted:

TrueNAS Scale or Core will do what you need; Unraid is literally the opposite of what you need in this instance, for the reasons you've already stated. While you're figuring out how to get from what you have today to a fully functioning NAS, you can add an NVMe drive to your unraid server to use as a scratch drive for editing, but that means you need to manually copy files to and from that drive.

I would NOT buy a bunch of SATA SSDs in 2023. At least not without a clearly defined goal :haw:

Please tell us what your goals for a new NAS are, and we'll help you get there, hopefully without too much bickering.

I do have a 1TB SATA SSD cache in my unraid, but again, it's the limitations of a single disk that come into play.

I am looking to store a modest amount of data (currently about 5.5TB) in a redundant way so that I don't have a single point of failure (this is why it's not all sitting on a single 10TB disk, despite that being by far the easiest option). I would also like to take advantage of the 10Gbit link between the server and my video editing PC; it would be nice to store everything on the NAS and edit directly from there. I'm mostly dealing with 4K and some 6K footage. Other than that it's simple file storage accessible over gigabit ethernet or WiFi, one or two VMs, and a Plex docker that just occasionally serves my local LAN.

I have access to a bunch of ex-datacenter stuff (which is where I got the 10GB SFP cards for the unraid box and my video editing box). Everything currently runs on a Dell PowerEdge T420 with 8 drive bays filled with spinning disks on a 6Gb/s PERC of some sort that has been flashed to HBA mode, 2x E5-2430 v2s, and 192GB of DDR3.

I also have a stack of 1TB SATA SSDs, which should get me 7TB usable under the current unraid layout, assuming 8 of them in use. Since I already have the SSDs, I was hoping to put them to use as an upgrade in speed, heat output, and power consumption over the spinning rust.

I also have the option of moving this all over to an HP DL380 G9 with 2x E5-2640 v3s, 128GB of DDR4, and a 12Gb/s SAS controller, but I'm guessing the 12Gb/s unit won't gain me anything without 12Gb/s drives to go with it.

Nothing I own as a candidate for a NAS can accept an NVMe drive. It's all a bit too old for that.

Beve Stuscemi fucked around with this message at 21:22 on Mar 31, 2023


Theophany
Jul 22, 2014

SUCCHIAMI IL MIO CAZZO DA DIETRO, RANA RAGAZZO



2022 FIA Formula 1 WDC
So if some rube dumbass just assembled a NAS by reusing a 3900X, a new micro-ATX board, a lmao K2200 Quadro, and 12 HGST 6TB UltraStars, with a 2.5GbE PCI card and a very AliExpress SATA card, what's the best direction for an OS?

The three I'm considering right now are Unraid, TrueNAS, and Xpenology. I realise the latter is basically a shittier version of a Hackintosh, but having run a number of Synology NASes in the past, I do quite like the simplicity of their interface.

My sensible brain is saying run with Unraid but I'm keen to be guided on which is going to be the least hassle.

Use case is a two-way mirror of three Synology shares and running various *arrs, and perhaps a Windows VM for Blue Iris in the future.

  • Reply