GreatGreen
Jul 3, 2007
That's not what gaslighting means you hyperbolic dipshit.

DNova posted:

How about you describe what you're looking to do exactly (and/or what you mean by connected directly to your PC) and we go from there

Sorry man, sure. Honestly I'm so new at this stuff I'm still figuring out what's possible so I can know what's reasonable to expect/ask for and what's not.

Basically, I'm about to build two PCs: a Windows 7 workstation/gaming PC and a RAID box to use as a server.


I'd like to build a RAID box because I'd like a single, consolidated pool of storage I can use to keep all my media and otherwise important files in one place, as opposed to the hodgepodge of randomly sized drives I use now with zero redundancy. I'd like to use a RAIDZ file system because I'm okay with only one redundant disk, meaning I don't want to spend the extra money on a RAIDZ2 configuration, and I like the fact that it is expandable for a reasonable degree of future-proofing.

I'd like the gaming PC to see the RAIDZ array as just another drive on the computer, which Google has just taught me is what iSCSI is for, and I'd also like to set up a public storage pool that anybody on the house network can freely use as on-the-fly storage when they need it.

FreeNAS has been pretty great so far, with the exception of random errors that seem like bugs in the latest build. At first I was getting weird PC mismatch errors, which turned out to be DNS errors I was able to fix by disabling the CIFS share's DNS lookup capabilities, along with a few other random errors. If FreeNAS really is super stable and it's just me not knowing what I'm doing, then FreeNAS should be perfect for what I'm asking about, especially if I can set up an iSCSI drive correctly. I just thought I'd ask what else is out there in case I'm totally off the mark on the right tool for the job.

GreatGreen fucked around with this message at 23:04 on Mar 23, 2015


sleepy gary
Jan 11, 2006

GreatGreen posted:

Sorry man, sure. Honestly I'm so new I'm still figuring out what's possible so I can know what's reasonable to expect/ask for.

Basically, I'm about to build two PCs: a Windows 7 workstation/gaming PC and a RAID box.


I'd like to build a RAID box because I'd like a single, consolidated pool of storage I can use to keep all my media and otherwise important files in one place, as opposed to the hodgepodge of randomly sized drives I use now with zero redundancy. I'd like to use a RAIDZ file system because I don't want to spend the extra money on a RAIDZ2 configuration and I like the fact that it is expandable for future-proofing.

I'd like the gaming PC to see the RAIDZ array as just another drive on the computer, which Google has just taught me is what iSCSI is for, and I'd like to set up a public storage pool as well that anybody on the house network can freely use as on-the-fly storage when they need it.

FreeNAS has been pretty great so far, with the exception of random errors that seem like bugs in the latest build. At first I was getting weird PC mismatch errors, which turned out to be DNS errors I was able to fix by disabling the CIFS share's DNS lookup capabilities, along with a few other random errors. If FreeNAS really is super stable and it's just me not knowing what I'm doing, then FreeNAS should be perfect for what I'm asking about, especially if I can set up an iSCSI drive correctly. I just thought I'd ask what else is out there in case I'm totally off the mark on the right tool for the job.

Ok, you probably don't want to use iSCSI based on what you're saying. You can set up a private CIFS share for yourself that you mount as a network drive in Windows and then it shows up with a drive letter in My Computer and everywhere else like a regular drive would. You can set up another public share for guests or roommates or whatever that you (and they) can all mount and share together, also as another drive letter in Windows. It will also work in Mac OSX and Linux, if that matters.
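
For what it's worth, mapping those shares is a one-liner per drive on the Windows side. A minimal sketch (the \\freenas server name, share names, and drive letters are made-up placeholders; Explorer's "Map network drive" dialog does the same thing):

```python
# Minimal sketch: map FreeNAS CIFS shares to Windows drive letters using the
# built-in "net use" command. Server/share names and letters are placeholders.
import subprocess

def map_share(letter, unc_path, persistent=True):
    """Mount an SMB/CIFS share as a drive letter on Windows."""
    subprocess.run(
        ["net", "use", f"{letter}:", unc_path,
         "/persistent:yes" if persistent else "/persistent:no"],
        check=True)

if __name__ == "__main__":
    map_share("Z", r"\\freenas\private")  # your own private share
    map_share("Y", r"\\freenas\public")   # the shared/public pool
```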

But you can definitely accomplish what you want, and it is a much better storage practice than having data scattered around on various computers.

GreatGreen
Jul 3, 2007
That's not what gaslighting means you hyperbolic dipshit.
Cool, thanks for the reply.

Are there any downsides to mounting a share as an iSCSI drive to Windows as opposed to just mapping a network drive?

sleepy gary
Jan 11, 2006

GreatGreen posted:

Cool, thanks for the reply.

Are there any downsides to mounting a share as an iSCSI drive to Windows as opposed to just mapping a network drive?

Downsides are that you can't easily share that volume if you want to be able to, and I think there is also a performance hit for using iSCSI with ZFS but someone else will have to talk about that. I'm not sure how flexible it is about resizing it after creation, either. I have never used iSCSI outside of just playing around so hopefully someone else will chime in on that topic.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
iSCSI is just like hooking up a hard drive to your computer, except over the network. So nobody else would have access to that, unless you then shared it out from your desktop.

I suppose you could use iSCSI for your own drive and then CIFS for the stuff you share with others, but the question then is "why?" when you can access all the data with CIFS, not add another failure point, and not have things be a general pain in the rear end.

E: ZFS and iSCSI are very flexible. ZFS creates zvols, which are basically giant files, and you can easily change the size of the zvol and then resize the drive in Windows. But again, it's all a lot of extra work compared to CIFS for no great benefit.
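
A hedged sketch of that zvol create/resize workflow on the ZFS side (the pool name "tank" and zvol name "gamedrive" are made-up examples, not anything from this thread):

```python
# Sketch of the zvol resize workflow described above, run on the FreeNAS/ZFS
# box. Names are placeholders.
import subprocess

def zfs(*args):
    subprocess.run(["zfs", *args], check=True)

# Create a 500 GiB zvol to export as an iSCSI target (-s = sparse/thin).
zfs("create", "-s", "-V", "500G", "tank/gamedrive")

# Grow it later; Windows then sees unallocated space on the iSCSI disk and
# the partition can be extended in Disk Management (or diskpart "extend").
zfs("set", "volsize=750G", "tank/gamedrive")
```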

GreatGreen
Jul 3, 2007
That's not what gaslighting means you hyperbolic dipshit.
Thanks guys.

Would setting up the drive as iSCSI yield worse performance as opposed to mapping it like a normal network drive?

Edit: whoops, didn't see the rest of those replies. Thanks again!

GreatGreen fucked around with this message at 00:18 on Mar 24, 2015

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

GreatGreen posted:

and I like the fact that it is expandable for a reasonable degree of future-proofing.

What do you mean by this? ZFS isn't exactly the most expansion-friendly system...

GreatGreen
Jul 3, 2007
That's not what gaslighting means you hyperbolic dipshit.

Thermopyle posted:

What do you mean by this? ZFS isn't exactly the most expansion-friendly system...

I was playing around with changing virtual hard drives with FreeNAS in a virtual machine.

I started with three 20GB drives in RAIDZ. Basically 40GB usable space.

Changing out a single 20GB drive for a 40GB drive and rebuilding the array yielded no additional space. Swapping the 2nd disk for another 40GB disk also didn't increase the available space.

However, once the third drive had been replaced with a 40GB drive, I suddenly had 80GB of free space available. The entire time, the few randomly named files and folders I had in the drive for testing stayed available while the server was powered on. I had to reboot the VM to add new virtual drives.
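
That behaviour matches the usual rule of thumb: a RAIDZ1 vdev gives you roughly (number of disks - 1) x the smallest disk, so capacity only jumps once the last small disk has been swapped out. A quick back-of-the-envelope check:

```python
# Rough model of RAIDZ1 usable capacity: (n - 1) x the smallest disk in the vdev.
def raidz1_usable_gb(disks_gb):
    return (len(disks_gb) - 1) * min(disks_gb)

for disks in ([20, 20, 20], [40, 20, 20], [40, 40, 20], [40, 40, 40]):
    print(disks, "->", raidz1_usable_gb(disks), "GB usable")
# [20, 20, 20] -> 40 GB usable
# [40, 20, 20] -> 40 GB usable
# [40, 40, 20] -> 40 GB usable
# [40, 40, 40] -> 80 GB usable
```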

sellouts
Apr 23, 2003

Do not worry about iscsi to share your media.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

GreatGreen posted:

I was playing around with changing virtual hard drives with FreeNAS in a virtual machine.

I started with three 20GB drives in RAIDZ. Basically 40GB usable space.

Changing out a single 20GB drive for a 40GB drive and rebuilding the array yielded no additional space. Swapping the 2nd disk for another 40GB disk also didn't increase the available space.

However, once the third drive had been replaced with a 40GB drive, I suddenly had 80GB of free space available. The entire time, the few randomly named files and folders I had in the drive for testing stayed available while the server was powered on. I had to reboot the VM to add new virtual drives.

Ok, yes, ZFS does that well.

However, that can feel pretty limiting when, like a lot of us, you have a large pool and you've got to spend $500 or $1,000 to get more space by upgrading 5 or 10 hard drives.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
If you want to run games off your NAS, iSCSI is the way to go, because some games don't dig CIFS/Samba. Same goes for some apps; Lightroom, for instance, balks when trying to store a catalog on a network drive.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
People run games off their NAS?

GreatGreen
Jul 3, 2007
That's not what gaslighting means you hyperbolic dipshit.

Don Lapre posted:

People run games off their NAS?

Well... Read speeds on a typical RAID 5 array can easily saturate a gigabit connection, or 125 MB/s, which is about as fast as most large platter drives read these days. If you configured an iSCSI drive for playing games on a NAS, I'd bet you probably wouldn't see much difference in load times between that and a locally stored hard drive.

Actually, you'd probably be hard pressed to notice the difference between playing games from a regular mapped network share and games installed on the local hard drive either.
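
The arithmetic behind that 125 MB/s figure, for anyone checking (real-world CIFS/iSCSI throughput lands a bit lower because of protocol overhead):

```python
# Gigabit Ethernet is 1,000,000,000 bits per second; 8 bits per byte.
link_bits_per_sec = 1_000_000_000
print(link_bits_per_sec / 8 / 1_000_000, "MB/s")   # 125.0 MB/s
```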

GreatGreen fucked around with this message at 12:58 on Mar 24, 2015

Mr Shiny Pants
Nov 12, 2012

Don Lapre posted:

People run games off their NAS?

My Steam library ran off my NAS using iSCSI fine. With an SSD L2ARC cache, performance was pretty good.

Seek times are worse though compared to a local SSD.

BlankSystemDaemon
Mar 13, 2009




Don Lapre posted:

People run games off their NAS?
For my next setup, I'm seriously considering giving up any disks in my workstation and just booting via PXE with iSCSI on a zvol with lz4 compression (on a pool consisting of 8 disks in RAIDZ2, one cache SSD, and two log SSDs), and buying two PCI Express QSFP NICs for a direct connection between my workstation and my server.

Nam Taf
Jun 25, 2005

I am Fat Man, hear me roar!

D. Ebdrup posted:

For my next setup, I'm seriously considering giving up any disks in my workstation and just booting via PXE with iSCSI on a zvol with lz4 compression (on a pool consisting of 8 disks in RAIDZ2, one cache SSD, and two log SSDs), and buying two PCI Express QSFP NICs for a direct connection between my workstation and my server.

...why? What tangible advantage does it give over just, you know, running a normal PC?

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
Literally all your data having redundancy with no added cost?

Mr Shiny Pants
Nov 12, 2012

D. Ebdrup posted:

For my next setup, I'm seriously considering giving up any disks in my workstation and just booting via PXE with iSCSI on a zvol with lz4 compression (on a pool consisting of 8 disks in RAIDZ2, one cache SSD, and two log SSDs), and buying two PCI Express QSFP NICs for a direct connection between my workstation and my server.

Nothing beats a local SSD, except maybe InfiniBand because of RDMA; otherwise you will always have millisecond response times, which is a step down from SSD access times. It is nice though. Snapshot your workstation? Yes please.

GreatGreen
Jul 3, 2007
That's not what gaslighting means you hyperbolic dipshit.
Yeah, that idea is kind of blowing my mind right now. It would be a very easy way to take snapshots of the state of any one of your drives, or all of them, at any given moment, and revert at the drop of a hat if you needed to.

Also, upgrading to a better CPU, RAM, and video card later on would basically be just a matter of unplugging an old machine and plugging a newer one in its place, then installing drivers.

Mr Shiny Pants is right though, the fastest setup would definitely be a local SSD for your OS drive while everything else lives on the server. I guess the question is whether or not you want to be able to take snapshots of your OS drive.
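
The server-side part of that snapshot idea is only a couple of zfs commands. A hedged sketch, assuming the workstation's C: lives on a zvol named tank/workstation (a made-up name):

```python
# Sketch of snapshotting / rolling back the zvol backing an iSCSI-booted C:.
# Shut the workstation down (or at least quiesce it) before rolling back.
import subprocess
from datetime import datetime

def zfs(*args):
    subprocess.run(["zfs", *args], check=True)

snap = f"tank/workstation@{datetime.now():%Y%m%d-%H%M}"
zfs("snapshot", snap)        # point-in-time snapshot of the boot volume

# Later, to revert (-r discards any snapshots newer than the one given):
# zfs("rollback", "-r", snap)
```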

GreatGreen fucked around with this message at 14:26 on Mar 24, 2015

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

G-Prime posted:

Literally all your data having redundancy with no added cost?
Snapshotting your Windows installation would be an awesome option. I'd be going for something like this, if it weren't expensive as gently caress to go 10GBit.

GreatGreen posted:

Mr Shiny Pants is right though, the fastest setup would definitely be a local SSD for your OS drive while everything else lives on the server.
That doesn't work as smoothly, tho. While you may be able to kludge your NAS into the MyWhatsit folders, that's about it. Pushing ProgramFiles and AppData to the NAS doesn't have much utility, since it becomes useless as soon as you freshly reinstall the OS locally.

Combat Pretzel fucked around with this message at 14:37 on Mar 24, 2015

BlankSystemDaemon
Mar 13, 2009




Mr Shiny Pants posted:

Nothing beats a local SSD, except maybe InfiniBand because of RDMA; otherwise you will always have millisecond response times, which is a step down from SSD access times. It is nice though. Snapshot your workstation? Yes please.
That's where I'm figuring that system files which get accessed often will end up on the L2ARC and ZIL devices to alleviate that. SFP+ and QSFP get about an order of magnitude lower latency compared to 10GBase-T.

Alternatively, I won't upgrade storage before NVMe is available for PCI-express SSDs and motherboards. My current setup is SSD in all my machines except my server, and everything but the OS and heavy-duty programs/games on cifs/afp/nfs shares depending on OS.

BlankSystemDaemon fucked around with this message at 14:49 on Mar 24, 2015

GreatGreen
Jul 3, 2007
That's not what gaslighting means you hyperbolic dipshit.

Combat Pretzel posted:

That doesn't work as smoothly, tho. While you may be able to kludge your NAS into the MyWhatsit folders, that's about it. Pushing ProgramFiles and AppData to the NAS doesn't have much utility, since it becomes useless as soon as you freshly reinstall the OS locally.

The utility of having your OS drive exist as an iSCSI file on your NAS would be redundancy and the ability to recall snapshots for rollbacks if you need it.

You wouldn't just keep Program Files and AppData on the NAS; the entire C: drive would literally exist as an iSCSI file on the NAS, which the computer would boot from via PXE when you powered it on. The user experience would be functionally identical to having a local OS.

GreatGreen fucked around with this message at 15:20 on Mar 24, 2015

Hughlander
May 11, 2005

Is it possible to have FreeNAS serve an iSCSI device to a Windows machine that formats it as NTFS, and then also have FreeNAS mount it read-only?

I want to have Calibre manage ebooks on a Win7 machine that may or may not be on, while at the same time having access to said books over a web server in a FreeNAS jail. Calibre expects direct drive access, and mounting the drive via NFS causes renames to fail.

Mr Shiny Pants
Nov 12, 2012

D. Ebdrup posted:

That's where I'm figuring that system files which get accessed often will end up on the L2ARC and ZIL devices to alleviate that. SFP+ and QSFP get about an order of magnitude lower latency compared to 10GBase-T.

Alternatively, I won't upgrade storage before NVMe is available for PCI-express SSDs and motherboards. My current setup is SSD in all my machines except my server, and everything but the OS and heavy-duty programs/games on cifs/afp/nfs shares depending on OS.

If you do go this route, I would be interested. Especially about the NICs that boot iSCSI natively.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Nam Taf posted:

...why? What tangible advantage does it give over just, you know, running a normal PC?
If I do iSCSI / FCoE between multiple machines when it's across a bunch of VMs that are local to each other physically, there should be almost no hit to latency because it's all done in software through the hypervisor (hopefully with zero-copy). Using the vmxnet3 NIC should basically be zero-copy with just a small bit of TCP/IP overhead, and if using FCoE not even that much (iSCSI is pretty objectively slower than FCoE). iSCSI can be accelerated by the NIC a fair bit, but I don't think the vmxnet3 driver does this yet. This approach allows a clear separation of roles through network-based hosts, and you can treat your hypervisor like a FreeBSD jail host. The downside of jails is typically that, because they share a kernel, shutting down / upgrading the host means all the jails go down. Being able to keep everything as independent as possible is a Good Idea when you tinker with crap constantly like many of us do.

I'm currently trying to set up a VM on an ESXi server as a PXE boot server instead of even bothering with USB booting and ISOs because, gosh, it's what I'm used to doing in a professional environment with a lot of servers. PXE booting your machines lets you have a lot of control over rolling out updates to thin clients, and is generally only something I'd even think of if I were provisioning machines fairly frequently (I do, hence spending money to resurrect my old LGA1155 Xeon server as a fat server).

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

GreatGreen posted:

The utility of having your OS drive exist as an iSCSI file on your NAS would be redundancy and the ability to recall snapshots for rollbacks if you need it.
You said local SSD for the OS.

--edit:
Goddamnit, researching the whole topic a bit, I came across a blog post from 2011(!) where a guy shoved 7GBit/s over some cheap rear end Infiniband setup. 27€ per adapter, 22€ for a 3m cable. I need to find out now what cheap poo poo works with FreeNAS and Windows 8+ and what a 5-6m cable costs. :siren:

Combat Pretzel fucked around with this message at 18:09 on Mar 24, 2015

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

Combat Pretzel posted:

Snapshotting your Windows installation would be an awesome option. I'd be going for something like this, if it weren't expensive as gently caress to go 10GBit.

That doesn't work as smoothly, tho. While you may be able to kludge your NAS into the MyWhatsit folders, that's about it. Pushing ProgramFiles and AppData to the NAS doesn't have much utility, since it becomes useless as soon as you freshly reinstall the OS locally.

You can snapshot your Windows install within Windows; can't schedule it though, I don't think.

GreatGreen
Jul 3, 2007
That's not what gaslighting means you hyperbolic dipshit.

Combat Pretzel posted:

You said local SSD for the OS.

--edit:
Goddamnit, researching the whole topic a bit, I came across a blog post from 2011(!) where a guy shoved 7GBit/s over some cheap rear end Infiniband setup. 27€ per adapter, 22€ for a 3m cable. I need to find out now what cheap poo poo works with FreeNAS and Windows 8+ and what a 5-6m cable costs. :siren:

Hah, please share your findings here as I'll totally build a similar setup if anything that fast for that cheap is actually out there.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Here's the post I referenced. He talks about Mellanox adapters:

http://www.davidhunt.ie/infiniband-at-home-10gb-networking-on-the-cheap/

Here's someone else on the Mellanox forums (courtesy Google) posting stuff about playing with their adapters:

https://community.mellanox.com/people/rimblock/blog/2013/03/28/20gbs-infiniband-for-under-us200

Researching Mellanox, their adapters seem to be able to do things like booting from iSCSI. Remains to be seen if the cheap ones actually do.

Mr Shiny Pants
Nov 12, 2012

Combat Pretzel posted:

Here's the post I referenced. He talks about Mellanox adapters:

http://www.davidhunt.ie/infiniband-at-home-10gb-networking-on-the-cheap/

Here's someone else on the Mellanox forums (courtesy Google) posting stuff about playing with their adapters:

https://community.mellanox.com/people/rimblock/blog/2013/03/28/20gbs-infiniband-for-under-us200

Researching Mellanox, their adapters seem to be able to do things like booting from iSCSI. Remains to be seen if the cheap ones actually do.

Depends. The InfiniHost III stuff is cheap, but it does not have too many features. They do 10Gbit easily though. You may also run into problems with drivers not being signed, which can be a pain on newer Windows versions. My 2012 R2 server has driver signing disabled.

My InfiniHost adapters do 800 MB/s reads and 500 MB/s writes over SRP, which is RDMA-enabled SCSI. That is megabytes.

The newer stuff is what you want: the ConnectX-2 and ConnectX-3 are nice and have drivers for 2012 R2, enabling SMB3.

The problem is that SRP is the only RDMA-enabled protocol that Solaris and Windows both support. Windows does not support iSER, which is a shame. Or you can do IPoIB, which is regular IP over an InfiniBand fabric. You don't get remote DMA, but it is still much faster than regular Ethernet.

And you need a subnet manager to get your fabric up; the OpenFabrics drivers have one, and Linux does too. Solaris's is nonexistent.

Best solution is two Linux machines with InfiniBand. You get subnet managers in the OpenFabrics drivers, and Linux supports almost all the protocols that you want (SRP and iSER).

Performance within a VM running on Hyper-V: [benchmark screenshot not reproduced here]

Mr Shiny Pants fucked around with this message at 20:23 on Mar 24, 2015

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Tip: don't let a ZFS pool completely fill up. It'll take eleven infinities to move data off of it.

GreatGreen
Jul 3, 2007
That's not what gaslighting means you hyperbolic dipshit.
Does the inability to defrag a RAIDZ array compromise its performance after a while?

BlankSystemDaemon
Mar 13, 2009




Well, too much fragmentation can kill any filesystem, simply from the platters and heads having to physically move more to access any kind of data, incurring extra access time for every single command. But you shouldn't get fragmentation on ZFS unless you do very specific things (I mentioned one of them before: downloading anything with torrents, or any process that only writes slowly in chunks).

However, according to the best practices guide, as pool utilization exceeds 80%, pools which see a lot of I/O (like mail servers or database servers) suffer severe performance degradation. For WORM pools, the performance loss doesn't really become a huge issue before 95%.
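
If you want an automated nag for that 80% mark, something along these lines works as a sketch (the threshold comes from the paragraph above; the warning logic itself is just an example):

```python
# Parse `zpool list` output and warn when any pool crosses 80% utilization.
import subprocess

def pool_capacities():
    out = subprocess.run(
        ["zpool", "list", "-H", "-o", "name,capacity"],
        check=True, capture_output=True, text=True).stdout
    for line in out.splitlines():
        name, cap = line.split("\t")
        yield name, int(cap.rstrip("%"))

for name, pct in pool_capacities():
    if pct >= 80:
        print(f"WARNING: pool {name} is {pct}% full")
```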

Mr Shiny Pants posted:

If you do go this route, I would be interested. Especially about the NICs that boot iSCSI natively.
My plans are only tentative so far, and probably two to three years in the future - but once I get around to it, I'll see if I can't remember to do a trip-report thing.

BlankSystemDaemon fucked around with this message at 23:36 on Mar 24, 2015

poxin
Nov 16, 2003

Why yes... I am full of stars!
Posted my Synology unit to SA-Mart if anyone is interested: http://forums.somethingawful.com/showthread.php?threadid=3708698

GreatGreen
Jul 3, 2007
That's not what gaslighting means you hyperbolic dipshit.
Will installing FreeNAS onto a USB drive as opposed to a regular old hard drive affect the speed of the system in any way, aside from the USB drive having a slightly slower boot time?

In other words, does the boot up process load the entire OS into the RAM, never to access the disk again until the next reboot, or not?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

GreatGreen posted:

Does the inability to defrag a RAIDZ array compromise its performance after a while?
ZFS employs read-ahead (reading adjacent blocks), prefetching (loading bigger batches of data into the ARC if it detects patterns in the I/O), and elevator-sorts pending I/Os. That partially mitigates issues from the fragmentation that's bound to happen with copy-on-write.

--edit:

D. Ebdrup posted:

My plans are only tentative so far, and probably two to three years in the future - but once I get around to it, I'll see if I can't remember to do a trip-report thing.
My current hold-up on the project is possible drama in getting Windows up and running on iSCSI. It's either trying to find a card that can act as an iSCSI HBA and pretend to be a real disk to the system, or doing VM-style fuckery to get an image up and running. I'd like the former, but I don't expect it to fit my price range, eBay or not. The latter can be bootstrapped by running a VM on an iSCSI target before you render your box diskless. But Microsoft's TechNet pages say that you can't upgrade Windows when booting from iSCSI (say, if you want to go to a new version without bootstrapping everything from scratch).

Combat Pretzel fucked around with this message at 20:12 on Mar 26, 2015

thebigcow
Jan 3, 2001

Bully!

GreatGreen posted:

In other words, does the boot up process load the entire OS into the RAM, never to access the disk again until the next reboot, or not?

The whole thing is loaded into RAM; it only writes updates and configuration changes back to disk.

SynMoo
Dec 4, 2006

What software are you guys using for multiple simultaneous drive testing on one machine?

I've got a bunch of WD Reds coming in and testing them one at a time sucks with their software.

SamDabbers
May 26, 2003



SynMoo posted:

What software are you guys using for multiple simultaneous drive testing on one machine?

I've got a bunch of WD Reds coming in and testing them one at a time sucks with their software.

The badblocks utility under Linux (sysutils/e2fsprogs in ports for those running FreeBSD/FreeNAS) and smartmontools. I run 'badblocks -svw /dev/sdx' on each new drive, then use 'smartctl -a /dev/sdx' to make sure there are no reallocated sectors or excessive errors.
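
If you'd rather kick them all off at once instead of one at a time, here's a rough sketch of running that same pass in parallel (the device names are examples only; badblocks -w is destructive, so triple-check them):

```python
# Run badblocks write-mode tests on several drives concurrently, then dump
# SMART data for each. Device names below are placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]   # e.g. the new WD Reds

def burn_in(dev):
    log = f"badblocks-{dev.split('/')[-1]}.log"
    with open(log, "w") as f:
        # -s show progress, -v verbose, -w destructive write test
        subprocess.run(["badblocks", "-svw", dev], stdout=f, stderr=f, check=True)
    # Afterwards, eyeball reallocated sectors / error counts.
    subprocess.run(["smartctl", "-a", dev], check=True)

with ThreadPoolExecutor(max_workers=len(DRIVES)) as pool:
    list(pool.map(burn_in, DRIVES))
```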


uhhhhahhhhohahhh
Oct 9, 2012
Trying to decide what NAS setup to go with without spending too much.

Don't need a ton of space; a single 4TB drive will be fine for now, and I'll have 2 backups of everything important: music and my main system image. It's mainly going to feed a Raspberry Pi 2 running OpenELEC and potentially another one or something like a WD TV Live, plus an Xbox One or a smart TV. It needs to be able to run NZBGet and NZBDrone too. Preferably something low-power and quiet too. Is one of the pre-made ones from Synology/QNAP/Asustor going to be my best bet? I'm not against building my own, but I don't think it's possible to build one as cheap and low-powered... Will something like a Synology DS215j (or QNAP/Asustor equivalent) be enough, or will something stronger be needed? (DS214?)


Also trying to put it in another room, but it's away from the router. Is it a dumb idea to put it behind something like http://www.overclockers.co.uk/showproduct.php?prodid=NW-020-DV&groupid=46&catid=1604&subcat=2879 ? It will be on the same floor.
