Mr Shiny Pants
Nov 12, 2012

Paul MaudDib posted:

My thought with InfiniBand was that IB RDMA should provide a good mechanism for database-as-a-service or a storage backend for self-hosted databases on other machines. That's part of my goal here with leaving myself shitloads of RAM channels/capacity and PCIe lanes for NVMe expandability. Is that a difficult/bad use-case?

Someone mentioned that Shadow Copy Service can use ZFS as a snapshot mechanism, that seems pretty cool for backups too. Right now I manage my backups manually and I'd really like to get things set up better. Would that be impacted by the Windows driver problems you mentioned? I suppose backups can always run over Ethernet if it comes down to it, but it would be nice to get IB from Windows if I could.

RDMA won't work for SMB unless the protocol supports it, i.e. SMB Direct. ZFS snapshots do work with Shadow Copy, but you need to hack the ZFS scripts: the usual ZFS auto-snapshot scripts use a naming scheme that Shadow Copy can't handle, and if the snapshots aren't named the way it expects they won't show up in the "Previous Versions" dialog. There is a guy who has hacked together a modified version of the ZFS auto-snapshot script that should work.

SRP is block-based and SMB is not, so you would need to run your cards in Ethernet mode to get SMB running over that link.
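If you're serving the share through Samba with vfs_shadow_copy2 (an assumption on my part; the native Solaris SMB server handles this differently), the trick is just making the snapshot names match whatever shadow:format is set to in smb.conf. Roughly, with made-up dataset names:
code:
# take a snapshot named with the GMT timestamp shadow_copy2 conventionally expects
# (adjust the pattern to whatever shadow:format says in your smb.conf)
zfs snapshot tank/share@GMT-$(date -u +%Y.%m.%d-%H.%M.%S)
# these are the names that end up in the "Previous Versions" tab
zfs list -t snapshot -r tank/share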

BlankSystemDaemon
Mar 13, 2009



Paul MaudDib posted:

With a NAS case like a U-NAS NSC-810a, would I want fans pushing in or pulling out? Normally in gaming cases you want positive pressure, and it would probably help reduce dust if I could do that. Would I want static-pressure fans pushing in, or high-volume?
Usually you keep a case at positive pressure to stop dust from seeping in through the small cracks and holes in the case, and put dust filters on the intake fans to minimize the amount of dust that gets inside.

As others have intimated, static-pressure fans are mostly used in servers, where the CPU, PCH, RAID HBAs, NICs, RAM, and other devices are all lined up so the airflow isn't interrupted and can keep the parts cool using only passive heatsinks with no fans attached (which also lets you use slot-in fans that can be hot-swapped, but that's mostly for high availability).

That said, it's a case that I'd love to have if it was even remotely available in the EU for a reasonable price.

people posted:

:words: about ZIL.
You're thinking of SLOG, not ZIL. The ZIL is part of ZFS in the same way that the ARC is, whereas a SLOG is analogous to an L2ARC. L2ARC and SLOG (cache and log devices, respectively) are the optional pool-wide devices ZFS offers for speeding up reads and synchronous writes in certain workloads. Unless you know what they do, i.e. which workloads they benefit, you almost definitely don't need them.
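For reference, adding them later is a one-liner each; a rough sketch with made-up device names:
code:
# SLOG: a (preferably mirrored) log device that absorbs synchronous writes
zpool add tank log mirror nvme0n1 nvme1n1
# L2ARC: a cache device for reads that no longer fit in ARC
zpool add tank cache nvme2n1
# confirm the new vdevs showed up
zpool status tank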

Paul MaudDib posted:

Threadripper would be cool
Just as long as you're aware that some of the AMD platforms ship with the PSP, built on ARM TrustZone, which is AMD's version of the Intel ME, and - as you might expect - it already has issues.

BlankSystemDaemon fucked around with this message at 10:04 on Jul 26, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Thanks for all the help guys, I really appreciate the advice. Just to emphasize here, I'm not going to go from zero to enterprise SAN overnight; my initial goal for this build is just getting a basic ZFS server going for my home stuff. I just want to leave my options open as much as possible. I realize I'm asking some dumb questions here, but in really small cases you don't have the expansion room to correct mistakes, so I'm trying to think this build through as completely as possible to avoid hitting any dead-ends.

I took another look at racks, and a short/small rack (19" deep, and say 3' high-ish) might be viable. Not sure if that would be limiting on chassis selection (probably). The other downside is heat/power: by splitting things up instead, I can scatter my power draw and heat output around my apartment a little better rather than turning my workroom into a blast furnace. It would also be nice if it weren't totally obnoxious in terms of noise; server stuff is usually pretty noisy because datacenters don't care, right? And this will be going into a carpeted room, and I have pets, so I'm concerned about dust management on a rack too...

Right now I don't have hardline ethernet to most places in my apartment, I have a few PLE adapters in various places and where necessary I've done a few ethernet runs around the edges of rooms and under carpet. I was also thinking seriously about getting rid of my TV and entertainment center and going to an ultra-short-throw projector that I can ceiling-mount, which would probably look much nicer with hidden cables. The complex is renovating apartments (mine is one of the few they haven't done) so I think they would be receptive to modern upgrades like adding ethernet as long as they didn't have to pay for it.

My workroom and my GF's workroom are basically just above the living room, how bad would it be to have some cables professionally pulled from the ground floor up to my room, and then from my room to my GF's workroom next door? Like ballpark terms here, am I looking at say <$500, $1000, or what? I think that's the other bullet I need to bite to hide my tech away a little better. I definitely want ethernet, and I probably want HDMI and a USB active extender run so I could get the media center out of the room entirely. In fact the runs are short enough (5-10m) that I could probably pull some short fiber runs for InfiniBand without breaking the bank too badly. Again, not that it's strictly necessary, but it's easier for me to spread things out a little into a couple small devices than to go turbo-nerd in my room, and if I'm pulling cables anyway...

Paul MaudDib fucked around with this message at 20:02 on Jul 26, 2017

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Paul MaudDib posted:

It would also be nice if it wasn't totally obnoxious in terms of noise too, server stuff is usually pretty noisy because datacenters don't care, right? And this will also go into a carpeted room, and I have pets, so I'm concerned with dust management on a rack too...
Not only do they not care, they're usually very invested in density, so they're shoving the maximum amount of stuff into a case and then trying to cool it with 10kRPM 80mm fans or something equally silly. Unless you're planning on having your CPU working hard 24/7 (which with a 24xx Xeon on a home server I'd say is unlikely), you can get away with much more human-friendly cooling without issue.

Paul MaudDib posted:

My workroom and my GF's workroom are basically just above the living room, how bad would it be to have some cables professionally pulled from the ground floor to my room, and then from my room to my GF's workroom next door? Like ballpark terms here, am I looking at say <$500, $1000, or what?
If they're planning on redoing the drywall, it's a joke. If they're not, going up/down a floor should still be pretty easy--you could fish those yourself, if you're handy at all. Going from one room to another without re-doing the drywall is a little bit more annoying, but if you have carpet you can always just hide it under there. Either way it shouldn't be particularly expensive to tack on to an existing work order--a few hundred, I'd think, at most. Several hundred if you were to get someone to come out just to do that one run, though.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
This is stupid but... would it be any easier to run plenum-rated cable through the air ducts? Particularly for the horizontal run - my GF's workroom and mine both have floor ducts, and I am 90% sure they are connected.

I have no idea how you would route a cable out of an air duct and still have it look good though, without tearing the whole floor up and poo poo. I looked at that idea a year or two ago but it wasn't as simple as just lifting the vent grate and running cable. I don't remember the snag but I'll take another look. If fishing cables horizontally is hard then the air duct may still end up being easier.

Paul MaudDib fucked around with this message at 20:21 on Jul 26, 2017

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
You can, just make sure you get plenum-rated cabling so it's a bit more resistant to the heat during the winter. It's absolutely not to code, of course, but it's also pretty unlikely to burn your house down and murder your dog. The big issue with running through ducts is that they often make a variety of 90deg turns, which is a pain in the rear end to fish through.

As far as making it look less ghetto, you can try taking the vent cover off at the terminal end and dropping the cable down inside the drywall instead of ever actually coming out through the grate, then putting a terminal jack below the vent. Unsure if that would work with where the vent and the computers are in the room, though.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Paul MaudDib posted:

Just to emphasize here I'm not going to go from zero to enterprise SAN overnight, my initial goal for this build is just a basic ZFS server going for my home stuff.
Heh, you mentioned QDR/FDR 8X Infiniband :v:

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Combat Pretzel posted:

Heh, you mentioned QDR/FDR 8X Infiniband :v:

Dual-port 4x QDR InfiniBand adapters are like $50 each on eBay though (HP-branded, Sun-badged Mellanox, or plain Mellanox). If all you have is two or three machines you can do a direct-attach connection between them (what would be called a "crossover" connection on Ethernet, not sure of the InfiniBand terminology); otherwise a 36-port rack switch is about $125 (and they're short enough I can fit them if needed; there are smaller ones too). Fiber cables aren't cheap (compared to Cat5e), but if you watch for surplus they're actually not that bad. Longer runs (>10m) are pretty expensive though, I assume because datacenters don't need many long runs relative to the number of short patch cables needed to plug machines into switches.
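From what I've read, bringing a back-to-back link up on Linux is basically this (addresses made up, and I haven't actually done it yet):
code:
# one node has to run the subnet manager, even with no switch; -B daemonizes it
opensm -B
# check that the port trained and shows State: Active
ibstat
# give each end an IPoIB address so normal TCP traffic works over the link
ip addr add 10.0.10.1/24 dev ib0    # on the workstation
ip addr add 10.0.10.2/24 dev ib0    # on the NAS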

It's got a lot of really desirable technical properties. GPUs can use it to do RDMA against GPUs or NVMe SSDs on other systems, and it provides hard latency guarantees that Ethernet just can't touch. I never really got a chance to play with MPI all that much back in college but it would be interesting to tinker with.

But, mostly I just want something to connect my workstation to my NAS with higher throughput (and particularly more IOPS) than gigabit ethernet. 10GbE adapters are expensive as gently caress, 10 GbE switches are expensive as gently caress, and InfiniBand seems more technically appealing anyway (not actually sure what the motivation for the switchover in commercial gear is, just cost?). All of this stuff is surplus and you can get it for like pennies compared to the list price. We're talking less than $200 to make my workstation connect to my NAS at 64-gigabit speed with crazy IOPS and latency. Boo hoo, my graphics card cost twice that much. It's not #1 on my list of upgrades but I have my eye on it at some point.

My GF doesn't care, of course, but her workroom is still on the powerline ethernet, which is like 30 megabits on a good day. Backups/etc are going to be atrocious on that connection (and that's one of my big goals with a NAS upgrade). I at least want to get her onto a real gigabit line, and that immediately raises the question of "if I'm cutting holes in the walls to pull ethernet anyway, and it's only $50 for the InfiniBand cable, why don't I just pull it now and not give a gently caress?". It's more options for me splitting things up so I don't pop breakers or turn my room into a blast furnace (I legitimately don't know how our circuits are laid out, and because my room is far away from the furnace, it doesn't get good A/C either...)

(the better solution would be to get a new apartment to be honest, but rents are insane and we have a pretty nice location and a good management company)

edit: By the way, do the HP adapters have the same limitations with RDMA under Windows as Mellanox? (they may actually be rebadged Mellanox anyway, not sure, but visually they do look a little different)

Paul MaudDib fucked around with this message at 22:40 on Jul 26, 2017

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
I don't know why you're worrying about popping breakers. Your entire NAS should sit comfortably under 200W under full load, and probably idle closer to 50W.

Same with your workstation--unless you've got some really fancy hardware (mostly multi-GPU), you probably aren't cracking 500W from the wall.

A normal US home circuit is 120V * 15A = 1800W. So you could easily have two gaming PCs and a smattering of lights and fans and whatever else before you got close to popping a breaker.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

DrDork posted:

I don't know how you're worrying about popping breakers. Your entire NAS should sit comfortably under 200W under full load, and probably idle closer to 50W.

Same with your workstation--unless you've got some really fancy hardware (mostly multi-GPU), you probably aren't cracking 500W from the wall.

A normal US home circuit is 120v * 15A = 1800W. So you could easily have two gaming PCs and a smattering of lights and fans and whatever else before you got close to popping a breaker.

A normal US household circuit is 15A instantaneous; continuous load is derated to 80%, so 12A * 120V = 1440W, and that's measured at the wall.
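(Or, in shell-arithmetic terms, the continuous budget per 15A circuit works out to:)
code:
# 80% continuous-load rule on a 15A/120V circuit
echo $(( 120 * 15 * 80 / 100 ))    # 1440 watts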

It's not the NAS, it's everything. I have a workstation with a 5820K that peaks at close to 500W while gaming (and I do want to be able to do SLI someday), plus the NAS, plus a treadmill, plus I want enough headroom that I can bring up another machine or two for a project without needing to worry. She's a professional seamstress so she's using irons and poo poo in her room too (if y'all need cosplay outfits, you know who to call, she's very good), plus I'm trying to get a decent gaming machine set up for her (probably at least 300W). And both of us like the idea of a 3D printer or CNC mill or some other poo poo for projects (although there's no way we can fit it currently).

I don't know exactly where our circuits are; the upstairs was clearly laid out as a master plus two kid bedrooms, so I wouldn't be at all surprised to discover both "kid bedrooms" are on the same circuit, or possibly even more of the house. Together we've got enough stuff that the wrong combination could pop breakers. And down that road lies madness and corrupted filesystems, ask me how I know this :emo: (she "borrowed a power strip" while my NAS was moving data around...)

(yes, I do have a good UPS and will be using a NUT server to make sure it goes down gracefully)

This apartment is chopped up into way too many small rooms (some of the other units have an open floor plan and it's way, way more usable, but the rent is even more absurd for the size). Trust me, with the layout we've got it's just easier all around if I can tuck my gear into various nooks while keeping it fast and tolerably presentable. We really do just need a bigger house but it's not in the cards right now.

Paul MaudDib fucked around with this message at 22:46 on Jul 26, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Paul MaudDib posted:

edit: By the way, do the HP adapters have the same limitations with RDMA under Windows as Mellanox? (they may actually be rebadged Mellanox anyway, not sure, but visually they do look a little different)
As mentioned earlier, the limitations are artificial/in software.

The adapters do RDMA fine; the relevant network protocols need to support it. You can try some older version of WinOF (I think 4.0.x) or the OFED 3.2 drivers, both of which support SRP, which literally means SCSI RDMA Protocol, but you need to fiddle around with disabling driver signing checks and whatnot. From what I've read, Microsoft nudged Mellanox into dropping SRP in newer releases, so that's why. Microsoft's iSCSI initiator doesn't do iSER, which means iSCSI Extensions for RDMA, so that one's out if you want fast block devices that way. If you're connecting via SMB, Samba on Linux doesn't support RDMA yet.

Windows itself specifically has an API for RDMA, called NetworkDirect. So the operating system can do it. It's mostly politics.

OEM branding doesn't matter, AFAIK. My Mellanox cards are also HP-branded. WinOF doesn't care. On the NAS I'm running the in-box drivers that come with the kernel/distro. They're kinda outdated-ish, but Mellanox's OFED driver package doesn't support Fedora 26 yet (only 24). Performance is good enough. With just four threads, I can shovel 28Gbit/s over the line via iperf. CrystalDiskMark pulls 1800-1900MB/s on linear reads from ZFS's ARC, via non-RDMA-accelerated Samba or iSCSI. Everything's good enough.

My card's configured to 40GbE instead of FDR 4x InfiniBand. The former performs better in my tests.
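For what it's worth, that iperf number is just the bog-standard multi-stream test (address made up):
code:
# on the NAS
iperf -s
# on the workstation: four parallel streams over the 40GbE/IPoIB link
iperf -c 10.0.10.1 -P 4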

--edit:
Windows pudding.

Combat Pretzel fucked around with this message at 22:42 on Jul 26, 2017

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Sounds like you should take an afternoon to play circuit breaker bingo! When I bought my house--helpfully without any floor plans and a breaker box that didn't have a single 15A fuse labeled--I went around and wrote the circuit number on the back of the outlet plates. I found that more useful than writing "MBDR + 2 outlets in the MBTH + 1 on the wall outside + 1 in the garage for some reason" in the tiny-rear end little label spot on the breaker box.

Also, don't treadmill & iron & game at the same time. That just sounds dangerous.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Combat Pretzel posted:

As mentioned earlier, the limitations are artificial/in software.

The adapters do RDMA fine, the relevant network protocols need to support them. You can try some older version of WinOF (I think 4.0.x) or the OFED 3.2 drivers, both support SRP, which literally means SCSI RDMA Protocol, but you need to fiddle around with disabling driver signing checks and what not. From what I've read, Microsoft nudged Mellanox in dropping SRP in newer releases, so that's why. The iSCSI initiator of Microsoft doesn't do iSER, which means iSCSI Extensions for RDMA, so that one's out, if you want fast block devices that way. If you're connecting via SMB, Samba on Linux doesn't support RDMA yet.

Is there any significant problem with running the older drivers except the lack of Win10-compliant signing? Because idgaf about that, it's literally thirty seconds to reboot with signing disabled to install drivers. Unless there's enough churn in this API that features I care about are getting added, or things are breaking, it sounds fine with me.

Could I run a guest VM and pass the InfiniBand adapter through to the guest? This is a game of me trying to keep my options open as much as possible, so I don't want to write Windows off entirely (or at least RDMA), but a lot of the things I'd be doing with it could conceivably run under a guest OS just fine, and VT-D should take care of that.

I was thinking last night about some of the things I could do with it, and virtualization definitely came to mind. With homelab stuff, a VM/container could start its life on the NAS and then, if more performance were necessary, I could run it on my workstation. The ability to serve disk images also seems really cool to me as an intermediate step between a bare-metal OS and full virtualization. Hypothetically I am open to just setting up a rack, putting all our GPUs and poo poo in there, getting rid of our normal desktops, and just running thin clients in our rooms, but I think there are various downsides. I'd be worried about color profiling problems with photo editing (especially with RDP compression), and for gaming G-Sync really needs the monitor to be plugged directly into the GPU. I'm not willing to accept streaming as my primary display yet either. But we could definitely boot from disk images that are served from the NAS, or use network shares as much as possible. Simple/easy use case: we could both have our own SMB share for our Steam libraries, and any games we have in common should automatically be deduped by ZFS and consume no additional space. We really wouldn't need hard drives in the machine itself much at all.

(I realize I could already do a bunch of this without IB/ZFS/etc, but please bear in mind that I am really just talking about a general upgrade in my NAS and network here. Can't do any of this poo poo over powerline ethernet to some of the places I need it, and after backup redundancy I am practically full at the moment.)
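For the Steam library idea, what I have in mind is roughly this (dataset names made up, and yes, I know the dedup table eats RAM):
code:
# one dataset for the shared game libraries, with dedup turned on just for it
zfs create tank/steam
zfs set dedup=on tank/steam
zfs set sharesmb=on tank/steam      # or export it through Samba the usual way
# later on, see what the dedup ratio actually looks like
zpool get dedupratio tank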

Paul MaudDib fucked around with this message at 00:18 on Jul 27, 2017

qutius
Apr 2, 2003
NO PARTIES
Finally got a chance to pick up the two WD Easystore drives I bought, popped them open to find two Thai-made Reds with the larger cache. Pretty killer deal, thanks, thread!

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Combat Pretzel posted:

In that case, have fun, SRP or iSER should do fine. If you were wanting to use RDMA with Windows, you'd probably have to rely on outdated drivers or something else. Microsoft coerced Mellanox to drop SRP support on their driver stack, which left me a little salty. Too much fiddling to get outdated drivers to work. While Samba has a development branch with SMB Direct support somewhere, it was last updated Dec 2016, and I can't find any progress reports. On the other hand, it'd be nice for Microsoft to implement iSER in their initiator, but hell will freeze over before that happens, due to aforementioned coercion.

What is the antonym of "RDMA mode" - is it "block mode" or "packet mode" or something? Also, are you actually getting a real 40Gbit now that you have RDMA, or is there still some loss from overhead? And is there any difference in IOPS?

By the way, what the hell is Microsoft's problem here? You'd think they'd like enterprise hardware working properly in their ecosystems. Do they hate the idea of remote network adapters being able to twiddle their bits or something? :raise:

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Uh, there's no antonym? It's a different lightweight transport versus TCP/IP.

I'm not using RDMA. I'm still debating whether I want to fiddle with the older drivers. For me, things are currently fast enough as they are, getting functional RDMA is just a plus. As far as performance goes, here are a bunch of numbers:
http://www.zeta.systems/blog/2016/09/21/iSCSI-vs-iSER-vs-SRP-on-Ethernet-&-InfiniBand/

Makes it sound worthwhile, if you absolutely need it.

As far as Microsoft goes, I have no idea. I guess they want to push SMB3 and therefore sales of Windows Server, or something like that. Might want to upvote my request for iSER, if you're in the Win10 insider program: https://aka.ms/Mryspj

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Combat Pretzel posted:

Uh, there's no antonym? It's a different lightweight transport versus TCP/IP.

I'm not using RDMA. I'm still debating whether I want to fiddle with the older drivers. For me, things are currently fast enough as they are, getting functional RDMA is just a plus. As far as performance goes, here are a bunch of numbers:
http://www.zeta.systems/blog/2016/09/21/iSCSI-vs-iSER-vs-SRP-on-Ethernet-&-InfiniBand/

Makes it sound worthwhile, if you absolutely need it.

As far as Microsoft goes, I have no idea. I guess they want to push SMB3 and therefore sales of Windows Server, or something like that. Might want to upvote my request for iSER, if you're in the Win10 insider program: https://aka.ms/Mryspj

Yeah, sounds like I'll cross that bridge when I get there. A lot of the things I'd be doing would be in Unix anyway. Compared to gigabit ethernet I'm sure it would be bitchin'.

Thanks, that's exactly the article I needed to read; that really helped put the pieces together. If I remember this right, SCSI/SAS works at the level of logical commands rather than just manually bitbanging data directly into blocks, and iSCSI is a generalization of that to a network, with each physical and logical device having a UUID, correct? And the difference between the two models is that packets loop through the OS (or a local network stack in the adapter?) in the TCP/IP model, while in the RDMA model you just have the interconnect dial the other device up and do a DMA across the fiber (just like if it were directly attached to your PCIe), right? Sorry, it's been a long time and I've never actually set up a SAN. :shobon:

Ah, protocol wars. Well, maybe it'll change, you never know. To be honest I've been really impressed with Microsoft's change of direction; they have legitimately been cleaning up some of their act with things like open-sourcing .NET and embracing Linux a little. Their code quality is way up too - 8.1 was fantastic and 10 is too, apart from updates wiping your settings. Newer versions of Visual Studio are way faster and way less likely to bomb on install. They're making good strategic and technical calls, and I'd say they're actually on a rebound now, which was relatively unthinkable even 5 years ago. And as much as people hate updates, it makes the internet much safer when patches get pushed out in a timely fashion - although it would be nice if you could select "deferred feature updates" on Home.

Shame there's no out-of-the-box way to manage packages like in Linux. Microsoft just needs to buy Chocolatey already.

Paul MaudDib fucked around with this message at 02:23 on Jul 27, 2017

evol262
Nov 30, 2010
#!/usr/bin/perl

Paul MaudDib posted:

Yeah, sounds like I'll cross that bridge when I get there. A lot of the things I'd be doing would be in Unix anyway. Compared to gigabit ethernet I'm sure it will be bitchin'.

Thanks, that's exactly the article I needed to read, that really helped put the pieces together. If I remember this right, SCSI works at the level of logical commands rather than just manually bitbanging data directly into blocks, iSCSI is a generalization of that to a network with each physical and logical device having a uuid, correct? And the difference between the two is that packets loop through the OS (or a local network stack in the adapter?) in the TCP/IP model, while in the RDMA model you just have the interconnect dial the other device up and you do a DMA across the fiber (just like if it were directly attached), right? Sorry, it's been a long time and I've never actually set up a SAN. :shobon:

Ah, protocol wars. Well, maybe it'll change, you never know. To be honest I've been really impressed with Microsoft's turn of direction, they have legitimately been cleaning up some of their act with things like open-sourcing DotNet and embracing Linux a little. Their code quality is way up too, 8.1 was fantastic and 10 is too, apart from updates wiping your settings. Newer versions of Visual Studio are way faster and way less likely to bomb on install. They're making good direction and technical calls, and I'd say they're actually on a rebound now, which was relatively unthinkable even 5 years ago. And as much as people hate updates, it makes the internet much safer when patches get pushed out in a timely fashion - although it would be nice if you could select "deferred feature updates" on Home.

Shame there's no out-of-the-box way to manage packages like in Linux. Microsoft just needs to buy Chocolatey already.

You're really overengineering this whole thing, but to clarify, no. There's 'iSCSI' (TCP), 'iSCSI+iSER' (TCP then DMA), and SRP (DMA).

Yes, though, iSCSI presents LUNs. They don't need to be UUIDs. You'd present something like "iqn.2017-07.com.example:mysteamlibrary".

iSER has lower latency because the target and initiator are directly connected, but the commands are still handled by the iSCSI target software. It sends a different primitive telling the target that a DMA will be happening, then does it. In theory, you can skip past the TCP stack on the host and the memory copies for most of the operations. This is definitely the way to go.

What you described is SRP, which also works. iSER is nicer to set up, though.
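If it helps, the LIO/targetcli side of presenting that LUN looks roughly like this (the zvol path and the Windows initiator IQN are placeholders):
code:
# back the LUN with a zvol and create the target
targetcli /backstores/block create name=steamlib dev=/dev/zvol/tank/steamlib
targetcli /iscsi create iqn.2017-07.com.example:mysteamlibrary
# export it as LUN 0 and allow the initiator in
targetcli /iscsi/iqn.2017-07.com.example:mysteamlibrary/tpg1/luns create /backstores/block/steamlib
targetcli /iscsi/iqn.2017-07.com.example:mysteamlibrary/tpg1/acls create iqn.1991-05.com.microsoft:workstation
targetcli saveconfig
iSER itself is toggled per-portal on RDMA-capable NICs; check the targetcli docs for the exact knob.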

qutius
Apr 2, 2003
NO PARTIES
Are there any good resources on troubleshooting RDMs into Windows VMs? ESX sees my storage drives just fine, but when I attach them via RDM/VMDK, Windows only sees a fraction of the actual space. This is all fairly new to me - am I missing something obvious? ESX sees them as 8TB drives, Windows scans them in as 1.3TB or so. Seems like it's a Windows issue; no amount of fiddling around will get it to see the actual size. Any thoughts? Is there a better thread to ask ESX-related questions, perhaps?

Moey
Oct 22, 2010

I LIKE TO MOVE IT
You are not actually running ESX are you? ESXi I assume? What version?

Can you see your disks under Configuration -> Storage Adapters when you select the proper HBA? Is the size correct?

What about under Configuration -> Storage?

Any specific reason you are going the RDM route?

qutius
Apr 2, 2003
NO PARTIES

Moey posted:

You are not actually running ESX are you? ESXi I assume? What version?

Can you see your disks under Configuration -> Storage Adapters when you select the proper HBA? Is the size correct?

What about under Configuration -> Storage?

Any specific reason you are going the RDM route?

Sorry, yes, ESXi version 6.5.

And that's a good question, I don't really need to do RDMs. I'll create a couple 7TB datastores and go from there with some further testing. No need to make things more complicated.

The issue I was running into was related to partition tables, from some digging around in the logs.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
If I had to take a guess, you are wanting to use these entire disks for some sort of media storage via a Windows VM?

In my home ESXi box, I am doing something very similar. I passed through some 8TB disks this way.

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1017530
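The short version of that KB, for anyone who just wants the commands (device and datastore names are examples):
code:
# find the local disk's device identifier
ls -l /vmfs/devices/disks/
# create a physical-compatibility RDM pointer file on an existing datastore
vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD80EFAX_EXAMPLE /vmfs/volumes/datastore1/nasvm/disk0-rdm.vmdk
# (use -r instead of -z for virtual compatibility mode)
Then you just attach that pointer vmdk to the VM as an existing disk.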

KOTEX GOD OF BLOOD
Jul 7, 2012

G-Prime posted:

In case anybody missed it, Best Buy has the extremely shuckable 8TB WD EasyStore drives on sale right now for an absurdly low price ($159, lowest I've ever seen for them). They contain a single Red, and people claim they can be RMA'd without the case. I snagged 8, and am currently firing off SMART tests on them. Holy poo poo they're loud in the external cases.
Thank you so much for the heads up about this. I went out and bought one, shucked it and indeed it is an 8TB WD Red.

Mine is a WD80EFAX. Anyone know the distinction between this, the EFRX, and the EFZX?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

KOTEX GOD OF BLOOD posted:

Mine is a WD80EFAX. Anyone know the distinction between this, the EFRX, and the EFZX?

The A's are the 256MB cache, the Z's are the 128MB ones. Unsure about the R's.

qutius
Apr 2, 2003
NO PARTIES

Moey posted:

If I had to take a guess, you are wanting to use these entire disks for some sort of media storage via a Windows VM?

In my home ESXi box, I am doing something very similar. Passed through some 8tb disks this way.

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1017530

yes indeed, your assumptions are correct again.

I was following a pretty similar process. I'll take another gander! thanks again!

Moey
Oct 22, 2010

I LIKE TO MOVE IT

qutius posted:

yes indeed, your assumptions are correct again.

I was following a pretty similar process. I'll take another gander! thanks again!

Windows even has access to SMART data this way, so that is a bonus.

KOTEX GOD OF BLOOD posted:

Thank you so much for the heads up about this. I went out and bought one, shucked it and indeed it is an 8TB WD Red.

Ugh, don't tempt me to buy more drives.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Just gonna leave this right here in case someone, you know, might need it or something.

http://www.bestbuy.com/site/wd-easystore-8tb-external-usb-3-0-hard-drive-black/5792401.p?skuId=5792401

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
Currently in the middle of my 5th resilver on my array as I replace drives (had to do my first drive twice because I screwed up the replacement procedure and Corral threw a fit and wouldn't recognize it in the GUI as part of the array). I'm honestly kinda surprised at how fast it's going. I've seen peak speeds around 680MB/s, which seems nuts.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

G-Prime posted:

I'm honestly kinda surprised at how fast it's going. I've seen peak speeds around 680MBps, which seems nuts.

Pretty sure that's the "processing" speed. As in it's churning through 680MB/s, but it'll only be writing a fraction of that to the new drive, since most of the data is just gonna stay where it is on the existing disks with no further action needed. Still, pretty speedy!

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
Yep, that's exactly what it is. I'm just glad to know it's only going to be another few days. I've been sitting on <1TB free for about 6 months, and getting another ~20TB to work with is going to make me feel a lot better.

And make me want to invest in either Infiniband or a couple 10GbE cards and a direct line to connect my desktop to the NAS so I can use it instead of the internal drive I've got.

...Yep. Goals.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

G-Prime posted:

...Yep. Goals.

This is not a thread where responsible purchasing decisions are made, but rather one where goals and dreams and crazy projects are encouraged.

Which is as it should be.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Is there any particular reason to choose FreeBSD vs Illumos vs Solaris? Better ZFS features, better driver support (including for InfiniBand), general upkeep/state of support?

My sense here is that I'd prefer FreeBSD over Solaris and then Illumos last but that's not really based on anything in particular.

I won't be using it for anything commercial, just personal/home use, so Oracle's lovely licenses won't affect me.

hifi
Jul 25, 2012

killall kills all of the processes in solaris and illumos

Mr Shiny Pants
Nov 12, 2012

Paul MaudDib posted:

Is there any particular reason to choose FreeBSD vs Illumos vs Solaris? Better ZFS features, better driver support (including for InfiniBand), general upkeep/state of support?

My sense here is that I'd prefer FreeBSD over Solaris and then Illumos last but that's not really based on anything in particular.

I won't be using it for anything commercial, just personal/home use, so Oracle's lovely licenses won't affect me.

Solaris has no subnet manager and neither does FreeBSD, I think. That was one of the reasons I switched my NAS from OpenIndiana to Linux. It is stable, though; COMSTAR is cool, and ZFS Shadow Copies work out of the box.
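On the Linux side the subnet manager is just a package; roughly (package and service names vary a bit by distro):
code:
# install and run the subnet manager as a service
dnf install opensm
systemctl enable --now opensm
# sanity check: an SM should show up on the fabric and the ports should go Active
sminfo
ibstat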

KOTEX GOD OF BLOOD
Jul 7, 2012

So what's the best way to copy all the data on the existing drive in my DS216j to the new drive? I tried rsync but it's only running at 4Mbps or so. Should I use cp? I'm wary of using the DSM File Station app.

e: I should probably clarify that both drives are mounted in the same device.

KOTEX GOD OF BLOOD fucked around with this message at 19:47 on Jul 29, 2017

eightysixed
Sep 23, 2004

I always tell the truth. Even when I lie.

G-Prime posted:

In case anybody missed it, Best Buy has the extremely shuckable 8TB WD EasyStore drives on sale right now for an absurdly low price ($159, lowest I've ever seen for them). They contain a single Red, and people claim they can be RMA'd without the case. I snagged 8, and am currently firing off SMART tests on them. Holy poo poo they're loud in the external cases.

All that and I loving missed it :suicide:

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
About once a month for the last 6 months or so they've done a big sale on them. It's just, historically, been $179 instead of $159. Keep an eye out, it'll probably come around again.

redeyes
Sep 14, 2002

by Fluffdaddy
Those are 5400RPM drives, right? (Or 5900?)

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
I just noticed I'm getting this message from ZFS:

code:
[root@x150 ~]# zpool status        
  pool: pool0                      
 state: ONLINE                     
status: Some supported features are not enabled on the pool. The pool can  
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done, 
        the pool may no longer be accessible by software that does not support 
        the features. See zpool-features(5) for details.       
  scan: scrub repaired 0B in 1h51m with 0 errors on Fri Jul 14 03:51:54 2017    
I remember installing a ZFS update during a regular yum system update a couple of days ago.

Is it generally OK to go ahead and enable new features on your zpool when they're rolled out?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
As long as you don't think you'll ever need to use the pool on an older version of ZFS software (as a home user, there's really no scenario I can think of where you would), yeah, go right ahead.
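For reference, it's basically:
code:
# see which pools have features that can be enabled
zpool upgrade
# enable all supported features on the pool (one-way: older ZFS builds may refuse to import it afterwards)
zpool upgrade pool0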
