Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Heh, that's practically the same idea I had last year for running all my poo poo on the NAS. Linux hypervisor with iSCSI extents wrapped in bcache to run the guest on. I feel funny about wasting a core or two on the host, though. Plus there's NVIDIA's ongoing fight against VGA passthrough.

Mr Shiny Pants
Nov 12, 2012

Combat Pretzel posted:

Heh, that's practically the same idea I had last year for running all my poo poo on the NAS. Linux hypervisor with iSCSI extents wrapped in bcache to run the guest on. I feel funny about wasting a core or two on the host, though. Plus there's NVIDIA's ongoing fight against VGA passthrough.

Maybe you have a second machine. Running it all on the NAS is still something I'd like to do. But then I'd need something like 128 GB of memory and maybe a ThreadRipper.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

IOwnCalculus posted:

Is this box going into a corporate environment supporting a massive workload and multiple dozens of spindles?

If not, you're way overthinking this. You're not going to run into PCIe bottlenecks in a home environment.

Nope, home NAS.

DrDork posted:

This. I mean, sure, as a fun exercise in overkill you could, but remember that even GigE is limited to ~100MB/s throughput, which is 1/10th of a 1x PCIe 3.0 lane. So....yeah. You're never, ever, ever going to get any sort of congestion due to a lack of PCIe lanes if all you're doing with it is file serving type stuff.

Hell, even with SLI top-end GPUs gobbling up 16 lanes on their own, running into meaningful PCIe lane slowdowns takes effort.

e; also, if all you're using is 4x drives, by all means stick with straight motherboard connections. People start stuffing LSI cards into their builds because they've run out of motherboard ports, not (generally) because there's anything wrong with the motherboard ports available.

I'm just looking at the single PCH/DMI chip on a mainboard and figuring that if every SATA port (10), plus the GigE, plus (maybe) USB, and for sure an M.2 slot (which eats up 2 lanes on its own) are all busy, there will be contention on that chip. Maybe I'm overthinking it? poo poo, it's $160ish difference at the end of the day, which is less than the cost of the mainboard and roughly a third the cost of the CPU. I'm not doing this to be cheap.

There's another thing that I didn't really get into but if I'm doing more than 10 drives at some point I'd have to buy a card anyway. So why not plan for that now, spend the $120 on it and be done? Hopefully the only thing I ever need to open this case up for is adding a CPU and some memory.

Internet Explorer posted:

Also you don't need hotswappable drives on a home NAS.

It's a creature comfort for sure, and at the end of the day (I found enough backplanes) it was $110 or so for the 16 ports. *shrug* that's an easily justifiable expense at least to me.

ILikeVoltron fucked around with this message at 17:50 on Jul 12, 2017

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

ILikeVoltron posted:

I'm just looking at the single PCH/DMI chip on a mainboard and figuring that if every SATA port (10), plus the GigE, plus (maybe) USB, and for sure an M.2 slot (which eats up 2 lanes on its own) are all busy, there will be contention on that chip. Maybe I'm overthinking it? poo poo, it's $300 difference at the end of the day, which is less than the cost of the mainboard and roughly a third the cost of the CPU. I'm not doing this to be cheap.

Remember that PCIe is packet-based: that it's communicating with X devices isn't a big issue so long as the combined bandwidth is below its limits. Certain devices do get dedicated lanes for various reasons, but HDDs generally are not one of those items, since it would be an enormous waste to sit a 100MB/s drive exclusively on a 500MB/s lane. USB's data needs are so hilariously low as to be nonexistent on a modern platform, unless you're talking about using USB 3.0 for an external HDD or something, and even then they're no more worrisome than another HDD.

Assuming you're generating the vast, vast majority of your I/O and data requests via the GigE (it's mostly a file server, no?), the built-in limit of ~100MB/s from the network means it will be trivial for the PCH to serve an array of arbitrary size. If you were talking about doing a lot of large file transfers internally (array <-> M.2 for HD video editing, for example) then maybe it would be worth worrying about. Maybe. But still probably not, because even a 10 drive array is gonna be limited to well under 2GB/s write performance, and that's only 2 lanes, figure another 2 for the M.2 you'd be reading from, so you'd have to be actually hitting near those numbers before you'd start maxing out the 8 PCIe 2.0 lanes that the Haswell/Broadwell PCHs have.

Go for it if you want to, but don't fool yourself into thinking you're getting better performance out of it.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

DrDork posted:

Remember that PCIe is packet-based: that it's communicating with X devices isn't a big issue so long as the combined bandwidth is below its limits. Certain devices do get dedicated lanes for various reasons, but HDDs generally are not one of those items, since it would be an enormous waste to sit a 100MB/s drive exclusively on a 500MB/s lane. USB's data needs are so hilariously low as to be nonexistent on a modern platform, unless you're talking about using USB 3.0 for an external HDD or something, and even then they're no more worrisome than another HDD.

Assuming you're generating the vast, vast majority of your I/O and data requests via the GigE (it's mostly a file server, no?), the built-in limit of ~100MB/s from the network means it will be trivial for the PCH to serve an array of arbitrary size. If you were talking about doing a lot of large file transfers internally (array <-> M.2 for HD video editing, for example) then maybe it would be worth worrying about. Maybe. But still probably not, because even a 10 drive array is gonna be limited to well under 2GB/s write performance, and that's only 2 lanes, figure another 2 for the M.2 you'd be reading from, so you'd have to be actually hitting near those numbers before you'd start maxing out the 8 PCIe 2.0 lanes that the Haswell/Broadwell PCHs have.

Go for it if you want to, but don't fool yourself into thinking you're getting better performance out of it.

I guess what I don't understand here is how the I/O controller on the PCH chip works. Another thing is, just because an interface supports something doesn't mean you see that in the real world, so I'm a bit hesitant about it. I get what you're saying, though, that even with plenty of overhead it's not something you'll bottleneck on; my only concern here is how the controller itself handles contention and splitting up a big block of writes across 10+ disks.

As far as the data types I'll be working with go, it'll be some VMs, some containers, and some NFS storage most of the time. Other times I'll be building 8+ VMs to launch OpenStack tests (between 32 and 64 gigs of memory for this). I'll be unpacking DVD-sized files, so there will be some IO that's not coming directly across the wire. I imagine the system will largely be idle most of the day, but while I'm doing testing on various tasks it'll be heavily utilized.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

ILikeVoltron posted:

I guess what I don't understand here is how the I/O controller on the PCH chip works. Another thing is, just because an interface supports something doesn't mean you see that in the real world, so I'm a bit hesitant about it. I get what you're saying, though, that even with plenty of overhead it's not something you'll bottleneck on; my only concern here is how the controller itself handles contention and splitting up a big block of writes across 10+ disks.

DMI2 and DMI3 are really just 4 lane Gen2 or Gen3 PCIe links. The total raw throughput of these links is therefore 2GB/s or 4GB/s before packetization and other overhead. 75% efficiency is achievable: I have measured 1.5 GB/s read throughput from a RAID0 of 4 SATA SSDs connected to an Intel DMI2 PCH.

A PCH chip is just a collection of PCIe IO controllers, each equivalent to what you might plug in to a PCIe expansion slot, plus a PCIe packet switching fabric so they can all share the one DMI (PCIe) link to the CPU. The CPU has a "root complex" (another switch fabric) to provide connectivity between DMI/PCIe ports and DRAM.

How PCIe devices and switches handle contention is a major chunk of the specification, but suffice it to say that PCIe has a credit based flow control scheme which does a good job of fairly allocating each link's bandwidth between all the traffic flows passing through it.

Also, the PCH doesn't split writes across disks. It isn't that smart. The OS decides what gets written where and then asks its SATA driver to do writes through an AHCI SATA controller, which in this case happens to be located in the PCH. The PCH SATA controller doesn't truly know whether the disk targeted by a specific I/O operation is part of a RAID or other storage pool; its job is just to perform whatever I/O it's asked to do.

(Real RAID controllers are different, they contain local intelligence to split incoming I/O according to RAID geometry, do parity calculations for RAID levels that need it, and so on. PCH RAID is software RAID behind the curtains, which is honestly just as good or better if all you're doing is RAID0/1/10.)

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

BobHoward posted:

Also, the PCH doesn't split writes across disks. It isn't that smart. The OS decides what gets written where and then asks its SATA driver to do writes through an AHCI SATA controller, which in this case happens to be located in the PCH. The PCH SATA controller doesn't truly know whether the disk targeted by a specific I/O operation is part of a RAID or other storage pool; its job is just to perform whatever I/O it's asked to do.

To simplify this: The PCH treats one long-rear end write to a single disk pretty much the same as 10 shorter writes to 10 different disks, the only difference being occasionally varying the recipient device header for the data packet (which, as pointed out, isn't even something the PCH does on its own--it just follows what the OS tells it to do). Otherwise the PCH doesn't really give much of a gently caress about where the data is going in that sense, so as long as the total bandwidth you're trying to utilize is less than what the PCH is able to provide, you're fine.

Again, figure a 10 drive HDD array cannot reasonably be expected to exceed 1500MB/s; that's 3000MB/s total if you're pushing it to an M.2 also hosted on the PCH. A Haswell PCH normally has 8 PCIe 2.0 lanes for 4000MB/s bandwidth. At 75% that's 3000MB/s. So if you're trying to max that out while also doing 100MB/s on the GigE NIC you might see some minor bottlenecking. But even then you're more likely to see performance hits from basically anything else (drive fragmentation, non-sequential requests, whatever) first, and a more realistic upper limit for most HDDs is around 100-120MB/s for flat-out sequential reads, so you're talking a max practical bandwidth of 2400MB/s, meaning you still have 600MB/s left over for your 100MB/s NIC and your <1MB/s keyboard/mouse.
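
For anyone who wants to sanity-check that budget, here's the same back-of-the-envelope math as a quick Python sketch; every figure is just the rough assumption quoted above, not a measurement:

```python
# Back-of-the-envelope PCH/DMI budget; every number is the thread's rough
# assumption (Haswell PCH, PCIe 2.0, spinning disks), not a measurement.
PCH_LANES  = 8      # PCIe 2.0 lanes hanging off the Haswell/Broadwell PCH
LANE_MBPS  = 500    # ~500 MB/s raw per PCIe 2.0 lane
EFFICIENCY = 0.75   # usable fraction after protocol overhead

HDD_MBPS  = 120     # flat-out sequential throughput per drive
N_DRIVES  = 10
GIGE_MBPS = 100

usable = PCH_LANES * LANE_MBPS * EFFICIENCY       # ~3000 MB/s
array  = N_DRIVES * HDD_MBPS                      # ~1200 MB/s
demand = 2 * array + GIGE_MBPS                    # array <-> M.2 copy + NIC

print(f"usable ~{usable:.0f} MB/s, worst-case demand ~{demand:.0f} MB/s, "
      f"headroom ~{usable - demand:.0f} MB/s")
```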

All that said, this is a thread dedicated to excess and "because I can," so you absolutely shouldn't feel bad about deciding to over-think/over-engineer something on the grounds of "gently caress IT I WANT TO."

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I'm putting a 2.5" drive in a shockproof enclosure for movies and stuff. The SSHD version of the drive is basically the same price as the regular version, I'm not expecting much benefit for most stuff but is there any drawback?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Paul MaudDib posted:

I'm putting a 2.5" drive in a shockproof enclosure for movies and stuff. The SSHD version of the drive is basically the same price as the regular version, I'm not expecting much benefit for most stuff but is there any drawback?

Nope.

Price is usually the drawback, but there are no technical downsides to an SSHD (other than the vague argument of having an SSD in there that can now fail, and more parts = more failures). On the other hand, I wouldn't expect a big performance benefit from it, either.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

BobHoward posted:

Also, the PCH doesn't split writes across disks. It isn't that smart. The OS decides what gets written where and then asks its SATA driver to do writes through an AHCI SATA controller, which in this case happens to be located in the PCH.

DrDork posted:

To simplify this: The PCH treats one long-rear end write to a single disk pretty much the same as 10 shorter writes to 10 different disks, the only difference being occasionally varying the recipient device header for the data packet (which, as pointed out, isn't even something the PCH does on its own--it just follows what the OS tells it to do). Otherwise the PCH doesn't really give much of a gently caress about where the data is going in that sense, so as long as the total bandwidth you're trying to utilize is less than what the PCH is able to provide, you're fine.

Yea, poor choice of words on my part. I meant to ask how it handles the contention of having to make 10 writes to 10 different disks (like, say, when you're flushing a large number of blocks out to disk). It might only be 2-3 chunks written to 4-6 disks, with the same data written out to each, hence the "how would it split" question. Again, just poorly written.

DrDork posted:

All that said, this is a thread dedicated to excess and "because I can," so you absolutely shouldn't feel bad about deciding to over-think/over-engineer something on the grounds of "gently caress IT I WANT TO."

gently caress yea! The weird thing that brought me here (not the thread but rather wanting to build a NAS) was that there just doesn't seem to be a clean and cheap way to do 10+ disks in a NAS. Either you're spending $2200+ on something from QNAP/etc or you're building it yourself. When I started looking into the cost of expanding my little NAS I figured I wanted it to do VMs and a few other things and the price kept going up until I was like gently caress this, I'll just build it myself.

BobHoward posted:

DMI2 and DMI3 are really just 4 lane Gen2 or Gen3 PCIe links. The total raw throughput of these links is therefore 2GB/s or 4GB/s before packetization and other overhead. 75% efficiency is achievable: I have measured 1.5 GB/s read throughput from a RAID0 of 4 SATA SSDs connected to an Intel DMI2 PCH.

A PCH chip is just a collection of PCIe IO controllers, each equivalent to what you might plug in to a PCIe expansion slot, plus a PCIe packet switching fabric so they can all share the one DMI (PCIe) link to the CPU. The CPU has a "root complex" (another switch fabric) to provide connectivity between DMI/PCIe ports and DRAM.

How PCIe devices and switches handle contention is a major chunk of the specification, but suffice it to say that PCIe has a credit based flow control scheme which does a good job of fairly allocating each link's bandwidth between all the traffic flows passing through it.

Without bogging everything down by getting into the math of SATA overhead, plus every other device and everything, I just looked at the numbers for an M.2 disk and 10+ SATA ports, assuming they would cache a little bit and then be rate limited by how fast they could flush that cache to disk (as in, on the SATA board itself). I figured we were getting pretty close to the limits of that interface (DMI/PCH). Am I going to have 10 disks right off the bat? Hell no. Maybe I'm just thinking of how this thing will scale beyond the 10 onboard SATA ports, or maybe I'm just curious how it all works. Either way, thanks for the explanation.

It looks like I could get away with using it for now, and then maybe upgrade to some PCIe cards, so thanks for explaining these things.

I'll stew a bit on this but I think I might just go with the cards so I don't have to rebuild my case and re-cable everything later on (assuming I'd grow to 12 disks).

Again, thanks for the help

ddogflex
Sep 19, 2004

blahblahblah
I'm sure this is a stupid question, but is there anywhere to get 8GB of unregistered DDR4 ECC cheaper than crucial.com ($100)? Newegg doesn't even sell any that I can find, and on eBay everything is $130+. Which seems pretty crazy to me.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

ddogflex posted:

I'm sure this is a stupid question, but is there anywhere to get 8GB of unregistered DDR4 ECC cheaper than crucial.com ($100)? Newegg doesn't even sell any that I can find, and on eBay everything is $130+. Which seems pretty crazy to me.

DDR4 prices have been going up dramatically. That pricing doesn't seem that out of line for ECC UDIMMs at current pricing, and big RDIMMs are more like $15+/GB if you can find them in stock.

ddogflex
Sep 19, 2004

blahblahblah

Twerk from Home posted:

DDR4 prices have been going up dramatically. That pricing doesn't seem that out of line for ECC UDIMMs at current pricing, and big RDIMMs are more like $15+/GB if you can find them in stock.

Ugh. Guess I'll just deal with my RAM being pegged at 100% constantly. It hasn't actually caused any problems from what I can tell. But I'm really just using it as storage, Plex, and torrents. Overkill machine for that, but that's the point in this thread, right?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

ddogflex posted:

Ugh. Guess I'll just deal with my RAM being pegged at 100% constantly. It hasn't actually caused any problems from what I can tell. But I'm really just using it as storage, Plex, and torrents. Overkill machine for that, but that's the point in this thread, right?

Unused RAM is wasted RAM, dude(ette)!

ddogflex
Sep 19, 2004

blahblahblah

Thermopyle posted:

Unused RAM is wasted RAM, dude(ette)!

But I want to do more stuff! (I don't know what, but... STUFF. (Butt stuff?))

I was thinking about running a VM on it, but I really don't have the spare RAM at all for it right now. It would just be so I could have a REAL DESKTOP OS I could remote to from my Chromebook. Which I don't exactly need.

Also, perfectly acceptable to call ladies "dude". I call my wife dude all the time. She loves it.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

It depends on what is using the RAM. If it's a lot of cached stuff then it'll just get kicked out so you can use the RAM for other things. If it's in-use then yah you've got a problem.

ddogflex
Sep 19, 2004

blahblahblah

Thermopyle posted:

It depends on what is using the RAM. If it's a lot of cached stuff then it'll just get kicked out so you can use the RAM for other things. If it's in-use then yah you've got a problem.

It's that sweet sweet ZFS slurping it all down.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

ddogflex posted:

It's that sweet sweet ZFS slurping it all down.

If you're mostly using it to serve files to one or two users at a time, you can happily get away with much less than the tried-and-true "1GB per TB" rule without much ill effect. But, yeah, if you've got RAM chilling out, most OSs will let ZFS gobble it all up for lack of anything better to do with it.
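
If you want to see how much of that "used" RAM is really just ARC, on ZFS-on-Linux you can read it straight from the kstats; a rough sketch, assuming the usual /proc/spl/kstat/zfs/arcstats location (FreeBSD/FreeNAS exposes the same counters via sysctl kstat.zfs.misc.arcstats instead):

```python
# Read the ZFS ARC size and cap from the ZFS-on-Linux kstats; the location
# and field names are the standard ones, but this is only a sketch.
def arcstats(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:        # first two lines are headers
            name, _kind, value = line.split()
            stats[name] = int(value)
    return stats

s = arcstats()
print(f"ARC is using {s['size'] / 2**30:.1f} GiB "
      f"(allowed up to {s['c_max'] / 2**30:.1f} GiB)")
```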

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

DrDork posted:

If you're mostly using it to serve files to one or two users at a time, you can happily get away with much less than the tried-and-true "1GB per TB" rule without much ill effect. But, yeah, if you've got RAM chilling out, most OSs will let ZFS gobble it all up for lack of anything better to do with it.

Does L2ARC affect memory requirements either way?

Shame flash and DRAM prices are skyrocketing again; you can do some conceptually very cool micro-NAS builds with shitloads of NVMe L2ARC on a Xeon E5 or a Ryzen. I still wanna build a sleeper U-NAS NSC-810a someday :v:.

Infiniband-connected RDMA to a server with a whole bunch of NVMe on quad-drive sleds :circlefap:

(The only problem is that M.2 puts out a lot of heat and that case doesn't have great ventilation. Unfortunately I just have very little room for a server rack.)

Paul MaudDib fucked around with this message at 03:30 on Jul 14, 2017

SamDabbers
May 26, 2003



Paul MaudDib posted:

Does L2ARC affect memory requirements either way?

ZFS still needs to keep indexes for the data cached on the L2ARC in regular ARC, so L2ARC doesn't really help you (can actually hurt performance) until your working set is significantly larger than the memory available to ZFS. Max out your RAM before you add L2ARC.

Paul MaudDib posted:

Unfortunately I just have very little room for a server rack.)

So mount them vertically.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
When I eventually got the correct RAM for my ECC server I paid 102 UK pounds for 2x 8GB Kingston DDR4 unbuffered. It's gone back up to £111 now (Amazon) but there are good prices out there if you look around a bit.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

SamDabbers posted:

ZFS still needs to keep indexes for the data cached on the L2ARC in regular ARC, so L2ARC doesn't really help you (can actually hurt performance) until your working set is significantly larger than the memory available to ZFS. Max out your RAM before you add L2ARC.

L2ARC uses 170 bytes of ARC per block. If your block size is 128K, that's certainly a hell of a good tradeoff. Hell, it still is for 4K pages to some degree, say if you're running a ZVOL at page size. 2GB of RAM lets you retain 48GB of data in L2ARC in that case.
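
Quick sanity check of that math, assuming the commonly cited ~170 bytes of ARC per L2ARC header (the exact size varies a bit between ZFS versions):

```python
# Sanity check: how much L2ARC can a given chunk of ARC headers index?
# Assumes ~170 bytes of ARC per cached L2ARC block (varies by ZFS version).
HEADER_BYTES = 170
ARC_SPENT    = 2 * 2**30                     # 2 GiB of RAM given to headers

for block in (4 * 2**10, 128 * 2**10):       # 4K zvol blocks vs 128K records
    indexable = (ARC_SPENT // HEADER_BYTES) * block
    print(f"{block // 2**10:>3}K blocks -> ~{indexable / 2**30:,.0f} GiB of L2ARC")
```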

Combat Pretzel fucked around with this message at 11:32 on Jul 14, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Just tossed Fedora Server 26 onto my NAS to replace FreeNAS 11. My single queue iSCSI random 4K read throughput went from 12MB/s to 14.5MB/s. That's nice.

PraxxisParadoX
Jan 24, 2004
bittah.com
Pillbug
Does anyone know of a way to give docker containers (managed via docker-compose, preferably) separate, LAN-accessible IP addresses, like FreeNAS jails? I have my router handing out IPs/hostnames based on each jail's MAC and would like to do the same when I move to Ubuntu/Docker. The only way I've found is via an external program, though I guess something like a reverse proxy would get me hostname-accessible containers in a browser, which is 90% of what I want.

IOwnCalculus
Apr 2, 2003





Reverse proxying is what I'm going to set up on mine.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

IOwnCalculus posted:

Reverse proxying is what I'm going to set up on mine.

Is there a good reverse proxy guide? I'll have to set that up for Emby soon. Some of the guides I'm finding look good but are from 6 years ago, and I want to make sure all options are explored.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

EVIL Gibson posted:

Is there a good reverse proxy guide? I'll have to set that up for Emby soon. Some of the guides I'm finding look good but are from 6 years ago, and I want to make sure all options are explored.

I would go with nginx, but Apache 2.4 is still a popular configuration. Apache 2.4 and 2.2 have different syntax; that's about the only thing that's likely changed, since Apache is a senescent product.

PraxxisParadoX
Jan 24, 2004
bittah.com
Pillbug

EVIL Gibson posted:

Is there a good reverse proxy guide? I'll have to set that up for Emby soon. Some of the guides I'm finding look good but are from 6 years ago, and I want to make sure all options are explored.

If you've got a setup using docker like I'm thinking, https://github.com/jwilder/nginx-proxy looks like a good start. Accompanying blog post too: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
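
For what it's worth, the pattern that repo uses boils down to something like the sketch below, shown here via the Docker SDK for Python rather than docker-compose; the hostname and backend image are just placeholder examples:

```python
# Sketch of the jwilder/nginx-proxy pattern via the Docker SDK for Python
# (docker-py). "emby.lan" and the backend image are placeholder examples.
import docker

client = docker.from_env()

# nginx-proxy watches the Docker socket and writes an nginx vhost for every
# container that carries a VIRTUAL_HOST environment variable.
client.containers.run(
    "jwilder/nginx-proxy",
    name="nginx-proxy",
    detach=True,
    ports={"80/tcp": 80},
    volumes={"/var/run/docker.sock": {"bind": "/tmp/docker.sock", "mode": "ro"}},
)

# Any backend started like this becomes reachable at http://emby.lan/
# as long as your DNS (or router) points emby.lan at the Docker host.
client.containers.run(
    "emby/embyserver",                        # placeholder backend image
    name="emby",
    detach=True,
    environment={"VIRTUAL_HOST": "emby.lan"},
)
```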

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
If any of the stuff you plan to use with that nginx-proxy container is externally facing, there's also a Let's Encrypt companion container for it that keeps certs up to date.

redeyes
Sep 14, 2002

by Fluffdaddy
In case anyone cares, the Hitachi 4TB 7200RPM NAS drives are on sale on Amazon. Best price I've seen so far:
https://www.amazon.com/HGST-Deskstar-128MB-Cache-Internal/dp/B01N7YOH4P/ref=sr_1_1?ie=UTF8&qid=1500051854&sr=8-1&keywords=4tb+hitachi

$135.00

Eregos
Aug 17, 2006

A Reversal of Fortune, Perhaps?
I'm considering using Crashplan to back up all of my data, but I'm worried about ever transmitting all of my crucial files online. How future-proof and secure are 448-bit Blowfish and 128-bit AES transmission encryption considered to be? There have been successful side-channel attacks on 4096-bit RSA via acoustics, which is probably irrelevant, but the rumored vulnerability of AES-256 to related-key attacks is concerning. Because of the sensitivity of ever transmitting all my personal data online, in order to feel comfortable doing that I need a high level of certainty that 128-bit AES or 448-bit Blowfish will never be cracked within my lifetime. Anyone who has read more about encryption than me have advice?

I actually am using a NAS, of sorts, simply linking my gaming and secondary PCs using Synergy and properly credentialed Windows file sharing (there are other solutions, but this has the advantage of being able to interact directly with the Windows file structure without relying on any kind of intermediary software that could fail). The short answer to how to do proper file sharing in Windows is to set up all the layers and authorizations in compmgmt.msc; of course there are other obscure settings scattered around the OS too. My plan is to run Crashplan using the trick customer service recommended to me, on this page.

edit: I assume that "RSA-768 has 232 decimal digits (768 bits), and was factored on December 12, 2009 over the span of 2 years" relates to basic RSA without any of the additional advanced cryptographic methods that are actually used in 448-bit Blowfish or 128-bit AES, and that's why they're considered secure? Since 768 bits is more than 448. I don't actually know anything about cryptography.

2nd edit: Or alternatively, since half of people apparently buy this now, I Know the Most about Cryptography, I'm simply the Best Greatest, everyone else is Sucks Compared to Me.

Eregos fucked around with this message at 21:01 on Jul 14, 2017

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Eregos posted:

I'm considering using Crashplan to back up all of my data, but I'm worried about ever transmitting all of my crucial files online. How future-proof and secure are 448-bit Blowfish and 128-bit AES transmission encryption considered to be? There have been successful side-channel attacks on 4096-bit RSA via acoustics, which is probably irrelevant, but the rumored vulnerability of AES-256 to related-key attacks is concerning. Because of the sensitivity of ever transmitting all my personal data online, in order to feel comfortable doing that I need a high level of certainty that 128-bit AES or 448-bit Blowfish will never be cracked within my lifetime. Anyone who has read more about encryption than me have advice?

I actually am using a NAS, of sorts, simply linking my gaming and secondary PCs using Synergy and properly credentialed Windows file sharing (there are other solutions, but this has the advantage of being able to interact directly with the Windows file structure without relying on any kind of intermediary software that could fail). The short answer to how to do proper file sharing in Windows is to set up all the layers and authorizations in compmgmt.msc; of course there are other obscure settings scattered around the OS too. My plan is to run Crashplan using the trick customer service recommended to me, on this page.

edit: I assume that "RSA-768 has 232 decimal digits (768 bits), and was factored on December 12, 2009 over the span of 2 years" relates to basic RSA without any of the additional advanced cryptographic methods that are actually used in 448-bit Blowfish or 128-bit AES, and that's why they're considered secure? Since 768 bits is more than 448. I don't actually know anything about cryptography.

This is probably the thread for you.

EssOEss
Oct 23, 2006
128-bit approved
You need to upgrade crypto primitive types and key sizes as science marches on. There is no encryption algorithm that is safe against vulnerabilities being discovered. Over time, new attack methods are discovered and the prudent people move on to higher strength ciphers, re-encrypting their data as needed.

https://www.keylength.com/ is a fairly sensible overview of modern recommendations and their predicted durability (how many years they are expected to be good for).

In short, 128-bit AES will serve you well for at least ten years. Don't worry about it - poor key management is far more likely to jeopardize your security than algorithmic weaknesses. Of course, poorly developed software has been known to use crypto in incorrect ways, so watch out what you use and avoid implementations that someone cobbled together in their spare time after their web development day job.

Comparing key sizes is only meaningful within the same family of algorithms, so don't fall into the trap of looking for bigger numbers.

I have not heard of Blowfish being used in ten years, so I'm rather surprised to see it in Crashplan. I would have used AES instead, but as long as there are no significant published weaknesses, I guess it is fine for now. Still, I wonder what their rationale is.

EssOEss fucked around with this message at 22:22 on Jul 14, 2017

EL BROMANCE
Jun 10, 2006

COWABUNGA DUDES!
🥷🐢😬



Probably the decision of the same guy who decided to write their app in Java.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

EssOEss posted:

In short, 128-bit AES will serve you well for at least ten years. Don't worry about it - poor key management is far more likely to jeopardize your security than algorithmic weaknesses. Of course, poorly developed software has been known to use crypto in incorrect ways, so watch out what you use and avoid implementations that someone cobbled together in their spare time after their web development day job.

This. Most medium-strength-and-up cryptography is more than good enough for any sort of personal documents, and will be for long enough that you probably won't need the documents anymore by the time you should re-encrypt. Frankly, no one is going to waste the time and effort trying to crack through any strength AES or even moderate RSA encryption to potentially recover your nudes or whatever. It just isn't worth the time and energy for a questionable payoff. Actually attacking encryption is like state-level shenanigans, and even then it's generally considered a last resort.

It is much easier to get access to things via finding holes in configurations, applications, etc., all of which you generally can do little or nothing about as an end user. The best policy is to use decent encryption from companies and products with good reputations and hope for the best. Or just not put that stuff online in the first place, if you're that worried about it.

Also remember that basically no one goes looking for ways to break into personal Crashplan accounts and crap where they're most likely just going to find a bunch of family photos, when instead they could be hitting up the newest 0-day exploit to grab your online banking credentials or surfing for exposed Amazon containers or whatever. It's just not a particularly worrisome attack vector, all things considered, unless you have reason to suspect you'd be intentionally targeted by someone.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Also, if you're guarding nuclear secrets and must back them up to somewhere you don't control with an application you don't control, encrypt them yourself before giving them to CrashPlan.
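
Something like this would do it; a minimal sketch using the third-party Python cryptography package (Fernet is AES-128 in CBC mode with an HMAC), where the filenames are just examples:

```python
# Minimal sketch of encrypting a file yourself before handing it to any
# backup tool. Uses the third-party "cryptography" package; the filenames
# are just examples, and losing the key means losing the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # stash this somewhere safe and offline
cipher = Fernet(key)               # AES-128-CBC plus HMAC-SHA256 under the hood

with open("nuclear_secrets.tar", "rb") as f:
    token = cipher.encrypt(f.read())

with open("nuclear_secrets.tar.enc", "wb") as f:
    f.write(token)                 # only this encrypted blob goes to CrashPlan
```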

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Mr Shiny Pants posted:

SRP works, and it is fast. It is not very sexy, just SCSI LUNs exported, but it works. I know you are pretty versed in all this stuff, so for shits and giggles you might want to make a Linux KVM machine that runs iSER (or any other RDMA-enabled protocol) and run your Windows machine from that. :)
I've downloaded some older version of Mellanox WinOF as well as OFED 3.2 and pulled their SRP miniport drivers through PEStudio. Apparently they're pretty self-contained, they only link against operating system files. Maybe I can make it work on Windows 10. I should get a pair of Mellanox ConnectX-3 cards on Monday (couldn't resist :v: ).

Mr Shiny Pants
Nov 12, 2012
That is also the one I got working on Server 2012 R2; I used Orca to get around the installer not functioning.

You will probably run into the driver's signing certificate being out of date, but it will work. Don't know about Win 10 though.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Yea, I'm hoping to get just the SRP miniport driver to work on top of the newest card drivers though, by installing its INF via Device Manager. I'm not entirely sure how this SRP stuff works; it sounds like the cards will expose a virtual memory-based device of a certain kind, if there's something announced by the subnet manager, for which you'll then install the SRP driver. So I'm hoping the old poo poo will work with the new poo poo.

As for whether the OpenFabrics Alliance is still maintaining the drivers, I'm not sure. The latest release is from sometime in 2013. The latest release of their NVMe stuff is dated December 2016.

--edit: Nevermind: winOFED 3.2 is the last winOFED release due to lack of hardware vendor participation.

MeKeV
Aug 10, 2010

MeKeV posted:

Any experience or info on the Toshiba N300 drives? They seem to be the cheapest 4TB 'NAS'-marketed drives I can find right now in £s. And better availability than some of the others too.

It's a fair bit noisier (under load) than my other drives: an old 2TB Samsung Spinpoint F4, a 2TB WD Red, and a 4TB WD Red.

And it's running a few degrees warmer than my other drives. It's mostly been 40ish and is currently 43°C, while the others are 34°C, 39°C & 40°C (in the order above). But it is seeing more load.

Does it need to go back?

What's the best test routine in a Windows box? Nothing unusual showing in SMART, besides the temp.
