|
Matt Zerella posted:You have to install SMB1 on windows 10 and the shares will show up when you browse to \\tower.local fuckin yikes
|
# ? Dec 8, 2019 17:35 |
|
H2SO4 posted:fuckin yikes Yep. Annoying. But the fix is right around the corner. 6.8 is at RC9 right now.
|
# ? Dec 8, 2019 17:56 |
Matt Zerella posted:You have to install SMB1 on windows 10 and the shares will show up when you browse to \\tower.local Ah yep, that was it. Thank you.
|
|
# ? Dec 8, 2019 18:22 |
|
Don't use SMB1, kids
|
# ? Dec 8, 2019 19:08 |
|
Lutha Mahtin posted:Don't use SMB1, kids In a corporate env yep. At home? It doesn't really matter.
|
# ? Dec 8, 2019 19:15 |
Lutha Mahtin posted:Don't use SMB1, kids If that was universal then we should tell people up front not to use UnRaid on a network with windows systems. Not being snarky, just... yeah that would have been good advice when choosing UnRaid over other OS.
|
|
# ? Dec 8, 2019 19:41 |
|
Does anyone in here understand Amazon Glacier's pricing model? I don't know if it's down to the client I used to upload a particular 60GB file, but when I restored it, it appeared to come down as thousands of 16MB blocks, which resulted in about $10 of charges from all the GETs my client submitted. I know they recently simplified their pricing to per-GB retrieved plus about 5 cents per 1,000 requests, but even using the slowest (and cheapest) retrieval method, the restore still came out way more expensive than anything I can figure from that. Sorry if this is the wrong thread, but I figure if anyone is using Glacier it would be you goons.
|
# ? Dec 8, 2019 20:20 |
|
Export pricing is one of the many ways in which Glacier fucks you. It's cheap to store but everything else is expensive.
|
# ? Dec 8, 2019 20:29 |
That Works posted:If that was universal then we should tell people up front not to use UnRaid on a network with windows systems.
|
|
# ? Dec 8, 2019 20:32 |
|
It seems a reboot of the Unraid server fixed the high CPU usage
|
# ? Dec 8, 2019 20:34 |
|
This is concerning. I just built this array a week ago and it's repairing. Any actions I should take at this point? code:
|
# ? Dec 8, 2019 21:02 |
BeastOfExmoor posted:This is concerning. I just built this array a week ago and it's repairing. Any actions I should take at this point? In case you need help reading it, Oracle ZFS is still close enough to OpenZFS derivatives that you can use this documentation to help you see that you've been getting 2k+ read and write errors over a week. BlankSystemDaemon fucked around with this message at 21:24 on Dec 8, 2019 |
|
# ? Dec 8, 2019 21:20 |
|
I'd be investigating whatever backplane or chipset they're connected to.
|
# ? Dec 8, 2019 22:04 |
|
cr0y posted:Does anyone in here understand Amazon glaciers pricing model? I don't know if it's due to what I used to upload a particular 60gb file but when I restored it it appeared to initially come down as thousands of 16mb blocks which resulted in about $10 of charges due to how many GETs I submitted with my client. I know they recently simplified their pricing to be per GB retrieved plus like 5 cents per 1,000 requests but even using the slowest(and cheapest) retrieval method still resulted in a way more expensive retrieval than what I can figure out. Sorry if this is the wrong thread but I figure if anyone is using glacier it would be you goons. How exactly did you retrieve it? That should have been under a dollar. https://calculator.s3.amazonaws.com/index.html#key=files/calc-f82c5774a769d4bc994041e12f10ec9811e8ec2c&v=ver20191121vC 60,000 / 16 = 3,750 requests, i.e. about 4 blocks of 1,000 GET requests. I realize there could be multipliers in there, but how did you go 10x over that? Edit: The enterprise storage or IT threads might also be of help, though I haven't heard non-SAN talk in the enterprise storage thread.
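Working that same math through, for anyone curious: the per-GB and per-request prices below are placeholders for whatever the current Glacier rate card says, and the 16MB block size is just what cr0y reported, so treat this as a sanity-check sketch rather than a billing tool.

```python
import math

# Rough sketch of the retrieval math discussed above. The per-GB and
# per-request prices are placeholder assumptions -- check the current
# AWS Glacier pricing page before trusting any number this produces.

def retrieval_cost(file_gb, block_mb=16, per_gb=0.01, per_1000_requests=0.05):
    """Estimated cost of restoring one file fetched as fixed-size blocks."""
    requests = math.ceil(file_gb * 1000 / block_mb)  # 60 GB / 16 MB = 3750 GETs
    data_cost = file_gb * per_gb
    request_cost = requests / 1000 * per_1000_requests
    return data_cost, request_cost

data, req = retrieval_cost(60)
print(f"data: ${data:.2f}, requests: ${req:.2f}, total: ${data + req:.2f}")
```

Even rounding the 3,750 GETs up to four full blocks of 1,000, the request charge is pennies, which is why a ~$10 bill points at either a pricier retrieval tier or far more requests than one per block.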
|
# ? Dec 8, 2019 23:56 |
|
Less Fat Luke posted:I'd be investigating whatever backplane or chipset they're connected to. This. Also the errors on the vdev row mean you've lost at least some data. With that said, this is why I love ZFS. Nearly any other RAID implementation would have unceremoniously poo poo everything on that array to hell, while ZFS is still going to save whatever it can.
|
# ? Dec 9, 2019 00:37 |
|
"repairing" concerns me. In this situation, is continuing to allow a resilver without resolving a suspected controller/backplane issue a good idea? It seems like getting the drives off that controller before it mangles anything further would be ideal. Of course, it's only a week old, so hopefully you have another copy of the data somewhere else still. Paul MaudDib fucked around with this message at 01:46 on Dec 9, 2019 |
# ? Dec 9, 2019 01:41 |
|
That Works posted:If that was universal then we should tell people up front not to use UnRaid on a network with windows systems. UnRaid seriously doesn't support even SMB2 yet? It's Linux-based, right? Samba has supported SMB2 since 2011 and SMB3 since 2013. What's their excuse? Are they rolling their own SMB server for some idiotic reason? It's not like the fact that SMB1 is a gaping security hole hasn't been well known for years... If they can't be bothered to update this, what else are they slacking off on?
|
# ? Dec 9, 2019 03:13 |
|
wolrah posted:I would say we probably should be... Missed my post where I mentioned the modern SMB is coming when 6.8 goes final?
|
# ? Dec 9, 2019 03:14 |
|
I'm rebuilding my QNAP NAS setup to upgrade 6x6TB in RAID 6 to 6x12TB drives. Does RAID 6 still make sense in 2019 with 12TB drives, given the odds of 12TB drives developing bad sectors just from their size, and the odds of an additional drive failure during recovery? It's home use; I wouldn't die if any of this went away, it would mostly just be an annoyance. The critical stuff (photos, etc.) is backed up on Dropbox. Trying to figure out what makes sense for keeping things reasonably stable, assuming nothing here is truly can't-lose data.
|
# ? Dec 9, 2019 03:44 |
|
RAID 6 is pretty dependable, and with only 6 drives the probabilities of data loss are pretty low. See here, for example: https://www.wintelguy.com/raidmttdl.pl Using enterprise numbers, since those are the only low-level stats available for 12TB drives, here are the stats:
|
# ? Dec 9, 2019 03:57 |
|
Matt Zerella posted:You have to install SMB1 on windows 10 and the shares will show up when you browse to \\tower.local Is this the loving reason I can see unraid shares on one Win10 desktop but not the other few? I have no idea why the one can see it but not the others and it pisses me off.
|
# ? Dec 9, 2019 04:47 |
|
Matt Zerella posted:Missed my post where I mentioned the modern SMB is coming when 6.8 goes final? I'm gonna go ahead and say this was a reaction to the fact that it's evidently 7 years late to the party. The question about whether they're rolling their own SMB implementation is also a very valid one.
|
# ? Dec 9, 2019 04:50 |
|
D. Ebdrup posted:Those columns for read, write, and checksum should be 0, my friend. Paul MaudDib posted:"repairing" concerns me. in this situation is continuing to allow a resilver without resolving a suspected controller/backplane issue a good idea? It seems like getting the drives off that controller before it mangles anything further would be ideal. Yea, this is concerning. As mentioned, I just built this array last week so all the data is basically backed up in multiple places and/or replaceable. Unfortunately I accidentally rebooted the server before I realized what was going on (I ran the above while it was attempting to reboot). This is what it looks like now: code:
|
# ? Dec 9, 2019 05:24 |
|
I would probably point the finger at the HBA. If your expander cables are new I would personally not guess them first, but it's possible. If you are running ECC then look through dmesg.log and see if there are ECC errors reported there. If not, the way to rule out that failure mode is memtest86; ideally run one of your DIMMs tonight and the other tomorrow night. It's something to do while you're waiting for a new HBA/expander cable. Go back to your prior settings and test that first, see if it's failing, then see if backing down fixes it. Side note: BOY is ECC a killer feature for overclocking... it's literally a "the engines cannae take much more, captain!" alarm bell. It's kind of aggravating that it's not treated as standard, especially now that Rowhammer is a thing. Intel disables it outright but AMD ain't making any great strides either, basically just ASRock doing the needful. Paul MaudDib fucked around with this message at 05:41 on Dec 9, 2019 |
# ? Dec 9, 2019 05:37 |
|
Matt Zerella posted:Missed my post where I mentioned the modern SMB is coming when 6.8 goes final? SMB1 is horrifically insecure for a variety of reasons and having it enabled at all means that someone who is able to gain a man-in-the-middle position could downgrade a connection even between two modern systems and then do whatever they wanted with it. Even Microsoft has been recommending that everyone disable SMB1 since 2016 and has been doing so by default in Windows since late 2017. It'd be OK if they were just testing a new version that disabled SMB1 where previous versions supported all of them, but if they really do not support anything but SMB1 on the current stable release that's just plain irresponsible. Anything that requires SMB1 be enabled in the last few years should have been treated as outdated junk. As far as I can tell they use Samba and aren't doing anything special with it, they just for whatever reason have configured it to disable the newer protocols. There are instructions on their forums for enabling and enforcing SMB2+ with a few lines of config, why they didn't do the same long ago and require those who need to support ancient trash to do the config edits I have no idea. wolrah fucked around with this message at 07:10 on Dec 9, 2019 |
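For reference, the config edits in question amount to a couple of lines in smb.conf. This is a generic Samba sketch, not Unraid's actual shipping config; the option names are stock Samba (on older releases the setting was just `min protocol`), and where Unraid lets you inject them varies by version:

```ini
[global]
    ; refuse SMB1 outright; clients must speak SMB2 or newer
    server min protocol = SMB2
    client min protocol = SMB2
```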
# ? Dec 9, 2019 07:08 |
I might just set up an Ubuntu server with zfs instead of buying the Unraid license once the trial period ends.
|
|
# ? Dec 9, 2019 09:49 |
|
wolrah posted:I would say we probably should be... I haven't used Unraid in months but they do a lot of strange things like this. I looked at things they do and went "yikes", investigated, and found that they're doing something different that satisfies a niche use case but is a horrible idea for other reasons (security, usability, data integrity, etc). The whole Docker stack is extremely popular on Unraid but not an official project, so you end up installing a third-party app store with "community maintained" Docker containers running with root privileges. What could possibly go wrong? Their SSD caching mechanism allows the array to stay spun down when writing, but if the SSD fails you lose all writes to the array since the last "flush", which I believe is one day by default. Login to the web interface was handled via unencrypted HTTP and the root password for the longest time. SSH was a third-party package, telnet the default. Some of this might still be true, but from what I understand they've started taking security more seriously. Unraid is nice for "homelab" type experiments with easily replaceable data, but you'd have to be crazy to run it in any type of business environment.
|
# ? Dec 9, 2019 13:49 |
So I'm only running this as a home Plex server and for some secondary backup of other media (photos/videos also archived elsewhere). Unraid would technically work, but I've got 20 days left until I have to pay for it, and now that I've got it all up and running I feel like the "learning" part of the project is mostly done. Another project I had coming up was to do more to secure my home network / learn more about that in general and get everything behind a VPN with a new router (using the Verizon FiOS-supplied router, which supposedly does not play nice with VPNs; getting rid of it shortly). Anyway, I am leaning less towards keeping Unraid going forward given some of these issues. I have (very minimal) familiarity with Ubuntu and the Linux CLI from some bioinformatics work, but nothing like a NAS setup. I have zero familiarity with BSD operating systems. Given all that, would I be better off trying to learn how to configure a Linux-based ZFS server to run Docker containers with all my Usenet and Plex stuff and Samba for the rest, or should I just suck it up and learn how to operate FreeNAS instead? I'm just in the reading phase and trying to figure out if there's any reason to go one way or the other. I certainly could just stay with Unraid, but if it's gonna cost me and is ultimately not as secure then I'd rather move on. Server hardware: Intel DH67BL mobo, Intel i3-3220 CPU, 16GB DDR3-1333 RAM, 1x 256GB SSD (cache), 1x 8TB WD white-label (parity), 1x 8TB WD white-label (pool), 1x 2TB Maxtor (pool). I do not feel strongly about making this the ultimate failsafe fully-redundant zero-data-loss type of setup. That's not really necessary given my needs for it.
|
|
# ? Dec 9, 2019 14:05 |
|
Unraid is great for media and photos (which are backed up to the cloud). If you want to increase the safety of your writes, you can slow down your writes by skipping the cache drive, or run 2 cache drives. Edit: I love the docker implementation. I use it for all my random poo poo: Plex, Factorio Server, various grad school things. As for login security, you have to VPN to my network to get behind the firewall, so I haven't worried about that. I think it is getting changed in the next major release. Horses for courses / right tool for the job. If you are keeping your top secret designs for Skynet on it, maybe use zfs. My setup: Dell R720 with 12 cores and 72gb of ram, 2 x 8tb shucked WD, 2x Seagate 2TB, with a 500GB cache drive. I got a good deal on it, even if it uses too much power. dexefiend fucked around with this message at 15:01 on Dec 9, 2019 |
# ? Dec 9, 2019 14:47 |
|
That Works posted:So I'm only running this as a home Plex server and just for some secondary backup of other media (photos / videos also archived elsewhere). Unraid would technically work, but I've got 20 days left until I have to pay for it and now that I've got it all up and running I feel like the "learning" part of the project is minimally done. Unraid is kind of unique in that it gives you access to the Linux stack without requiring a whole lot of underlying knowledge or computer janitoring. The community is ok, there are simple YouTube tutorials for advanced topics, manual updates are relatively rare and done via webinterface, docker containers can be set to auto-update if you trust the third parties publishing them. In my experience it is quite hands off once up and running. If you go with Ubuntu you’ll save some money but you’ll have to spend more time setting things up and keeping it up to date if/when you run into issues with unattended upgrades. If you want to pick up some Linux skills then by all means go for it, if not maybe stick with Unraid or look into an appliance like Synology. I can’t comment on FreeNAS but last time I checked it wasn’t the first choice for docker (there is no native BSD version of docker, containers run in a small Linux VM).
|
# ? Dec 9, 2019 15:38 |
|
eames posted:Unraid is kind of unique .... Agreed with all of that, it reflects my experience. Also keep in mind that you've got a collection of projects there. You've broken them down well, so I'd tackle the OS and Plex first, then layer each piece on top, e.g. VPN etc. I strongly agree that Unraid is the no-week-to-week-effort option, whereas Ubuntu Server, while not by nature, can have some upkeep. I ran Ubuntu Server for several years and did like it... but Unraid is minimal computer-janitorial duty. Also liked OpenMediaVault if you want an alternative.
|
# ? Dec 9, 2019 15:49 |
eames posted:I can’t comment on FreeNAS but last time I checked it wasn’t the first choice for docker (there is no native BSD version of docker, containers run in a small Linux VM).
|
|
# ? Dec 9, 2019 15:56 |
|
D. Ebdrup posted:On the other hand, FreeBSD's jails provide isolation rather than simply being an orchestration tool like Docker. No doubt, not being able to run Docker can be either negative or positive depending on your opinion of Docker.
|
# ? Dec 9, 2019 16:02 |
|
D. Ebdrup posted:On the other hand, FreeBSD's jails provide isolation rather than simply being an orchestration tool like Docker. Whether this is a good thing or just another hurdle to get over to make your poo poo talk to each other is, of course, entirely dependent on what you're doing. Still, I wouldn't want to run anything that requires SMB 1.0 at this point. It's just way, way too easy to end up with someone else inside your network (lol if you think that your rando D-Link router with "firewall" is actually secure), and at that point exploitation of SMB 1.0 is trivial.
|
# ? Dec 9, 2019 16:04 |
|
eames posted:I can’t comment on FreeNAS but last time I checked it wasn’t the first choice for docker (there is no native BSD version of docker, containers run in a small Linux VM). FreeNAS has an option to automatically set up a linux VM (RancherOS) that you can use for your docker containers. Seems like they don't care about supporting that though, so it will be gone with the next version. (You can still use existing VMs or manually make your own)
|
# ? Dec 9, 2019 16:16 |
Thank you all. This is giving me lots to think about. Learning more stuff is one thing, but creating more work for later is another, and that's something I'd want to avoid once my job picks back up when the semester starts in the spring. At that point any security advantages turn back into disadvantages of upkeep. I'll have to do some reading and think things over on what I want to do.
|
|
# ? Dec 9, 2019 16:32 |
|
That Works posted:Thank you all. This is giving me lots to think about. Learning more stuff is one thing, but creating later work is also another that I'd probably start avoiding when my job picks back up when the semester starts in the spring. Then, any advantages of security become disadvantages again later. Do your server config in Ansible and store it in a git repo. Any changes moving forward you do them in ansible and commit them. If your machine ever blows up you've got an immediate way to set it back up.
|
# ? Dec 9, 2019 16:45 |
|
While I cannot say when Unraid 6.8 will come out, it's been previously posted that it ups the SMB version to people's satisfaction. I've personally just disabled SMB for a bit (wasn't using it day to day). I will say that the Unraid community has some atypical views on security practices. SSH keys? Not easily. Routinely using an account that isn't root? Also no. External SSH access, quite rightly given the first two, not recommended. VPNs are the one true way to access anything! When I said atypical, I didn't mean necessarily poor, but it does get a bit herd-mentality at times, sometimes cutting off otherwise accepted methods. Personally I use a fully patched RPi1 with DietPi + SSH-key-only access for external SSH access + tunnels. EDIT: And fail2ban Rooted Vegetable fucked around with this message at 18:23 on Dec 9, 2019 |
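For anyone copying that RPi-jump-host setup, the relevant sshd_config lines are short. These are standard OpenSSH options; the username is just an example:

```ini
# /etc/ssh/sshd_config -- key-only access, no root logins
PermitRootLogin no
PasswordAuthentication no
AllowUsers pi
```

Restart sshd after editing (and confirm key login works from a second session before closing the first), then pair it with fail2ban watching the auth log as mentioned above.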
# ? Dec 9, 2019 16:48 |
Matt Zerella posted:Do your server config in Ansible and store it in a git repo. Any changes moving forward you do them in ansible and commit them. If your machine ever blows up you've got an immediate way to set it back up.
|
|
# ? Dec 9, 2019 16:53 |
|
Matt Zerella posted:Do your server config in Ansible and store it in a git repo. Any changes moving forward you do them in ansible and commit them. If your machine ever blows up you've got an immediate way to set it back up. Oh? Cool. I need to read up and figure out what Ansible and git repos are, then!
|
|
# ? Dec 9, 2019 17:19 |