|
e: this should really be in the Plex thread
Chumbawumba4ever97 fucked around with this message at 15:37 on Mar 12, 2019 |
# ? Mar 12, 2019 15:33 |
|
|
Greatest Living Man posted:There was one error I encountered specific to WSL Ubuntu with installing a python package that seemed to not have a solution. How does the linuxulator work? Is it possible to install in a FreeBSD jail?

The Linuxulator is a kernel module that simply translates code based on path, if I recall correctly. It defaults to /compat/linux, but it's build-time configurable. It's not just possible to run it in a jail, it's very strongly recommended for the process, filesystem, SysV IPC and network isolation that jails provide. /compat exists in hier(7) and is/was also used for BSD43 and iBCS for compatibility with BSD, Xenix, SCO Unix, and Unix System V Release 4 respectively, so it kinda makes sense to keep it there even in jails.
|
|
# ? Mar 12, 2019 15:46 |
|
WWWWWWWWWWWWWWWWWW posted:Thank you! I've installed Ubuntu this way

I think the C drive is supposed to be mounted as /mnt/c/ in WSL, so you should be able to run python3 /mnt/c/path/to/dumpedb.py (tab completion should probably work) and the paths in the script should also be in the format /mnt/c/whatever
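The drive-letter translation is mechanical enough to sketch. This is a hypothetical helper (`to_wsl_path` is my name for it, not anything WSL ships), assuming the default /mnt mount root and WSL's lowercase mount-point names:

```python
import re

def to_wsl_path(win_path):
    """Map a Windows path like C:\\lunix\\dumpedb.py to /mnt/c/lunix/dumpedb.py.

    Assumes the default WSL drive mount root of /mnt; the drive letter is
    lowercased because that's how WSL names the mount points.
    """
    m = re.match(r"^([A-Za-z]):[\\/](.*)$", win_path)
    if not m:
        raise ValueError("not an absolute Windows drive path: %r" % win_path)
    drive, rest = m.groups()
    return "/mnt/%s/%s" % (drive.lower(), rest.replace("\\", "/"))
```

So `to_wsl_path(r"C:\lunix\dumpedb.py")` gives the path to pass to python3 inside WSL.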
|
# ? Mar 12, 2019 15:47 |
|
Eletriarnation posted:If I recall correctly, it is actually possible to install a desktop environment in WSL and connect to it using VNC - it's just not there by default.

You can install an X server (Cygwin, VcXsrv, Xming) and any X applications that you install into your WSL instance. Cygwin has the best X server IMO, but it can be fussy about $DISPLAY and .Xauthority.
|
# ? Mar 12, 2019 16:02 |
|
Eletriarnation posted:

Thank you! I'm almost there! I am getting this error:

lunix@server-PC:~$ python3 /mnt/c/lunix/dumpedb.py
OSError: pyesedb_file_open: unable to open file.
libcfile_file_open_with_error_code: no such file: /mnt/C/lunix/Windows.edb.
libcfile_file_open: unable to open file.
libbfio_file_open: unable to open file: /mnt/C/lunix/Windows.edb.
libbfio_handle_open: unable to open handle.
libesedb_file_open_file_io_handle: unable to open file IO handle.
libesedb_file_open: unable to open file: /mnt/C/lunix/Windows.edb.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/mnt/c/lunix/dumpedb.py", line 5, in <module>
    d = pyesedb.open("/mnt/C/lunix/Windows.edb")
SystemError: <built-in function open> returned a result with an error set
lunix@server-PC:~$

I am trying to figure out where the mistake is. I tried removing /mnt/. I have a C:\lunix folder on my Windows 10 machine with the appropriate files in there, so I am guessing I just messed up the path in the script somehow?
|
# ? Mar 12, 2019 16:10 |
|
I'm not really sure, but instead of giving the full path as an argument you can make sure you're in the right place by 'cd'-ing to the destination directory and just using the filename. If that doesn't work then I would assume there's something about the host filesystem mount that Python dislikes, and would recommend trying to copy the files you're working on into the Linux filesystem from /mnt instead of trying to run the mounted files directly.
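A sketch of that "copy it off the mount first" suggestion, assuming the paths from the discussion are where things actually live (nothing here is pyesedb-specific; shutil does the staging before you open the file):

```python
import os
import shutil

def stage_locally(src, dst):
    """Copy src (e.g. /mnt/c/lunix/Windows.edb) to a native Linux path.

    Re-copies only if the destination is missing or older than the source,
    then returns the path to hand to the script instead of the /mnt one.
    """
    if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
        shutil.copy2(src, dst)
    return dst
```

Usage would look something like `local = stage_locally("/mnt/c/lunix/Windows.edb", os.path.expanduser("~/Windows.edb"))`.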
|
# ? Mar 12, 2019 16:48 |
|
Winbuntu is the worst and corrupts your data silently if you try to do anything between the two OSes. At least the version I used 2 years ago did. There seems to be a caching layer that it doesn't handle gracefully through the abstraction, which led to files being written partially by two programs. (File open in Atom, hit save, run it in Winbuntu, errors, look at the file, it's obviously a corrupted save with partial data.)
|
# ? Mar 12, 2019 17:06 |
|
WWWWWWWWWWWWWWWWWW posted:Thank you! I'm almost there! mystes fucked around with this message at 17:21 on Mar 12, 2019 |
# ? Mar 12, 2019 17:17 |
|
mystes posted:Try making the C in /mnt/c lowercase. Also, you might want to check whether Windows.edb appears as uppercase or lowercase in wsl (do "ls /mnt/c/lunix" and check) although this may not matter (WSL might mount the c drive as case insensitive but I'm not sure).

Thanks! I changed it to lowercase, and now it looks like it might be doing something? An output_file.txt was created, but after 20 minutes it's still at 0 bytes. It looks like I can still type stuff in the Linux window, though it's not accepting commands, so maybe it's just doing some background work and I have to wait a while? Not sure why the txt document is not getting larger, though.
|
# ? Mar 12, 2019 17:40 |
|
WWWWWWWWWWWWWWWWWW posted:Thanks! I changed it to lowercase, and now it looks like it might be doing something? An output_file.txt was created, but after 20 minutes it's still at 0 bytes. It looks like I can still type stuff in the Linux window, though it's not accepting commands, so maybe it's just doing some background work and I have to wait a while? Not sure why the txt document is not getting larger, though.

Maybe check task manager and see if the HD or CPU is doing stuff.
|
# ? Mar 12, 2019 17:41 |
|
redeyes posted:Maybe check task manager and see if the HD or CPU is doing stuff.

Good idea! The Linux window is using 0% but it says Python 3 is using about 4% (it's going up and down between 2-4%) so I am guessing that's a good thing
|
# ? Mar 12, 2019 17:46 |
|
H110Hawk posted:Winbuntu is the worst and corrupts your data silently if you try to do anything between the two OS's. At least the version I used 2 years ago. Seems like there is a caching layer that it doesn't handle gracefully through the abstration which led to files being written partially by two programs. (File open in Atom, hit save, run it in Winbuntu, errors, look at the file, it's obviously a corrupted save with partial data.)

The WSL folks were pretty clear about this when it came out, and then there was this repeat of the warning: https://blogs.msdn.microsoft.com/commandline/2016/11/17/do-not-change-linux-files-using-windows-apps-and-tools/ And there is no problem editing files on /mnt/C/whatever from the WSL side. But they now (as of the next release) have a way to edit the WSL files from Windows: https://blogs.msdn.microsoft.com/commandline/2019/02/15/whats-new-for-wsl-in-windows-10-version-1903/
|
# ? Mar 12, 2019 20:07 |
|
ChiralCondensate posted:The WSL folks were pretty clear about this when it came out, and then there was this repeat of the warning : https://blogs.msdn.microsoft.com/commandline/2016/11/17/do-not-change-linux-files-using-windows-apps-and-tools/

I disagree that some random blog post makes that clear.
|
# ? Mar 12, 2019 20:08 |
|
H110Hawk posted:I disagree that some random blog post makes that clear.

That was plastered all over the official instructions for enabling WSL. The fact that you could dig down and access the files that unsafe way at all isn't great but probably unavoidable? And they didn't make it easy for that reason.
|
# ? Mar 12, 2019 20:25 |
|
The Milkman posted:That was plastered all over the official instructions for enabling WSL.
|
# ? Mar 12, 2019 20:35 |
|
One day to go on the Unraid trial and, sincerely, my mind isn't made up. I'm not sure it's reducing my CJing over Ubuntu server but I really do see the potential, especially in regard to home-centric VMs (i.e. I won't be getting a gaming PC any time soon but could get a graphics card and more RAM and do it occasionally on this). Unraid states you can have two 15-day trial extensions. Has anyone actually done this? I think I'm going to request more time if it's relatively easy.
|
# ? Mar 12, 2019 21:09 |
|
Having an issue with my DS216j. The upstream over LAN seems to be dropping in speed every so often – enough that 4K MKV files are buffering, but not 1080p MKVs. I can watch this happening in DSM. My other Synology NAS isn't having any issues. The drives both pass SMART and I seem to be having the problem on both of them. Any ideas as to what I should check for next? KOTEX GOD OF BLOOD fucked around with this message at 03:20 on Mar 13, 2019 |
# ? Mar 13, 2019 02:33 |
|
mystes posted:If you can install python and the python libesedb bindings (you can probably install them somehow in windows, otherwise in wsl or booting from an ubuntu usb stick run "apt-get install python3-libesedb").

I was playing with this because it piqued my curiosity. Whatever this library is, it's wildly inefficient. Given it's a known-format database, it sure does like to scan the whole file. Basically I loaded up an `edb` file and pulled out `records = t.records`. len(records) yields 3 million rows, consuming ~371MB of RAM. It took like 15 minutes of 100% CPU time to do this, with my disk at ~10% I/O usage. strace showed a steady stream of lseek() and read() in a loop with 32k chunk sizes. From there, doing "records[0]" re-scans the whole drat file as near as I can tell; I'm about 10 minutes in and it's doing the exact same thing. Makes me think this python library doesn't understand the index format and is basically doing it through brute force every single time. Either that, or because the file doesn't fit into RAM it's not storing the index anywhere? That seems doubtful - it should just crash. I'm debating throwing this onto a server at work, mapping the file into a memory disk, and letting it rip. This is on Fedora 27 with this RPM: https://forensics.cert.org/fedora/cert/27/x86_64/repoview/libesedb-python3.html

On topic: The fan in my Synology is super loud/dying. I found https://forum.synology.com/enu/viewtopic.php?f=140&t=97437&start=15#p374386 and bought that fan for $20. We'll see if it works! I had a dream last night I took it apart and found like 5 little muffin fans inside.
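If the slowness really is per-access rescanning, one mitigation is to touch each record exactly once in a single forward pass and keep only what you need. A sketch, using just the attribute the post itself exercises (`t.records`); `extract` is a hypothetical callback of yours, and whether sequential iteration actually streams in pyesedb is an assumption:

```python
def dump_records(table, extract, limit=None):
    """One forward pass over table.records, keeping only extracted values.

    `table` is anything exposing a `records` sequence (a pyesedb table
    exposes exactly that); `extract` maps one record to whatever fields
    you want to keep. Iterating once, in order, avoids paying the
    apparent full-file rescan per random access like records[0].
    """
    out = []
    for i, rec in enumerate(table.records):
        if limit is not None and i >= limit:
            break
        out.append(extract(rec))
    return out
```

With 3 million rows you'd likely want `limit` set while testing, then write rows out incrementally instead of accumulating them if memory becomes the constraint.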
|
# ? Mar 14, 2019 18:22 |
|
H110Hawk posted:I was playing with this because it piqued my curiosity. Whatever this library is is wildly inefficient. Given it's a known-format database it sure does like to scan the whole file. Basically loaded up an `edb` file, pulled out `records = t.records`. len(records) yields 3 million rows, consuming ~371MB of RAM. It took like 15 minutes of 100% cpu time to do this, with my disk at ~10% i/o usage. strace showed a steady stream of lseek() and read() in a loop with 32k chunk sizes. From there doing "records[0]" re-scans the whole drat file near as I can tell, I'm about 10 minutes in and it's doing the exact same thing. Makes me think this python library doesn't understand the index format and is basically doing it through brute force every single time.

It seems like you can also read the files from .NET, which would probably be better, but it looked hard to find actual documentation.
|
# ? Mar 14, 2019 18:40 |
|
How does this thread feel about ArsTechnica's most recent Server build guide recommendations? And yes, they're recommending refurbished HDDs. For Too Long Didn't Read: quote:Entry Level
|
# ? Mar 14, 2019 18:49 |
|
I feel like I would want a lot more hard data on reliability before being willing to consider using refurb drives. I'm not sure that anecdata like "I've used these time and again, and in my experience, they're a good deal" really cuts it.
|
# ? Mar 14, 2019 18:57 |
|
I’m keen on maybe a slightly more pumped-up version of the Entry level with a Xeon-D, perhaps the 1541. Have not had much luck finding a retailer selling Supermicro boards for acceptable prices though (in ). Good to see the options for cases with hot-swap drives. Related, if I were to move my Unraid setup to a new motherboard, how tricky/complex is it to move the data drives? The boot drive is easy; I just wonder if it requires the drives to be ordered or if it does all its Unraid stuff just magically.
|
# ? Mar 14, 2019 19:03 |
|
I mean, they're refurbs, but they're also the stupid reliable HGST drives. I have four refurb HGST 8TB in my array right now. Of course with how cheap shucked drives are now, and the fact that opening a product is no longer a legal reason to void a warranty, I'm just going to go with shucked drives in the future anyway.
|
# ? Mar 14, 2019 19:08 |
|
H110Hawk posted:I was playing with this because it piqued my curiosity. Whatever this library is is wildly inefficient. Given it's a known-format database it sure does like to scan the whole file. Basically loaded up an `edb` file, pulled out `records = t.records`. len(records) yields 3 million rows, consuming ~371MB of RAM. It took like 15 minutes of 100% cpu time to do this, with my disk at ~10% i/o usage. strace showed a steady stream of lseek() and read() in a loop with 32k chunk sizes. From there doing "records[0]" re-scans the whole drat file near as I can tell, I'm about 10 minutes in and it's doing the exact same thing. Makes me think this python library doesn't understand the index format and is basically doing it through brute force every single time.

mystes posted:I feel like I would want a lot more hard data on reliability before being willing to consider using refurb drives. I'm not sure that anecdata like "I've used these time and again, and in my experience, they're a good deal" really cuts it.

Yeah, I've pretty much given up hoping to ever get the file names back at this point. The one single program that I know actually works can no longer be activated (but the company will still happily take your money!). I really do appreciate everyone helping me try to figure this out though. It means a lot to me. On that note, why is RAID not considered backup? I am planning on setting up a RAID5 with all my drives and from what I understand, I would have to have two drives fail at the exact same time for me to lose data? I really don't see that being a reasonable scenario. I mean yeah it could happen but it's not worth having to buy a 2nd of every single hard drive I already have, which would cost me a fortune. Is there something I'm missing? Also I am kind of new to RAID. Does it matter that about 5 of my server's hard drives are USB? The rest are either SATA direct to the motherboard or SATA to this RAID card that I had to flash to IT mode.
Can I do RAID5 with this mishmash of hookups?
|
# ? Mar 14, 2019 19:10 |
|
RAID doesn't protect against file deletion, file corruption, or malicious acts (such as a cryptolocker type virus). RAID5 in particular is also questionable with modern drive sizes because the odds of a "not failed" drive returning an unrecoverable read error during your rebuild process are now significant because of how much data is being read. With a traditional RAID system, one URE during the rebuild means your whole array is trash. ZFS is smart enough in that situation that it can tell which files it cannot reliably recover, flags them, and recovers everything else without making GBS threads the bed.

WWWWWWWWWWWWWWWWWW posted:Also I am kind of new to RAID. Does it matter that about 5 of my server's hard drives are USB?

The gently caress are you doing, really?
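The arithmetic behind that URE warning is easy to sketch. Using the usual consumer spec-sheet figure of one unrecoverable error per 1e14 bits read (an assumption; real-world rates are disputed, and bits are treated as independent trials, which is a simplification):

```python
def p_at_least_one_ure(bytes_read, bit_error_rate=1e-14):
    """Chance of hitting at least one URE while reading bytes_read bytes.

    bit_error_rate is the per-bit figure from the drive's spec sheet
    (1e-14 is typical for consumer drives, 1e-15 for enterprise). A
    RAID5 rebuild must read every surviving drive end to end, which is
    why the number gets scary at modern capacities.
    """
    bits = bytes_read * 8
    return 1.0 - (1.0 - bit_error_rate) ** bits

# Rebuilding a four-drive RAID5 of 10 TB disks reads ~30 TB of survivors:
# roughly a 90% chance of at least one URE at the 1e-14 spec rate.
p = p_at_least_one_ure(30e12)
```

Dropping to an enterprise-class 1e-15 rate, or shrinking the array, pulls that probability down substantially, which is part of why RAID6 or ZFS gets recommended instead.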
|
# ? Mar 14, 2019 19:19 |
|
It's mostly RAW 1080p Blu-ray rips and games. So what exactly is recommended if I don't want to buy a backup hard drive for every single hard drive I already have? Anything?
|
# ? Mar 14, 2019 19:50 |
|
WWWWWWWWWWWWWWWWWW posted:It's mostly RAW 1080p Blu-ray rips and games

RAID is not backup because it provides no segregation of failure domain. One virus, one faulty raid controller, one filesystem bug, one user error, one disk failing during a rebuild causes you to lose some-to-all of your data. You need to segregate and tier your data resiliency needs. For your sacred dick picks, you should be putting them ideally into 3 places: the live data (RAID), a secondary hard drive that's offline most of the time, and offsite somewhere (Google Drive/Backblaze B2/Etc with some kind of erasure protection/versioning.) For your less sacred but still annoying to lose porn, put it on that live data array (RAID) and make a second copy somewhere else. For "Don't Care" data like family photos just stick it on the RAID array but don't cry if/when it's gone.
|
# ? Mar 14, 2019 20:09 |
WWWWWWWWWWWWWWWWWW posted:It's mostly RAW 1080p Blu-ray rips and games

It depends on what you are trying to protect your data from. In my case, I want to protect against the things IOwnCalculus mentioned (file deletion, corruption, malicious acts) but also against physical issues like theft or fire. Protecting against theft or fire means I need the data replicated in multiple physical locations. For that aspect I store backups in the cloud (I use both Google Drive as well as Backblaze & restic). If cloud isn't an option, then I think it would be buying more hard drives and storing them in different physical locations.
|
|
# ? Mar 14, 2019 20:10 |
|
I was actually just wondering about picking up one of those exact He10 drives, but for my desktop as a new games drive. I don't necessarily need that much space. I'd be okay at 4-6TB -- but that's firmly out of affordable SSD range, and hybrid drives don't seem to exist anymore. I was looking at what the fastest spinny drives might be, and that looked like it was up there.
|
# ? Mar 14, 2019 21:50 |
|
The Milkman posted:I was actually just wondering about picking up one of those exact He10 drives, but for my desktop as a new games drive. I don't necessarily need that much space. I'd be okay at 4-6TB -- but that's firmly out of affordable SSD range, and hybird drives don't seem to exist anymore. I was looking at what the fastest spinny drives might be, and that looked like it was up there.

The fastest spinning drives are still orders of magnitude below SSD in performance, but yes, the more dense the drive, the marginally better performance it will have. Of course I also manage to keep my desktop confined to a 500GB SSD and no other disks other than a single unshucked 10TB Easystore. In your shoes for $200 I'd be doing my damnedest to try and shrink below 2TB, because even the Intel 660p NVMe has been as low as $220, and there have been 2TB SATA SSDs in the $190 range.
|
# ? Mar 14, 2019 21:56 |
|
The Milkman posted:I was actually just wondering about picking up one of those exact He10 drives, but for my desktop as a new games drive. I don't necessarily need that much space. I'd be okay at 4-6TB -- but that's firmly out of affordable SSD range, and hybird drives don't seem to exist anymore. I was looking at what the fastest spinny drives might be, and that looked like it was up there.

I have a refurb'd 6 TB He6 in my gaming desktop and it's still fantastic. It might be a little clicky/loud but that's irrelevant to me, and it's pretty fast (IIRC I benchmarked it around 200 MB/s r/w sequential.) They're still regularly around $100, I think mostly from GoHardDrive, and they resell on Newegg Flash and eBay. I have that drive backed up to a USB Easystore/Elements. The only reason I haven't bought more of those refurb'd drives is because for both my gaming desktop and my Plex server, I'll need to upgrade capacity to 8 or 10 TB in the near future, but otherwise I'd recommend the HeX refurbs. For HDD pricing, basically $20 or better per TB is good, although NAS/enterprise stuff is always more expensive.
|
# ? Mar 14, 2019 22:47 |
|
IOwnCalculus posted:The fastest spinning drives are still orders of magnitude below SSD in performance, but yes, the more dense the drive, the marginally better performance it will have.

I could shave down to 2TB of stuff pretty easily -- right now. Games be gettin bloated though. 80-100GB is becoming normal. I'm less concerned with load times than having to janitor the space. Last time I tried games off an SSD, it wasn't that dramatic of an improvement anyway, it's mostly occasional sequential reads. Granted that was pre-NVMe so maybe there's a bigger difference these days. But I've been not unsatisfied running off a clogged-up very old cheapo 4TB.

Atomizer posted:I have a refurb'd 6 TB He6 in my gaming desktop and it's still fantastic. It might be a little clicky/loud but that's irrelevant to me, and it's pretty fast (IIRC I benchmarked it around 200 MB/s r/w sequential.) They're still regularly around $100, I think mostly from GoHardDrive, and they resell on Newegg Flash and eBay. I have that drive backed up to a USB Easystore/Elements. The only reason I haven't bought more of those refurb'd drives is because for both my gaming desktop and my Plex server, I'll need to upgrade capacity to 8 or 10 TB in the near future, but otherwise I'd recommend the HeX refurbs. For HDD pricing, basically $20 or better per TB is good, although NAS/enterprise stuff is always more expensive.

Good to know these refurbs are reliable, that was my other concern. Or at least, not especially more likely to up and die compared to something new.
|
# ? Mar 14, 2019 23:29 |
|
OK, so I take it either Backblaze or S3 should work fine for backing up my whole NAS. Are there any current recommendations for backing up a Windows PC to the NAS? I've tried some of the free software that's been mentioned here before and File History in Windows itself, but we've been having issues with stability/reliability, and File History seems to have a mind of its own about some things, so it's a bit inconsistent about what it chooses to back up. Are there any paid solutions that have official support and are more robust than the free alternatives?
|
# ? Mar 14, 2019 23:35 |
Thwomp posted:How does this thread feel about ArsTechnica's most recent Server build guide recommendations?

I was just looking at new chassis last night. The 6x6TB WD Reds in my N40L are coming up on 4 years in age, so I figured maybe it's time for a new set of 10TB drives and upgrading the rest of the hardware. Looked around at Lenovo TS440/TS460, some HPE tower servers, a DIY SilverStone DS380B (airflow doesn't sound good there), Dell PowerEdge tower servers...nothing really stood out to me as a compelling option. Then I was thinking maybe I should just get a little rack mount cabinet since that opens up a bunch of possibilities, and having an excuse to get some rackmounted poo poo at home would be fun. Not sure what else I would rack in there though aside from the storage array and a UPS. So maybe not worth it. I dunno, what else should I look at?
|
|
# ? Mar 15, 2019 01:47 |
|
fletcher posted:It depends on what you are trying to protect your data from

What if the only thing I want to protect my data from is a dead hard drive? Let's say I don't care about a fire, I don't care about a virus, I just care about one hard drive dying and that data being gone? Is raid5 OK for that? I understand that any virus or natural disaster would mean I'd lose data.
|
# ? Mar 15, 2019 01:56 |
|
WWWWWWWWWWWWWWWWWW posted:What if the only thing I want to protect my data from is a dead hard drive? Let's say I don't care about a fire, I don't care about a virus, I just care about one hard drive dying and that data being gone? Is raid5 OK for that? I understand that any virus or natural disaster would mean I'd lose data

A number of us have been there when the drive controller has failed and destroyed a RAID 5 array. Risk of data loss is not just the probability of individual drive failure. There are far more risks, which is why people suggest having a backup. In the end it's up to you. If you lose all the data and you won't care, then just use a RAID 5. If there is data you care about, keep a copy on a separate device. Best practice is always 3-2-1 backups: 3 copies, 2 on-site and 1 off-site.
|
# ? Mar 15, 2019 02:31 |
|
WWWWWWWWWWWWWWWWWW posted:What if the only thing I want to protect my data from is a dead hard drive? Let's say I don't care about a fire, I don't care about a virus, I just care about one hard drive dying and that data being gone? Is raid5 OK for that? I understand that any virus or natural disaster would mean I'd lose data

You use RAID 1 like me. Or logical RAID 1 like Unraid. Forget RAID 5.
|
# ? Mar 15, 2019 03:18 |
|
WWWWWWWWWWWWWWWWWW posted:What if the only thing I want to protect my data from is a dead hard drive? Let's say I don't care about a fire, I don't care about a virus, I just care about one hard drive dying and that data being gone? Is raid5 OK for that? I understand that any virus or natural disaster would mean I'd lose data

Jesus christ, why do you have to make everything so difficult? RAID is not backup, it's for redundancy and uptime, period. You know what's backup? An actual backup solution! You have a separate HDD back up your stuff and/or a remote solution; the number of backups depends on the value and replaceability of the data. Use FreeFileSync, for example, for a quick/free/easy solution to mirror drive A to backup drive B (which you then take offline until the next time you need to update the backup.) Use a service like Backblaze to remotely back up appropriate stuff. For example, I have my game HDD backed up onto a single external HDD on-site, not because the games are particularly valuable but because they're large enough to make re-downloading them a chore (like, a multiple-days-long endeavor), but sensitive banking/tax documents should have at least a 3rd backup, in line with the 3-2-1 strategy mentioned by Devian.
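The mirror step FreeFileSync automates can be approximated in a few lines of stdlib Python. This is a toy one-way mirror of my own, not FreeFileSync's algorithm, and it also illustrates why a bare mirror is weaker than a versioned backup: deletions propagate.

```python
import filecmp
import os
import shutil

def mirror(src_root, dst_root):
    """One-way mirror: make dst_root contain exactly what src_root does.

    Copies new/changed files; deletes anything in dst that no longer
    exists in src. No versioning -- a file deleted (or cryptolockered)
    on the source disappears from the backup on the next run too.
    """
    # Pass 1: copy new and changed files from source to destination.
    for dirpath, dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        dst_dir = os.path.join(dst_root, rel)
        os.makedirs(dst_dir, exist_ok=True)
        for name in filenames:
            s = os.path.join(dirpath, name)
            d = os.path.join(dst_dir, name)
            if not os.path.exists(d) or not filecmp.cmp(s, d, shallow=False):
                shutil.copy2(s, d)
    # Pass 2: remove anything on the destination that vanished from the source.
    for dirpath, dirnames, filenames in os.walk(dst_root, topdown=False):
        rel = os.path.relpath(dirpath, dst_root)
        src_dir = os.path.join(src_root, rel)
        for name in filenames:
            if not os.path.exists(os.path.join(src_dir, name)):
                os.remove(os.path.join(dirpath, name))
        if rel != "." and not os.path.exists(src_dir):
            os.rmdir(dirpath)
```

Taking the destination drive offline between runs, as the post suggests, is what turns this from "second copy" into something closer to an actual backup.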
|
# ? Mar 15, 2019 03:51 |
|
WWWWWWWWWWWWWWWWWW posted:What if the only thing I want to protect my data from is a dead hard drive? Let's say I don't care about a fire, I don't care about a virus, I just care about one hard drive dying and that data being gone? Is raid5 OK for that? I understand that any virus or natural disaster would mean I'd lose data

Even then, even if everyone else fails to talk you out of this, at least use RAID 6. 5 just doesn't cut it anymore.
|
# ? Mar 15, 2019 04:24 |
|
|
|
Unraid's or SnapRAID's parity drives may be of interest to you.
|
# ? Mar 15, 2019 04:43 |