|
I have a hardware question. If this isn't the best place to ask, I'd appreciate being pointed to it. What kind of equipment/how much money would someone need to recover data from hard drives that have been dismantled and had their platters scratched? I assume someone at the NSA level can recover data from anything, barring platters shredded to pieces. In case you're wondering: I found 5 UATA HDDs from years ago in the basement. I also happened to find a PCI card with ATA connectors and an ATA cable. Since I have a motherboard with a PCI slot, I wanted to see if I could dd the HDDs. Unfortunately, something changed in the requirements (voltage?) since ages ago, and my 5 year old motherboard would not POST with the PCI card attached. So, since those hard drives may hold tax returns from who knows when, I wanted to dispose of them in a manner that would make it impractical for someone with some kind of budget (not NSA, of course) to retrieve the data. So I dismantled them and scratched and bent the platters. Are they reasonably fine to put in the trash? I don't particularly care more than that about protecting whatever data may be on them. I figure that if you can get my SIN from the HDD, you can get it more easily from the government database by just asking. So it's probably not worth your time.
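For reference, the dd step itself would look something like this. A minimal sketch; `/dev/sdX` is a placeholder for whatever device the drive actually shows up as, so check dmesg or lsblk first:

```shell
# Identify the drive first (dmesg | tail, or lsblk) -- /dev/sdX is a placeholder.
# Image the whole drive to a file, pressing on past read errors,
# since decades-old platters will likely have bad sectors:
dd if=/dev/sdX of=old-drive.img bs=1M conv=noerror,sync status=progress

# GNU ddrescue handles flaky drives better (retries, keeps a map of bad areas):
# ddrescue /dev/sdX old-drive.img old-drive.map
```

With `conv=noerror,sync`, unreadable blocks are zero-padded instead of aborting the copy, which keeps the image's offsets aligned with the disk.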
|
# ? May 20, 2017 02:51 |
|
|
|
At this point you are fine.
|
# ? May 20, 2017 03:21 |
|
Volguus posted:So, it's probably not worth your time. Unless you have some reason to believe you're a tasty subject for targeted collection activities, if all you did was take a hammer or screwdriver and break the controller board on the HDD, you can be reasonably sure that no one in their right mind is going to go through the rear end-pain of trying to track down a compatible replacement board on the off chance that there's something on the drive that might interest some guy dumpster diving. So... yeah. Don't worry about it.
|
# ? May 20, 2017 03:28 |
|
Now I feel like I overdid it... oh well, at least I got some magnets for the fridge.
|
# ? May 20, 2017 03:48 |
|
Getting the magnets out is all the excuse you need imo. If you ever do this again watch out for glass platters. Some HDDs use them because glass is lighter and can provide a smoother surface finish (allows heads to fly closer, which is important for increasing recording density). The glass they use is pretty tough and it really looks just like metal (because it's been plated with a metal oxide), so if you try to bend it you'll put a lot of force into it with no visible results right up until it shatters and possibly cuts the poo poo out of you while sending shards everywhere.
|
# ? May 20, 2017 05:12 |
|
IOwnCalculus posted:There is no upgrade path from 10 to 11, and the FreeNAS forums make SH/SC seem downright welcoming.
|
# ? May 20, 2017 13:51 |
|
Combat Pretzel posted:Tell me about it. I'm just trying to find out how to install FreeNAS to a partition instead of letting it hog the whole drive. Holy poo poo the vitriol. There's r/freenas on reddit. I think it's usually just people asking for hardware recs though.
|
# ? May 21, 2017 02:58 |
|
BobHoward posted:Getting the magnets out is all the excuse you need imo. somebody get the Hydraulic Press Channel on the line stat
|
# ? May 22, 2017 01:23 |
|
phosdex posted:There's r/freenas on reddit. I think it's usually just people asking for hardware recs though. FreeNAS' installer insists on using the whole device, which is stupid. I wanted to stick the install on the SSD I've put in (256GB SSD, of that 128GB as L2ARC, 32GB as ZIL, rest free and trimmed for overprovisioning), so that I don't need to crap my pants when the stick eventually breaks. I guess cloning the stick to the SSD would work, but I can't be assed to reinstall it yet another time. Going from Corral to 11 was already annoying enough.
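For anyone wanting that layout anyway, it can be carved out by hand from a FreeBSD shell after the fact. A rough sketch, assuming the SSD shows up as ada1 and the pool is named tank (both placeholders):

```shell
# Partition the SSD: 128G for L2ARC, 32G for the SLOG, rest left
# unpartitioned for overprovisioning. Device and pool names are examples only.
gpart create -s gpt ada1
gpart add -t freebsd-zfs -s 128G -l l2arc ada1
gpart add -t freebsd-zfs -s 32G -l slog ada1
# Attach the labeled partitions to an existing pool:
zpool add tank cache gpt/l2arc
zpool add tank log gpt/slog
```

The GPT labels show up under /dev/gpt/, which keeps the pool config stable even if the device renumbers.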
|
# ? May 22, 2017 03:13 |
|
Combat Pretzel posted:I relented and continue to use the USB stick. I guess so long the RRD logs aren't on it, there shouldn't be an issue. Ah, this is exactly what I also wanted to do but could not find anything about, only about using a USB stick. Which, as you say, sucks from a reliability standpoint. So it's just not *supported*?
|
# ? May 22, 2017 06:00 |
|
It's not really possible to do in any sane way. FreeNAS' design architecture dates back to a time before LSI2008s could be had for cheap and motherboards sometimes only had two SATA ports to go with two IDE ports, so they built it from the ground up on the idea that they were "helping" you by using a USB boot device. Of course, that was also a time when your FreeNAS box was just storage, and didn't easily do much else. They steadfastly cling to using the whole USB device because clearly this is THE superior solution and gently caress you if you don't agree, because FreeNAS is awesome because we say so. If you know enough FreeBSD to pull this off, you probably don't need FreeNAS on top of it.
|
# ? May 22, 2017 06:16 |
FreeBSD is easier to do than most people think.
|
|
# ? May 22, 2017 08:30 |
|
I'm just about to throw in the towel with FreeNAS. I love many, many of its features, but I feel that they've been getting progressively more broken as time goes on. Currently, on solid hardware (Lenovo TS440, Xeon E3-1225, 32GB ECC, H310 HBA), I can't get Plex to stream a single 1080p movie file without stuttering and the audio getting out of sync. The phpVirtualBox install I had going perfectly has decided to lunch itself, the installer doesn't work properly so I've had to install everything manually, Transmission has some odd user-level permissions fuckery going on that defies my BSD knowledge, and CrashPlan is a royal pain in the rear end every time they update the software (yes, I know this isn't FreeNAS's fault). Combined with the fact that I want to increase my array size, so I'll have to do the whole zpool backup, flatten, and re-allocate, plus the questionable future of FreeNAS 11, I'm strongly considering just switching to Windows and using the built-in RAID on the HBA, or something like UnRaid with a functional Docker implementation. Any other options that I'm ignoring? I really like ZFS for certain things, but my data are mostly backed up off-site anyhow, and anything that isn't... isn't THAT critical.
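The backup/flatten/re-allocate dance is typically done with zfs send/recv. A hedged sketch, with pool names (tank, scratch) and disk names as placeholders:

```shell
# Take a recursive snapshot and replicate it to a temporary pool:
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F scratch/backup
# Destroy tank, re-create it with the new disk layout, then restore:
# zpool destroy tank
# zpool create tank raidz2 da0 da1 da2 da3 da4 da5
zfs send -R scratch/backup@migrate | zfs recv -F tank
```

The -R flag preserves child datasets, snapshots, and properties, so the restored pool comes back with its layout intact.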
|
# ? May 22, 2017 19:45 |
|
I'm convinced that the only way to do Crashplan on *nix these days is in Docker. I have a box colocated at work that I set up docker in just to run Crashplan, because it's so much more reliable than running it natively in Ubuntu. I like ZFS too much to abandon it altogether but I'm with you on loving off of FreeNAS. As soon as I can scrounge up a couple of cheap SSDs (ideally once I sell the motherboard I pulled out of this server), I'll start that process.
|
# ? May 22, 2017 19:50 |
|
sharkytm posted:I'm just about to throw in the towel with FreeNAS. I love many, many of its features, but I feel that they've been getting progressively more broken as time goes on. Currently, on solid hardware (Lenovo TS440, Xeon E3-1225, 32GB ECC, H310 HBA), I can't get Plex to stream a single 1080p movie file without stuttering and the audio getting out of sync. The phpVirtualBox install I had going perfectly has decided to lunch itself, and the installer doesn't work properly, so I've had to manually install everything, and Transmission has some odd user-level permissions fuckery going on that defies my BSD knowledge, and CrashPlan is a royal pain in the rear end every time they update the software (yes, I know this isn't FreeNas's fault). Combined with the fact that I want to increase my array size and I'll have to do the whole zpool backup, flatten, and re-allocate and the questionable future of FreeNAS 11, I'm strongly considering just switching to Windows and using the built-in RAID on the HBA, or something like UnRaid with functional Docker implementation. Pay for unraid and stop janitorial your Linux isos.
|
# ? May 22, 2017 19:58 |
|
Matt Zerella posted:Pay for unraid and stop janitorial your Linux isos. Yeah, this x100. ZFS is nice, but not nice enough to deal with FreeNAS and its community. The recent "Corral" clusterf*** just confirmed that for me. Unraid has its flaws (plenty of them, mostly security related), but once running it lets you have things like Nextcloud/Plex/CrashPlan up and running in 2 minutes instead of 2 days of browsing arcane blogs on FreeBSD... and that includes working, hands-off auto-updates.
|
# ? May 22, 2017 20:13 |
|
Don't forget a robust, helpful, and non-hostile support community.
|
# ? May 22, 2017 20:16 |
|
Is there a favorite way to get a shitload of 2.5" drives stacked into a case? Best controller backend? They're just SATA and it's for my own use, so it can't get too fancy.
|
# ? May 22, 2017 20:36 |
|
D. Ebdrup posted:FreeBSD is easier to do than most people think. This. The documentation for FreeBSD (handbook, man pages) is very high quality, and FreeBSD doesn't constrain your configuration choices, because it's intended to be a general purpose OS. That seems like a decent trade-off vs FreeNAS's web GUI.
|
# ? May 22, 2017 20:43 |
|
As someone who does basic sysadmin stuff for several Ubuntu Server boxes at work... is FreeBSD really going to be that much of a step upwards in difficulty?
|
# ? May 22, 2017 20:44 |
|
Paul MaudDib posted:As someone who does basic sysadmin stuff for several Ubuntu Server boxes at work... is FreeBSD really going to be that much of a step upwards in difficulty? Not at all. Check out the handbook I linked above. AlternateAccount posted:Is there a favorite way to get a shitload of 2.5" drives stacked into a case? Best controller backend? They're just SATA and it's for my own use, so it can't get too fancy. Slap a few of these into your case: https://www.newegg.com/Product/Product.aspx?Item=N82E16817198068 And grab a cheap LSI2008-based controller or two if you need more SATA ports.
|
# ? May 22, 2017 20:47 |
|
Matt Zerella posted:Don't forget a robust, helpful, and non-hostile support community. eames posted:Yeah this x 100 IOwnCalculus posted:I'm convinced that the only way to do Crashplan on *nix these days is in Docker. I have a box colocated at work that I set up docker in just to run Crashplan, because it's so much more reliable than running it natively in Ubuntu. I have no problem spending the money on the software. $140 for UnRaid? Fine, done. I'll start researching the setup, and I'll probably bug you folks. Wish me luck. Time to dig out a couple of archive drives and start backing everything up.
|
# ? May 22, 2017 20:50 |
|
Be sure to make use of Unraid's trial period. I used most of the given time to look at everything it had to offer until I ultimately decided not to use it in favor of OpenMediaVault 3 with mergerfs and SnapRAID.
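For anyone curious about that stack: the mergerfs + SnapRAID combo boils down to two small pieces of configuration. A minimal sketch; every mount point below is illustrative:

```
# /etc/fstab: pool the data disks into one mergerfs mount
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/storage  fuse.mergerfs  defaults,allow_other,use_ino  0 0

# /etc/snapraid.conf: one parity disk protects the data disks
parity /mnt/parity1/snapraid.parity
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
```

Unlike ZFS, parity is updated on a schedule (by running `snapraid sync`, e.g. from cron) rather than in real time, which is the main trade-off of this approach.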
|
# ? May 22, 2017 21:06 |
|
Paul MaudDib posted:As someone who does basic sysadmin stuff for several Ubuntu Server boxes at work... is FreeBSD really going to be that much of a step upwards in difficulty? I think FreeBSD is easier to manage since you have one 'distro' to learn... so to speak. But I mean it's easy peasy if you can admin Linux.
|
# ? May 22, 2017 21:21 |
|
SamDabbers posted:Not at all. Check out the handbook I linked above. Nice. Yeah, looks like that controller is the cheapest way to get 8+ drives into a system. And I just need a couple of the 4x SATA breakout cables and I'm good to go? Works with just about any NAS software?
|
# ? May 22, 2017 21:36 |
|
Yeah, the LSI2008 is probably the most well-supported storage controller outside of Intel's own chipsets.
|
# ? May 22, 2017 21:39 |
|
8-bit Miniboss posted:Be sure to make use of Unraid's trial period. Yeah, seconded. If you're like me, you'll initially be put off by the 2000-era looking user interface and some pretty confusing concepts. I found that most of these concepts are that way for a reason. Having the initial installation default to telnet instead of ssh, or logging into the web GUI over plaintext HTTP with the root password, are not examples of this.
|
# ? May 22, 2017 21:45 |
|
IOwnCalculus posted:Yeah, the LSI2008 is probably the most well-supported storage controller outside of Intel's own chipsets. Can I run 3 of them in one box?
|
# ? May 22, 2017 22:11 |
You can run as many as you have PCIe lanes for - so, on AMD Zen with up to 128 PCIe lanes, you can have a maximum of 16, assuming you can find risers, pcie switches, and a case that's big enough - which gives you 128 disks, assuming you're not using port multipliers.
BlankSystemDaemon fucked around with this message at 22:19 on May 22, 2017 |
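The arithmetic behind that cap, spelled out: an LSI2008 card is PCIe x8 with 8 SAS/SATA ports, so the lane budget bounds the controller count, and the port count bounds the disks:

```shell
echo $(( 128 / 8 )) controllers   # 128 lanes / 8 lanes per LSI2008 card
echo $(( 128 / 8 * 8 )) disks     # 8 ports per controller, no port multipliers
```

This prints `16 controllers` and `128 disks`; SAS expanders would raise the per-controller disk count well past 8 at the cost of shared bandwidth.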
|
# ? May 22, 2017 22:16 |
|
Paul MaudDib posted:As someone who does basic sysadmin stuff for several Ubuntu Server boxes at work... is FreeBSD really going to be that much of a step upwards in difficulty? Why not just use Ubuntu Server?
|
# ? May 23, 2017 00:48 |
|
I run Ubuntu server at home and it works. Like works, works. You set it up, import your pools (all the way back from OpenSolaris) and off you go. You have KVM for VMs, LXC or Docker for containers, Samba for file sharing, and my Plex VM streams without a hiccup. It's not that hard. Sure, you don't get a nice GUI, but I don't futz around making shares all day, so taking the time to set them up once is not that big of a hassle.
|
# ? May 23, 2017 06:17 |
|
Paul MaudDib posted:As someone who does basic sysadmin stuff for several Ubuntu Server boxes at work... is FreeBSD really going to be that much of a step upwards in difficulty? I don't even work in IT and I use FreeBSD purely from the terminal. My prior Linux experience was just loving around with it on old hardware. The handbook is divine-level documentation, and it is really mostly a case of remembering the little differences (e.g. remembering that config files are stored in different locations than on Linux). Join us, it's truly better over in BSD land.
|
# ? May 23, 2017 08:14 |
The good thing about FreeBSD is that it's an OS that is designed to be consistent, even in the ways that it lets you shoot yourself in the foot - but it's designed this way, because this also lets you accomplish things which will make you feel like a genius. Which is not to say that you can easily break your system, because that's quite difficult (especially with root on zfs and boot environments, nowadays) - but every single way I know of to break it requires you to use -f, which is a switch to be avoided unless you know exactly what you're doing. And there's no systemd BlankSystemDaemon fucked around with this message at 09:17 on May 23, 2017 |
|
# ? May 23, 2017 09:12 |
|
Now you jerks have me thinking about moving off freenas and I don't have any current problems with it.
|
# ? May 23, 2017 22:44 |
|
phosdex posted:Now you jerks have me thinking about moving off freenas and I don't have any current problems with it. If it ain't broke, don't fix it. TM
|
# ? May 23, 2017 22:49 |
|
8-bit Miniboss posted:If it ain't broke, don't fix it. TM Yeah, I doubt I will soon. Just thinking about the future. I only have one jail on FreeNAS right now, but I have a vSphere server that's way more powerful than I need, and it would be nice to consolidate into a single server.
|
# ? May 23, 2017 22:56 |
|
Just don't install Corral. FreeNAS 11 appears to be just fine.
|
# ? May 24, 2017 00:57 |
|
I'm still running Corral until they release 11 with VM support and Docker again. I'm quite happy where I am. And FreeBSD is the bee's knees. CommieGIR fucked around with this message at 01:26 on May 24, 2017 |
# ? May 24, 2017 01:23 |
|
I'm on 9.10.2, I believe the plan is for 11 to get final release next week. Then I think I'm gonna wait another month to see how stuff shakes out before I upgrade to it.
|
# ? May 24, 2017 01:34 |
|
|
|
Barring another high-level dude leaving, 11 should be fine. 9 was just solid.
|
# ? May 24, 2017 01:48 |