|
Matt Zerella posted:Expand the storage enough and depending on IOPS needed, this gets expensive quick as well as having storage people on hand to tune zfs.

Putting aside zfs for a moment, where's the extra expense?
|
# ? May 2, 2020 23:36 |
|
|
|
Once you get past our already stretched-to-the-breaking-point definitions of "home" or "personal" use, I'm not aware of any actual business or enterprise oriented storage solutions that aren't using software. Caching at the controller might have some small benefits but if you're in a business environment where you care about such things, adding a flash tier in front of it and having software smart enough to move data between disk, flash, and the fuckton of RAM that system should have, is way more effective than any hardware RAID solution.
|
# ? May 3, 2020 00:23 |
|
everyone hated hardware raid when it was a thing anyway afaik
|
# ? May 3, 2020 03:09 |
|
Folks, I'm going to move my home server over to a slightly newer old laptop, and I'm thinking of slapping a NAS-specific OS onto my previous machine until I build a "real" setup or pony up for a Synology or whatever. I was going to go with FreeNAS for no other reason than I already have a tutorial bookmarked about how to get that all going. Any reason why or why not that may be a good idea? I'm using a few external drives via USB of varying sizes and formats, so this would mainly be doing network file sharing with no fancy RAID business going on.
|
# ? May 3, 2020 04:48 |
|
Thermopyle posted:Putting aside zfs for a moment, where's the extra expense?

For one, memory requirements on large arrays. And someone who's a storage janitor. And like I said, if there are IOPS/VM/network needs, add dollar signs for both hardware and people. At that point you're probably looking at an actual vendor like NetApp.
|
# ? May 3, 2020 05:03 |
Bob Morales posted:A monkey can replace the disk with hardware raid

Out-of-band software RAID has nothing to do with it.

Zorak of Michigan posted:If I could use ZFS, I'd always use IT mode and software RAID over any hardware solution.

I'm not as sure if I'm stuck with Windows, or it's a work situation where nobody wants me to add ZFS to enterprise Linux.

Matt Zerella posted:For one, memory requirements on large arrays. And someone who's a storage janitor. And like I said if there are IOPS/VM/Network needs add dollar signs for both hardware and people.
|
|
# ? May 3, 2020 10:45 |
|
Just added a drive to my existing Synology 1019+. I am going from two 12TB drives to three 12TB drives. Seemed simple enough, especially for a NAS newbie like myself. I guess this expanding part takes a while? I have it set to 'run raid resync faster', but this is as far as it has made it after about six hours. I still don't quite need all the space, but I wanted to have a bit more room available. By the time I need more space I'll probably just get two more drives at the same time. Will likely be a long while though. Still, I figured a five bay NAS would give me plenty of room to expand. I am sure it is still probably babytown stuff compared to what some people here are doing though.
|
# ? May 3, 2020 16:47 |
|
Filthy Monkey posted:Just added a drive to my existing Synology 1019+. I am going from two 12TB drives to three 12TB drives. Seemed simple enough, especially for a NAS newbie like myself. I guess this expanding part takes a while? I have it set to 'run raid resync faster', but this is as far as it has made it after about six hours.

The expansion takes quite a bit of time, but the volume is still usable while it's doing the operation. If you have it set for faster expansion over performance, it'll just be slower on NAS operations, that's all. I have the same model with five 12TB disks in RAID6 with BTRFS and it took a few days to totally sync. Nothing really to worry about.
|
# ? May 3, 2020 17:21 |
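Worth noting for anyone watching one of these expansions crawl along: Synology's DSM builds its arrays on Linux md, so if SSH is enabled you can watch the raw progress in /proc/mdstat instead of the web UI's bar. A small sketch of pulling the percentage out of a progress line — the sample line and the idea of scripting this at all are illustrative, not a Synology-documented interface:

```shell
#!/bin/sh
# Extract the percentage from an mdstat progress line, e.g.
#   [=======>.............]  reshape = 37.4% (4496832/12000000) finish=89.3min
resync_pct() {
    # $1: one line from /proc/mdstat; prints nothing if no progress figure
    printf '%s\n' "$1" | sed -n 's/.*= *\([0-9.]*\)%.*/\1/p'
}

# On the NAS itself you would feed it the live file, something like:
#   grep -E 'recovery|reshape|resync' /proc/mdstat | while read -r line; do
#       echo "progress: $(resync_pct "$line")%"
#   done
```

The reshape has to touch every stripe on every member disk, which is why multi-day times on 12TB drives are normal rather than a sign of trouble.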
|
Matt Zerella posted:For one, memory requirements on large arrays. And someone who's a storage janitor. And like I said if there are IOPS/VM/Network needs add dollar signs for both hardware and people.

It seems like by the time you're at the point where these become significant, you're also at the point where the costs of the hardware for hardware RAID become significant. I'm not saying software RAID isn't more expensive at some scale. I'm just saying it's not obviously the case, and it's probably going to be very situation-dependent if it is.
|
# ? May 3, 2020 18:25 |
I picked up a couple 10TB WD My Book drives a few months back, and just got around to installing them. Tested/formatted both before taking one out of the enclosure. I followed a guide to remove the 3.3v pins on the power connector because apparently they prevent the drives from being recognized on boot and can even catch on fire if you leave them in? I then plugged it into an external usb sata adapter/power cable to test it. It immediately shorted out a component on the drive's circuit board, which is connected to the ground pins: Has anyone else had this happen when covering/removing the 3.3v pins, or is this just incredibly bad luck? Do you have to use a specific kind of connector cable or else this happens? I'm going to attempt to replace the shorted component if I can figure out what it is, but otherwise where would be a good place to order a replacement circuit board?
|
|
# ? May 3, 2020 22:40 |
|
Monitor Burn posted:I picked up a couple 10TB WD My Book drives a few months back, and just got around to installing them. Tested/formatted both before taking one out of the enclosure. I followed a guide to remove the 3.3v pins on the power connector because apparently they prevent the drives from being recognized on boot and can even catch on fire if you leave them in? I then plugged it into an external usb sata adapter/power cable to test it.

You might've shorted some other pins or misaligned them when you removed the 3.3v pins. I think many people were just covering the pins with kapton tape, or removing the pins on the PSU cable, which is easier since it's usually a crimp connector. It does look like the damage is localized. If someone has a similar model drive (post a picture of the label?), you can probably get someone here to look on their board to identify that component. It doesn't look like too hard a solder job.

Re: replacing the whole board: you might be able to, but I have a vague recollection about calibration data being stored on particular boards. Dunno if that's model-level or drive-level (e.g. bad sectors). I'm at the limit of my knowledge so I'll stop talking.
|
# ? May 3, 2020 23:06 |
Yeah I assumed it was the removed pins also, but I tested with a multimeter and didn't get any shorts with ground or other pins. Here's the drive and board:
|
|
# ? May 3, 2020 23:14 |
|
Yeah, that was definitely not the right way to go about that. Not sure where you picked up the fire risk if you leave them in, the only real fire risk in hard drives these days is using crappy molded molex adapters. Given that voltage was pumped where it's not supposed to go, I don't think you can reliably trust the multimeter readings at this point to verify that it wasn't something you did.
|
# ? May 3, 2020 23:21 |
Buff Hardback posted:Yeah, that was definitely not the right way to go about that.

Can you link a guide to the recommended way to do this properly?
|
|
# ? May 3, 2020 23:26 |
|
Monitor Burn posted:Can you link a guide to the recommended way to do this properly?

Throw kapton tape on the three 3.3v pins, or modify a PSU extension cable (don't modify the stock PSU SATA cables). Given that in the USA those drives are RMAable bare if a fault were to occur down the line (not you shorting it out), purposefully yanking pins off just ruined your warranty on that one.
|
# ? May 3, 2020 23:29 |
|
Also the other important thing: removing 3.3v depends on implementation of your specific PSU. Some PSUs keep 3.3v high (generally the older SATA spec ones), as devices used to actually use 3.3v. These days nothing uses 3.3v, so the spec was changed to allow for remote reboot of SATA/SAS devices when the 3.3v pins were held high. Especially if you're doing something as destructive as ripping pins off the drive's power connector, it would be a good idea to try your drive first with your PSU before taking a pair of needlenose pliers to it.
|
# ? May 3, 2020 23:37 |
|
You removed three pins from the connector. That's too many pins, you can just cover the one long one that's the third one in with kapton or remove it: https://www.instructables.com/id/How-to-Fix-the-33V-Pin-Issue-in-White-Label-Disks-/
|
# ? May 3, 2020 23:57 |
|
Rexxed posted:You removed three pins from the connector. That's too many pins, you can just cover the one long one that's the third one in with kapton or remove it:

Just tape them all. They're all involved in 3.3v and there's no harm in taping those three. Don't rip them off though.
|
# ? May 4, 2020 00:17 |
|
Kinda dumb question, but did you try them in your system before you pulled the pins? Your post made it sound like you only tested them while they were still in the enclosures, rather than bare. I ask because most drives work just fine with most PSUs without needing any modification whatsoever. The whole pin-taping thing is supposed to be a remedial action to take if your drives don't work, not a proactive measure.

e; You can Google around and find a few shops offering replacement PCBs, like https://www.hddzone.com/western-digital-sata-pcb-c-18.html They don't look terribly expensive.

DrDork fucked around with this message at 15:11 on May 4, 2020 |
# ? May 4, 2020 15:07 |
|
Been getting random crashes, I think I got stung by that Ryzen C-State bug. Made adjustments to fix, but it's weird since I was running Linux (ubuntu server 18.04) on this PC before I started using unraid and didn't have any issues.
|
# ? May 4, 2020 18:08 |
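For anyone else chasing the same Ryzen idle-hang: the commonly reported workarounds are either the BIOS route (disable Global C-States, or set Power Supply Idle Control to "Typical Current Idle") or a kernel boot parameter that keeps the CPU out of the deep C6 state implicated in the hangs. A config-fragment sketch for /etc/default/grub — treat the specific parameter as an assumption from community reports and verify against your kernel's documentation:

```shell
# /etc/default/grub -- hypothetical fragment; run update-grub (or
# grub2-mkconfig) afterwards and reboot.
# processor.max_cstate=1 limits ACPI C-states, trading some idle power
# draw for stability on affected Ryzen boards.
GRUB_CMDLINE_LINUX_DEFAULT="quiet processor.max_cstate=1"
```

If the crashes stop after this, it is a decent sign the C-state bug was the culprit rather than unRAID itself.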
|
DrDork posted:Depends entirely on how comfortable you are with a DIY system. If you are, we can give some recommendations, and in that case a 6-drive setup isn't crazy.

What would be a good setup to shoot for? I think the 4x8TB single-redundancy setup with 24TB usable should last me for a while. Would a build like that be very expensive right now?
|
# ? May 4, 2020 20:20 |
Parts have arrived for the new build! Time to finally replace my N40L. Just need some drives now.
|
|
# ? May 4, 2020 21:15 |
|
My server is a huge full tower case and it's got a lot of drives in it. It lives on a shelf that's at a perfect working height with the side panel facing out so I haven't moved the thing since it had just one drive in it many years ago. Anyway, I needed to move it today and didn't think about the fact that it now has 40 pounds (!!!) of drives in it. I've never really thought about that aspect of hard drives.
|
# ? May 4, 2020 21:40 |
|
Oh, yeah, do not gently caress around with that. A coworker of mine injured his back trying to solo-lift a loaded disk shelf into a cabinet.
|
# ? May 4, 2020 21:58 |
|
I got hold of a cheap batch of CMR WD Blue 2TB drives to replace a couple of failing drives I had been exposing to the OS bare. I ended up with 4 drives in "RAID 5" through Storage Spaces and I'm honestly really liking how stupidly simple it all ended up being. I was fully prepared to research what NAS software to run and what I needed to buy or do to make it work how I wanted, but once the PC is up, it's done, it's there, Plex is working. It's nowhere near the level of you folks, but ~6TB of single-drive-redundancy storage is really, really easy and cheap nowadays. Everyone that runs Plex should be doing at least this bare minimum. I realize that all of you are doing way more. But to anyone reading this thread wondering if they should bother with a storage array of some kind: just do it. It'll work. It'll be fine. It'll be easy.
|
# ? May 4, 2020 22:44 |
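For anyone sanity-checking numbers like that ~6TB figure: single-parity layouts (Storage Spaces "parity", RAID5, RAIDZ1) give roughly (drives − 1) × drive size of usable space, which is where four 2TB drives land at about 6TB. A quick sketch of the arithmetic — function name is just for illustration:

```shell
#!/bin/sh
# Rough usable capacity for a single-parity array: (drives - 1) * size.
# Real-world figures come out a bit lower (filesystem overhead, TB vs TiB).
parity_usable_tb() {
    drives=$1
    size_tb=$2
    if [ "$drives" -lt 3 ]; then
        echo "single parity needs at least 3 drives" >&2
        return 1
    fi
    echo $(( (drives - 1) * size_tb ))
}

parity_usable_tb 4 2    # four 2TB drives -> prints 6
```

The same formula explains the 4x8TB = 24TB figure discussed a few posts up.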
|
Monitor Burn posted:I picked up a couple 10TB WD My Book drives a few months back, and just got around to installing them. Tested/formatted both before taking one out of the enclosure. I followed a guide to remove the 3.3v pins on the power connector because apparently they prevent the drives from being recognized on boot and can even catch on fire if you leave them in? I then plugged it into an external usb sata adapter/power cable to test it.

You should have tossed the bare drive into a spare PC to see what would happen. My ~2012/Ivy Bridge era power supply worked fine without having to break out the kapton tape.
|
# ? May 5, 2020 02:00 |
|
That thing that blew up is just a diode; someone could maybe fix it if you can find out what a non-burnt one says on it. It might even work if you take it out, but you'd lose the protection that you toasted last time, so... uh... find someone who knows what they are doing.
|
# ? May 5, 2020 02:14 |
|
I was zeroing out a drive to mess around with zfs this evening. Yes, I did in fact somehow gently caress that up and had it pointed at my OS drive. Whoopsiedoodle. On the plus side, zfs is p cool.
|
# ? May 5, 2020 04:35 |
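A cheap guard against exactly this mistake: resolve which device backs / before running the destructive command, and refuse if the wipe target matches it. This is a sketch under assumptions — device-naming patterns vary by platform, and the function name is invented for illustration:

```shell
#!/bin/sh
# Return 0 if wiping $1 would clobber the device in $2 (e.g. the root partition).
same_disk() {
    target=$1      # whole disk you intend to wipe, e.g. /dev/sdb
    rootdev=$2     # partition backing /, e.g. /dev/sda2
    case "$rootdev" in
        # match the bare disk, sdXN-style partitions, and nvme pN partitions
        "$target"|"$target"[0-9]*|"$target"p[0-9]*) return 0 ;;
    esac
    return 1
}

# Usage sketch before the destructive command:
#   rootdev=$(findmnt -n -o SOURCE /)
#   same_disk /dev/sdX "$rootdev" && { echo "refusing: that's the OS disk" >&2; exit 1; }
#   dd if=/dev/zero of=/dev/sdX bs=1M status=progress
```

It is not bulletproof (LVM and dm devices need their own resolution step), but it catches the classic "sda vs sdb" slip.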
|
Warbird posted:I was zeroing out a drive to mess around with zfs this evening. Yes, I did in fact somehow gently caress that up and had it pointed at my OS drive. Whoopsiedoodle. On the plus side, zfs is p cool.

LOL. Not the first time that's been done. Nice thing about FreeNAS: if you backed up your config, it's less than 20 minutes to be back up and running. I have mine backed up to the array, and it emails me a copy once a month if it's changed.
|
# ? May 5, 2020 16:52 |
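On the monthly config-backup point: one way to do it is a cron job that copies the FreeNAS config database onto the pool itself. The db path and destination below are assumptions from memory — verify where your FreeNAS version keeps its config before relying on this:

```shell
# crontab fragment (hypothetical paths): copy the FreeNAS config database
# to the pool on the 1st of each month at 03:00. Note the escaped % signs,
# which crontab would otherwise treat as newlines.
0 3 1 * * cp /data/freenas-v1.db "/mnt/tank/backups/freenas-config-$(date +\%Y\%m).db"
```

Restoring is then just uploading the saved .db through the web UI after a fresh install.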
|
VostokProgram posted:Please do not toss electronics into lakes, they have toxic metals in them

Are toxic metals bad for electronics?
|
# ? May 6, 2020 20:02 |
|
Lowen SoDium posted:Are toxic metals bad for electronics?

Tin whiskers: https://en.wikipedia.org/wiki/Whisker_(metallurgy)
|
# ? May 6, 2020 20:19 |
|
Whiskering is a pretty weird (and cool) phenomenon, but I don't think it's actually exacerbated by exposure to something like mercury.
|
# ? May 6, 2020 20:54 |
|
Lowen SoDium posted:Are toxic metals bad for electronics?
|
# ? May 6, 2020 20:56 |
|
I found a server leaning on a junk appliance in the alley behind my building recently and, since I don't have any kettle bells of my own and gyms in my state are closed for COVID19, have been using it for suitcase carries, but I'm wondering if it's even worth plugging in and using as a computer. I tried looking up the service tag but Dell's website says it doesn't exist, which I take to mean that the machine is old as hell. Looks like it has an expansion card for fiber networking and maybe the power supplies are still good, so it could be worth something in parts, but IDK if it's so old that it'd be outperformed by an RPi or what. Anyone know what model/line the front bezel is from?
|
# ? May 8, 2020 16:36 |
|
The service tag lookup sort of worked for me but didn't return a whole lot of detail - PowerEdge 2950 that was manufactured in 2006. So unless someone has done some crazy rear end parts swapping in it, it's more valuable for the exercise you're getting.
|
# ? May 8, 2020 16:49 |
|
I was gonna say my service tag from 1999 still works, but yeah I recognize those from the mid to late 2000s.
|
# ? May 8, 2020 16:54 |
|
Munkeymon posted:I found a server leaning on a junk appliance in the alley behind my building recently and, since I don't have any kettle bells of my own and gyms in my state are closed for COVID19, have been using it for suitcase carries, but I'm wondering if it's even worth plugging in and using as a computer.

If it is indeed a PowerEdge 2950, it's basically worthless--hence being outside like that. Dual quad-core Xeons at 2.x GHz would be trivial to beat these days: if it's an E5410, it's got a per-CPU benchmark score of ~1800, while a $20 i3-4130 benches at ~3300. If it still has drives in it you might be able to sell those for like $20/ea, maybe. But the rest of it would probably cost more to ship than it would be worth.
|
# ? May 8, 2020 17:30 |
|
IOwnCalculus posted:The service tag lookup sort of worked for me but didn't return a whole lot of detail - PowerEdge 2950 that was manufactured in 2006.

Huh, yeah, it does come up now that I'm at my laptop. Weird. It looks like a factory build inside, so I'd guess it's all original.

DrDork posted:If it is indeed a PowerEdge 2950, it's basically worthless--hence being outside like that. Dual quad-core Xeons at 2.x Ghz would be trivial to beat these days--if it's a E5410, it's got a per-CPU benchmark score of ~1800, while a $20 i3-4130 benches at ~3300. If it still has drives in it you might be able to sell those for like $20/ea, maybe. But the rest of it would probably cost more to ship than it would be worth.

Nope, no drives. Guess I'll eventually take it apart and see if the motherboard mounting is standard? (lol) At least I'll be able to dispose of it more responsibly than the guys who just come by and take anything metal that's not anchored to the ground.
|
# ? May 8, 2020 17:58 |
|
2950 first gen is probably a Cedar Mill-era (I forget the exact codename) Pentium 4-derived Xeon. Literal garbage at this point. A Dell R610 looks like a paragon of performance and efficiency in comparison. It notionally should boot a modern 64-bit OS, but that chassis looks like it was gutted and may be missing parts.

Paul MaudDib fucked around with this message at 18:12 on May 8, 2020 |
# ? May 8, 2020 18:08 |
|
|
|
Munkeymon posted:Nope, no drives. Guess I'll eventually take it apart and see if the motherboard mounting is standard? (lol)

You already know it's not; nothing is standard in those Dells.
|
# ? May 8, 2020 20:36 |