|
Took 3 days to clear and build parity, but got a few more 12TB drives added to Unraid now. One is being used as a second parity drive, and the other was added to the array for 72TB of total storage. I've got the physical space for 4-6 more drives, but I think I'm at the point now where future upgrades will be swapping out 8TB drives for 12TB drives.
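Since Unraid excludes parity drives from usable space, the capacity math works out like this. A quick sketch — the exact drive mix is my guess (the post only gives the 72TB total and two 12TB parity drives):

```python
# Unraid usable capacity = sum of data drives; parity drives add no space.
# Hypothetical mix consistent with the post: two 12TB parity drives,
# data drives totaling 72TB (e.g. four 12TB + three 8TB).
def usable_tb(data_drives, parity_drives):
    # In Unraid, each parity drive must be at least as large as the
    # biggest data drive, but contributes nothing to usable capacity.
    assert all(p >= max(data_drives) for p in parity_drives)
    return sum(data_drives)

data = [12, 12, 12, 12, 8, 8, 8]   # hypothetical layout
parity = [12, 12]
print(usable_tb(data, parity))      # 72
```

Swapping an 8TB data drive for a 12TB one then adds 4TB each time, which is why drive swaps are the natural next upgrade once the bays are full.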
|
# ? Oct 3, 2021 15:27 |
|
|
|
Embarrassing confession time. So after messing about with drive health checking etc, my system rebooted itself overnight and the problem with my un-interactable mystery files revealed itself. I think because the drive had come from another PC, the files required administrator privileges to move, and prior to the reboot explorer decided not to show me the popups, making everything hang. Now the popups appear, I click authorise, and everything moves as it should. There isn't a word big enough to describe my utter failure as a goon.
|
# ? Oct 3, 2021 23:21 |
|
Z the IVth posted:Embarrassing confession time. its ok, windows just be like that sometimes
|
# ? Oct 4, 2021 04:10 |
|
Z the IVth posted:Embarrassing confession time. It could be a lot worse man. And like the other dude said, windows be like that sometimes.
|
# ? Oct 4, 2021 04:28 |
|
I am coming here for a confidence check. I need a critique. I want to build a rackmount PC and place it inside something like a NavePoint 15U server cabinet (https://www.newegg.com/p/2BA-001H-000Z4) and run Unraid (or a goon-suggested option). I want to put a few ideas/limitations upfront and provide my rationale. I thought about posting this in the HomeLab thread, but it didn't seem appropriate. So here's what I'm thinking:
So the major parts would be:
politicorific fucked around with this message at 05:10 on Oct 4, 2021 |
# ? Oct 4, 2021 05:07 |
|
politicorific posted:big post
Various thoughts in no particular order:
- im pretty sure you cant install normal 120mm fans in a 3U case, so youre going to be stuck with LOUD fans. a 4U is probably better from that perspective
- water cooling unnecessary assuming you have enough airflow going through this case
- UPS is a must have
- transfers between VMs over the hypervisor's network bridge are pretty much at RAM speed
- if your gaming machine is a VM on this server you'll have to look at GPU passthrough. doable but takes effort
- you'll definitely want all the VMs to be installed on SSD-based storage (especially the VM you use as a PC), but hard drives will be fine for bulk data
- RAID is not a backup, it exists purely to improve availability (uptime), so plan to have a proper backup for all this stuff
- consider having your main PC as a dedicated desktop anyway so you can still check your email and pay your bills online whenever this homelab inevitably explodes, figuratively or literally
- consider going AMD and getting a 5950X
- personally I would rather do all this on TrueNAS than unraid but that will take away your GPU passthrough option. there's also Proxmox as an option
- do NOT buy a single DIMM of RAM, buy a kit that populates your channels (which is 2 on most platforms)
|
# ? Oct 4, 2021 05:48 |
|
I'm not sure if this is out of scope for the thread, but I've just received an IBM FlashSystem 5035. After configuring the IP for initial setup, I changed the cable to Port 1 and changed the IP on my device to be in the same range as the IP of the SAN - however, the NIC ports don't show any link or activity lights. I can see the NIC adapters through the Technician UI, but no dice getting to the initial configuration.
|
# ? Oct 4, 2021 10:54 |
|
Happy Dolphin posted:I'm not sure if this is out of scope for the thread, but I've just received an IBM FlashSystem 5035. After configuring the IP for initial setup, I change the cable to Port 1 and change the IP on my device to be in the same range as the IP of the SAN - However, the NIC ports doesn't show any link or activity lights. I can see the NIC adapters through the Technician UI but no dice getting to the initial configuration. Enterprise SAN thread: https://forums.somethingawful.com/showthread.php?threadid=2943669
|
# ? Oct 4, 2021 11:28 |
|
Nulldevice posted:Enterprise SAN thread: https://forums.somethingawful.com/showthread.php?threadid=2943669 Much appreciated. Didn't show up in my search!
|
# ? Oct 4, 2021 13:09 |
|
politicorific posted:I am coming here for a confidence check. I need a critique.
I have a similar unbranded server cabinet to the one you linked. You'll need to keep in mind the depth of the cabinet, as that will limit what kind of chassis you can put in it unless you're ok with things sticking way out the back. Short-depth server cases often have trade-offs that need to be kept in mind, like not being able to use full-length graphics cards if other components are installed, weird/bad fan placements (don't want to cook your HDDs), and no externally-accessible HDD bays (hope you're ok with taking everything apart to get to a failed HDD). Going up to 4U can help with finding a case that fits your needs.

For how many drives you want to put in it, figure out what you want to store and how long you want to store it. Depending on how many HDDs you need, it can significantly limit your chassis choice. A few drives won't change much, but 6+ will. Cooling a power-hungry CPU 24/7 quietly in a 3U chassis is going to be a challenge.

Here's my current setup. The bottom server is my storage server, which runs FreeNAS and hosts a Nextcloud instance. It uses a low-powered CPU and runs 24/7, while the other two servers I turn on when needed to keep power usage and noise down. The middle and top servers are virtualization hosts, and the middle one has a GTX 1080 that I pass through for a gaming VM. Splitting up storage and virtualization allowed me to fit everything inside the cabinet without sacrificing noise, capacity, or performance.
|
# ? Oct 4, 2021 13:38 |
|
It's been mentioned, but most mini racks are for network equipment and it can be difficult to find one that'll fit even a shallow-depth server chassis. If it's something that will be seen and may need to move with you in the future, it's worth looking at audio equipment racks. They're still the standard 19" wide, usually deep enough to fit shallow chassis, can usually be found on castors so they're easy to move, and generally look pretty nice compared to server racks (and they're usually much cheaper). It just means that you likely won't have a square-hole system, you'll have a round-hole system. This is really only an issue if you're frequently putting stuff into and out of the rack (you can wear out fixed threaded holes with no way to replace them like you could with cage nuts, or you have to deal with unthreaded holes needing a nut on the back, which is annoying to install) or if you're looking to use surplus enterprise gear, since most of their rail systems are designed to work toollessly with square posts (but you mentioned that you're really not in the used enterprise gear market).

I can vouch for Unraid in terms of ease of use for someone not very familiar with Linux. The ability to add whatever size drives whenever you get them is great as long as you don't need high read/write speeds (an SSD cache drive is a must). Their interface for both VMs and Docker containers makes it very simple to get stuff up and running without needing to learn all the networking/cli commands for docker settings, but also lets you use those advanced features if needed. I haven't tried to do GPU passthrough yet, but as long as it's a 10-series or newer Nvidia card it seems much easier than it used to be. I run a whole bunch of containers, Plex, and HomeAssistant (VM) on a Ryzen 1600AF and it seems to handle the load just fine.

In general I'd say find a case for the server first (you don't have to buy it first, but find a case you know you'd be fine with) and then look for a rack. That way you can be sure you'll be able to at least fit that case instead of getting a rack and then hoping you'll be able to find a case that'll fit. And, as has been mentioned, if you're going to exist in the same room as the server then go 4U; anything less than that means you're going to need server-grade fans to push air through the box instead of a normal tower cooler, and that means a lot more noise.

Scruff McGruff fucked around with this message at 18:16 on Oct 4, 2021 |
# ? Oct 4, 2021 16:48 |
|
The rack I bought is this one: https://www.amazon.com/gp/product/B082XVLG91 It's suitable for my Supermicro 3U and the Ubiquiti Dream Machine Pro, and I can use a shelf for equipment that isn't rack-capable.
I would generally advise against this for most setups unless you have a constant stream of Plex users or have high power costs directly attributable to the transcoding. For personal use, I would advocate for a decently powerful CPU and undervolting it - this is what I'm going to do with my 3900x system I use as my desktop now, in fact. If power stability is a concern, I'd try to make sure that your Internet connection is also protected. I found out that even though my equipment is all on a UPS that somewhere else along the way to my ISP power goes out anyway, so I primarily use a UPS to condition my power and to keep things running smoothly for a power down.
|
# ? Oct 4, 2021 18:02 |
|
necrobobsledder posted:If power stability is a concern, I'd try to make sure that your Internet connection is also protected. I found out that even though my equipment is all on a UPS that somewhere else along the way to my ISP power goes out anyway, so I primarily use a UPS to condition my power and to keep things running smoothly for a power down.
Cable Internet generally requires power to be up on the street to function; all the cable equipment is driven from line power, as opposed to traditional telephone loops (and probably DSL) where it’s all driven from the telco office and they have giant banks of batteries and/or generators, and will operate during a power outage. This is one of the reasons that for some stuff you still can't replace hardline phones with SIP receivers or similar - the hardline phones will be running even during a power outage, while the SIP phones will go down when the internet does (or when the power does). Drive down a suburban street sometime and you'll see the big cable boxes on telephone poles; they have a big green or red light depending on their status. They're just fed off the transformers AFAIK.

Paul MaudDib fucked around with this message at 05:19 on Oct 5, 2021 |
# ? Oct 4, 2021 23:25 |
|
Thank you all for your replies! It's nice to have somethingawful as a resource. While digesting your responses, I came across two websites which I felt were very useful. I didn't see these in the first post, but they look reputable enough that including them might be helpful to newcomers (along with any relevant subreddits, servethehome, and levelonetechs): https://unraid-guides.com/ https://www.serverbuilds.net/ Serverbuilds in particular has builds calculated down to the dollar... This tells me that I have a lot more reading to do to see if it impacts my plans at all.

I'm going to reply/ask questions about a few points some of you made.

VostokProgram posted:- transfers between VMs over the hypervisor's network bridge are pretty much at RAM speed
VostokProgram posted:- consider having your main PC as a dedicated desktop anyway so you can still check your email and pay your bills online whenever this homelab inevitably explodes, figuratively or literally
VostokProgram posted:raidisnotbackup, ups, dimms

Next,

Actuarial Fables posted:4U case recommendations
Scruff McGruff posted:4U case recommendations

Please check my math on my cabinets and cases. The SilverStone CS350 https://www.silverstonetek.com/product.php?pid=760&area=en is 440 mm (W) x 161.2 mm (H) x 474 mm (D). Let's use this link as a representative example of the server cabinets I can purchase. I don't know if there is enough clearance for the server to fit inside without hitting the back. Do these cabinets typically have the ability to move the rack posts back and forward? https://www.alibaba.com/product-det....6bc243a2Ax3FYu (width x height x depth) 600 x (depends on number of rack units) x 600. There should be enough space to fit this case and future equipment. I guess I need to figure out how deep a server cabinet I want (600, 800, 1000 mm?). Ha, I just got the idea to replace my other furniture with rackmount cabinets. At least it'd all match.
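To sanity-check the depth question above, here's the arithmetic. The case and cabinet depths are from the post; the cable-clearance allowance is an assumption, and real clearance also depends on rail position and rear connectors:

```python
# Will a CS350-depth case fit a 600mm-deep cabinet with room for cables?
case_depth_mm = 474          # SilverStone CS350 depth (from the post)
cabinet_depth_mm = 600       # representative 600x?x600 cabinet (from the post)
cable_clearance_mm = 75      # assumed rear room for power/network cables

fits = case_depth_mm + cable_clearance_mm <= cabinet_depth_mm
print(fits)  # True: 474 + 75 = 549mm, leaving ~51mm spare
```

By the same math, a deeper 4U chassis in the 650-700mm range would need an 800mm cabinet, which is why the cabinet depth question matters before picking a case.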
Actuarial Fables posted:For how many drives you want to put in it, figure out what you want to store, and how long you want to store them. Depending on how many HDD drives you need, it can significantly limit your chassis choice. A few drives won't change much, but 6+ will.
Going with the SAS expander route means I won't fret so much about cramming all of my stuff into this first case. It also means I don't need to be disappointed about trading HDDs for GPUs. Just thinking now, it'd be nice if there were a 3.5/2.5 drive "vertical" cage meant for SAS expanders which could be mounted in ATX case motherboard mounts. This would be one way to cram additional hard drives into standard cases for DIYers. Maybe one day I'll do some industrial design and get the parts all laser cut. Yes, there is plenty of space for the future if I get a SAS card and expander. Yesterday I saw a 16 TB Toshiba drive go up on a local site for about 360 USD; $22.5/TB is a little more than I want to pay. So maybe for now I'll just set up something small to run the VMs and figure out the HDD long-term storage a different way.

Actuarial Fables posted:Cooling a power-hungry CPU 24/7 quietly in a 3u chassis is going to be a challenge.
Scruff McGruff posted:It just means that you likely won't have a square hole system, you'll have a round hole system. This is really only an issue if you're frequently putting stuff into and out of the rack (can wear out fixed threaded holes with no way to replace them like you could using cage nuts, or you have to deal with unthreaded holes needing a nut on the back which is annoying to install)
Scruff McGruff posted:I haven't tried to do GPU passthrough yet but as long as it's a 10 series or newer Nvidia card it seems much easier than it used to be.
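For the price math above, a trivial $/TB helper. The $360/16TB figure is from the post; any other prices you plug in for comparison are your own:

```python
# Dollars-per-terabyte, the usual way to compare bulk storage deals.
def price_per_tb(price_usd, capacity_tb):
    return price_usd / capacity_tb

print(price_per_tb(360, 16))   # 22.5 - the 16TB Toshiba mentioned above
```

Running the same function over a few local listings makes it easy to spot when a bigger drive actually carries a capacity premium rather than a discount.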
|
# ? Oct 5, 2021 05:07 |
|
Do note that most MMOs will ban you for running in a VM. Especially anything even slightly related to Asia, their anticheats are rabid against VMs.
|
# ? Oct 5, 2021 05:16 |
Paul MaudDib posted:Cable Internet generally requires power to be up on the street to function, all the cable equipment is driven from line power, as opposed to traditional telephone loops (and probably DSL) where it’s all driven from the telco office and they have giant banks of batteries and/or generators, and will operate during a power outage. This is one of the reasons that for some stuff you still can't replace hardline phones with SIP receivers or similar - the hardline phones will be running even during a power outage while the SIP phones will go down when the internet does (or when the power does).
This is, in theory, a real issue for countries like Denmark where DSL, FTTH, FTTC/N and other systems are so widely deployed that almost nobody has a regular telephone anymore - so, if cellphone services go down and all electricity is out, almost nobody will have any ability to phone.

Biowarfare posted:Do note that most MMOs will ban you for running in a VM. Especially anything even slightly related to Asia, their anticheats are rabid against VMs.
|
|
# ? Oct 5, 2021 09:54 |
|
politicorific posted:This is a nice nugget of experience.
I see that both the Silverstone CS350 and RM4000 have round holes, as do some of the no-name Chinese-made cases I can purchase. Looks like all the racks have square holes.

Another great Unraid resource is SpaceInvader One. He seems to have a video guide for almost everything you'd want to do in Unraid and does a great job explaining the process.
|
# ? Oct 5, 2021 14:26 |
|
politicorific posted:[*]This will be an Intel build (I've got nothing against AMD). The thing that really opened my eyes about virtualization is that extra cores don't seem to be impacting the performance of many applications. I can benefit by combining multiple machines into one box and running unRaid. For example, take this video comparing different gaming performance of extreme editions. The top-end, double-the-price 10980XE (18 core/36 thread) has very similar performance to the 'entry-level' 10900X (10 core/20 threads). https://www.youtube.com/watch?v=r3JRKhEu0SI
Others have answered a lot of your other questions, but I think this one got missed. You're right in the sense that many applications have limited scaling with cores--many older games, in particular, simply don't bother to use more than 2-4 cores, regardless of how many you have available. That said, if you're thinking about a single large box with a whole bunch of VMs, you will absolutely benefit from a high-core-count CPU, since the less over-provisioning of cores between the VMs, the better off you'll be. But it's very possible to push that too high--if you figure 6 for your Windows box, 1-2 for Plex, maybe 4 total for Ansible, PiHole, Joplin, Git, HomeAssistant, etc., you might not need more than 12c, and could likely get away with less given that things like Ansible, Joplin, Git, and HA aren't constantly churning through data.

An i9-10980XE will run you north of $1100 (possibly much more depending on your local markets). An AMD 5950X is faster in most regards and costs only $800, and you get better motherboard expansion capabilities as a bonus. Or, if after you've sketched out what all you plan to stick on the system you find that 30+ threads is unnecessary, you could consider dropping down to something like a 10850K (10c/20t) for $400 or a 5900X (12c/24t) for $550. If you have the money and don't care, then sure, go hog wild. But if you're price sensitive it's something to give some thought to.
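The core-budget estimate in that post can be sketched out directly. The per-VM numbers are the rough figures from the post, not measurements, and real workloads will idle most of the time:

```python
# Rough vCPU budget for the proposed all-in-one box (post's estimates).
allocations = {
    "windows_gaming_vm": 6,
    "plex": 2,
    "misc_services": 4,   # Ansible, PiHole, Joplin, Git, HomeAssistant, etc.
}
cores_needed = sum(allocations.values())
print(cores_needed)  # 12

# Over-provisioning ratio if you picked, say, a 12-core 5900X:
physical_cores = 12
print(cores_needed / physical_cores)  # 1.0 - no over-provisioning needed
```

Since the lightweight services rarely peak at the same time as the gaming VM, a ratio modestly above 1.0 is usually fine in practice, which is the argument for not overspending on an 18-core part.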
|
# ? Oct 8, 2021 17:26 |
|
Appears Synology devices will soon see improved performance when it comes to transcoding HDR content via Plex: https://forums.plex.tv/t/plex-media-server-1-24-5-5071-new-transcoder-preview/746527
|
# ? Oct 9, 2021 08:02 |
|
Internet Explorer posted:You should be able to migrate the disks over like you said, the OS is on the disks themselves, so that part should be very straightforward. Going from RAID-1 to RAID-5 is also easy. Shouldn't be any issues. Of course, you should already have a backup of this data, but yeah. Thanks, this all seems to have worked fine. Though I sort of wish I knew adding the new drives and shifting to RAID 5 would take 4 days (per the estimates a day in).
|
# ? Oct 10, 2021 15:05 |
|
Sir DonkeyPunch posted:Thanks, this all seems to have worked fine. Though I sort of wish I knew adding the new drives and shifting to RAID 5 would take 4 days (per the estimates a day in)
|
# ? Oct 10, 2021 16:27 |
|
So... I chickened out on becoming an hero and before doing so I deleted all my zvols and then destroyed the zpool on my freenas server. I didn't wipe the disks. Any way to restore that poo poo? edit: and by zvol I mean a dataset I think. kiwid fucked around with this message at 19:54 on Oct 12, 2021 |
# ? Oct 12, 2021 19:49 |
|
kiwid posted:So... I chickened out on becoming an hero and before doing so I deleted all my zvols and then destroyed the zpool on my freenas server. I didn't wipe the disks. Any way to restore that poo poo? Try to recover a zpool: https://www.unixarena.com/2012/07/how-to-recover-destroyed-zfs-storage.html/. Never actually tried it to find out if that'll pull back the zvol/dataset, though.
|
# ? Oct 12, 2021 20:15 |
So long as you don't attempt to do any write operations to the vdev members after you've destroyed a pool, it still exists and can be reimported - but it'll be in whatever state it was in last. When you delete any dataset, the associated records get marked to be freed by a transaction group operation, which in turn triggers the background freeing operation in ZFS that happens in between all the other operations. So, the only way I can think it could work is if you immediately forcefully exported the pool, then used zdb(8) to find the transaction group associated with the free operation, and imported the pool before that transaction group using a series of potentially risky flags. However, since I've never actually tried doing this, it's just a guess and I can't guarantee it'll work, nor even that it won't blow up in your face (although you might mitigate this by trying to import it read-only). ZFS checkpoints were invented to make these sorts of administrative commands possible to roll back, but the downside to checkpoints is that you can only have one, and everything is written in an append-only transaction log until you remove the checkpoint or rewind to it on a subsequent import.
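For reference, the non-destructive first steps look something like this on a ZFS host. This is a sketch only - `tank` is a placeholder pool name, it needs real disks attached, and a read-only import should always come before any rewind attempt:

```
# List destroyed-but-importable pools by reading labels off the disks
zpool import -D

# Import the destroyed pool read-only first, so nothing gets written
zpool import -D -o readonly=on tank

# Inspect pool history with zdb before attempting any risky rewind flags
zdb -h tank
```

If the read-only import succeeds, copy the data off before trying anything that writes to the pool.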
|
|
# ? Oct 13, 2021 08:44 |
|
The serverbuilds team are a bunch of pricks and it’s going to be heavily Western Hemisphere centric when it comes to parts and availability.
|
# ? Oct 14, 2021 03:49 |
|
Is there an easy way to look up a given hard disk model number and figure out if it's shingled or not? Are shingled drives still terrible for sustained write speed? I was curious what 2.5" disk availability was like nowadays, and it looks like the only 5TB 2.5" disks available are Seagate Barracuda ST5000LM000, which are actually pretty drat cheap with not much price premium vs a 3.5" disk at around $140, but are shingled.
|
# ? Oct 14, 2021 04:41 |
|
Twerk from Home posted:Is there an easy way to look up a given hard disk model number and figure out if it's shingled or not? Are shingled drives still terrible for sustained write speed?
The formatting is kinda a mess, but https://nascompares.com/2021/04/22/smr-cmr-and-pmr-nas-hard-drives-a-buyers-guide-2021/ has a pretty comprehensive list. SMR drives aren't terrible for sustained writes, they're terrible for overwriting. If you took a fresh disk, did a continuous data dump to it, and then basically used it as a WORM drive, it would perform quite well. But fill it up, start deleting individual files, and then try to write to it, and it's gonna thrash itself into the ground.

Buff Hardback posted:The serverbuilds team are a bunch of pricks and it’s going to be heavily Western Hemisphere centric when it comes to parts and availability.
This is true, but at the very least it's still a good resource because they identify the best bang:buck ratio parts, which often are still the most cost-efficient even in Asian markets (just at different price points). Also a lot of eBay sellers ship international these days, which can be useful for smaller/lighter components like NICs and such.

DrDork fucked around with this message at 05:38 on Oct 14, 2021 |
# ? Oct 14, 2021 05:33 |
|
DrDork posted:The formatting is kinda a mess, but https://nascompares.com/2021/04/22/smr-cmr-and-pmr-nas-hard-drives-a-buyers-guide-2021/ has a pretty comprehensive list. I’d be more hesitant about that now, they leaned hard into referral links for “supporting the server”, and referral links and best bang for buck are kinda opposing goals in my opinion.
|
# ? Oct 14, 2021 05:54 |
SMR drives are also terrible when doing ZFS resilvering, because they simply don't handle the random I/O read patterns, since they're built for sequential access. So a ZFS pool consisting of SMR drives may work passably for years if it's WORM storage, right up until one drive fails and you replace it with another. Then you're stuck waiting for a resilver that's gonna take weeks if not months.
|
|
# ? Oct 14, 2021 10:40 |
|
Hey folks, a couple of quick questions for the room. I've recently found myself with a couple of 10TB drives that I'm no longer using and figure I'll put them towards an offsite backup location. There are plenty of ways to skin that particular cat, but I'd be interested to hear if there are any solutions around managing an offsite Pi or whatever and keeping things sane there. I could just have it VPN into my network, and that may be what I do, but I figured I'd ask around. For context, this is coming off a Synology machine, so I wouldn't be opposed to picking up a very basic 2-bay model and having it just sit somewhere back at the homestead, and I could do my thing via the web portal. My folks are not tech inclined in the least, so the more I can do to troubleshoot and be fault tolerant while 8 hours away, the better.

Question 2: I've got a couple of drives in the process of being added to an existing Syn storage pool and it's taking, understandably, a long rear end time. That's all fine and good, but I did notice that the current process is listed as Step 1 of 2. What's the second step? Repeating the whole shebang with the second added drive?
|
# ? Oct 17, 2021 16:27 |
BlankSystemDaemon posted:SMR drives are also terrible when doing ZFS resilvering, because they simply don't handle the random I/O read patterns, since they're built for sequential access.
This is why I ended up going with a non-ZFS solution for my NAS. I had 6 8TB drives I could shuck, and that's hundreds of dollars of "free" storage I don't have to buy by going with a Synology. Maybe a few files will bitrot over the life of the NAS, but I'm willing to risk that to save 7 or 8 hundred dollars.
|
|
# ? Oct 17, 2021 16:43 |
Nitrousoxide posted:This is why I ended up going with a non-zfs solution for my NAS. I had 6 8Tb drives I could shuck, and that's hundreds of dollars of "free" storage I don't have to buy by going with a Synology.
All the SMR disks that got submarined into existing product lines by WDC were in the 6TB-or-less category, so if you bought 8TB external drives and shucked them you would either get a whitelabel Red or some other equally-good drive for NAS use. Better yet would be to not give money to companies that submarine inferior technologies into existing product lines to save money. SMR absolutely has a use - even with ZFS. As I think I've mentioned before, they could make a good ZFS snapshot destination (i.e. where you just send the raw snapshot directly to the character device, like you would a tape drive) - which means once corrective receive lands, you should be able to fix anything from single files lost to an URE during a rebuild, to entire arrays if you lose too many disks, with mirrored, ditto or distributed parity blocks. In this scenario, it also makes sense to use SMR (with its higher density) to make drives with as high capacity as possible, instead of how SMR is currently used - which is that they use the higher density to remove a platter, thus saving on manufacturing costs.
|
|
# ? Oct 17, 2021 17:33 |
Warbird posted:Hey folks, a couple of quick questions for the room.
I would really recommend the Synology then. The raspberry pi needs poo poo like a UPS, and even then it loves to eat SD cards, and the troubleshooting for that requires physical access. If you do want to go the pi route I’d recommend grabbing one of the UPS hats and a SATA hat as well, since even the pi 4's bus throughput is really low, like 35MB/s ime. You can make it more resilient by grabbing an extra SD card and imaging your working install to it; then if it breaks you can at least get family to swap out the broken one and cycle the power on the thing. In Canadian dollars a Synology DS220j breaks even with the pi + all the hardware to make it a passable NAS, but the Synology will be a tank and has an offsite backup solution built in.
|
|
# ? Oct 21, 2021 00:29 |
|
Yeah, I was eyeballing two bay models and I figure you’re right. May have to see if I can source a used one as I don’t need anything particularly fancy for these efforts.
|
# ? Oct 21, 2021 01:26 |
|
The problem with the SMR reds and ZFS wasn't the SMR part; it was the device-managed SMR part where it would write 200ish GB to a CMR area of the platter, then halt/throttle/otherwise go comatose while the drive firmware destaged it all to the SMR area. During this time ZFS is asking it about the status of random blocks all over the disk, and soon the onboard controller just loses its poo poo and ZFS declares the drive offline. A prior SMR drive would block on IO requiring rewriting of a shingled area, which would be slow but ZFS would wait for it. WD's DM-SMR commits the cardinal sin of lying to ZFS about what data is on what areas of the disk and, crucially, whether it's performing an operation or not.
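A toy model of that destage behavior, purely illustrative - all the numbers below are invented, and real DM-SMR firmware is far more complex than a single cache region:

```python
# Toy model of a device-managed SMR drive: writes land fast in a small
# CMR cache region; once it fills, effective throughput collapses while
# the firmware destages to shingled zones. All numbers are invented.
CMR_CACHE_GB = 200
FAST_WRITE_GBPS = 0.2      # ~200 MB/s while the CMR cache has room
DESTAGE_GBPS = 0.02        # ~20 MB/s effective once destaging dominates

def write_time_seconds(total_gb):
    fast_gb = min(total_gb, CMR_CACHE_GB)
    slow_gb = total_gb - fast_gb
    return fast_gb / FAST_WRITE_GBPS + slow_gb / DESTAGE_GBPS

print(write_time_seconds(100))   # ~500s  - all absorbed by the CMR cache
print(write_time_seconds(400))   # ~11000s - the extra 200GB crawls
```

The benchmark-friendly first case is why these drives looked fine in reviews; the second case is roughly what a resilver-style sustained write runs into, before even counting the lying-to-the-host problem described above.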
|
# ? Oct 21, 2021 01:29 |
|
Yeah, that's a great way to illustrate it; it's the same with controllers--you want a ZFS array to have full pass-thru because ZFS wants to talk to the drive directly. In this case, the drive thinks it knows better than the kernel. While for most layperson operations that's probably okay and will only cause a slowdown, it's nigh-fatal to many ZFS operations.
|
# ? Oct 21, 2021 04:22 |
xarph posted:The problem with the SMR reds and ZFS wasn't the SMR part; it was the device-managed SMR part where it would write 200ish GB to a CMR area of the platter, then halt/throttle/otherwise go comatose while the drive firmware destaged it all to the SMR area. During this time ZFS is asking it about the status of random blocks all over the disk, and soon the onboard controller just loses its poo poo and ZFS declares the drive offline. Crunchy Black posted:Yeah that's a great way to illustrate it; its the same with controllers--you want a ZFS array to have full pass-thru because it wants to talk to the drive directly. In this case, the drive thinks it knows better than the kernel. While for most layperson operations that's probably okay and will only cause a slowdown, its nigh-fatal to many ZFS operations.
|
|
# ? Oct 21, 2021 07:23 |
|
I got a Synology DS1513+ NAS in 2014 and it's got 5x WD Red 3TB drives in it. It's worked great all that time. Given that it's getting a bit old now, if the NAS enclosure itself dies, can I still take the disks and put them in a new Synology enclosure to recover data? The most important stuff is backed up externally anyway.
|
# ? Oct 24, 2021 13:48 |
|
As far as I am aware you can just drop them in another Syn enclosure with little to no issue.
|
# ? Oct 24, 2021 16:07 |
|
|
|
I'm not sure there's a better thread for this question, but: I haven't had to think about old SCSI cabling for forever. I have a computer with a DB25 SCSI port, and a SCSI device with a DB25 port. Do I need a special kind of cable, or is any DB25-DB25 straight through cable going to connect these two? A lot (most/all) of the DB25M/DB25M I see are "serial" so I think they swap RX/TX pins and that'll probably be a no go. I'm thinking I could just ebay an old iomega Zip drive cable maybe? Sucks to pay the ebay "vintage equipment" markup but whatever.
|
# ? Oct 26, 2021 14:22 |