|
Asked this in the hardware questions megathread and was pointed in the direction of here and the home networking megathread.

Ziggy Smalls posted: I work for a small metal fabrication shop and my boss uses MyCloud for most of his CAD file data storage so he can do the modelling work for our contracts at the shop and at home. However he has had serious issues with MyCloud being down repeatedly, preventing him from actually working.
|
# ? Apr 6, 2024 06:34 |
|
|
I mean if a single user remains your use case I'd just stick to cloud storage/sync. There's a ton of alternatives to whatever mycloud is
|
# ? Apr 6, 2024 07:55 |
|
I turned the tolerances to low in PowerChute and it seems to have fixed my constant switching to battery issue on my Synology. Seems super strange to just have started, but it seems to be happy for the time being.
|
# ? Apr 6, 2024 09:50 |
|
Aware posted: I mean if a single user remains your use case I'd just stick to cloud storage/sync. There's a ton of alternatives to whatever mycloud is

Seconded, plus the fact that there's a business involved - that really makes the argument much easier for a paid solution where there are also paid support options available in case things go wrong.
|
# ? Apr 6, 2024 20:45 |
|
Ziggy Smalls posted: Asked this in the hardware questions megathread and was pointed in the direction of here and the home networking megathread

Is this MyCloud the Western Digital product that just makes a NAS in one location available over the internet in other locations? If so, definitely agree with other posters that using some cloud storage product that syncs to the drives on your machines would be better. How large are the individual files and how much storage do you need altogether? I don't know your CAD workflow, but as an alternative I might also look at something like Autodesk Vault.
|
# ? Apr 6, 2024 21:05 |
|
I see Ugreen is trying to break into the NAS market with a Kickstarter: https://www.kickstarter.com/projects/urgreen/ugreen-nasync-next-level-storage-limitless-possibilities I've a few of their peripherals (USB cables, monitor adapters) and they've all been well built, so I can imagine the hardware will be value for money. But security-wise, I'm not sure I want to trust an early generation product from an upstart peripherals maker (plus one subject to the whims of the Chinese government).
|
# ? Apr 6, 2024 21:40 |
|
If these places chuck x86 hardware together and then it runs TrueNAS or whatever then I'd be interested, I'm not trusting data to someone who's never released storage software before.
|
# ? Apr 6, 2024 21:43 |
|
Henrik Zetterberg posted: I turned the tolerances to low in PowerChute and it seems to have fixed my constant switching to battery issue on my Synology. Seems super strange to just have started, but it seems to be happy for the time being.

Almost certainly the line voltage going out of spec is causing the UPS to go to battery (aka a brownout). Two major reasons: a change in power routing in the neighborhood (longer distances = more voltage drop), and/or a new user in the neighborhood (think large commercial or some kind of industrial type) with a piece of gear/motor starting up and drawing power. Could be damaged lines, but I'd guess by now you'd have a full outage with that many drops. AVR in the UPS covers most of these issues (hence why having it is good). Except APC is so utterly cheap nowadays that they'll only boost the voltage and took out the reduction side of the circuit to save a couple of pennies. Keep an eye on your voltage levels - the UPS usually will report it to your computer, and many also just display it. Should be between 115-120V.
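For anyone watching this from Linux instead of PowerChute, a rough sketch using apcupsd's `apcaccess` status tool (this assumes apcupsd is already installed and talking to the UPS; it won't do anything useful otherwise):

```shell
# Log line voltage, time on battery, and transfer count once a minute (Ctrl-C to stop)
while true; do
    date
    apcaccess status | grep -E '^(LINEV|TONBATT|NUMXFERS)'
    sleep 60
done
```

A LINEV that keeps dipping below the transfer threshold right before each switch to battery is the brownout signature described above.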
|
# ? Apr 6, 2024 21:45 |
|
Sounds pretty rough around the edges, and other OSes don't work yet (they seem undecided as a company about supporting them), so it would be a definite no from me for now. https://www.youtube.com/watch?v=lXhrmtNzZAI
|
# ? Apr 6, 2024 21:51 |
|
That’s pretty outdated. Users have gotten p much every interesting option in terms of 3rd party software to run on the ugreen hardware. And yeah there’s no way I’d run their software stack (debian and proprietary glue/UX) but the hardware seems nice for the price.
|
# ? Apr 8, 2024 02:06 |
|
Thanks Ants posted: If these places chuck x86 hardware together and then it runs TrueNAS or whatever then I'd be interested, I'm not trusting data to someone who's never released storage software before.

Bingo. There are a few companies that are making interesting boxes using various flavors of standard commodity compute - toss whatever BS frontend they wrote and install good software instead.
|
# ? Apr 11, 2024 18:58 |
|
Does anyone have a recommended fan to zip-tie to storage controllers? I don't want fancy or PWM or style, I want these kept cool. These are LSI 9361-8is in a 4U Supermicro chassis that I thought had sufficient front-to-back airflow in a cool room, but clearly does not:

quote: 04/11/24 18:36:43: C0:Max Temp is 110 Deg C on Channel 4

Maybe something like https://www.arctic.de/en/S8038-10K/ACFAN00291A and use some splitters to get power to them? Twerk from Home fucked around with this message at 21:15 on Apr 11, 2024 |
# ? Apr 11, 2024 20:18 |
|
My NAS just died. It was a prebuilt thing by a company called Asustor. When I turn it on, the lights on the front come on at first, but then they just turn off. Yet the disks still spin (I can hear them) but the web interface never comes up.

The old crappy NAS had two 16TB drives in RAID mirror mode. So I took out the two drives from that crappy NAS and added them to an old gaming PC that I don't use anymore. I then wiped Windows and installed TrueNAS on the old gaming PC. My intent was to import the data on those drives into TrueNAS and then just use the old gaming PC as a NAS. The problem is that when I got to the final step of creating a new pool, it told me I had to wipe the data on my drives to continue. I then did some research and apparently it's not possible to import data into TrueNAS unless the data is already on a drive that is formatted to ZFS. I'm pretty sure my drives are not ZFS.

I used a USB stick to boot into a live installation of Ubuntu, and was able to see the drives. Under "partition type" it just says "Linux RAID", and it is not able to be mounted and browsed. How can I mount this disk to my system so I can see its contents? I assume I have to convert it from "Linux RAID" to something else? Then the plan is to wipe one of the disks and add it to TrueNAS, manually copy over the contents from the other disk, then wipe the other drive and add it to TrueNAS too.
|
# ? Apr 12, 2024 19:47 |
|
I assume "Linux RAID" is mdadm. I haven't had much experience doing recovery on that, but if it's a mirror and the data is uncorrupted, then it might not have any problem just picking everything off one drive. Do you get anything from "cat /proc/mdstat"?
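For reference, a healthy two-disk mdadm mirror in /proc/mdstat looks something like this (illustrative device names and block count, not your actual array):

```
Personalities : [raid1]
md0 : active raid1 sdb4[1] sda4[0]
      15623876608 blocks super 1.2 [2/2] [UU]
```

A `[U_]` instead of `[UU]` means one half of the mirror is missing or failed, but the array is usually still readable.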
|
# ? Apr 12, 2024 19:57 |
|
Yea. You just need to mess around in the terminal to do everything, tho. TrueNAS ships all relevant modules for md, ext4, xfs and so on. Perhaps just not btrfs. --edit: Apparently even btrfs.
|
# ? Apr 12, 2024 20:07 |
|
School of How posted: My NAS just died. It was a prebuilt thing by a company called Asustor. When I turn it on, the lights on the front come on at first, but then they just turn off. Yet the disks still spin (I can hear them) but the web interface never comes up.

Some quick research leads me to believe that the Asustor uses 'mdadm', which is a Linux software-defined RAID. Luckily it looks like someone else has gone before you recovering data from a failed Asustor appliance: https://forum.asustor.com/viewtopic.php?t=12860

Your best bet is going to be:
1. Set up your new TrueNAS server, storage pool(s) and network share(s)
2. Attach those Asustor drives to another PC
3. Boot from a Linux live CD, mount the volume(s) on the mdadm RAID volume(s) and copy the files into your TrueNAS share(s)

And per the above two posts you could skip the 'attach drives to another PC' step if your new server is running TrueNAS SCALE.
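Step 3 in shell form, as a sketch only - the device name (/dev/md0) and mount points are placeholders, so check /proc/mdstat for the real array name before running anything:

```shell
# Assemble the old Asustor mdadm array and copy its contents to the new share
mdadm --assemble --scan              # find and start any arrays on the attached disks
cat /proc/mdstat                     # note which md device came up (md0 assumed here)
mkdir -p /mnt/old-nas
mount -o ro /dev/md0 /mnt/old-nas    # mount read-only; we only want to copy off it
rsync -aHAX --progress /mnt/old-nas/ /mnt/truenas-share/
```

The read-only mount is deliberate: until the data is safely copied somewhere else, nothing should write to those disks.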
|
# ? Apr 12, 2024 20:07 |
|
Eletriarnation posted: I assume "Linux RAID" is mdadm. I haven't had much experience doing recovery on that, but if it's a mirror and the data is uncorrupted, then it might not have any problem just picking everything off one drive. Do you get anything from "cat /proc/mdstat"?

I figured out how to get my RAID to show up, by running this command: `mdadm --assemble --scan`

But I'm contemplating whether I should quit while I'm ahead, or risk messing things up further. I could just ditch TrueNAS and install Ubuntu while using the RAID I just imported. Is my understanding correct that TrueNAS cannot use this RAID at all in its current form? Do I absolutely have to convert these drives to ZFS in order for TrueNAS to work with it?
|
# ? Apr 12, 2024 23:20 |
|
I'm pretty sure TrueNAS (SCALE, at least) has mdadm and should be able to do the same thing, then just mount the md device to a temporary mount point and copy all its contents to the intended ZFS target. I don't think though that there is any way to convert the drives and leave the data in place - you're going to need to copy that data off, convert the disks to ZFS, and copy it back. I guess if you don't have another way to store that much data and they're currently mirrored then you could wipe one, turn it into a ZFS mirror (with only one drive), copy the data from the md drive to the ZFS drive, and then wipe the md drive and add it to the ZFS mirror. Seems kind of risky though. TrueNAS will complain if you try to create a mirror with only one drive, but you can tell it that you understand the risks and force it to proceed. Eletriarnation fucked around with this message at 00:37 on Apr 13, 2024 |
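The degraded-mirror shuffle described above, sketched in shell. Everything here is a placeholder (pool name, /dev/sdX devices, partition numbers) and step 1 destroys the ability to rebuild the md mirror, so triple-check device names against /proc/mdstat first:

```shell
# 1. Drop one disk out of the md mirror; md0 keeps running degraded on the other
mdadm /dev/md0 --fail /dev/sdb4 --remove /dev/sdb4
# 2. Build a single-disk ZFS pool on the freed disk (a mirror with only one side)
zpool create -f tank /dev/sdb
# 3. Copy everything from the surviving md half onto the new pool
mkdir -p /mnt/old && mount -o ro /dev/md0 /mnt/old
rsync -aHAX /mnt/old/ /tank/
# 4. Retire the md array and attach its remaining disk to complete the mirror
umount /mnt/old && mdadm --stop /dev/md0
zpool attach tank /dev/sdb /dev/sda    # resilvers tank into a two-way mirror
```

Note it's `zpool attach` (turns an existing vdev into a mirror), not `zpool add` (which would stripe a whole new vdev onto the pool). While the copy is in flight, both sides are single points of failure, which is why this is the risky option.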
# ? Apr 13, 2024 00:34 |
|
Eletriarnation posted: I'm pretty sure TrueNAS (SCALE, at least) has mdadm and should be able to do the same thing, then just mount the md device to a temporary mount point and copy all its contents to the intended ZFS target.

Yeah, I don't have another 16TB drive to copy to. It's either buy a new hard drive, or figure out how to make this work without buying any extra hardware. If there were an "import from mdadm RAID" option in TrueNAS, I'd be more comfortable doing it. But manually, I'm going to procrastinate.
|
# ? Apr 13, 2024 04:49 |
|
mdraid and ZFS are fundamentally different ways of writing data to disks, so unfortunately there's no way to "convert" from one to the other without a full wipe in between. If your current setup is indeed a mirror, the "set up a degraded ZFS mirror and copy your degraded mdraid data to it" approach is an option, but a high risk one. I'd do it with Linux ISOs but not family photos.

With that said: School of How posted: The next step I think is to run `mdadm --zero-superblock /dev/sda4 /dev/sdb4`

Why do you want to do this? That's going to literally trash the mdraid you have. If picking up another drive even temporarily isn't an option, I would just run an Ubuntu LTS server and manage it that way instead of trying to make TrueNAS play nice with it. If you decide you want to go to ZFS in the future, you can still do that just fine in Ubuntu.
|
# ? Apr 13, 2024 07:13 |
|
You could also give OpenMediaVault a try. It's a Linux (Debian) based distro with a web UI very much like FreeNAS (because it was made by a former FreeNAS dev), and it should be able to just mount the old RAID. https://docs.openmediavault.org/en/latest/administration/storage/raid.html https://docs.openmediavault.org/en/latest/administration/storage/filesystems.html (I've never had to do that, but at least the documentation makes it seem possible)

If you insist on using ZFS, it can do that too, but there's no way around copying the data off (having a backup is never a bad idea!) and wiping the original disks.

quote: mdadm --zero-superblock

Tamba fucked around with this message at 08:26 on Apr 13, 2024 |
# ? Apr 13, 2024 08:23 |
|
Hmm degraded mirrors? Last I remember, you can create a zpool with a single disk, and when you zpool attach (not add) another to it, it'll turn into a mirror.
|
# ? Apr 13, 2024 09:51 |
Just an observation, but if you aren't willing to switch to ZFS, it probably means you either don't have backups, or don't trust them - so you might also wanna address that. If you do trust your backups, you can remove one of the disks from the existing mirror, put a ZFS pool on it, move data over to it, and then wipe the second disk and use zpool-attach(8) to add it, turning the vdev into a mirror. If you're using something based on FreeBSD, you can also use gnop(8) and zpool-replace(8).
|
|
# ? Apr 13, 2024 09:53 |
|
Alright, I had an adventure with a FriendlyElec CM3588. Gather around for a tale!

I got the model with eMMC onboard. My goal was to put TrueNAS or OpenMediaVault on it. The docs lean toward OMV, so it'd probably be that. This thing ships with Ubuntu on it, but they immediately have docs that suggest you flash it to Debian 11. Weird that it doesn't just ship with that, but whatever - fine.

An important note here: they actually have a lot of docs for this device, but 'a lot of docs' does not mean 'good docs'. This is probably the worst technical writing I've come across in a decade. Some of the instructions are incomplete. Some are missing whole chunks. Some will leave you in a bad state permanently if you follow them verbatim. Some are just wrong.

I spent about 4 hours trying to get this to boot from a microSD card and failed. I tried to follow the instructions verbatim, and when I realized that they were just bad, I tried to fill in the blanks as best as I could, but this thing would just not find 3 different microSD cards that other computers in my house were happy to read from. I eventually somehow put this device in a state where it won't boot at all, and it's functionally bricked. There is some software they include that can address the storage and firmware from outside the computer over USB, but none of it's in English, and their English documentation may as well be a markov chain of instructions - it's unusable.

This has been a disaster. I believe if the stars align, this can be a wonderful device, because the idea of a small, compact, 4-NVMe-drive, low-wattage NAS solution with features is really compelling, but less than zero effort has gone into making it usable. I do not recommend this device unless you are fluent in Mandarin and have infinite patience for jank hardware/software. I've returned this device and have a new appreciation for software/hardware solutions that are a little better supported.
With all that said: anyone have a recommendation for a storage solution that can consume 4 NVMe drives and supports RAID 5 (or some kind of reasonable fault tolerance)? Canine Blues Arooo fucked around with this message at 01:28 on Apr 14, 2024 |
# ? Apr 14, 2024 01:25 |
Well that’s disappointing… Thanks for the info though.
|
|
# ? Apr 14, 2024 01:28 |
|
Wow, good to know. I had been looking at some FriendlyElec SBCs, though not that NVME one. But these problems sound like whole-company problems, not just that one product.
|
# ? Apr 14, 2024 01:43 |
|
BlankSystemDaemon posted: Just an observation, but if you aren't willing to switch to ZFS, it probably means you either don't have backups, or don't trust them - so you might also wanna address that.

Or they are using a different mature solution. Not everyone is in the same ZFS cult as you. The backhand poo poo that comes out of you is amazing.
|
# ? Apr 14, 2024 05:39 |
|
Moey posted: Or they are using a different mature solution. Not everyone is in the same ZFS cult as you.

To be fair...
1) They were replying to How
2) How was explicitly asking how to import it into TrueNAS SCALE, which is ZFS-based.
|
# ? Apr 14, 2024 08:33 |
|
Hughlander posted: To be fair...

Yeah, I didn't even look at the context before the reply, that's on me. But this isn't the first time I've seen similar responses/comments.
|
# ? Apr 14, 2024 09:32 |
|
Canine Blues Arooo posted: Alright, I had an adventure with a FriendlyElec CM3588. Gather around for a tale!

Sorry about that debacle! But if you or anyone else finds something like this that actually works then please let us know.
|
# ? Apr 14, 2024 10:18 |
Moey posted: Or they are using a different mature solution. Not everyone is in the same ZFS cult as you.

What I should've written is "if you aren't able to switch to ZFS by degrading your existing mirror" - because that's a fairly trivial operation if you have backups.
|
|
# ? Apr 14, 2024 11:57 |
|
BlankSystemDaemon posted: I'm sorry I phrased myself so poorly, it wasn't my intention to come off backhanded, but I can see how I did.

I also apologize, didn't mean to come off so pissy, it just seemed very blunt when I read it without scrolling back. So I'm the dumb looking one here. I do appreciate your ZFS knowledge bombs in here. Now let's carry on with nerd storage chat.
|
# ? Apr 14, 2024 12:22 |
|
For the NVMe side, do any ITX motherboards support PCIe bifurcation? If so, you can get PCIe cards that split an x16 slot into four proper NVMe slots. I have an Asus one, and apart from being the size of an old GPU, it works fine in the ASRock motherboard I'm using. I had to fiddle with the BIOS to get bifurcation working; it would only show me the first drive until that was sorted. Otherwise it Just Works: the drives show up as normal and I have a pretty fast zpool on them. IIRC from last time they came up, there are cheap AliExpress cards that work about the same.

e: This is a completely different scale from the "four NVMe drives on an SBC" systems, of course; you would probably need one of the gaming ITX cases to make this fit. Cute compared to a tower or rackmount, but a big hunk of steel compared to an RPi-style board. Computer viking fucked around with this message at 14:06 on Apr 14, 2024 |
# ? Apr 14, 2024 14:03 |
|
Several years ago I needed to build a mini-server with bifurcation. I'm pretty sure I couldn't find a mini-itx board that did it, and had to move up to microATX (which is bigger than mini-itx). It was for a 10-core Xeon server on a backpack frame, powered by an e-bike battery. We used it to record raw video from an 8-camera 3D imaging system. e: As I recall, besides bifurcation, most of the mini-itx boards had major architectural issues with keeping 4 NVMEs fed. Things like number of memory channels, slots for incoming i/o, etc. If your multiple SSDs are mostly for capacity, not pure speed, maybe it doesn't matter. ryanrs fucked around with this message at 16:50 on Apr 14, 2024 |
# ? Apr 14, 2024 16:32 |
|
ryanrs posted: Several years ago I needed to build a mini-server with bifurcation. I'm pretty sure I couldn't find a mini-itx board that did it, and had to move up to microATX (which is bigger than mini-itx).

Oh neat, that's a much more reasonable use than mine, which boils down to "this SATA controller seems to be failing and there's a sale on NVMe drives".
|
# ? Apr 14, 2024 23:07 |
|
I am putting together a TrueNAS build that will replace my two existing, aging TrueNAS servers, using eBay as the source, and I have a question about disk performance for a larger set of SSDs.

Right now I have:
SAN1 with 16 Evo 860 500GB drives in RAID-Z2 and four 1TB 970 Pro NVMe disks in RAID-10
SAN2 with 8 Hitachi 4TB spinners in RAID-Z

These are both on Supermicro X9SRL-F motherboards / 128GB registered ECC RAM / E5-2667 8-core 3.3GHz CPUs / IBM M1015 LSI SAS9220-8i HBAs. I'd repurpose one of the motherboards/CPU/RAM, ditch the rest of the hardware, and go for a simpler setup: 48 1TB 870 Evo SSDs in RAID-Z2 using my venerable 4U server case with 2x ICY DOCK 24-in 3 drive bays and three LSI 9300-16i HBAs.

I've run an enterprise SAN with hundreds of disks before, but never something based on TrueNAS. My question is: how do I set up the disks to minimize the impact of drive failures while still preserving a modicum of performance? This device will be running a combination of AD-based SMB, NFS, and iSCSI shares. Do I just create a monster RAID-Z3 pool and create a bunch of shares for various tasks? Do I create multiple pools based on share type? What are the best practices here? Do I ditch all of this as well as my 3-node VMware cluster and create a 4-node Proxmox cluster with Ceph?
|
# ? Apr 15, 2024 01:48 |
|
Agrikk posted: Do I ditch all of this as well as my 3-node vmware cluster and create a 4-node ProxMox cluster with Ceph?

I'd do this. But you need SSDs with power loss protection.
|
# ? Apr 15, 2024 06:56 |
|
Wibla posted: I'd do this. But you need SSDs with power loss protection

I'm less worried about power loss protection as I've a full grip of UPSes with network shutdown configured. (Does Proxmox have a network shutdown agent for APC UPSes?) But what is the compelling reason for moving away from ESX to Proxmox? I get free demo licenses from a contact at VMware, so licensing isn't an issue; besides price, why switch?

E: this is probably a discussion for a virtualization thread.
|
# ? Apr 15, 2024 18:24 |
|
There's no reason to switch unless you want to learn something else or lose access to free licenses. For most of us home gamers both are strong reasons right now.
|
# ? Apr 15, 2024 22:26 |
|
With ZFS, do mirrored vdevs need to be the same size? Let's say I have three mirrored vdevs set up like so:

vdev1: 2x 8 TB drives
vdev2: 2x 10 TB drives
vdev3: 2x 2 TB drives

I would end up with a single 20 TB pool. Right?

Also, it's not a problem to add a new pair of drives as a mirrored vdev after the pool has been created and is in use, correct? I understand that you aren't really meant to add drives to expand the size of a pool in a RAIDZ setup, but if I'm just using mirrored pairs of drives, adding a new vdev is a simple and expected use case, right? Sorry for the newbie questions. I'm slowly going through ZFS documentation, but there's a lot of it and I'm dumb. mekyabetsu fucked around with this message at 16:18 on Apr 16, 2024 |
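For what it's worth: yes on both counts. Each mirror vdev contributes the capacity of its smallest member, the pool's usable space is the sum across vdevs, and `zpool add tank mirror <disk1> <disk2>` (hypothetical pool/disk names) grows a live pool with a new mirror at any time. A quick sketch of the arithmetic with the sizes above:

```shell
# Usable space of a mirror vdev = size of its smallest member disk;
# pool capacity = sum over all vdevs. Sizes in TB, per the post above.
min() { [ "$1" -le "$2" ] && echo "$1" || echo "$2"; }
v1=$(min 8 8)     # 2x 8 TB mirror
v2=$(min 10 10)   # 2x 10 TB mirror
v3=$(min 2 2)     # 2x 2 TB mirror
total=$((v1 + v2 + v3))
echo "${total} TB usable"
```

The min() only matters if the two disks in a mirror differ in size; with matched pairs, as here, it's just the sum of one disk per pair.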
# ? Apr 16, 2024 16:05 |