|
Yeah probably! That's the normal corporate thing to do, at least.
|
# ? Dec 22, 2023 05:17 |
|
|
# ? May 18, 2024 07:40 |
|
Kung-Fu Jesus posted:Someone from iX in reddit comments saying "this is the end of CORE" as the only way this is being communicated seems pretty sketch to me. Kung-Fu Jesus posted:Beforehand, I tested SCALE, encountered the ARC memory size restriction/default configuration, read up on some of the reasons why it does that, and cheerfully hosed off back to CORE.
|
# ? Dec 22, 2023 20:02 |
|
Wild EEPROM posted:well here are a number of reasons why it sucks quote:- usual linux vs bsd hardware support
|
# ? Dec 22, 2023 20:18 |
|
Kung-Fu Jesus posted:Yeah probably! That's the normal corporate thing to do, at least. OK, I guess I'll agree to disagree. I would like to have more information sooner, all else being equal, and I think the "normal corporate thing" is generally at least as much about CYA and avoiding liability/bad PR as it is actually doing something helpful to the end user. Eletriarnation fucked around with this message at 20:45 on Dec 22, 2023 |
# ? Dec 22, 2023 20:42 |
|
Is this just a general purpose storage Q/A thread? Sorry in advance if this is the wrong location, but I'm very bad with computers. After my one and only hard drive failed in 2018, I bought two duplicate 2TB Seagate external hard drives. I run one of them 24/7 for storing/torrenting 1.5TB of movies and storing 250GB of all my other personal files. I meant for the second one to be a backup that I would only power on to sync every so often, but I never figured out what software I would need to make this easy. Is there some free program that could allow me to periodically back up any new or changed files since my last sync, without having to meticulously drag and drop every one? I guess my personal storage needs are pretty simple but you can also call me dumb if this is a dumb scheme. Thank you kindly.
|
# ? Dec 22, 2023 21:44 |
|
I think this is likely the most relevant thread for that question, especially if you're not a Linux user. If you're talking about two separate machines, rsync/syncthing/Resilio Sync is probably the way to go depending on how much you like the CLI and what OS you're using. If one machine, I would probably just use a smarter file copier like TeraCopy to drag over the whole directory tree and then tell it to ignore files which are the same size/timestamp as what's already present. There are more advanced utilities out there for Windows which will do differential and/or automatic backups to local storage, but I am not familiar with any of them except Macrium Reflect which is (1) not free and (2) more of a disk imaging tool.
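If you'd rather script the one-machine case than use a GUI copier, the "skip files with the same size/timestamp" logic described above is simple enough to sketch. A minimal one-way sync in Python (the function name and paths are made up for illustration; unlike a true mirror, it never deletes anything from the backup side):

```python
import os
import shutil

def mirror_sync(src, dst):
    """One-way sync: copy files from src to dst that are new or changed,
    judging 'changed' by size and modification time."""
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            s_stat = os.stat(s)
            # Skip files already present with matching size and mtime
            if os.path.exists(d):
                d_stat = os.stat(d)
                if (d_stat.st_size == s_stat.st_size
                        and int(d_stat.st_mtime) == int(s_stat.st_mtime)):
                    continue
            shutil.copy2(s, d)  # copy2 preserves mtime, so the next run skips it
            copied.append(os.path.join(rel, name).lstrip("./"))
    return copied
```

This is the same basic idea as `rsync -a` or TeraCopy's skip-identical mode, minus deletion of files removed from the source; the dedicated tools also handle edge cases (permissions, interrupted copies) that a sketch like this doesn't.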
|
# ? Dec 22, 2023 22:03 |
|
wolrah posted:With you up to here, but... Not necessarily better, but if you have hardware that you know works in FreeBSD and haven't tested in Linux yet, then that's an additional hurdle for switching over.
|
# ? Dec 22, 2023 22:12 |
|
I would expect Core to be supported long enough for any existing hardware it's running on to become life expired or uneconomical to keep running on power consumption vs. performance grounds.
|
# ? Dec 22, 2023 22:25 |
|
Luckily I installed SCALE when I moved from nothing to NAS. I run a heretic configuration where Proxmox hosts a few VMs, TrueNAS being one of them. I can just use TrueNAS for NAS stuff and run containers, services and other crap in other VMs. The most annoying part of this is keeping them all updated - so much more to update than running a single OS on a single server. Now I am waiting for zpool expansion. I started with 4x18TB z2 and would not mind a 5th disk in the future.
|
# ? Dec 22, 2023 22:44 |
|
Thanks Ants posted:I would expect Core to be supported long enough for any existing hardware it's running on to become life expired or uneconomical to keep running on power consumption vs. performance grounds. You say this, but my employer is running at least some Westmere and hundreds or possibly thousands of Sandy Bridge hosts. poo poo, aren't EC2 VMs of that era still available? Amazon must still be running that hardware too.
|
# ? Dec 22, 2023 23:00 |
|
Ihmemies posted:Luckily I installed scale when I moved in from nothing to NAS. I run a heretic configuration where Proxmox hosts a few vm’s, truenas being one of them. oh poo poo i just came here to ask about this kind of setup. I'm running scale on bare metal and tbh everything is running great, but the truecharts support avenue is super stressful to read b/c the maintainer is just angry all the time and i kinda wanna not deal with that no more.
|
# ? Dec 22, 2023 23:20 |
|
Twerk from Home posted:You say this, but my employer is running at least some Westmere and hundreds or possibly thousands of Sandy Bridge hosts. poo poo, aren't EC2 VMs of that era still available, so Amazon must be too? Well, your employer is running those machines presumably in spite of them being no longer supported, so why would they care if they have to run some unsupported CORE hosts as well?
|
# ? Dec 22, 2023 23:39 |
|
Korean Boomhauer posted:oh poo poo i just came here to ask about this kind of setup. I'm running scale on bare metal and tbh everything is running great, but the truecharts support avenue is super stressful to read b/c the maintainer is just angry all the time and i kinda wanna not deal with that no more. I guess theoretically it can be more trouble. I passthrough the whole sata controller from proxmox to truenas so it works directly with the discs. In case the setup some day breaks the discs can be moved to new hardware easily enough. Vm’s run my homelab/dev server/databases/portainer with docker containers etc. I block all incoming traffic from wan since I don’t need to really offer any services to outside world except through vpn’s. Ihmemies fucked around with this message at 23:49 on Dec 22, 2023 |
# ? Dec 22, 2023 23:46 |
|
Eletriarnation posted:Well, your employer is running those machines presumably in spite of them being no longer supported, so why would they care if they have to run some unsupported CORE hosts as well? Eh, modern OSes support them just fine, so it's a radically different thing than running an OS or any application with a big attack surface that isn't getting patches. This is the second thread where someone has suggested to me that running an old CPU opens you up to a bunch of security vulnerabilities, though, so maybe I'm being obtuse and out-of-support hardware is a bad idea too. Edit: the other CPU that someone called "dangerously old" is Haswell, lol. Twerk from Home fucked around with this message at 00:20 on Dec 23, 2023 |
# ? Dec 23, 2023 00:16 |
|
I would not worry about Linux having worse hardware support than FreeBSD.
|
# ? Dec 23, 2023 00:20 |
|
I built baby's first NAS. I got a TerraMaster F4-223 on sale at Newegg, added four 10TB WD Red drives, an extra 16GB of RAM, and an SSD for the OS drive. Immediately installed TrueNAS SCALE and have been learning my way around permissions and app containers and whatnot. So far I'm pretty happy with it but I feel like I'm barely scratching the surface. It's far more complicated than I expected - not that it's a problem, just interesting and a big learning curve. I haven't used a NAS since an old QNAP a decade ago for work.
|
# ? Dec 23, 2023 01:00 |
|
Combat Pretzel posted:Someone from iX is their head engineer, IIRC. Eletriarnation posted:OK, I guess I'll agree to disagree. I would like to have more information sooner, all else being equal, and I think the "normal corporate thing" is generally at least as much about CYA and avoiding liability/bad PR as it is actually doing something helpful to the end user. I come from a world where if some (head or not) engineer started posting on reddit about prospective roadmaps that hadn't been communicated officially, there'd be a whoopin. Not even publicly traded or anything. It's possible I have a poisoned brain and when I see something like this, alarm bells start going off up there for no good reason.
|
# ? Dec 23, 2023 01:34 |
|
Twerk from Home posted:Eh, modern OSes support them just fine so it's a radically different thing than running an OS or any application with a big attack surface that isn't getting patches. This is the second thread where someone suggested to me that running an old CPU is opening oneself up to a bunch of security vulnerabilities though, so maybe I'm being obtuse and out of support hardware is a bad idea too. The only thing dangerous about Haswell is the power consumption relative to anything newer. Completely agreed that as long as a modern OS runs reliably on it, send it.
|
# ? Dec 23, 2023 02:13 |
Combat Pretzel posted:Ostensibly iX is working towards making the ARC not be a red-headed stepchild under Linux memory management. I wonder how long that'll take. I haven't spotted anything obvious in the OpenZFS repo commits yet. There are two theoretical solutions: 1: The same kind of tight integration as is found in the FreeBSD implementation of the SPL - which uses uma(9) in ~300LoC - could be made. 2: The Linux VM subsystem could be improved to handle large allocations, including ones made by something other than itself, without becoming sad about it. In practice, the first won't happen because the GPL is incompatible with the CDDL (according to some lawyers), and the second won't happen because of Linux kernel maintainers' opinions on ZFS. EDIT: Neither of those is a technical issue, they're political - and in my experience, you don't solve the latter kind of issue with the former kind of solution. BlankSystemDaemon fucked around with this message at 03:05 on Dec 23, 2023 |
|
# ? Dec 23, 2023 02:47 |
|
kreeningsons posted:Is this just a general purpose storage Q/A thread? Sorry in advance if this is the wrong location, but I'm very bad with computers. On Windows, I just use FreeFileSync. Just set your main drive on the left and backup drive on the right, set it to Mirror, and hit the Synchronize button; it works for me.
|
# ? Dec 23, 2023 04:15 |
|
Twerk from Home posted:Eh, modern OSes support them just fine so it's a radically different thing than running an OS or any application with a big attack surface that isn't getting patches. This is the second thread where someone suggested to me that running an old CPU is opening oneself up to a bunch of security vulnerabilities though, so maybe I'm being obtuse and out of support hardware is a bad idea too. Well, I don't think running a Westmere or Sandy Bridge CPU is opening you up to a bunch of security vulnerabilities generally speaking. I still run a 2500K myself in a ripping/transcoding machine, and just a year ago gave my cousin a Westmere Xeon system for gaming. The comparison is because CORE's end of support is still well in the future and even once that comes it will still be an appliance, which presumably lives behind a firewall, and is based on a relatively recent version of FreeBSD - it doesn't sound like a particularly easy target, even if it is technically "end of life". I am sure that someday it will be considered "dangerously old", but my bet is that you will have plenty of time before then to migrate to a new deployment running SCALE or whatever else you prefer. Eletriarnation fucked around with this message at 08:43 on Dec 23, 2023 |
# ? Dec 23, 2023 08:38 |
|
BlankSystemDaemon posted:In practice, the first won't happen because the GPL license is incompatible with CDDL (according to some lawyers), and the second won't happen because Linux kernel maintainers' opinions on ZFS.
|
# ? Dec 23, 2023 13:54 |
|
BlankSystemDaemon posted: One of my favorite lines lately is that you can't solve people problems with technology. Glad to see a version of it floating around elsewhere.
|
# ? Dec 23, 2023 14:23 |
Combat Pretzel posted:I kind of expect a TrueNAS specific kernel patch. At least those allusions made over time, whenever someone bugged them about an update in regards to this, read like it. They intend to stick to LTS kernels, so any homebrew solution would last quite a while and gives them time to port to the next LTS. Can’t say I like the idea of making distribution-specific changes that aren’t upstreamed. Same reason I’m not super enthusiastic about Illumos taking in bhyve, as they’ve not been good about upstreaming. withoutclass posted:One of my favorite lines lately is that you can't solve people problems with technology. Glad to see a version of it floating around elsewhere. At least we can live and learn.
|
|
# ? Dec 23, 2023 16:09 |
|
BlankSystemDaemon posted:Can’t say I like the idea of making distribution-specific changes that aren’t upstreamed. Also, TIL that Illumos is still alive.
|
# ? Dec 23, 2023 16:25 |
|
Eletriarnation posted:I think this is likely the most relevant thread for that question, especially if you're not a Linux user. Wifi Toilet posted:On Windows, I just use FreeFileSync. Just set your main drive on the left, backup drive on the right, set it to Mirror and hit the Synchronize button, it works for me. Rad, thank you. I installed both of these.
|
# ? Dec 23, 2023 16:30 |
|
Combat Pretzel posted:Yea well, blame the Linux kernel devs for having a stick up their rear end. While I don't expect them to merge all of ZFS, some support code in the kernel would be neat. More so, given the popularity of ZFS in the Linux ecosystem. Illumos is not only alive, there is a buzzy, funded startup building a brand-new multi-million-dollar computing platform on it: https://oxide.computer/. It's using ZFS for storage, bhyve for the hypervisor, and illumos for the actual OS. I don't know if they're using ZFS for replication though; they may be doing it at a higher level in their storage application.
|
# ? Dec 23, 2023 16:32 |
|
Twerk from Home posted:Illumos is not only alive, there is a buzzy, funded startup building a brand-new multi-million dollar computing platforms on it: https://oxide.computer/. This thing is using ZFS for their storage, bhyve for the hypervisor, and illumos for the actual OS. I don't know if they're using ZFS for replication though, they may be doing it at a higher level on their storage application. Kind of surprising that the BSD <-> Solaris code exchange is still going on, but I'm not opposed to people spending money in that area. Hopefully some of their code makes it back to FreeBSD.
|
# ? Dec 23, 2023 16:53 |
|
Computer viking posted:Kind of surprising that the BSD <-> Solaris code exchange is still going on, but I'm not opposed to people spending money in that area. Hopefully some of their code makes it back to FreeBSD. They've been promising that they'll open source everything and doing a decent job of it so far: https://github.com/orgs/oxidecomputer/repositories?type=all. Looks like it's under the Mozilla public license, though.
|
# ? Dec 23, 2023 17:04 |
Computer viking posted:Kind of surprising that the BSD <-> Solaris code exchange is still going on, but I'm not opposed to people spending money in that area. Hopefully some of their code makes it back to FreeBSD. Best newer example I can think of is the FreeBSD standard boot loader, because of its tight integration with ZFS, boot environments, bootonce functionality, and so on and so forth. Twerk from Home posted:They've been promising that they'll open source everything and doing a decent job of it so far: https://github.com/orgs/oxidecomputer/repositories?type=all. Looks like it's under the Mozilla Public License, though. It's interesting to me that the CDDL header seems to imply that it's relicensing the BSD bits - which I don't think Pluribus Networks had the rights to do back when they ported bhyve to Illumos.
|
|
# ? Dec 23, 2023 20:20 |
|
I am stuck impatiently waiting on my eBay SATA DOMs (USPS shipping around Christmas, RIP / salute postal workers) and then I can finally get moving. I might make a SeaChest boot disk to at least check on the settings of my HDDs. Anyone here play with those tools to mess with the Seagate version of TLER / power settings? My drives are all Exos X16 w/ 2x Exos X22.
|
# ? Dec 23, 2023 21:06 |
|
What's the advantage of SataDOMs these days vs just a plain old cheap nvme disk?
|
# ? Dec 23, 2023 21:12 |
Twerk from Home posted:What's the advantage of SataDOMs these days vs just a plain old cheap nvme disk? Also, cheap NVMe disks are probably going to be QLC or beyond, so will have no loving write endurance, whatsoever.
|
|
# ? Dec 23, 2023 21:29 |
|
Twerk from Home posted:What's the advantage of SataDOMs these days vs just a plain old cheap nvme disk? For me, all bays / slots in my case are spoken for + the X11SSL-CF is old enough to pre-date M.2. But, it has two SataDOM connectors on it so I figured it was the best option for a mirrored boot store for FreeNAS. BlankSystemDaemon posted:You can't get SLC flash on NVMe anymore, whereas there's plenty of SLC flash in SATADOM devices in stock. I got Innodisk SATADOM-ML 3SE, Dell P/N 0T4M4. SLC flash! Downside of building near the holidays... ran into the big-TB drive mounting hole problem in the 804, for the 2 extra 3.5" spots. These aren't the holes on the sides, but the 4x on the bottom. The Exos retain the standard holes on the side, which I guess is why I never ran into this before. Need to go find an adapter for it, or I guess I'm drilling / bending some metal. e: Ah I think I need this thing: https://www.thingiverse.com/thing:4791974 movax fucked around with this message at 21:46 on Dec 23, 2023 |
# ? Dec 23, 2023 21:33 |
|
I've been using Kingston DC series disks as OS drives recently, but I can't yet say if they are any better in the long run. They claim endurance way beyond what I need, at least - and the 500 series I used were not that expensive.
|
# ? Dec 23, 2023 22:20 |
|
Those should be fine, lots of endurance. I've got a few of their cheaper 120-240GB drives and only one died, after hard use. SSDs last a lot longer if you overprovision them a bit - I pulled this Kingston A2000 1TB out of a server with heavy database load, but I left about 20% unpartitioned and that seems to have helped a fair bit. The A2000 1TB is rated for 720 TBW.
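For context on what a TBW rating buys you, it converts to average drive writes per day (DWPD) given a service life. A quick sketch using the 720 TBW figure above (the 5-year period is an assumption for illustration, not a quoted warranty term):

```python
def drive_writes_per_day(tbw, capacity_gb, years):
    """Convert a rated endurance in TBW into average drive writes per day."""
    total_writes_gb = tbw * 1000        # TBW -> GB written over the drive's life
    days = years * 365
    return total_writes_gb / (capacity_gb * days)

# A2000 1TB at 720 TBW over an assumed 5 years: roughly 0.4 DWPD,
# i.e. you could rewrite ~40% of the drive every day and stay in spec.
dwpd = drive_writes_per_day(720, 1000, 5)
```

Overprovisioning doesn't change the rated TBW, but leaving space unpartitioned gives the controller more spare area for wear leveling and garbage collection, which is plausibly why the heavily loaded drive above held up.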
|
# ? Dec 23, 2023 22:38 |
|
While I wait for stuff to come in, I'm going through all my SSDs that are lying around (mostly SATA ones -- I have some NVMe ones, but will set those aside...) and I have a decent amount (probably 10 or so) of Samsung 850 and 860s ranging from 256 GB to 1 TB in size. Also found a bunch of older Intel models, but they are all 120 GB or so... not that useful. If I want to turn this into the '1' or part of the '2' in 3-2-1, what's the lowest-power HW configuration + chassis I should be looking at? I'd be drawn to some kind of ARM appliance but TBH, this impressed me and I forgot x86 E-Cores can still play in low power. So I think it's more of a question of a decent backplane / case setup to cram in say... 8 drives or so? Unraid or the most "JBOD-like" setup I can get from TrueNAS sound optimal to me -- mostly to simplify my life in sending / receiving snapshots from the primary appliance. tl;dr -- with around 10 SATA SSDs ranging from 256 GB to 1 TB in size, what's the best way to turn that into a low-power pool of /backup01 that can be my 2nd/3rd tier data storage appliance?
|
# ? Dec 24, 2023 20:56 |
|
Anyone have any experience updating SSD firmware using USB enclosures? Specifically I'm trying to update a SATA Samsung 870 Evo but I don't have any SATA ports on my motherboard. Everything I've read tells me this is impossible due to the way USB enclosure controllers typically emulate the SATA connection. I'm not poo poo out of luck because I do have an old PC that does have SATA ports. I'm curious though: if I didn't have an old PC laying around, would I be SOL? Is there an external SATA controller that would work for this use case? e: for what it's worth I've dickered with both Samsung Magician on Windows 10 and the Linux Live OS firmware ISO you can get from Samsung.
|
# ? Dec 24, 2023 21:12 |
|
I have had no luck until I plugged it in myself - but may I ask what sort of PC you have without SATA ports? They still seem fairly standard on motherboards. I guess you could find a thunderbolt SATA controller (or put a PCIe one in an eGPU enclosure), but that sounds pointlessly expensive.
|
# ? Dec 24, 2023 21:43 |
|
|
Hey ZFS long-timers, I've got an offsite storage box with 36 disks that I want to work and not lose sleep over. I also need to use at least ~65% of the physical raw capacity, so RAID 10 is out. Suggested topologies for vdevs? The naive immediate one seems to be 4x9-disk raidz2, but I could also get greedy with 3x12-disk raidz2, or get smarter with hot spares via 3x11 raidz2 plus 3 hot spares. Thoughts? Does ZFS handle odd striping across disks like this well, or do people stick to 4/6/8-disk vdevs for a reason and I should do something much more aligned with common usage, like 4x8 raidz2 with 4 hot spares? Well, it looks like striping does become a problem. Time to think about this a moment more: https://jro.io/capacity/. I'm not married to ZFS for this yet, but I specifically preferred an alternative storage tech to the Ceph cluster that this will be the DR environment for.
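The parity-only arithmetic for those candidate layouts is easy to sanity-check before reaching for the calculator. A sketch (this ignores raidz allocation padding and metadata overhead, which is exactly what the linked jro.io calculator corrects for, so real usable space lands a few percent lower):

```python
def raidz_efficiency(total_disks, vdevs, disks_per_vdev, parity, spares=0):
    """Fraction of raw capacity usable, counting only parity and spares."""
    assert vdevs * disks_per_vdev + spares == total_disks
    data_disks = vdevs * (disks_per_vdev - parity)
    return data_disks / total_disks

# The 36-disk layouts from the post, all raidz2 (parity=2)
layouts = {
    "4x9 raidz2":            raidz_efficiency(36, 4, 9,  2),            # 28/36 ~ 77.8%
    "3x12 raidz2":           raidz_efficiency(36, 3, 12, 2),            # 30/36 ~ 83.3%
    "3x11 raidz2 + 3 spare": raidz_efficiency(36, 3, 11, 2, spares=3),  # 27/36 = 75.0%
    "4x8 raidz2 + 4 spare":  raidz_efficiency(36, 4, 8,  2, spares=4),  # 24/36 ~ 66.7%
}
```

By this naive measure every option clears the ~65% bar, but the 4x8-plus-spares layout has the least headroom, so padding overhead is what decides whether it actually makes the cut.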
|
# ? Dec 27, 2023 16:18 |