|
Theophany posted:Reddit is poo poo but tbh I was very much expecting a big rug pull based on how other software companies have done things like this in the past. I'm glad it hasn't gone like that.

Honestly this seems like the best way for them to have pivoted to a new pricing model, especially compared to so many other tech companies' enshittification phase. I can't really blame folks for their initial reaction considering how many other companies have completely hosed their users with this sort of change.

Scruff McGruff fucked around with this message at 21:15 on Feb 19, 2024
# ? Feb 19, 2024 21:11 |
|
|
# ? May 29, 2024 11:09 |
|
That could have been a lot worse and now I don't have to rush to upgrade from basic. Brainworms have me considering buying a basic key just in case I ever need to set up another.
|
# ? Feb 19, 2024 21:55 |
|
Scruff McGruff posted:
Yeah zero blame for redditors on their response to this one. Same deal as when a beloved game franchise announces it’s pivoting to GaaS.
|
# ? Feb 19, 2024 22:31 |
|
Henrik Zetterberg posted:It’s funny reading the Reddit comments from last night and this morning before the official announcement. Everyone was frothing at the mouth pissed.

Frothing is the only appropriate response to such news, so that the details can be ameliorated before the official announcement.
|
# ? Feb 19, 2024 22:36 |
|
Has anyone ever found a way to use the disk tray and backplane of a Synology with a standard PC? I've got a dead DS1812+, but it would be nice to be able to use the backplane and disk trays still. The connectors look vaguely PCIe-ish, but I have no idea if they actually are.
|
# ? Feb 19, 2024 23:37 |
|
I've been buying 3rd party modular power supply cables and just clipped / cut / removed the 3.3 V line from the PSU side connector. Most PSU cable sets have SATA and molex cables so the offending cable is easy to single out and cut by comparing them. Some new PSUs (iirc Seasonic) even ship with SATA power cables marked with HDD and no 3.3 V line on them.
|
# ? Feb 20, 2024 07:32 |
|
why is the 3.3v line even present on consumer SATA power supplies anyway? It's not used for anything except telling enterprise drives to turn off AFAIK.
|
# ? Feb 20, 2024 16:55 |
It's finally happened, it's finally happened, it's finally happened. No idea when it'll hit the market, or what the price will be.

It's wild to me that the PCIe specification lets you pull 75W through the slot, but it's so rare to find a GPU that uses fewer than two slots for a fan without ever getting near that 75W TDP.

Harik posted:why is the 3.3v line even present on consumer SATA power supplies anyway? It's not used for anything except telling enterprise drives to turn off AFAIK.

EDIT: If memory serves, VRMs used to need the 3.3VDC.

BlankSystemDaemon fucked around with this message at 17:20 on Feb 20, 2024
|
# ? Feb 20, 2024 17:11 |
|
That A310 will be neat as a compact way to get a hardware AV1 encoder - I wonder what it will cost, I assume in the $100 range.

There are already some good used options too (at least in the US) if you just want some 4K-capable display outputs on a small card. A Radeon WX 2100 is around $30 on eBay and a Quadro P600 is $40. Both are new enough to still have driver support for Windows (although I assume you're not concerned with that) and the P600 at least seems happy being passed through to a VM guest in TrueNAS Scale. I started out with the WX 2100 in that role and it worked for a while, but then stopped after a TrueNAS 23.10.1 -> 23.10.1.3 update - I'm not sure if the problem is inherent to the card or something to do with the Windows drivers.

Eletriarnation fucked around with this message at 17:36 on Feb 20, 2024
# ? Feb 20, 2024 17:28 |
|
Newegg already has it listed at $105. I'm very tempted since that could get me into upgrading my library to 4K and not giving a gently caress about transcoding.
|
# ? Feb 20, 2024 17:32 |
|
drat, I bought their normal A310 a few months ago.
|
# ? Feb 20, 2024 17:39 |
|
drat, I feel like they're going to clean up in the Plex/Jellyfin market with that.
|
# ? Feb 20, 2024 18:00 |
|
Oh that's awesome. I've been keeping an eye on the effort to add HW transcode support to Plex for AMD APUs so I could use my NAS box as-is and progress has been buggy and slow. I'd much rather get a cheap single-slot GPU but Nvidia doesn't seem to want to make those.
|
# ? Feb 20, 2024 18:16 |
|
Again, a Quadro P600 is $40 used and it has pretty much everything except AV1. If you specifically want AV1 though, this is a nice development.

e: Reference is here - the P600 isn't specifically listed, but it's a GP107, which is the same chip as a 1050, so feature set 'H'.

vvv Yeah, the P400/P600/P620 are almost indistinguishable and all around $35-45 at a glance. The P1000 is also a GP107 and looks the same, but I guess it's either uncommon or the 4GB memory is really popular, because it's $100.

Eletriarnation fucked around with this message at 18:30 on Feb 20, 2024
# ? Feb 20, 2024 18:18 |
|
A bunch of Quadros are low profile, a P620 should be dirt cheap and does H.264 and H.265 encoding https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new E: F, b
|
# ? Feb 20, 2024 18:22 |
|
If you're trying to add transcoding to something that can't transcode at all, sure. But if you're just CPU-bound on being able to do 4K, that doesn't move the needle at all.
|
# ? Feb 20, 2024 21:54 |
|
That's why you slap an A2000 in there
|
# ? Feb 20, 2024 22:02 |
|
Which is 3x+ the cost of the Arc A310 and draws a lot more power.
|
# ? Feb 20, 2024 22:09 |
|
I don't entirely agree with the statement that 2-3 simultaneous 4K streams "doesn't move the needle at all" unless your reference point is a rather powerful CPU, since my first Plex server couldn't even handle CPU transcoding for a single 4K stream. But yes, if you want several simultaneous 4K streams then it might behoove you to spend more than $40. My P600 recommendation was just a response to the general desire for a cheap low-profile single slot card.
|
# ? Feb 20, 2024 22:31 |
|
IOwnCalculus posted:Which is 3x+ the cost of the Arc A310 and draws a lot more power. It's 45W vs 70W peak, don't need to get too dramatic about it
|
# ? Feb 20, 2024 22:36 |
|
Azhais posted:It's 45W vs 70W peak, don't need to get too dramatic about it Alright, I admit I saw the big shrouded fan and figured it was well north of 100W. Still $300+ on eBay used versus $105 new.
|
# ? Feb 21, 2024 00:29 |
|
edit: n/m
SgtSteel91 fucked around with this message at 03:54 on Feb 21, 2024 |
# ? Feb 21, 2024 03:49 |
|
Henrik Zetterberg posted:Well poo poo. Instead of just expanding and upgrading my Unraid, I went ahead and got a Synology DS1822+ 8-bay system and picked up 3x 22TB Ironwolf Pros. I don’t like the Synology having a CPU from 2018, but I figure waiting for a refresh on it, while my Unraid quickly runs out of space, probably wasn’t worth it, and having a AIO system that just works and doesn’t require a lot of maintenance is probably a good thing. Plus, it’s a pure file server. My NUC is the one running PMS, so it doesn’t need any hardware encoding support.

Yeah, Synology is out of their loving minds when it comes to RAM and drive prices. I have a pair of 1522+ each with DX517 expanders and 32GB ECC RAM in them, and while working on the first unit it wouldn't boot with both RAM sticks installed. Put in only one in the innermost slot, boots fine; put it in the outer slot and nada. I upgraded the second unit to 32GB and it worked perfectly. Put the original 8GB sticks from each unit back in the first one, and it still wouldn't boot after 8 hours. Contacted support throughout all of this and they were asking for photos of the DIMMs and packaging; toward the end I told them it didn't work with their RAM either, provided a picture of the Synology RAM and told them to RMA it. Fastest I've seen a support issue go from mindless testing to RMA. I also believe I'm using A-Tech in one of my systems. Problem was a failed DIMM slot.
|
# ? Feb 21, 2024 21:52 |
|
Nulldevice posted:Yeah, Synology is out of their loving minds when it comes to RAM and drive prices. I have a pair of 1522+ each with DX517 expanders and 32GB ECC RAM in them, and through working on the first unit it wouldn't boot with both RAM sticks installed. Put in only one in the innermost slot, boots fine, put it in the outer slot and nada. I upgraded the second unit to 32GB and it worked perfectly. Put the original 8GB sticks from each unit back in the first one, and still wouldn't boot after 8 hours. Contacted support doing all of this and they were asking for photos of the DIMMs and packaging, toward the end I told them it didn't work with their RAM either, provided a picture of the Synology RAM and told them to RMA it. Fastest I've seen a support issue go from mindless testing to RMA. I also believe I'm using A-Tech in one of my systems. Problem was a failed DIMM slot.

Their HDD (rebranded Toshibas) and DDR prices should be loving criminal. And in the xs+ models, it flags non-Synology branded HDDs as a "critical" / red error in the health check thing. So as long as you don't have a system of pure Synology drives, it will always say that it's in critical condition or something. Absolutely stupid, and probably easy to miss an actual critical error that may occur. I've seen posts where they planned to "downgrade" it to a yellow warning, but not sure if that actually ever happened.

My 1822+ arrived today and is sitting on my floor waiting for this stupid meeting to end
|
# ? Feb 21, 2024 22:44 |
|
Harik posted:why is the 3.3v line even present on consumer SATA power supplies anyway? It's not used for anything except telling enterprise drives to turn off AFAIK. Because it’s part of the spec? Pretty sure (but not positive) it’s used by some LED controllers for instance. And those tend to be exclusively in the consumer space.
|
# ? Feb 21, 2024 22:51 |
|
Micro SATA drives use it.
|
# ? Feb 21, 2024 23:01 |
|
Anyone have any experience with the asustor flashstor products? https://www.amazon.com/dp/B0BZCM22WD/?th=1

I have a fairly low end Proxmox cluster set up and put in Ceph on some NVMe for storage, but while I don't need a lot of performance for what I'm doing, I need more than what I'm getting after the Ceph overhead. Long term my new plan is to set up a real Ceph cluster that I can use for multiple things, but I don't want to spend that kind of money right this second, so just a NAS with NFS will do the job for a while, and since I've already got the 6 NVMe drives... But if this thing is hot garbage I'll go back to looking at building my own little box to host them, but I like the simplicity of just getting a prebuilt.
|
# ? Feb 22, 2024 00:07 |
|
Azhais posted:Anyone have any experience with the asustor flashstor products?

At $500, and being as comfortable with DIY as you clearly are with your Proxmox setup, I'd just do a normal desktop with a PCIe to M.2 adapter, assuming they're all M.2. The N5105 in that $500 NAS is slow if you're wanting good performance out of NVMe.

Have you tuned your Ceph cluster for how you are running it? That's tiny for Ceph, but there's a ton of knobs you can turn and config to adjust. Ceph itself is capable of getting about 2/3 of raw disk throughput for reads with replication, and about 1/3 for erasure coding. Those numbers might sound bad, but should be able to saturate 2.5gbit for you for sure, and even 10gbit with 6 disks.

If you want to read about a ceph cluster with 680 disks that does 1 terabyte per second reads, here you go: https://ceph.io/en/news/blog/2024/ceph-a-journey-to-1tibps/
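For what it's worth, most of those knobs can be poked at runtime via `ceph config`; a hedged sketch (the value here is illustrative, not a recommendation, and `testpool` is a placeholder pool name):

```shell
# Illustrative only - the right values depend entirely on your hardware.
# Raise the per-OSD memory target (default is ~4 GiB) for NVMe-backed OSDs:
ceph config set osd osd_memory_target 6442450944   # 6 GiB

# Measure before and after each change, e.g. a 30-second write benchmark:
rados bench -p testpool 30 write
```

Tune one thing at a time and re-benchmark, otherwise you won't know which knob actually helped.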
|
# ? Feb 22, 2024 00:28 |
|
Well poo poo, I didn’t realize that for Synology in an SHR configuration, you can only expand the data pool with a disk that’s the same size as or bigger than your largest, or the same size as an existing drive. I figured I’d just load the Synology with 3x 22TB new drives, network transfer all my data from my old NAS (<44TB), then expand the Synology pool with the disks from my Unraid. Now I’m going to have to do some dumb disk juggling that will take forever 😩

On a side note, I’m liking how easy it is to set up the Synology though! And also how easy it is to swap drives, both at the hardware level and in software.
|
# ? Feb 22, 2024 16:23 |
|
Hughlander posted:Because it’s part of the spec? It's this, the whole "3.3v present shuts the drive down" is WD abusing the spec for their own gains. I'm not sure if their primary goal was making life slightly harder for drive-shuckers, or a very lazy way to implement their USB-SATA controller telling the drive to spin down, but I suspect both are beneficial to them with no negatives when the drive is used as WD wants it to be.
|
# ? Feb 22, 2024 16:30 |
|
IOwnCalculus posted:It's this, the whole "3.3v present shuts the drive down" is WD abusing the spec for their own gains. I'm not sure if their primary goal was making life slightly harder for drive-shuckers, or a very lazy way to implement their USB-SATA controller telling the drive to spin down, but I suspect both are beneficial to them with no negatives when the drive is used as WD wants it to be. I thought the whole 3.3V issue exists because hobbyists are putting in drives that are generally used in Enterprise solutions where the 3.3V rail provides functionality needed there. Please correct me if I am wrong on this, I am not positing a statement with massive amounts of conviction here.
|
# ? Feb 22, 2024 17:07 |
TraderStav posted:I thought the whole 3.3V issue exists because hobbyists are putting in drives that are generally used in Enterprise solutions where the 3.3V rail provides functionality needed there.

The 3.3V line was historically used as a sense line on the motherboard, and that's what it's nominally there for in enterprise storage nowadays - i.e. if something is wrong with the enclosure, the disk won't try turning on if the sense signal is high. There is supposed to be some logic circuitry that keeps the signal low unless one thing out of a number of things is wrong, which fail-safes the signal to high. Still, it's stupid that it is that way, and not everyone does it - which is what leads to the confusion.

BlankSystemDaemon fucked around with this message at 17:30 on Feb 22, 2024
|
# ? Feb 22, 2024 17:26 |
|
I am not a very good raider so pardon me if these are dumb questions.

I set up an mdadm RAID 1 mirror of two 18 TB disks. It uses XFS. It works fine but I am a bit confused by the scrubbing. By default Fedora's raid-check.timer runs once a week. Is this overkill? The system is noticeably sluggish when it happens, so I would like to do it less often, say once a month instead of once a week. That would mean if there is some integrity problem there would potentially be a longer period before it is detected. This is mostly static content like photos that are also backed up in the cloud, so I think it is reasonable to scrub less, but maybe I miss the point. Thoughts?

Not really related, but just this week I got another of these 18 TB disks so now there are three. For no particular reason I am tempted to format this new disk with btrfs, copy the mdadm data to it (about 6 TB), and then dismantle the mdadm array to get all the disks in the new btrfs array. My poor understanding is btrfs would know to only scrub the 6 TB of actual data instead of the whole block device at the mdadm layer, so that would be a plus for the foreseeable future. And btrfs has compression, which is cool, but 99% of this data is already compressed so that probably wouldn't matter. Also it is fun to try new things.

If I add the third disk to the existing mdadm mirror array then the content of each disk is the same. I can lose two and still have all the data intact. But in btrfs-land I am really not clear on what the equivalent would be. A "raid1c3" profile? https://btrfs.readthedocs.io/en/latest/mkfs.btrfs.html#profiles Otherwise the 'raid1' profile with three disks only gets you two copies of the data and you can only lose one drive. That does not align with my very basic understanding of what a RAID 1 setup is, but I am not the expert.
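Both halves of this are adjustable, for what it's worth. A sketch of the usual knobs (the sysctl value is illustrative and the device names are placeholders; verify the stock paths on your Fedora install):

```shell
# Throttle the scrub/resync rate so the box stays responsive during checks
# (value is KiB/s per device; pick one that suits your disks):
sysctl dev.raid.speed_limit_max=50000

# Or change the cadence: "systemctl edit raid-check.timer" and add an
# override so it fires monthly instead of weekly:
#   [Timer]
#   OnCalendar=
#   OnCalendar=monthly

# And yes, on the btrfs side, raid1c3 is the profile that keeps three
# copies, so a three-disk raid1c3 array survives losing two disks:
#   mkfs.btrfs -m raid1c3 -d raid1c3 /dev/sdX /dev/sdY /dev/sdZ
```

The plain btrfs 'raid1' profile with three disks does indeed only keep two copies of each chunk, so raid1c3 is the closer match for a three-way mirror.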
|
# ? Feb 25, 2024 00:35 |
|
If you're running RAID1 with mdadm now, you can grow that to RAID5. docs

Do not - I repeat - do not bother with RAID levels 5 or 6 with btrfs.

E: I've usually done monthly scrubs on RAID5 / RAID6 arrays, but you can also tweak mdadm settings to reduce the performance impact.

Wibla fucked around with this message at 01:35 on Feb 25, 2024
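A rough sketch of that grow path, assuming the array is /dev/md0 and the new disk's partition is /dev/sdc1 (both placeholders; adjust to your system):

```shell
# Add the third disk to the existing two-disk RAID1 array as a spare:
mdadm --add /dev/md0 /dev/sdc1

# Reshape from RAID1 to RAID5 across all three devices
# (this runs in the background; watch /proc/mdstat for progress):
mdadm --grow /dev/md0 --level=5 --raid-devices=3

# Once the reshape completes, grow the XFS filesystem (must be mounted):
xfs_growfs /mnt/array
```

The reshape on 18 TB disks will take a long time, and you go from "can lose two disks" to "can lose one", so only do this if you actually want the extra capacity.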
# ? Feb 25, 2024 01:33 |
|
I bought the third disk because it was on sale and I appreciate the added redundancy. For now at least I am not interested in maximizing space.
|
# ? Feb 25, 2024 10:09 |
|
What's the recommended practice to access a Synology NAS from the internet? The built-in DDNS seems to work, but then I have to set up a port forwarding rule on the router for every service.
|
# ? Feb 25, 2024 13:46 |
|
Selklubber posted:What's the recommended practice to access a Synology NAS from the internet?

WireGuard or some other VPN service, so that you don't have to open ports.
|
# ? Feb 25, 2024 13:50 |
|
https://tailscale.com/kb/1131/synology
|
# ? Feb 25, 2024 14:02 |
|
huh that was surprisingly easy to set up! i can finally watch my linux isos from the hotel! is there any stuff i should do to secure this more, or is standard setup ok?
|
# ? Feb 25, 2024 15:17 |
|
|
# ? May 29, 2024 11:09 |
|
If you have a use for it, you can configure remote access to other devices on your home network that you can't install Tailscale on. This will also allow you to use the same local IP addresses while home or away. I did step 3 here to be able to reach my thermostat, satellite receiver and IP cameras.
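The subnet-router setup described above boils down to something like this (192.168.1.0/24 is a placeholder for your actual LAN range):

```shell
# On an always-on machine inside the LAN (the Synology works),
# advertise the local subnet to your tailnet:
sudo tailscale up --advertise-routes=192.168.1.0/24

# Then approve the advertised route for that machine in the Tailscale
# admin console before other devices can actually use it.
```

After that, devices like thermostats and cameras are reachable at their normal LAN addresses from anywhere on the tailnet, no Tailscale client needed on them.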
|
# ? Feb 25, 2024 15:46 |