admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues

IOwnCalculus posted:

Are you setting yours up with a separate drive for the OS or no? I seem to have a never-ending supply of sub-100GB drives, which are still good for boot / swap disks - and it seems a lot easier to me, in the long run at least, to keep the OS completely separate from the data disks.

I agree, I would never put the operating system on a raid array meant for file storage. My current fileserver setup has 5x 500GB drives in a RAID-5 for storage and 2x 20GB fireballs in a RAID-1 for boot. If the storage array goes south, it's invaluable to still have a safely bootable system in order to diagnose things. Keeping boot and data separate is really a must in my opinion.

edit: I'm not sure that's what he's saying, though. I think he's suggesting keeping swap on redundant storage instead of striped over a RAID-0.

admiraldennis fucked around with this message at 18:14 on Mar 18, 2008

admiraldennis
Jul 22, 2003


amanvell posted:

What is the best way to monitor a mdadm Raid-5 array under Linux?

Personally, I have a cron job run mdadm in monitor mode every 20 minutes; it will send me an email if something's up:
code:
0,20,40 * * * * root	mdadm --monitor --oneshot --scan --mail user@domain.com
Make sure you have postfix or another MTA properly configured first.
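If you'd rather roll your own check on top of (or instead of) the emails, the same degraded state shows up in /proc/mdstat: a degraded array has an underscore in its member-status string (e.g. [UU_] instead of [UUU]). A rough sketch of the parse; the sample text here is made up for illustration:

```python
import re

def degraded_arrays(mdstat_text):
    """Return names of md arrays whose member-status string
    (e.g. [UU_]) contains '_', i.e. a missing/failed member."""
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r'^(md\d+)\s*:', line)
        if m:
            current = m.group(1)
        # the status line ends like: "... [3/2] [UU_]"
        elif current and (st := re.search(r'\[([U_]+)\]\s*$', line)):
            if '_' in st.group(1):
                degraded.append(current)
            current = None
    return degraded

sample = """\
Personalities : [raid1] [raid5]
md0 : active raid5 sdb1[0] sdc1[1] sdd1[2]
      976562176 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
md1 : active raid1 sde1[0] sdf1[1]
      19530688 blocks [2/2] [UU]
unused devices: <none>
"""
print(degraded_arrays(sample))  # ['md0']
```

Real /proc/mdstat has extra lines (bitmaps, resync progress) this sketch ignores, but the [U_] string is the part that matters.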

admiraldennis
Jul 22, 2003


sharkytm posted:

I'll chime in with everything I've learned thus far.
1: DAS is great, but a good NAS can come close to USB2's actual (not max) throughput with a decent gigabit switch and good cabling. I've clocked our Terastation Live 2TB (1.4TB available through RAID5) at 35MB/s, which is better than my DAS USB2 ATA100 drive, which tops out at 30MB/s. I just transferred a 19GB file in an hour over a 100Mbit card, connected to a gigabit network, while the computer was being heavily used. Pretty decent, if you ask me.

Yeah, absolutely. Speed is simply not a concern for NAS over Gigabit Ethernet. On a good day I can pull just shy of 40MB/sec from my software RAID-5, and the network is not the bottleneck.
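The arithmetic backs that up: gigabit tops out at 125MB/s before protocol overhead, comfortably above what the array delivers. A quick sanity check using the numbers above:

```python
link_bps = 10**9                  # gigabit Ethernet line rate
wire_MBps = link_bps / 8 / 10**6  # 125 MB/s theoretical ceiling
# Real payload throughput lands lower (Ethernet/IP/TCP framing
# overhead, roughly 5-6%), so figure ~115 MB/s in practice.
array_MBps = 40                   # the software RAID-5 read speed above
print(wire_MBps, array_MBps < wire_MBps)  # 125.0 True
```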

admiraldennis
Jul 22, 2003


kapalama posted:

It seems like for home-brew solutions, this motherboard/CPU combo should be first post worthy:

http://www.silentpcreview.com/article780-page1.html

I guess it's nice if you want to build a tiny box and don't ever want to use gigabit ethernet.

2x SATA, 1x IDE, 1x PCI = little chance at a decent RAID; 10/100 Ethernet = little chance I'd ever use it for NAS

admiraldennis
Jul 22, 2003


WindMinstrel posted:

In summary. :love: ZFS, :love: RAID-Z.

That's it; my next array is going to be a z. I gotta try this.

stephenm00 posted:

why aren't ZFS and RAID-Z a more common option? There must be some disadvantage for home users, right?

Accessibility. ZFS was written by Sun as a major feature of Solaris. Solaris is a nice operating system, but its hardware compatibility is nowhere near as accommodating as that of Linux or, say, FreeBSD.

Fortunately for us, the latest version of FreeBSD contains an experimental but working port of ZFS.

admiraldennis fucked around with this message at 03:50 on Mar 19, 2008

admiraldennis
Jul 22, 2003

Is there a way to prevent md from starting an array in degraded mode?

i.e., if I disconnected a drive used in a RAID-5 and booted the machine, would the array simply not start, instead of starting and becoming degraded?

I found "md_mod.start_dirty_degraded=0" but that only prevents a degraded & dirty array from starting (and is the default), so that doesn't really help.
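AFAIK there's no clean md knob for exactly this; the closest workaround I can think of is an early boot script that checks each array and stops any that assembled degraded, before anything mounts it. The check boils down to looking for "degraded" in `mdadm --detail` output. A sketch of just that decision logic (`should_stop` is a name I made up, and the sample output is abbreviated):

```python
import re

def should_stop(detail_output):
    """True if `mdadm --detail /dev/mdX` output reports a degraded state."""
    m = re.search(r'^\s*State\s*:\s*(.+)$', detail_output, re.MULTILINE)
    return bool(m) and 'degraded' in m.group(1)

sample = """\
/dev/md0:
        Version : 0.90
     Raid Level : raid5
          State : clean, degraded
 Active Devices : 4
"""
print(should_stop(sample))  # True
```

In a boot script this would be wired up as something like `mdadm --detail /dev/md0 | grep -q degraded && mdadm --stop /dev/md0`, run before the mount step while nothing has the array open.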

admiraldennis fucked around with this message at 21:08 on Mar 21, 2008

admiraldennis
Jul 22, 2003

I'm looking at upgrading my old NAS (8x 1TB drives in RAID-6, ext3) to something modern and bigger using FreeNAS and ZFS.

Is RAID-Z2 reasonable for 8x 8TB drives? Or should I be looking at RAID-Z3 for large-capacity drives?

When I built my last NAS, all the talk was about how RAID-5 was deprecated and RAID-6 was on its way out too, given the likelihood of hitting an unrecoverable read error during a rebuild. Have those concerns held up?

All my super crucial stuff will be backed up elsewhere, but I'm willing to shell out extra $ to prevent headaches, re-ripping Blu-Rays, etc (though I don't want to waste money either!).

admiraldennis
Jul 22, 2003

That's great info.

To be clear - for my 8x 8TB thing, I'm talking about RAID-Z3 vs RAID-Z2 (not even considering RAID-Z1!). IIRC URE catastrophe was the main thing behind folks saying "RAID-6 will soon be deprecated!" back in the days of 1TB/2TB drives being the biggest out there.

If I'm really looking at something like a 0.16% chance of failure in the long run with -Z2, I think I'm comfortable with -Z2 instead of -Z3 :)
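For anyone curious where figures like that come from, the classic URE argument just compounds a per-bit error rate over a whole rebuild's worth of reads. A back-of-envelope sketch, assuming the commonly quoted consumer spec of 1 URE per 1e14 bits read (real drives vary and errors aren't truly independent, so this is illustrative only):

```python
def p_ure(bytes_read, ure_rate_bits=1e14):
    """Probability of >=1 unrecoverable read error while reading
    `bytes_read` bytes, assuming independent errors at 1/ure_rate_bits."""
    bits = bytes_read * 8
    return 1 - (1 - 1 / ure_rate_bits) ** bits

tb = 10**12
# Classic RAID-5 worry: rebuilding one failed 8TB drive means reading
# the 7 surviving drives in full with zero redundancy left.
print(f"{p_ure(7 * 8 * tb):.0%}")  # 99%
# RAID-Z2/RAID-6 still has one parity drive during a single-disk rebuild,
# so a lone URE is correctable; you'd need hits on two drives at once,
# which is where the much smaller Z2 failure numbers come from.
```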

admiraldennis fucked around with this message at 23:35 on Sep 6, 2017

admiraldennis
Jul 22, 2003

Shucking report!

Bought 9x WD easystore 8TB external drives at Best Buy for $169.99/ea + tax during this week's sale. They've been as cheap as $159.99.



With some careful (but pretty quick and easy overall) disassembly...



treasure inside!



Keeping stuff safe for the next two years in case of RMA. But after that I have 9x drive enclosures + chipsets, 12V 1.5A power supplies, and USB 3.0 cables - not bad for extras.



Notes:

- These have a 2 year warranty instead of a 3 year warranty
- The enclosure serial is the same as the bare drive serial. The serial shows up as an easystore drive on WD's warranty checker. You'd have to send in the drive intact in the enclosure to receive warranty, I'm sure.
- Disassembly and reassembly can be done non-destructively, though there are easy things to break if you aren't careful.
- From research, there seem to be at least three drives currently possible:

1) WD80EFAX - WD Red 8TB label, 256MB cache, made in Thailand
2) WD80EFZX - WD Red 8TB label, 128MB cache, made in China
3) WD80EMAZ - White label, 256MB cache, made in Thailand

Supposedly #3 is exactly the same drive as #1: same firmware, TLER enabled, and identical specs to the WD Red 8TB; the white label just indicates it wasn't sold bare.

It seems like, at least for the Thailand drives, the White Labels are replacing the Red Labels.

- Before buying, you can tell if it is Made in Thailand or Made in China as it's printed on the bottom of the box. I only saw one Made in China in the two Best Buys I went to for these.
- So far I've only shucked one of the drives, though I've plugged them all in to check their drive model # via SMART over USB. All of my drives are WD80EFAX (Red Label, 256) except one which is WD80EMAZ (White Label, 256).
- You can check the "warranty end date" on the serial on the box before buying - might be a clue to Red vs White label?


(Only the 09/01 one was a white label.)

- Also, FWIW, these drives are helium drives. Good/bad/neutral? It seems like this is/is going to be the new normal for high-capacity drives. I'll admit it scares me a bit from a 'new ways for things to die!' perspective (what if the helium leaks in 5 years?). But apparently HGST has been doing it for a while and drive manufacturers claim that it's more reliable. I could see that angle, if the sealing is effective long-term and if the air filter not being perfect (or some such air/air-hole related thing) is a point of failure on air drives. There's a SMART attribute for Helium level (22), which on my drives shows reading 100, with a pre-fail threshold of 25.

admiraldennis fucked around with this message at 04:45 on Sep 8, 2017

admiraldennis
Jul 22, 2003


BobHoward posted:

Uggggggggggggggh

that is the opposite of keeping that board safe. Food storage bags are nowhere near ESD-safe.

Any RMA department which knew you had done that to something you were returning would be well within their rights to refuse the RMA. Not that I think you have to worry about that possibility, just making the point.

You're not wrong, though these little chipsets have to be worth about $1 each max :D (they also appear interchangeable - e.g. if one happens to fry I could just use a different one). I could look around to see if I have ESD bags... but they'd have to be the right shape to keep the delicate little plastic mounting bracket thing intact for reassembly. Maybe I'll try to buy some, since I've only shucked 1/9 drives so far. The possibility of refused RMA is definitely factored into my bet hedging here though.

admiraldennis fucked around with this message at 14:03 on Sep 8, 2017

admiraldennis
Jul 22, 2003

Recommendation for FreeNAS 9 vs 11 for a new build? I was planning to go with 11 since it's STABLE (and the UI looks a lot nicer from the screenshots) but maybe I'm missing something. Most of my pre-build tinkering has been on 9, I find the UI... serviceable.

admiraldennis
Jul 22, 2003

OK cool - I will plan on going with FreeNAS 11. Thanks everyone.

Final decision time for this NAS! I ended up picking up a few more drives. Trying to decide between:

RAID-Z3 of 11x 8TB drives or 2x RAID-Z2 of 6x 8TB drives

Both net the same usable pool of storage. This is mostly for single-user archival storage / backup / media files. Here are the pros and cons as I see them:

Pros of RAID-Z3 config:
- Better reliability, seemingly by an order of magnitude according to all the calculators (even though the Z2 reliability is still quite high)
- One less disk needed (can save some $)

Pros of RAID-Z2 config:
- Better paths to upgrading: can upgrade only 6 drives at once for more space in the future, or add a 3rd vdev
- More IOPS (moot for single-user?)
- Faster resilvers

My plan is to have a cold spare handy for either config, and to stay under 80% space utilization (part of my reasoning for buying a few more drives) at all times.

I'm leaning towards the Z3 because the order-of-magnitude reliability increase is enticing. My main worry is resilvering - will it be a nightmare? Anyone with vaguely similar setups want to chime in on their resilver times?
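For reference, the "both net the same usable pool" claim checks out on raw data-drive count (ignoring ZFS metadata/padding overhead, which will differ slightly between the layouts):

```python
def usable_tb(drives, parity, size_tb=8):
    """Raw data capacity of one RAID-Z vdev, ignoring ZFS overhead."""
    return (drives - parity) * size_tb

z3 = usable_tb(11, 3)          # one 11-wide RAID-Z3
two_z2 = 2 * usable_tb(6, 2)   # two 6-wide RAID-Z2 vdevs
print(z3, two_z2)              # 64 64 -- same raw capacity
```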

admiraldennis fucked around with this message at 22:00 on Sep 10, 2017

admiraldennis
Jul 22, 2003


G-Prime posted:

I game via virtualization as well and love it, but I don't do it ON my NAS. I tried setting up iSCSI from the NAS to the VM, but it was agonizingly slow over gigabit. Seriously considering 10G cards now, because I'm broken inside and my NAS sits all of like 6 feet from my desktop.

I'm planning to try to set up 10G using eBay'd parts to run a fast line from my NAS to my gaming PC and (much more annoyingly) MacBook Pro. Likely going the SFP+ route, it seems cheaper for used gear.

1Gbps just feels crazy slow for wired networking in 2017 despite its ubiquity. Wireless (ac) is basically just as fast; heck, affordable home internet connections can come in at 1Gbps these days. Where's the 10G proliferation!? (I want 10G everywhere for work reasons too; I work in game dev and pulling things down over 1Gbps sucks there too, but nobody's going to pay for 10G to every workstation)</rant>

Combat Pretzel posted:

Depends on your workload. If you're just archiving videos and documents on the NAS, I guess yea. If you start doing stuff like installing apps and games to it, then you need IOPS.

I've never done the latter before - but maybe with a 10G line I'd want to. What do people use to set this kind of thing up? iSCSI?

admiraldennis fucked around with this message at 01:44 on Sep 11, 2017

admiraldennis
Jul 22, 2003


Sniep posted:

Woop, I got in on this last week and did my 4-bay Synology - Took a bit of time to rebuild the RAID and reshape for the new free space on disks 3 and 4, but all good to go now!

Turns out I got four of the good Thailand 256MB cache WD Reds:



Was juuuuuuust a tad bit tight before out of 10.82 (? iirc?) free, now a bit of breathing room and a new partition on top, since the 32-bit CPU on this DS416j limits partitions to 16TB max:



Overall much better though - Thanks a ton for the WD Red / Best Buy info!

Nice! Glad you were able to take advantage of it.

One thing I've learned since that post:

The white label drives seem to have an important difference that I didn't notice at first: they don't spin up when plugged directly into PSUs. But - they work fine via Molex->SATA converters. I found this out when I was doing final cabling for my build; my single white label didn't spin up; I googled around and others have seen this too. Perhaps they detect the presence of the 3.3v pin and prevent themselves from spinning up as anti-shucking protection.

admiraldennis
Jul 22, 2003


Paul MaudDib posted:

If your rigs are physically close together, InfiniBand QDR is definitely the way to go for the moment. You can pick up a pair of dual-port IB cards and some cables for less than a single 10GbE adapter card, and if you end up having to lose your $100 investment then oh well, you had 40 gbps networking for 2 years or w/e at $50 per year. Switches are cheap too, a 24- or 36-port switch should run you around $125.

Having a lot of 10 GbE switching capacity gets real expensive, it's kinda hard to justify in a homelab setting past a trunk connection to a NAS, and IB still does better IOPS there. I would go so far as to say that if you need longer than a 7m run it might be worth looking into a retarded setup like a 10 GbE card bridged to its own InfiniBand port to handle the longer runs between your switches or something. Right now, for computers that are physically close, the economics of Infiniband on a per-adapter basis are just fantastic.

Any recommendations for eBayable InfiniBand or 40GbE gear that works well with FreeNAS and Windows 10?

admiraldennis
Jul 22, 2003


alecm posted:

Looks like these WD easystore 8TBs are again at Best Buy for $180 this week. (https://www.bestbuy.com/site/wd-easystore-8tb-external-usb-3-0-hard-drive-black/5792401.p?skuId=5792401) I'm interested in buying maybe eight and using them in a rackmount case with a backplane. If I get one or more of the white label Thai drives, is there any way to interrupt the 3.3v line to use them? Is cutting the trace on the drive itself the only option? :stonk:

You can maybe mod the backplane or the power supply to the backplane if you don't want to mod the drives... I also wouldn't be surprised if some backplanes don't even provide 3.3v

I devised a (probable) way to tell white vs red without opening the shrinkwrap. No idea how globally consistent it is, but it's worked for me across 14 total shucks from various Best Buys near me. I'm hesitant to share this outside of these here forums, to help slow eBay shuckers from buying up the last supply of red labels...

Look up the warranty of the serial on the bottom of the box. If it ends on or after 09/01/19, and it's Made in Thailand, it's going to be a white label. The latest Red I got was 08/26/2019; if it's on or before that, in my experience at least, it's always a Red label. No idea if this date applies to Made in China boxes, or if there are Made in China white label ones at all (AFAIK I haven't heard evidence of one). The Made in China red label is actually the "standard" 128MB cache Red drive that's for retail sale (as opposed to the Made in Thailand with 256MB cache).

A good buying rule would be to buy the earliest-dated ones you can. Registering the drive with WD bumps the warranty date to +2 years from your purchase date, so no worries about less warranty.
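Encoded as a quick script, with the emphasis on probable (the cutoff dates are just my own observations, nothing official from WD):

```python
from datetime import date

def likely_label(warranty_end, made_in_thailand):
    """Guess red vs white label from the box, per the anecdotal
    warranty-date heuristic described above."""
    if not made_in_thailand:
        return "red"      # China boxes: the standard 128MB-cache retail Red
    if warranty_end >= date(2019, 9, 1):
        return "white"
    if warranty_end <= date(2019, 8, 26):
        return "red"
    return "unknown"      # dates between the two observed cutoffs

print(likely_label(date(2019, 9, 1), True))   # white
print(likely_label(date(2019, 8, 26), True))  # red
```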

You can also tell for sure without shucking the drive by using SMART over USB which correctly relays the model number of the drive.

Good luck and happy shucking



edit: also, let me know if you need any pointers - I've done a bunch of these now (all cases intact).

admiraldennis fucked around with this message at 00:01 on Sep 25, 2017

admiraldennis
Jul 22, 2003

FreeNAS/zfs question:

When I last set up a fileserver (using md), I always made the RAID member partitions smaller than the drive itself (like 0.5% smaller than the rated size) to account for variances in actual drive size, in case I needed to replace a drive with a different model down the road. Is this still a thing that needs to be done? Does FreeNAS/zfs handle it automatically? Or are drive sizes standardized these days?
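For anyone who wants to do it the old md way regardless, the arithmetic is just "rated size minus a reserve, rounded down to an alignment boundary". A sketch using the 0.5% figure above (`member_partition_bytes` is a made-up helper name):

```python
def member_partition_bytes(rated_bytes, reserve=0.005, align=1 << 20):
    """RAID member partition sized ~0.5% under rated capacity and
    rounded down to a 1MiB boundary, so a replacement drive that's
    slightly smaller can still join the array."""
    target = int(rated_bytes * (1 - reserve))
    return target - (target % align)

size = member_partition_bytes(8 * 10**12)  # an "8TB" drive
print(size)  # a bit under 8e12 bytes, 1MiB-aligned
```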

admiraldennis
Jul 22, 2003


alecm posted:

Thank you so much for this. I managed to find eight drives across two different Best Buys, all within the date ranges you mentioned and all red label Thai drives. Strangely, I saw no Chinese drives at all.

I found this PDF guide on a very straightforward way to open the case, and was able to do the first one in about 5 minutes. I couldn't find the original post to attribute credit, but it seems to have originated at the [H]ard forums. It's the clearest, most concise breakdown of what to do I've seen. At the very least, it's better than watching some dude narrate his 15-minute struggle to open the case over the course of a YouTube video.

No problem! Glad it worked out for you.

That's a good shucking guide. I used full credit-card-sized-things to hit the tabs (I have a stack of old ones for stuff like this), and a thin + thick guitar pick (latter was actually from a toolkit) for the prying.

emocrat posted:

OK based on this post I just ran out and bought 2, both had the 08/26/2019 date and from Thailand. Best Buy says I got 15 days to return them, so I figured it also verify using the SMART thing you mentioned above before prying them open. So, since you offered pointers, can you briefly tell me how to do that? Is this with Crystaldisk or what? What info am I looking for? Thanks!

CrystalDiskInfo should be able to do this no problem - https://crystalmark.info/software/CrystalDiskInfo/index-e.html. Sometimes SMART over USB has driver requirements. If that doesn't show your model number, you may need to try installing a special driver.

FWIW, I did three with that warranty date - all Red label drives mfg 2017-06-18.

admiraldennis fucked around with this message at 22:39 on Sep 27, 2017

admiraldennis
Jul 22, 2003


Paul MaudDib posted:

I think Infiniband is a big plus as far as IOPS is concerned but simply having a dedicated link between my desktop and my ZFS server is the best. It is incredibly annoying how gigabit bottlenecks you - that's literally just one HDD worth of throughput, let alone heavy IOPS or multiple systems all shoving data through. We literally have faster throughput in the USB standard nowadays, for gently caress's sake. For a NAS that could easily be running 2-4 drives?

Are folks using actual IB instead of 40GbE? FreeNAS's manual seems to indicate that it doesn't support IB at all. I just eBay'd two Mellanox 40GbE cards (IB not supported) since it seemed like that's all that FreeNAS supported... but maybe you aren't using FreeNAS, or have it working unofficially. Is IB a better route than 40GbE?

admiraldennis
Jul 22, 2003

Is there any reason why this well-priced eBay item wouldn't work for 40GbE between two ConnectX-3 cards? It says "Infiniband" and is also maybe too good of a deal...

admiraldennis
Jul 22, 2003

I'm all-of-a-sudden having intermittent connectivity issues between my FreeNAS box and my PC (setup: 40GbE, Mellanox ConnectX-3 on either end, 2x Finisar QSFP+ [FTL414QB2N-E5], 10M 12-core OM3).

Things were working fine for a few weeks, now suddenly I'm having problems. It looks like this on a ping:

code:
Pinging 10.1.1.1 with 32 bytes of data:
Reply from 10.1.1.1: bytes=32 time<1ms TTL=64
Reply from 10.1.1.1: bytes=32 time=176ms TTL=64
Request timed out.
Reply from 10.1.1.1: bytes=32 time<1ms TTL=64
Reply from 10.1.1.1: bytes=32 time=177ms TTL=64
Reply from 10.1.1.1: bytes=32 time=176ms TTL=64
Reply from 10.1.1.1: bytes=32 time<1ms TTL=64
Reply from 10.1.1.1: bytes=32 time=177ms TTL=64
Reply from 10.1.1.1: bytes=32 time=177ms TTL=64
Reply from 10.1.1.1: bytes=32 time=177ms TTL=64
Reply from 10.1.1.1: bytes=32 time=177ms TTL=64
Reply from 10.1.1.1: bytes=32 time=178ms TTL=64
Reply from 10.1.1.1: bytes=32 time=176ms TTL=64
Reply from 10.1.1.1: bytes=32 time=177ms TTL=64
Request timed out.
Request timed out.
Reply from 10.1.1.1: bytes=32 time<1ms TTL=64
Reply from 10.1.1.1: bytes=32 time<1ms TTL=64
Reply from 10.1.1.1: bytes=32 time<1ms TTL=64
Reply from 10.1.1.1: bytes=32 time<1ms TTL=64
Reply from 10.1.1.1: bytes=32 time<1ms TTL=64
(then it's fine for a while, repeat)
Anyone ever run into something similar? Nothing of note has changed with my setup, and this makes things pretty much unusable: games freeze up (my games are on my NAS) and Explorer hangs. Everything is peachy on 1GbE (standard LAN), so the problem seems isolated to something with the fiber setup.

edit: rebooting FreeNAS seemed to do the trick, at least for now? I had restarted the Windows machine a bunch of times, assuming that was the culprit. I don't really want to be restarting the NAS box on a regular basis as it has a number of clients :(. Uptime was only 16 days on the box; I checked top and the CPU wasn't pegged or anything when the dropouts were occurring, all looked good.

admiraldennis fucked around with this message at 01:39 on Jan 3, 2018

admiraldennis
Jul 22, 2003


redeyes posted:

Any time this happens it's been the switch but I am talking utterly generically with no 1st hand-knowledge of that high end poo poo.

No switch here, just a direct line.

And I'm beginning to suspect FreeBSD's Mellanox drivers? Though I'm hoping not. Rebooting the FreeNAS box did seem to fix it in the immediate term (unplugging/replugging the QSFPs, reseating the cables, and rebooting the Windows machine all did not)... next time maybe I'll try just reloading the driver.

admiraldennis
Jul 22, 2003


necrobobsledder posted:

You already rebooted the box but I’d have been interested in TX error rates and packet fragmentation counts. Also would be curious about the sysctl flags for your system.

I can check that out if it happens again... how do I check these rates?

Here's my sysctl.conf:
code:
kern.metadelay=3
kern.dirdelay=4
kern.filedelay=5
kern.coredump=1
kern.sugid_coredump=1
vfs.timestamp_precision=3
net.link.lagg.lacp.default_strict_mode=0

# Force minimal 4KB ashift for new top level ZFS VDEVs.
vfs.zfs.min_auto_ashift=12
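Context on that last line, as I understand it: ashift is the log2 of ZFS's minimum block size, so 12 pins allocations to 2^12 = 4096-byte blocks to match 4K "Advanced Format" drives, and the tunable stops ZFS from auto-detecting 512 on drives that misreport their sector size.

```python
# ashift is log2(sector size); min_auto_ashift=12 sets the floor at 4KiB
for ashift in (9, 12, 13):
    print(f"ashift={ashift} -> {2**ashift}-byte blocks")
# ashift=9  -> 512   (legacy 512n drives)
# ashift=12 -> 4096  (4K "Advanced Format" drives, e.g. these WD Reds)
# ashift=13 -> 8192  (some SSDs)
```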

admiraldennis
Jul 22, 2003


Harik posted:

Out of curiosity - anyone moving data off their NAS at better than Gig-E? Borderline question between here and the networking thread, but I figure the people here would be more relevant to what kind of performance you can expect from ebay'd gear. Specifically looking at 40gb infiniband gear because it's so cheap, and since the use is linux to linux use NFS/RDMA.

Yeah - I use a 40GbE link between my NAS and my gaming PC (pair of Mellanox ConnectX-3); and a 10GbE link between my NAS and my MacBook Pro (using Chelsio T520-CR - has great Mac drivers; there's no cheap used market for 40GbE cards for Mac). All eBay gear. I'm using Ethernet not Infiniband.

If you're just streaming media for consumption, Gigabit is totally sufficient, even for 4k. But if you want to access project files (e.g. video editing, video game assets, etc), run games, run VMs, etc directly from your NAS, 10GbE minimum is much nicer. I run all of my games (and large projects) directly off of the NAS, and it's pretty great overall.

My pool consists of 2x raid-z2 of 6x 8TB WD Red Drives. FWIW, CrystalDiskMark numbers over 40GbE look like this:


(SMB)


(iSCSI)

I access most of my stuff via SMB to gain the benefits of having the files directly on zfs. I keep a few things on an iSCSI NTFS volume that don't like being on a network share (certain games in particular). This mostly works well, except that Windows (10) doesn't always cleanly reconnect to the network share, particularly at startup. iSCSI is never a problem. Not sure why :(

admiraldennis fucked around with this message at 15:14 on Jan 18, 2018

admiraldennis
Jul 22, 2003

The ol' Cyberpower UPS on my NAS PC is reaching EOL. Looking to replace it.

One thing I noticed in my research is that UPS devices tend to have less surge protection than higher-end dedicated surge protectors. This APC, for example, cites 1080 joules. But I can get a SurgeArrest with 4320 joules.

There is a lot of "information" out there about not daisy chaining surge protectors and UPS devices. (FWIW, I have been doing this for ages on my main PC setup without issues. There's barely any outlets in this house. Which also means the PCs are all on the same circuits as window A/Cs, etc, so good surge protection is especially prudent.)

Is this just a "dummy clause" against plugging too much poo poo into a single UPS/outlet/circuit and overloading?

The UPS will be well oversized for my usage, and the only current-drawing device will be the NAS PC. Is there some hidden danger if I were to go Wall <-- UPS <-- SurgeArrest <-- NAS for an extra layer of joule protection?

admiraldennis
Jul 22, 2003


Munkeymon posted:

Did the CrystalMark homepage always have a bunch of really cringeworthy anime schoolgirl crap on it?

are you saying that you don't use the Shizuku Edition??


admiraldennis
Jul 22, 2003


BabyFur Denny posted:

Hmm, I stream 4k to my mobile device and my gf's apartment without issues. 4k is what, like 50Mb/s? Should be easy to handle by almost anything.

You'd think that, but here I am with my "the fastest they offer" "Gigabit" cable connection with.... 25Mbps upstream.

:sigh:
