H110Hawk
Dec 28, 2006

Atomizer posted:

That, uh, completely defeats the purpose of RAID1, but ok!

RAID is not backup. If you haven't touched enough computers to see a RAID controller poo poo the bed and write garbage to all of your mirrored replicas, then please just trust in the mantra: RAID is not backup.

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe

Atomizer posted:

That, uh, completely defeats the purpose of RAID1, but ok!

I use RAID1 myself and it is good if a drive fails, as it's easy to build a new mirror and keep everything running in the meantime. Read performance from two disks is also very good, and my storage is primarily about reads.

vodkat
Jun 30, 2012



cannot legally be sold as vodka
If I'm away from home for a long period of time how feasible is it to manage and use a synology box 100% remote? and are there any potential security issues of doing this that I need to be aware of?

Atomizer
Jun 24, 2007



H110Hawk posted:

RAID is not backup. If you haven't touched enough computers to see a RAID controller poo poo the bed and write garbage to all of your mirrored replicas, then please just trust in the mantra: RAID is not backup.

Yeah, I know, I wrote basically that about RAID on the last page. I've emphasized the importance of backups as something totally unrelated to the concept of RAID.

The point I was making was that RAID1 in particular is supposed to leave you with one good drive when the other goes bad (in a 2-drive setup, of course), so you can rebuild the mirror from the good drive. Obviously there's a worst-case scenario where the whole array gets destroyed, but it's probably not as common an occurrence as you're making it out to be. And yes, back up everything anyway, but if you have a RAID1, a drive dies, and you replace it but manually restore from backup, then I guess that defeats half the point of that array in the first place (the other half being availability).
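(For what it's worth, that replace-and-rebuild flow is exactly what Linux software RAID automates; a minimal sketch with mdadm, device names hypothetical:)

code:
# mark the dead member failed and pull it from the mirror
sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# add the replacement; md resyncs it from the surviving drive
sudo mdadm /dev/md0 --add /dev/sdc1
# watch the rebuild progress
cat /proc/mdstat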

Devian666 posted:

I use RAID1 myself and it is good if a drive fails, as it's easy to build a new mirror and keep everything running in the meantime. Read performance from two disks is also very good, and my storage is primarily about reads.

This sounds like a normal experience; I think Hawk is just being super-pessimistic, which is understandable when you're talking about data integrity.

forbidden dialectics
Jul 26, 2005





H110Hawk posted:

RAID is not backup. If you haven't touched enough computers to see a RAID controller poo poo the bed and write garbage to all of your mirrored replicas, then please just trust in the mantra: RAID is not backup.

I back up all of my critical files onto a striped 1000-drive 1.44MB floppy array. :shrug:

H110Hawk
Dec 28, 2006

forbidden dialectics posted:

I back up all of my critical files onto a striped 1000-drive 1.44MB floppy array. :shrug:

It technically counts. :getin:

Also I missed who you were and was being a bit hyperbolic on purpose. Hard to keep track of who has what dumb ideas on the internet.

caberham
Mar 18, 2009

by Smythe
Grimey Drawer
Oldie but goodie

https://youtu.be/gSrnXgAmK8k

2-bay RAID1 is alright, but 4-bay RAID5 is better for main storage :smug:

caberham fucked around with this message at 04:23 on Nov 8, 2018

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
I still laugh my balls off over a stripe of 3 RAID5 arrays. What a goddamn moron.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
That one never gets old lmao

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe

Matt Zerella posted:

I still laugh my balls off over a stripe of 3 RAID5 arrays. What a goddamn moron.

Well, it is a good technique for increasing the probability of failure. Putting them all on one RAID controller and having it fail is a top-notch effort.

Buffis
Apr 29, 2006

I paid for this
Fallen Rib
Hey thread, need some advice.

I have an old (2013 or so?) HP MicroServer Gen8 that I want to use partially as a NAS.
It runs the HP image of ESXi 6.0 and has the disks configured as RAID1 using the onboard B120i RAID controller (software based).

Since I currently just have 10GB of (ECC) RAM in the device, and I run two Linux servers on it, I'd prefer not to spend more than about 2GB of RAM on the NAS VM.

People typically recommend FreeNAS for home NAS stuff, but I don't think it's a great fit for me due to both the RAM limit and my existing RAID1, since I think ZFS prefers direct knowledge of the underlying disk structure.

I would be OK with just using ext4 or something.

What is the preferred non-FreeNAS distro for doing home NAS stuff?

BlankSystemDaemon
Mar 13, 2009



forbidden dialectics posted:

I back up all of my critical files onto a striped 1000-drive 1.44MB floppy array. :shrug:
Can't use ZFS on floppies; it needs at least 64MB of free space for uberblock allocation and other things. :smugbert:
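You can check that floor yourself with a file-backed vdev; a quick sketch (paths hypothetical):

code:
# below the minimum, zpool create refuses the device
truncate -s 32M /tmp/tiny.img
sudo zpool create tinypool /tmp/tiny.img   # fails: device is smaller than the minimum size
# 64M is the floor, and works
truncate -s 64M /tmp/ok.img
sudo zpool create okpool /tmp/ok.img
sudo zpool destroy okpool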

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

D. Ebdrup posted:

Can't use ZFS on floppies; it needs at least 64MB of free space for uberblock allocation and other things. :smugbert:

Use LVM to stripe sets of ~256 floppies and use those for ZFS.
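Roughly what that abomination would look like, assuming the kernel even enumerated that many floppy drives (device names hypothetical; LVM caps a single stripe at 128 devices, so ~256 floppies means two sets):

code:
# every floppy becomes a physical volume
pvcreate /dev/fd{0..127}
vgcreate floppyvg /dev/fd{0..127}
# one logical volume striped across all 128 of them
lvcreate -i 128 -I 64 -l 100%FREE -n floppystripe0 floppyvg
# hand the stripe to ZFS as a single, doomed vdev
zpool create doompool /dev/floppyvg/floppystripe0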

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

D. Ebdrup posted:

Can't use ZFS on floppies; it needs at least 64MB of free space for uberblock allocation and other things. :smugbert:

Daisy chained parallel port zip disks then?

Sheep
Jul 24, 2003
Best Buy has 10TB Easystores for $180 right now.

BlankSystemDaemon
Mar 13, 2009



taqueso posted:

Use LVM to stripe sets of ~256 floppies and use those for ZFS.
Even with GEOM, I wouldn't do that. I remember how often a brand new set of floppies for MS-DOS and Win3.1x would fail - and that was with roughly a factor of 10 fewer drives.

Methylethylaldehyde posted:

Daisy chained parallel port zip disks then?
Is it even possible to buy enough Zip readers nowadays?

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

D. Ebdrup posted:

Even with GEOM, I wouldn't do that. I remember how often a brand new set of floppies for MS-DOS and Win3.1x would fail - and that was with roughly a factor of 10 fewer drives.

Is it even possible to buy enough Zip readers nowadays?

Go to enough Storage Wars-style storage unit auctions and you're bound to run into one full of mid-2000s-era hoarded computer stuff: a huge pile of random I/O cards, a heaping pile of CRTs, maybe some VCRs, or one of those big TVs on a cart!

DIEGETIC SPACEMAN
Feb 25, 2007

fuck a car
i'll do a mothafuckin' walk-by
Quick question: is a Core i3-4150 good enough to repurpose into a simple unRAID Plex/file server? All I need is something that can handle one or two streams at a time and host drives for a handful of nightly automated backups.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

DIEGETIC SPACEMAN posted:

Quick question: is a Core i3-4150 good enough to repurpose into a simple unRAID Plex/file server? All I need is something that can handle one or two streams at a time and host drives for a handful of nightly automated backups.

Absolutely. My Haswell i5 handles way more than that like a champ.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
For those of you feeling the itch for more hard drives, Best Buy has the WD EasyStore 10TB for $180 as part of their early Black Friday deals. Not quite the deal the 8TB for <$140 was (and probably will be again), but it's a very good price for 10TB.

Baronash
Feb 29, 2012

So what do you want to be called?
$180 for 10TB works out to $18/TB, the equivalent of $144 for 8TB, with the added benefit of increased density if you're limited by bays.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I have an HP EX920 1TB NVMe drive. For a while, there's been a question about whether running at PCIe x2 actually significantly hurts consumer workloads.

Give me some iozone or fio command-line arguments. My proposed protocol here is that I will run the commands on my x4 slot, then on the x2, then again on the x4, just for drive-load comparison.
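A plausible starting point, assuming Linux with libaio (point --filename at whatever node the drive shows up as):

code:
# sequential reads, big blocks, deep queue: stresses link bandwidth, where x2 vs x4 should show
fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based
# 4k random reads at queue depth 1: closer to typical consumer workloads
fio --name=randread --filename=/dev/nvme0n1 --rw=randread --bs=4k --iodepth=1 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based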

Incessant Excess
Aug 15, 2005

Cause of glitch:
Pretentiousness
I want to sell some of my old drives; is there an easy-to-use tool that'll let me permanently wipe the files?

EDIT: forgot to mention, I'm looking for a Windows-based solution

Incessant Excess fucked around with this message at 18:36 on Nov 12, 2018

SamDabbers
May 26, 2003



Incessant Excess posted:

I want to sell some of my old drives; is there an easy-to-use tool that'll let me permanently wipe the files?

What OS? In Linux it's as easy as
code:
# overwrite the entire device with zeros; this irrecoverably destroys everything on it
sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress
Just replace "sdX" with the drive you want to wipe, which you can figure out with lsblk.
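If you want to sanity-check the wipe afterwards, cmp against the zero device does the trick; a sketch:

code:
# compares byte-by-byte; a fully-zeroed drive runs to the end and reports
# "cmp: EOF on /dev/sdX", while any leftover data shows up as a difference
sudo cmp /dev/zero /dev/sdX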

H110Hawk
Dec 28, 2006

Incessant Excess posted:

I want to sell some of my old drives; is there an easy-to-use tool that'll let me permanently wipe the files?

EDIT: forgot to mention, I'm looking for a Windows-based solution

DBAN, one pass of all 0s, if they're rotational.

If they are SSDs, you must use ATA/SCSI Secure Erase (it's a protocol-level command issued to the drive, which then handles the whole thing) to guarantee all of the data is erased. If close enough is good enough for you, see above. I would do close enough.
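(On Linux, the ATA flavor is typically issued with hdparm from a live environment; a minimal sketch, assuming the drive isn't security-frozen and using a throwaway password:)

code:
# confirm the drive supports Secure Erase and isn't frozen
sudo hdparm -I /dev/sdX | grep -A8 Security
# set a temporary security password (the spec requires one)
sudo hdparm --user-master u --security-set-pass p /dev/sdX
# fire the erase; the drive firmware wipes itself internally
sudo hdparm --user-master u --security-erase p /dev/sdX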

Disconnect all disks you do not want erased (including anything exposed over iSCSI, etc.).

Anything else is FUD.

BlankSystemDaemon
Mar 13, 2009



Don't read this recent change to FreeBSD's rm if you don't wanna know way too much about disks, controllers, caches, filesystems, and the handling of files, or how much systems programmers care about trying to do it right.

Rusty
Sep 28, 2001
Dinosaur Gum
Will a 200-watt power supply be fine for two 4TB drives? And do drives alone generate enough heat that a really small case would be an issue, one that isn't designed to hold two full-sized drives? I have this i3 with 16GB of RAM that I don't do anything with, so I was thinking of adding two large drives and FreeNAS (or unRAID).

This is what it looks like:

https://imgur.com/a/E4Dw5DC

Clark Nova
Jul 18, 2004

200W is fine for a few drives, assuming you don't have a gaming GPU shoved in there as well.

H110Hawk
Dec 28, 2006

Rusty posted:

Will a 200-watt power supply be fine for two 4TB drives? And do drives alone generate enough heat that a really small case would be an issue, one that isn't designed to hold two full-sized drives? I have this i3 with 16GB of RAM that I don't do anything with, so I was thinking of adding two large drives and FreeNAS (or unRAID).

This is what it looks like:

https://imgur.com/a/E4Dw5DC

It's the RAM you want to be worried about.

Atomizer
Jun 24, 2007



Rusty posted:

Will a 200-watt power supply be fine for two 4TB drives? And do drives alone generate enough heat that a really small case would be an issue, one that isn't designed to hold two full-sized drives? I have this i3 with 16GB of RAM that I don't do anything with, so I was thinking of adding two large drives and FreeNAS (or unRAID).

This is what it looks like:

https://imgur.com/a/E4Dw5DC

Yes, and probably not, respectively. The last time I was working with an external 3.5" HDD connected to my Kill-A-Watt it read about 10 W, IIRC. Everything else is fine with that PSU. I'd be more concerned about figuring out how you're gonna fit two drives in there than the amount of power/heat.

H110Hawk posted:

It's the RAM you want to be worried about.

:rolleyes:

Rusty
Sep 28, 2001
Dinosaur Gum

Atomizer posted:

Yes, and probably not, respectively. The last time I was working with an external 3.5" HDD connected to my Kill-A-Watt it read about 10 W, IIRC. Everything else is fine with that PSU. I'd be more concerned about figuring out how you're gonna fit two drives in there than the amount of power/heat.
Thank you; yes, my thoughts as well. I have two full-sized drives I can test first before I buy two large drives. Seems like it will fit; it has two drives in it now at the bottom, but obviously not full-sized, and I think I can mount one in the DVD enclosure.

Atomizer
Jun 24, 2007



Rusty posted:

Thank you; yes, my thoughts as well. I have two full-sized drives I can test first before I buy two large drives. Seems like it will fit; it has two drives in it now at the bottom, but obviously not full-sized, and I think I can mount one in the DVD enclosure.

Ah, if you have a spare full-size 5.25" bay, you can use something like this (and I have that exact one in a Shuttle XPC) to get a proper 3.5" mounting point (plus a couple of 2.5" ones!). They also make a 4x2.5"-to-5.25" version if you have enough SATA ports to just use laptop-size HDDs and SSDs.

H110Hawk
Dec 28, 2006

What? RAM and CPUs draw way more peak power than 2 spinning disks. As you said, 20W for both disks vs. easily 100W for the CPU + RAM. I don't think 16GB of RAM counts as a poor life choice.

Internet Explorer
Jun 1, 2005





H110Hawk posted:

What? RAM and CPUs draw way more peak power than 2 spinning disks. As you said, 20W for both disks vs. easily 100W for the CPU + RAM. I don't think 16GB of RAM counts as a poor life choice.

You didn't say CPU and RAM, you said RAM.

Internet Explorer fucked around with this message at 06:58 on Nov 13, 2018

redeyes
Sep 14, 2002

by Fluffdaddy
I have a Netgear GS108T managed gigabit switch. 4 of its ports are on VLAN 1, which is sort of my default VLAN, and the other 4 ports are on VLAN 2, which is my guest network. For some reason VLAN 2 only gets about 25MB/s while VLAN 1 gets the full gigabit, around 100MB/s. Does anyone know wtf might be going on? Crap switch?

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

H110Hawk posted:

What? RAM and CPUs draw way more peak power than 2 spinning disks. As you said, 20W for both disks vs. easily 100W for the CPU + RAM. I don't think 16GB of RAM counts as a poor life choice.

Your 100W there breaks down to 95W for the CPU and 5W for the RAM; it's pretty safe to just worry about the former, as the spinning disks actually will draw more power than the RAM.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

D. Ebdrup posted:

Don't read this recent change to FreeBSD's rm if you don't wanna know way too much about disks, controllers, caches, filesystems, and the handling of files, or how much systems programmers care about trying to do it right.

People don't need to connect to the internet to get on the internet. So we replaced the eth0 up command with one that actually connects you to a loopback device that replies to every single packet with a "nice!" and "Rick and Morty is a good show"

H110Hawk
Dec 28, 2006

EVIL Gibson posted:

People don't need to connect to the internet to get on the internet. So we replaced the eth0 up command with one that actually connects you to a loopback device that replies to every single packet with a "nice!" and "Rick and Morty is a good show"

You joke. On Solaris you had to "plumb" interfaces before they would work.
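For the unfamiliar, it went something like this (interface name hypothetical):

code:
# attach the STREAMS plumbing so the kernel even exposes the interface
ifconfig hme0 plumb
# only then can you address it and bring it up
ifconfig hme0 inet 192.168.1.10 netmask 255.255.255.0 up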

Eletriarnation posted:

Your 100W there breaks down to 95W for the CPU and 5W for the RAM; it's pretty safe to just worry about the former, as the spinning disks actually will draw more power than the RAM.

Look at me, I am wrong on the internet. I was remembering back to anecdotal evidence from 5? years ago, where upgrading RAM in a rack of servers caused them to blow breakers. We had added amps to the rack by doubling the DIMM count. Guess it was a red herring, or voltages/types have dropped dramatically in wattage. Could it also have been that they were able to work harder, so their CPUs drew more power?

Now I know.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Enterprise workloads are so different from consumer or home NASes that it's not fair to compare them much, I'd argue. The biggest issues in enterprise DC designs are power efficiency, memory throughput metrics (Facebook's biggest issue, according to Brendan Gregg at least), and raw network throughput at beyond-100GbE speeds. The biggest issues for consumers in a home storage system may also be power related, but usually for cost reasons rather than density.

Meanwhile, Google folks have been saying they're running into electrical code issues where they can't just add more racks. It's not that power cost or cooling itself is the issue - they've hit a wall of bureaucracy/code, which explains the expansion into alternative power beyond the liberal brownie points.

At home, I'm more concerned about the Best Buy EasyStore sales than springing for some Gold drives that won't bust on me (I treat my non-work hours as $0/hr billable, which is probably really inaccurate). Then again, at work we're budget-strapped in odd ways, so the $25k/mo we spend on some bare metal critical for the business is scrutinized more than the $70k+/mo we blow in AWS on an 80% overprovisioned environment.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

H110Hawk posted:

Look at me, I am wrong on the internet. I was remembering back to anecdotal evidence from 5? years ago, where upgrading RAM in a rack of servers caused them to blow breakers. We had added amps to the rack by doubling the DIMM count. Guess it was a red herring, or voltages/types have dropped dramatically in wattage. Could it also have been that they were able to work harder, so their CPUs drew more power?

It's an understandable mixup considering that servers have a lot more RAM, RDIMMs (or FBDIMMs) use a lot more power, and if your old servers were old enough to use DDR2 or DDR1 then between higher current and higher voltage your wattage goes up substantially from that too.

Standard DDR3-1600 @ 1.5V is about 2.5-3W for an 8GB DIMM, and DDR4 is going to be less, from what I'm seeing.

e:

necrobobsledder posted:

Google folks have been saying they're running into electrical code issues where they can't just add more racks. It's not that power cost or cooling itself is the issue - they've hit a wall of bureaucracy/code, which explains the expansion into alternative power beyond the liberal brownie points.

I work for a networking vendor with lots of labs full of 10kW+ boxes and was told several years ago when I started that at this location we were basically drawing as much power into our buildings (at least the ones which have large labs) as the local power company would allow. I cannot imagine this situation has improved much, considering how power density has increased per-RU.

The labs also run into occasional infrastructure issues with how much power they can deliver to a given area because they were designed several years ago around a lot less average draw per rack. I've tripped a breaker before by rebooting two full-rack chassis at once and causing all eight fan trays to spin up to full speed at the same time.

Eletriarnation fucked around with this message at 18:06 on Nov 13, 2018
