DoombatINC
Apr 20, 2003

Here's the thing, I'm a feminist.





rufius posted:

You might also try Goo Gone, but test it on a small spot first. Sometimes it’ll really eat into poo poo.

Yeah when I'm trying to get that rubbery sticky poo poo off plastics my usual order of operation is:

1) maybe it'll go willingly - plastic scraper, fingernails, the sticky side of some packing tape, a little isopropyl

2) it can be a little messed up so long as the rubber is gone - goo gone, goof off, lighter fluid, an old toothbrush

3) gently caress it just gently caress it - acetone nail polish remover, wire brush, regrettable enthusiasm

4) I give up - moonless night, shovel, second location


RoboBoogie
Sep 18, 2008
Question:


I have this motherboard (https://www.newegg.com/p/N82E16813130695), is there a SATA expansion card that I should get? Right now I have 1 SSD and 3 x 16 TB drives in a Z1 configuration. I plan to add 3 x 14 TB in Z1 in the same pool, and if the ability to expand a pool comes around, I would like to add one more 14 and 16 TB drive to the Z1 config.

Out of all the slots, I am only using one PCIe slot, with an Nvidia graphics card that I was hoping to pass through so I can reclaim my desktop (might be a bad idea since it's an Intel Pentium G3258).

yoloer420
May 19, 2006

rufius posted:

You might also try Goo Gone, but test it on a small spot first. Sometimes it’ll really eat into poo poo.

What material should the container I use to put this poo poo in be made of? Like, what does it definitely not dissolve?

VelociBacon
Dec 8, 2009

yoloer420 posted:

What material should the container I use to put this poo poo in be made of? Like, what does it definitely not dissolve?

In my experience goo gone isn't that crazy, it comes in a plastic bottle so...

Wild EEPROM
Jul 29, 2011


oh, my, god. Becky, look at her bitrate.

RoboBoogie posted:

Question:


I have this motherboard (https://www.newegg.com/p/N82E16813130695), is there a SATA expansion card that I should get? Right now I have 1 SSD and 3 x 16 TB drives in a Z1 configuration. I plan to add 3 x 14 TB in Z1 in the same pool, and if the ability to expand a pool comes around, I would like to add one more 14 and 16 TB drive to the Z1 config.

Out of all the slots, I am only using one PCIe slot, with an Nvidia graphics card that I was hoping to pass through so I can reclaim my desktop (might be a bad idea since it's an Intel Pentium G3258).

An LSI 9207-8i or equivalent (e.g. IBM, Supermicro, or Cisco branded) on eBay is like $25 including the cables you need.

They're almost always already flashed to IT firmware, which is what you want.
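If you want to sanity-check what a used card arrived with, the tool typically used for these SAS2008/2308-based cards is LSI's sas2flash utility - a quick sketch:
pre:
# list every LSI SAS2 controller in the system; the firmware product ID
# shows whether the card is running IT (plain HBA) or IR (RAID) firmware
sas2flash -listall
If it turns out to be IR firmware, the usual 9211/9207 crossflashing guides cover converting it to IT mode.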

Ineptitude
Mar 2, 2010

Heed my words and become a master of the Heart (of Thorns).
Buying another SSD for my PC to add more storage. Thinking 1TB is enough.

Are there any particular brands/product lines that are the bee's knees now, or can I just buy whatever?
I don't need ultra turbo performance, but longevity and durability would be nice.

My local store has Samsung 870 QVO, Kingston KC600 and A400 and WD Red SA500, in various sizes.

VelociBacon
Dec 8, 2009

Ineptitude posted:

Buying another SSD for my PC to add more storage. Thinking 1TB is enough.

Are there any particular brands/product lines that are the bee's knees now, or can I just buy whatever?
I don't need ultra turbo performance, but longevity and durability would be nice.

My local store has Samsung 870 QVO, Kingston KC600 and A400 and WD Red SA500, in various sizes.

I like the WD Blues or Samsung but anything by like Crucial is probably fine also, same with Kingston. Samsung is the gold standard to me but the last few drives I've bought were WD because it's no different really.

Kibner
Oct 21, 2008

Acguy Supremacy
I need some guidance in setting up my first home NAS. I mostly need help in figuring out an optimal ZFS configuration.

I primarily will be storing ripped movies and music as well as using something like LANCache to cache local downloads. My partner will also be using it as storage for image and possibly small video editing. The machine will also be a home lab (SQL, game servers, FoundryVTT, maybe a PiHole). It will be using the retired hardware from my previous PC.

CPU: 5950x
RAM: 32GB of ECC
GPU: 1070 (not really important since this will be running headless after initial OS install)
Motherboard: ASUS Pro WS X570-ACE (I have confirmed that this does report memory errors to the OS)
Case: Silverstone FT02
SSD: Optane 905p 480GB

If every PCIe slot in the motherboard is occupied, these are the speeds of all storage slots that are available:
  • PCIe x16_1: x8 (PCIe 4.0)
  • PCIe x16_2: x8 (PCIe 4.0)
  • PCIe x16_3: x8 (PCIe 4.0)
  • M.2_1: x4 (PCIe 4.0)
  • M.2_2: x2 (PCIe 4.0)
  • 4x SATA 6.0 Gb/s
  • 1x U.2 NVMe (or 4 SATA devices)

The case supports these physical mounting points:
  • 5x 5.25" (I plan on putting 3.5" hdd adapters in each of these)
  • 5x 3.5"
  • 1x 2.5"

My current plan is to get a 2TB (or maybe larger???) M.2 PCIe 4.0 drive for the OS, keep the Optane around for ZFS to use as metadata storage or some type of caching (not really sure what the best use is), and then 10 really high-capacity HDDs.

Would there be a good use for a 2.5" SATA drive?

Should I try to get all 10 storage drives attached to PCIe lanes instead of using the SATA ports?

Should I use a single pool for the 10 storage drives and use raidz3?

Any other considerations I should make?

e: crap, a friend linked me a diagram and I might need to change things up a bit so that the chipset's PCIe 4.0 x4 lanes aren't a bottleneck for things hanging off of it:



e2: looks like I want to put as much storage stuff on PCIe x16_1, PCIe x16_2, and M.2_1 as possible

Kibner fucked around with this message at 23:10 on Sep 12, 2023

BlankSystemDaemon
Mar 13, 2009



For what it's worth, PCIe 4.0 x4 is almost 63Gbps in bandwidth when the 128b/130b overhead of PCIe 3.x+ has been figured into it, and since spinning rust tends to max out at ~160Mbps, there isn't much need to worry - and even if you can only find a PCIe 3.0 SAS HBA, that's still over ~200 pieces of spinning rust.
Grabbing a single 9300-16i, which lets you plug in 16 disks in total using four SATA breakout cables, seems like your best option.

For metadata storage in ZFS (allocation classes using the special vdev, which should consist of a minimum of two mirrored flash devices), the thing that's important is high write endurance as well as capacity.
That typically means sticking with MLC over something like TLC or QLC, choosing a datacenter-focused SSD, or overprovisioning a consumer SSD (though you'll want to check that this can be done, as not every SSD has the option).
The Optane SSD could be an option if you can find an extra one to mirror it with, because if the special vdev poofs then your data is effectively non-existent.

I'm not sure I see a point of using a 2.5" SATA drive for anything, unless you want a scratch drive to write the torrent data from the FreeBSD ISOs to, instead of setting up a special ZFS dataset.

A single pool with 10 disks and raidz3 gives you an uneven number of striped data disks, which some say is suboptimal (though I've never actually understood why) - but 11 disks for raidz3 is a very common setup, because it gets you a staggeringly low 24% space used for the distributed parity.
I know I've linked it before, but Ahrens' piece on raidz stripe width is worth reading.
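Just as a sketch of what an 11-wide raidz3 plus a mirrored special vdev ends up looking like at creation time (device names here are made up, adjust to taste):
pre:
# 11-wide raidz3 data vdev, plus a mirrored special vdev for metadata
zpool create tank \
    raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 \
    special mirror nvd0 nvd1
The special vdev holds pool-critical data, which is why it has to be at least a two-way mirror of flash.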

BlankSystemDaemon fucked around with this message at 12:32 on Sep 13, 2023

Kibner
Oct 21, 2008

Acguy Supremacy
Thanks a bunch! I got some advice on another forum, as well, and between the two of you, I think I have enough information to proceed.

IOwnCalculus
Apr 2, 2003





BSD already hit all the big points, especially the fact that you'll never run into PCIe limitations on spinning disk unless you're trying very hard to cram everything into a single lane.

Depending on your plans for data growth, I would maybe consider doing two 5-disk vdevs instead of one 10-disk vdev, which would allow you to replace five drives with larger ones in the future and then expand the pool to get the additional space, instead of having to replace all 10 before being able to do that. Of course while I say that, BSD's point about raidz3 has me very tempted to do 11-drive raidz3s on the restructure I'm doing on my server right now, because after I'm all done I'll have so many "extra" drives I won't need to expand for a very long time.

Either way, I would definitely choose a drive controller / adapter arrangement that makes it feasible for you to add a new drive without pulling one first. This makes future expansion or failed drive replacement much easier since you can pop the new drive in, do a zpool replace, and only remove the old drive after that's all settled.
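A minimal sketch of that replace-in-place flow (pool and device names made up; on Linux you'd usually point at /dev/disk/by-id paths instead):
pre:
# let the pool grow automatically once every drive in a vdev is bigger
zpool set autoexpand=on tank

# with the new disk physically installed, replace without pulling anything;
# the old disk stays online and keeps its redundancy until the resilver finishes
zpool replace tank da3 da11
zpool status tank    # pull da3 only once the resilver has completed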

Wild EEPROM
Jul 29, 2011


oh, my, god. Becky, look at her bitrate.
Which OS are you planning on using? With TrueNAS at least, it is best to use a small SSD for the OS, since it won't be used for anything else.

You can buy things like a 32GB Optane drive for like $10 now.

Wibla
Feb 16, 2011

The last time I maxed out the bus with spinning rust was over 10 years ago, with a 3ware 9500 12-port PCI-X controller in the wrong motherboard :haw:

Ineptitude
Mar 2, 2010

Heed my words and become a master of the Heart (of Thorns).

VelociBacon posted:

Samsung is the gold standard to me

Thanks, Samsung it is!

Kibner
Oct 21, 2008

Acguy Supremacy

Wild EEPROM posted:

Which OS are you planning on using? With TrueNAS at least, it is best to use a small SSD for the OS, since it won't be used for anything else.

You can buy things like a 32GB Optane drive for like $10 now.

TrueNAS Core. I have a 480GB 905p Optane and a 2TB Intel QLC NVMe drive (can't remember the model #).

---

So, I found that I can use some drive cages to expand my storage drive capacity up to 13 drives. Would the play still be a raidz3 over 11 drives with, say, 2 hot spare drives? Or should I consider a pair of 6-drive raidz2 vdevs with just a single hot spare? Are hot spares assigned to the pool or to the vdev?

If it matters, my network is limited to gigabit speeds.

BlankSystemDaemon
Mar 13, 2009



Hot spares are pool-wide, but there’s also the option of using draid with three disks worth of distributed parity and two disks worth of distributed spares, for much faster resilvering, while still getting 8 disks worth of data.

Raidz expansion won’t be available for that, though - and you also lose out on variable record sizes.
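That layout even has a compact spelling in the draid vdev spec - purely a sketch, with made-up device names:
pre:
# draid3:8d:13c:2s = triple parity, 8 data disks per stripe,
# 13 children (physical disks) in the vdev, 2 distributed spares
zpool create tank draid3:8d:13c:2s \
    da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12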

Kibner
Oct 21, 2008

Acguy Supremacy

BlankSystemDaemon posted:

Hot spares are pool-wide, but there’s also the option of using draid with three disks worth of distributed parity and two disks worth of distributed spares, for much faster resilvering, while still getting 8 disks worth of data.

Raidz expansion won’t be available for that, though - and you also lose out on variable record sizes.

Since 13 5.25" drives appears to be the physical limit of the amount of storage I can fit in my case and I don't believe I would want to mix a raidz with SSD storage, I don't think I would need to worry about raidz expansion, right?

I will have to read up what variable record sizes would get me.

BlankSystemDaemon
Mar 13, 2009



Kibner posted:

Since 13 5.25" drives appears to be the physical limit of the amount of storage I can fit in my case and I don't believe I would want to mix a raidz with SSD storage, I don't think I would need to worry about raidz expansion, right?

I will have to read up what variable record sizes would get me.
It's not a bad idea to leave yourself with the option of expanding later on, because you never know what kind of enclosure you'll be moving to next.
If you follow the tenets of the thread title, you'll need it sooner than later.

It's also worth noting that you don't need to take advantage of draid to make use of allocation classes - which was how people mistakenly thought it worked, initially.

Also, I hope you meant 3.5" disks, not 5.25" disks.

Kibner
Oct 21, 2008

Acguy Supremacy

BlankSystemDaemon posted:

It's not a bad idea to leave yourself with the option of expanding later on, because you never know what kind of enclosure you'll be moving to next.
If you follow the tenets of the thread title, you'll need it sooner than later.

It's also worth noting that you don't need to take advantage of draid to make use of allocation classes - which was how people mistakenly thought it worked, initially.

Also, I hope you meant 3.5" disks, not 5.25" disks.

Yeah, I did mean 3.5" disks, hah.

But, yeah, I'm obviously still learning about ZFS. Will look up more about draid vs raidz.

e: found this article: https://arstechnica.com/gadgets/2021/07/a-deep-dive-into-openzfs-2-1s-new-distributed-raid-topology/. It's a couple of years old, but I think it gave me a good summary. I wonder if draid has been used enough for people to determine if it is actually useful for arrays smaller than the 90-disk ones draid was largely originally tested on.

Kibner fucked around with this message at 15:49 on Sep 14, 2023

BlankSystemDaemon
Mar 13, 2009



Kibner posted:

Yeah, I did mean 3.5" disks, hah.

But, yeah, I'm obviously still learning about ZFS. Will look up more about draid vs raidz.
The basic gist is that Intel prototyped it for one of the national laboratories that're currently using ZFS underneath Lustre to store their data (Lawrence Livermore or Oak Ridge, can't remember which one), because they wanted resilvering to be much faster.

The basic idea is laid out in this presentation from one of the OpenZFS developer summits, and this article from Klara gives a bit of a higher-level overview in case you don't want too much detail.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Kibner posted:

I will have to read up what variable record sizes would get me.
Space efficiency.

Default record size is 128KB. If you have a file that's 128KB plus one byte more, without variable record sizes, it'd use 2x128KB. But since it can mix these, it can tailpack the second block and scale it down to the nearest power of 2 that's the same as or larger than the ashift, which would typically be 4KB. So you'd save 124KB in this specific case. (The term for all this is slack space.)

Also, when you're using compression, you want variable record sizes, because here they're used extensively.

AFAIK, dRAID can still do variable record sizes. It's just that the minimum size increases based on width. If you have a dRAID with 8 data drives and ashift=12 (4KB sectors), the minimum record size would be 32KB. If you have a dataset with 1MB record sizes, it could still tailpack and compress.
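For reference, these are the knobs in question - just a sketch, dataset name made up:
pre:
# per-dataset record size; large records suit big sequential media files
zfs set recordsize=1M tank/media
# with compression on, ZFS stores whatever each record compresses down to
zfs set compression=lz4 tank/media
zfs get recordsize,compression tank/media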

Combat Pretzel fucked around with this message at 22:23 on Sep 14, 2023

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

BlankSystemDaemon posted:

For what it's worth, PCIe 4.0 x4 is almost 63Gbps in bandwidth when the 128b/130b overhead of PCIe 3.x+ has been figured into it, and since spinning rust tends to max out at ~160Mbps, there isn't much need to worry - and even if you can only find a PCIe 3.0 SAS HBA, that's still over ~200 pieces of spinning rust.

I just checked to make sure I'm not crazy and using a 14TB WD Elements external drive, I'm getting 226MBps sequential read and write speed in CrystalDiskMark. I assume you're thinking about older, slower drives and additionally mixing up 'b' and 'B' because that's close to 2Gbps, which means your PCIe 4.0 x4 is good for more like 32 pieces of current spinning rust and PCIe 3.0 would get you 16. Still more than you are likely to need, though.
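For anyone following along, the rough arithmetic behind those figures:
pre:
226 MB/s per drive x 8   ~= 1.8 Gb/s, call it ~2 Gb/s
PCIe 4.0 x4 ~= 63 Gb/s   -> 63 / 2    ~= 32 drives
PCIe 3.0 x4 ~= 31.5 Gb/s -> 31.5 / 2  ~= 16 drives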

Eletriarnation fucked around with this message at 22:58 on Sep 14, 2023

Yaoi Gagarin
Feb 20, 2014

Worth noting that the speed on a hard drive also depends on how far from the center the data is

SpartanIvy
May 18, 2007
Hair Elf
If anyone is looking for a cool case for a server with lots of drive spaces and is in the Dallas/Fort Worth area, this listing popped up on my Facebook Marketplace.

$125 but I bet they could be talked down
https://www.facebook.com/marketplace/item/1486476685519639/




Old Lian Li cases are so nice

SpartanIvy fucked around with this message at 02:39 on Sep 15, 2023

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

VostokProgram posted:

Worth noting that the speed on a hard drive also depends on how far from the center the data is

Remember when short stroking a hard drive was a thing?

Moey
Oct 22, 2010

I LIKE TO MOVE IT

fletcher posted:

Remember when short stroking a hard drive was a thing?

I missed that window for the enterprise game, but sometime around 2008 I got to help a buddy short stroke 2x WD VelociRaptor drives in RAID 0.

Those things were hilarious. 2.5" 10k drives with a 3.5" form factor heatsink shell.

SSDs were just coming to the consumer market around the same time, but he wanted more space for OS + game installs.

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice

VostokProgram posted:

Worth noting that the speed on a hard drive also depends on how far from the data center it is

Brain read this post like this.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

SpartanIvy posted:

If anyone is looking for a cool case for a server with lots of drive spaces and is in the Dallas/Fort Worth area, this listing popped up on my Facebook Marketplace.

$125 but I bet they could be talked down
https://www.facebook.com/marketplace/item/1486476685519639/




Old Lian Li cases are so nice

Very nice.

Thanks Ants
May 21, 2004

#essereFerrari


That's held its value amazingly well

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.

fletcher posted:

Remember when short stroking a hard drive was a thing?

Ya - if you don’t optimize for this, you end up edging your hard drive. Works well for bigger loads on the drive.

BlankSystemDaemon
Mar 13, 2009



Eletriarnation posted:

I just checked to make sure I'm not crazy and using a 14TB WD Elements external drive, I'm getting 226MBps sequential read and write speed in CrystalDiskMark. I assume you're thinking about older, slower drives and additionally mixing up 'b' and 'B' because that's close to 2Gbps, which means your PCIe 4.0 x4 is good for more like 32 pieces of current spinning rust and PCIe 3.0 would get you 16. Still more than you are likely to need, though.
What's full, half, quarter stroke? What are seek times on the outer and inner edge of the spindle? What about transfer rates of outside, middle, and inside, and IOPS for random reads at multiples of 512b up to 1MB?

Because if it's anything like this 8TB WD external disk that got shucked, it's very different from what CDM is reporting:
pre:
# diskinfo -cit /dev/ada2
/dev/ada2
        512             # sectorsize
        8001563222016   # mediasize in bytes (7.3T)
        15628053168     # mediasize in sectors
        4096            # stripesize
        0               # stripeoffset
        15504021        # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        WDC WD80EDAZ-11TA3A0    # Disk descr.
        VGH4XM9G        # Disk ident.
        ahcich2         # Attachment
        id1,enc@n3061686369656d30/type@0/slot@3/elmdesc@Slot_02 # Physical path
        No              # TRIM/UNMAP support
        5400            # Rotation rate in RPM
        Not_Zoned       # Zone Mode

I/O command overhead:
        time to read 10MB block      0.069147 sec       =    0.003 msec/sector
        time to read 20480 sectors   1.312151 sec       =    0.064 msec/sector
        calculated command overhead                     =    0.061 msec/sector

Seek times:
        Full stroke:      250 iter in   6.780816 sec =   27.123 msec
        Half stroke:      250 iter in   4.220451 sec =   16.882 msec
        Quarter stroke:   500 iter in   6.305873 sec =   12.612 msec
        Short forward:    400 iter in   1.089289 sec =    2.723 msec
        Short backward:   400 iter in   1.875528 sec =    4.689 msec
        Seq outer:       2048 iter in   0.106992 sec =    0.052 msec
        Seq inner:       2048 iter in   0.160682 sec =    0.078 msec

Transfer rates:
        outside:       102400 kbytes in   0.462083 sec =   221605 kbytes/sec
        middle:        102400 kbytes in   0.575440 sec =   177951 kbytes/sec
        inside:        102400 kbytes in   1.176007 sec =    87074 kbytes/sec

Asynchronous random reads:
        sectorsize:      1087 ops in    3.389757 sec =      321 IOPS
        4 kbytes:         742 ops in    3.731055 sec =      199 IOPS
        32 kbytes:        711 ops in    3.660456 sec =      194 IOPS
        128 kbytes:       658 ops in    3.789022 sec =      174 IOPS
        1024 kbytes:      404 ops in    4.273654 sec =       95 IOPS
A 128kB write on the inside of the spindle while 1MB random reads are also occurring is going to be a lot slower than those ~220MBps that CDM is reporting, even if you stick to quarter strokes.

As for mixing up bytes and bits: PCI Express 3.0 is 985MBps aka 7.88Gbps per lane, minus the 3% overhead that comes from having to send 230 bytes for every 228 bytes of data because of line encoding.
PCI Express 4.0 doubled that and didn't meaningfully change anything else.

SpartanIvy posted:

Old Lian Li cases are so nice
Heck yeah they are.

fletcher posted:

Remember when short stroking a hard drive was a thing?
Well, as you can see from above, stroking is still a thing! :science:

BlankSystemDaemon fucked around with this message at 20:09 on Sep 15, 2023

IOwnCalculus
Apr 2, 2003





rufius posted:

Ya - if you don’t optimize for this, you end up edging your hard drive. Works well for bigger loads on the drive.

brb feeding my NAS L-arginine

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

BlankSystemDaemon posted:

What's full, half, quarter stroke? What are seek times on the outer and inner edge of the spindle? What about transfer rates of outside, middle, and inside, and IOPS for random reads at multiples of 512b up to 1MB?

<snipped for brevity>

As for mixing up bytes and bits: PCI Express 3.0 is 985MBps aka 7.88Gbps per lane, minus the 3% overhead that comes from having to send 230 bytes for every 228 bytes of data because of line encoding.
PCI Express 4.0 doubled that and didn't meaningfully change anything else.

I was just responding to your assertion about where spinning rust "tends to max out" by saying it's clearly a lot higher than that because I can pull a random drive off the shelf and get a much higher bandwidth with a simple test over USB. Talking about all the situations where it's slower doesn't have anything to do with what the max is.

The bytes versus bits mixup wasn't when you were talking about PCIe lanes, it was when you were talking about how fast the hard drive is. Specifically, you typed "160Mbps" and I assume you were thinking "160MBps" because otherwise you're off by more than a factor of ten and that can't be explained by "I was talking about the inside of the platter" or whatever.

Eletriarnation fucked around with this message at 22:42 on Sep 15, 2023

BlankSystemDaemon
Mar 13, 2009



Eletriarnation posted:

I was just responding to your assertion about where spinning rust "tends to max out" by saying it's clearly a lot higher than that because I can pull a random drive off the shelf and get a much higher bandwidth with a simple test over USB. Talking about all the situations where it's slower doesn't have anything to do with what the max is.

The bytes versus bits mixup wasn't when you were talking about PCIe lanes, it was when you were talking about how fast the hard drive is. Specifically, you typed "160Mbps" and I assume you were thinking "160MBps" because otherwise you're off by more than a factor of ten and that can't be explained by "I was talking about the inside of the platter" or whatever.
What you can achieve with a benchmarking tool is very different from real-world workload scenarios.
And again, the entire point of that was not that it's one value or the other, it's that the range between the inside of the platter and the outside of the platter is so big, that it's basically meaningless to try and use the maximum value.

You're right, I did mistype 160MBps as 160Mbps - but in my defense, Mbps is the correct choice for unit of bandwidth; I should just have written it as 1.28Gbps.

BlankSystemDaemon fucked around with this message at 22:57 on Sep 15, 2023

el_caballo
Feb 26, 2001
I got a sorta unrelated question but I think you hard drive nerds probably have a simple answer. I rebuilt my Unraid server with the guts of my old desktop and was left with an unused old ADATA SU800 250GB SSD. So I bought a $10 Sabrent enclosure with the idea of making this part of my travel firestick Kodi kit. FYI: in order for an un-rooted firestick to read a USB drive it needs to be FAT32, so that's what this portable SSD is now, and the speed definitely helps with copying all those split RAR4 movie files.

My question is: does formatting an SSD to FAT32 and/or using it in a USB enclosure affect using TRIM? I did some searching that seemed to say TRIM doesn't work over USB and doesn't work in anything but NTFS on Windows, but also yes TRIM does work for all FAT file systems but also this Sabrent EC-USAP enclosure chipset which could be one of two different chipsets doesn't support TRIM and never will because it uses USB but also yes it does if you update the firmware with a Jmicron tool which I did.

Those last few mysteries are for me to figure out on my own, and this is a cheap drive so who cares, but this is my first portable SSD so I am just curious, for all those portable SSDs yet unborn that I will own and love in the future, how they'll work with TRIM, FAT32, USB and Windows (Win 11). CrystalDiskInfo does show TRIM as one of the features right now in Windows as a FAT32 drive, but I don't know if that just means it supports it, not necessarily that it is currently using it.

Trapick
Apr 17, 2006

My NAS died last night :smithcloud:

I powered off, unplugged, blew out dust, moved it ~5 feet, plugged in, tried to power on...nothing. Really hoping it's the PSU, I did the paper clip/jumper test and the fan didn't start, so here's hoping.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

Trapick posted:

My NAS died last night :smithcloud:

I powered off, unplugged, blew out dust, moved it ~5 feet, plugged in, tried to power on...nothing. Really hoping it's the PSU, I did the paper clip/jumper test and the fan didn't start, so here's hoping.

Condolences.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I sure hope iX will put a mariadb chart in their repo for the coming RC release of Cobia. Because currently there is none because :psyduck:, and I don't want to use TrueCharts.

YerDa Zabam
Aug 13, 2016



Couple of new Def Con videos that I thought you lot might enjoy.
The hard drive stats one in particular I enjoyed. (It's Backblaze btw)

https://www.youtube.com/watch?v=pY7S5CUqPxI



https://www.youtube.com/watch?v=YhWyaZ__fL8

YerDa Zabam fucked around with this message at 23:43 on Sep 15, 2023


Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

BlankSystemDaemon posted:

What you can achieve with a benchmarking tool is very different from real-world workload scenarios.
And again, the entire point of that was not that it's one value or the other, it's that the range between the inside of the platter and the outside of the platter is so big, that it's basically meaningless to try and use the maximum value.
:what: It pains me to linger on what was intended to be a polite and minor correction but this conversation started with me responding to you making a statement on where "spinning rust tends to max out". I was never trying to defend the importance of that metric in any way.

I jumped in because you then used that erroneous metric to conclude that PCIe 3.0 x4 has enough bandwidth for 200 spinning disks and I thought "wow, that seems extremely wrong". It's still wrong even if you're talking about the inside of the platter because it mostly came from the bits versus bytes slip, which is of much greater magnitude, but hey, you acknowledged that much and if you had left it there I wouldn't have had a reason to make this post.

Eletriarnation fucked around with this message at 01:28 on Sep 16, 2023
