|
Cocoa Crispies posted:
you still pay for features you aren't using

So, I should compile a custom kernel without all that garbage red hat enables by default?

edit: The more poo poo work that can be shoved off under an abstraction layer the better. 99% of programmers are going to look at barfs and say "Thank god, I never have to worry about that tedious boilerplate bullshit ever again."

SYSV Fanfic fucked around with this message at 05:27 on Jan 9, 2015
# ? Jan 9, 2015 05:15 |
|
move to bsd, where we keep our system like the good ol' 70s. a live operating system museum
|
# ? Jan 9, 2015 05:28 |
|
keyvin posted:
Is it because if it wasn't in SYS V it wasn't canonical unix? Like the star wars expanded universe or fanfic or something?

Linux is SysV fanfic
|
# ? Jan 9, 2015 05:31 |
|
keyvin posted:
So, I should compile a custom kernel without all that garbage red hat enables by default?

in particular, you pay for filesystem features in latency, bandwidth, storage overhead, and correctness

quote:
What are the crash guarantees of rename?

maybe it's okay for things like linux desktops or android phones where if you lose your animes you just restart the torrent and blame the glitchy rom you got from xda
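(the rename question is a real one: plain rename() is atomic on POSIX filesystems, but the data behind it isn't durable until you fsync. a minimal sketch of the usual write-temp/fsync/rename/fsync-the-directory dance — filenames here are made up for illustration:)

```python
import os

def atomic_replace(path, data):
    """crash-safe replace: after a crash you see either the old file
    or the new one in full, never a half-written mix"""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())       # new contents hit disk before the rename
    os.replace(tmp, path)          # atomic rename on POSIX filesystems
    dirfd = os.open(os.path.dirname(os.path.abspath(path)), os.O_DIRECTORY)
    try:
        os.fsync(dirfd)            # make the rename itself durable
    finally:
        os.close(dirfd)

atomic_replace("example.conf", b"pay_for_features=no\n")
```

skip any of those fsyncs and the crash guarantee is whatever your filesystem and mount options feel like giving you, which was the point of the quote.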
|
# ? Jan 9, 2015 05:36 |
|
keyvin posted:
So, I should compile a custom kernel without all that garbage red hat enables by default?

your kernel should have stable APIs and leverage OO where it makes sense (say in drivers & storage management), so you don't even need to recompile: if you want a feature it gets loaded, and if you don't it's not even loaded

of course this would enable closed-source drivers, so it's important for the kernel to change things up a little every now and then (like the order of properties in some data structures).
|
# ? Jan 9, 2015 06:03 |
|
maybe i am dumb but why would you want RAID in software a la ZFS or BTRFS? all my servers have a hardware raid controller already
|
# ? Jan 9, 2015 06:08 |
|
because the cloud definitely doesnt exist and everyone runs everything on dells in a colo
|
# ? Jan 9, 2015 07:06 |
|
so you run your software raid on your ec2 micro instance or whatever, which is backed by SANs already in some kind of RAID-like configuration? so the benefit is?
|
# ? Jan 9, 2015 07:59 |
|
btrfs is worth it. unfortunately you still have to janitor it, e.g. that free-space issue, and balancing, and scrubbing...
|
# ? Jan 9, 2015 09:43 |
|
I'm sorry, I still can't imagine a statistically significant cost in a hypothetical bizarro world where you were forced to have just the poo poo needed to boot on barfs. I mean, now that we figured this out, can we just say barfs isn't canon?

eschaton posted:
SysV fanfic

Mods, can I get a user name change please?
|
# ? Jan 9, 2015 13:20 |
|
Captain Foo posted:
why is xfs better than ext4

it scales better than ext4, has roughly equivalent performance, and is imo more robust. I'm not one to sperg out about filesystems, but a few years ago I had an ext4 filesystem that blew up, wasn't able to fully repair it, reformatted it to xfs and have had zero issues since.
|
# ? Jan 9, 2015 15:31 |
|
last time i used XFS it would reliably zero out every file that was open in the event of a system crash. then again this was back when kernel 2.4 was new, but it kind of left a sour taste in my mouth.
|
# ? Jan 9, 2015 15:57 |
|
You guys are doing a terrible job of selling me on linux.
|
# ? Jan 9, 2015 16:06 |
|
my stepdads beer posted:
so you run your software raid on your ec2 micro instance or whatever which is backed by SANs already in some kind of RAID like configuration? so the benefit is?

lol, well before provisioned-iops ebs and ssds, mdadm raid 10 was a gud way to get better disk performance for like databases. and anyway there is plenty of benefit. even if the ebs drive is in a filer ive still seen them get nuked. and raids are useful for lvm etc
|
# ? Jan 9, 2015 16:08 |
|
and oracle literally sells a $250k filer that is completely handled by zfs. its crap but i think youre understating the usefulness. this isnt 1998 where running software raid is an enormous performance hit
|
# ? Jan 9, 2015 16:11 |
|
pram posted:
lol well before provisioned iops ebs and ssds, mdadm raid 10 was a gud way to get better disk performance for like databases

Pram has discovered one weird trick to boost your io, find out why adaptec hates him.
|
# ? Jan 9, 2015 16:16 |
|
A few days ago i bought a pair of 1TB laptop-sized NAS disks (this is apparently a thing) to put into the small linux that lives under my couch

turns out the small linux that lives under my couch can't use both of them at once, even though its mobo has two SATA ports, because i've also got an mSATA SSD installed in it and three SATA drives are just too much to handle man

the other mini PCIe slot has a wifi card. turns out wifi doesn't work very well inside a case made of metal, although i managed to snake an external antenna out of the case so it works ok now i guess

mostly i keep it around as a monument to my shame

Sapozhnik fucked around with this message at 16:20 on Jan 9, 2015
# ? Jan 9, 2015 16:18 |
|
btrfs is great, i use it on everything, having boot-time snapshots of everything has come in handy more than once

software raid is great for single-machine filers, i really like the idea of rebuilding the data after a disk is replaced instead of rebuilding the entire block device containing a filesystem that might only be 30% used

srs question: bit flips in reading data are rare but can and do happen. what do you do with a RAID 1 where the two copies disagree? i assume most RAID 1 implementations just read one of the two copies to give any data the OS asks for, but if you do a consistency check or something, what happens?

with btrfs or zfs the answer is "the copy with the correct checksum is used to repair the other one" which is neato
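(that repair logic is easy to sketch. here's a toy illustration of the idea only — made-up data, with crc32 standing in for the real per-block checksums, not actual btrfs/zfs code:)

```python
import zlib

def repair_mirror(copies, stored_checksum):
    """given mirror copies that disagree, keep the one whose checksum
    matches the value recorded at write time and overwrite the rest
    with it. plain md RAID 1 can't do this: with no stored checksum it
    only knows the copies differ, not which one is right."""
    for copy in copies:
        if zlib.crc32(copy) == stored_checksum:
            return [copy for _ in copies]   # good copy wins, repairs the rest
    raise IOError("no copy matches the stored checksum; data is gone")

good = b"important bytes"
flipped = b"imp0rtant bytes"                # one mirror took a bit flip
checksum = zlib.crc32(good)                 # recorded when the block was written
print(repair_mirror([flipped, good], checksum))
```

md's `check` pass will bump a mismatch counter when mirrors disagree, but without a checksum it has to pick a side arbitrarily on `repair`; the stored checksum is exactly what lets btrfs/zfs pick the right one.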
|
# ? Jan 9, 2015 16:18 |
|
Tankakern posted:
btrfs is worth it

to be fair, scrubbing is just to detect hardware failures (e.g. unreadable sectors, sectors with bad checksums due to a flipped bit) earlier than you otherwise would
|
# ? Jan 9, 2015 16:20 |
|
kind-of equivalent to 'echo check >> /sys/block/mdN/md/sync_action'
|
# ? Jan 9, 2015 16:21 |
|
Mr Dog posted:
last time i used XFS it would reliably zero out every file that was open in the event of a system crash.

that was a long time ago tho
|
# ? Jan 9, 2015 16:22 |
|
jre posted:
Pram has discovered one weird trick to boost your io, find out why adaptec hates him.

I only allow my bytes to be replicated through purestrain lsi controllers
|
# ? Jan 9, 2015 17:26 |
|
pram posted:
lol well before provisioned iops ebs and ssds, mdadm raid 10 was a gud way to get better disk performance for like databases

so, uh, back when aws was a laughable choice and only total idiots tried to put databases in it, mdadm raid10 was useful?
|
# ? Jan 9, 2015 17:32 |
|
Notorious b.s.d. posted:
so, uh, back when aws was a laughable choice and only total idiots tried to put databases in it, mdadm raid10 was useful?
|
# ? Jan 9, 2015 17:38 |
|
yeah sure why not bsd
|
# ? Jan 9, 2015 17:39 |
|
but it's still useful, anyway. I know ur just being a fat contrarian baby but how would you do lvm on cloud instances? Are you just going to stripe the data across the pvs without any redundancy? Is everyone just supposed to buy a netapp?
|
# ? Jan 9, 2015 17:47 |
|
pram posted:
but it's still useful, anyway. I know ur just being a fat contrarian baby but how would you do lvm on cloud instances? Are you just going to stripe the data across the pvs without any redundancy? Is everyone just supposed to buy a netapp?

mongodb sharting, duh
|
# ? Jan 9, 2015 17:49 |
|
In an age where tons of poo poo depends on software replication like gluster/drbd/hdfs/ocfs it seems like a retarded neckbeard anachronism to be against mdadm. But it's bsd we're talking about and retarded anachronisms are the name of the game. I mean he's obviously not arguing in good faith since software raid is one of zfs' killer features and he's a Solaris fanboy
|
# ? Jan 9, 2015 18:04 |
|
it's me, i'm the person who puts one petabyte of SAN behind a single storage server managing a single logical volume
|
# ? Jan 9, 2015 20:50 |
|
Zettabyte File System
|
# ? Jan 9, 2015 21:33 |
|
please, it's the zebibyte file system
|
# ? Jan 9, 2015 21:37 |
|
checkin in to check the hopes for lunix on the desktop 2015
|
# ? Jan 9, 2015 22:05 |
|
courage, my friends, one day our time will come
|
# ? Jan 9, 2015 22:07 |
|
theadder posted:
checkin in to check the hopes for lunix on the desktop 2015

Looking good. I am using linux on my computer to make sure I never want to use a computer at home. 5/5 stars, would recommend for this purpose.
|
# ? Jan 10, 2015 00:24 |
|
tfw when u press winkey notepad and see wine start up....... piss
|
# ? Jan 10, 2015 03:36 |
|
Smythe posted:
tfw when u press winkey notepad and see wine start up....... piss

why the gently caress would you ever deliberately try to start notepad, even if you were running windows
|
# ? Jan 10, 2015 13:37 |
|
Soricidus posted:
why the gently caress would you ever deliberately try to start notepad, even if you were running windows

Because they couldn't be bothered to install notepad++? Cue three-page derail on text editors.
|
# ? Jan 10, 2015 16:44 |
|
keyvin posted:
Because they couldn't be bothered to install notepad++?

vi/vim supremacy
|
# ? Jan 10, 2015 16:47 |
|
Soricidus posted:
why the gently caress would you ever deliberately try to start notepad, even if you were running windows

voicemail
|
# ? Jan 10, 2015 18:27 |
|
pram posted:
In an age where tons of poo poo depends on software replication like gluster/drbd/hdfs/ocfs it seems like a retarded neckbeard anachronism to be against mdadm. But it's bsd were talking about and retarded anachronisms are the name of the game. I mean he's obviously not arguing in good faith since software raid is one of zfs' killer features and he's a Solaris fanboy

software raid-1 is fine and works correctly. the dumb part was striping dozens of ebs volumes together and praying for consistent performance
|
# ? Jan 12, 2015 00:44 |