anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

Furism posted:

Oh I believe you, I just want to understand why this limitation exists when it's a basic feature in any network firewall.
I think it's because that command does a different thing than inserting a line into a table. I think creating a zone and assigning the address 1.2.3.4/32 as a source is the correct way to do what I want, but I don't fully understand firewalld -- for instance, I'm confused about why it appears to accept traffic from newly-created zones by default.


other people
Jun 27, 2004
Associate Christ
A zone describes a set of allowed services/ports.

Packets are matched to a zone based on either their source address or their ingress interface.

hth
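
e.g. something like this should get you a zone that only matches that one host (zone name is arbitrary, and I'm going from memory rather than a box I can test on):

code:
# dedicated zone that only matches the one management host, and only allows snmp
sudo firewall-cmd --permanent --new-zone=snmp-mgmt
sudo firewall-cmd --permanent --zone=snmp-mgmt --add-source=10.200.204.200/32
sudo firewall-cmd --permanent --zone=snmp-mgmt --add-service=snmp
sudo firewall-cmd --reload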

telcoM
Mar 21, 2009
Fallen Rib

anthonypants posted:

I wanted to set some firewall rules using firewalld to allow inbound SNMP traffic, but only from a /32. You can't just do sudo firewall-cmd --zone=public --add-port=161/tcp --add-source=10.200.204.200/32 --permanent, because you can't add both a source IP and a source port. So instead, I made a new zone, added the source IP and port to it, and then added the snmp service to it too. With iptables I would have to chain this into an ACCEPT table, so I started to look for how to do that, but then I noticed SNMP traffic was flowing. Am I good?

As far as I understand, a network interface can belong to just one zone at a time, so now your network interface might be accepting *only* SNMP traffic, or all traffic, depending on what the default for new zones is.

The zone rules are essentially per-interface: once you allow a specific service or port, it is allowed for all incoming traffic on any interface belonging to that zone.

For more fine-grained restrictions, you'll want to use rich rules, like this:

code:
sudo firewall-cmd --zone=whatever --add-rich-rule='rule family="ipv4" source address="10.200.204.200/32" service name="snmp" accept'
See paragraph 4.5.3.1.12 here:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html

More (and, in my opinion, better) documentation and examples on rich rules on Fedora's pages:
https://fedoraproject.org/wiki/Features/FirewalldRichLanguage

telcoM fucked around with this message at 07:47 on Jul 19, 2017

Furism
Feb 21, 2006

Live long and headbang
Is it the physical interface (the NIC itself) or the virtual interface/alias? I'm about to redo my whole network at home, with segmented sub-networks to split admin from services, and I wanted to use aliases for computers with just one NIC (like the Raspberry).

Like there would be two virtual interfaces (or whatever you call them), each on a different subnet and VLAN. Will I be able to put each in its own zone?

Methanar
Sep 26, 2013

by the sex ghost
So I've got this nodejs application running on bare metal. This machine has 28 physical cores and 28 HT cores. The CPU load on the machine looks strange and my current theory is that for some reason the HT cores are going unused by the app.

Is this normal and just how things are supposed to be? As far as I can tell, NodeJS itself has no concept of whether a core is virtual or not.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Methanar posted:

So I've got this nodejs application running on bare metal. This machine has 28 physical cores and 28 HT cores. The CPU load on the machine looks strange and my current theory is that for some reason the HT cores are going unused by the app.

Is this normal and just how things are supposed to be? As far as I can tell, NodeJS itself has no concept of whether a core is virtual or not.


Node doesn't care, and like any user-mode application, it isn't responsible for scheduling itself -- that's the kernel's job. The physical cores are always going to be at least as fast as the HT pseudo-cores, so on an HT-aware OS you should never see those lit up until the physical cores are oversubscribed. (There are certain cases where this isn't quite true, like when you have a many-threaded app that doesn't want to leave its NUMA home node. But since Node apps are coroutine-driven and rarely do work on more than a single thread, I'm fairly sure this doesn't apply to your situation.)
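
If you want to confirm which logical CPUs are actually the HT halves (rough sketch, just util-linux and sysfs):

code:
# core/socket layout; logical CPUs sharing a CORE value are HT siblings
lscpu --extended
# or, per CPU: which logical CPUs share cpu0's physical core
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list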

Methanar
Sep 26, 2013

by the sex ghost
Stupid question: as CPU load increases, does the absolute time needed to complete a given task increase?

Why is it that as CPU load increases, performance may begin to degrade, sometimes very sharply? I understand that when htop says 70% load, it isn't saying the CPU is 30% idle, it's saying that over the sampled time the CPU was busy 70% of the time. This smooths out the peaks and troughs, and looking over enough differently sampled times gives you an idea of what the real load is.

What really happens as CPU load rises (yet stays below 100%) that can cause performance degradation? Is it that, hidden within that sampled time, the CPU really is hitting 100% periodically? The more load, the more L1/L2/L3 cache is presumably being used, and as those are overfilled data must be fetched from the far slower RAM instead?

If HT cores were being used, would that be a cause for concern that the box may be undersized for its load? Are there any rules of thumb for how to appropriately size hardware for a workload? Or is every workload completely different, with utilization/performance ratios and points of diminishing returns needing to be empirically measured?

Horse Clocks
Dec 14, 2004


It could be excessive context switching between threads. Too many CPU-intensive threads running at once will cause the scheduler to switch between contexts. But thinking about it, this should mean your HT cores are being used too if that's the case.

Another possibility is that you're having IO issues. If you're doing a lot of IPC or network/disk activity, there might not be enough IO throughput to saturate the CPU.

If your app has a lot of IPC, one thing to check is how many file descriptors your app has open, and can open. Sysctl will impose reasonable general-purpose limits out of the box, but these are too low for some applications. You may want to play with ulimit/sysctl to increase the FD limit.
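
Rough ways to check (assuming the process is literally named node):

code:
# descriptors the app currently has open vs. its per-process limit
ls /proc/$(pgrep -o node)/fd | wc -l
grep 'open files' /proc/$(pgrep -o node)/limits
# system-wide ceiling
sysctl fs.file-max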

If it's IO bound you need better hardware.

I'm not sure how node manages threads, or IO scheduling, or how your app is written, so these are just pie-in-the-sky guesses.

evol262
Nov 30, 2010
#!/usr/bin/perl

Methanar posted:

So I've got this nodejs application running on bare metal. This machine has 28 physical cores and 28 HT cores. The CPU load on the machine looks strange and my current theory is that for some reason the HT cores are going unused by the app.

Is this normal and just how things are supposed to be? As far as I can tell, NodeJS itself has no concept of whether a core is virtual or not.


My theory would be that, since Node is single threaded, the developers have spun up 28 instances of it. One per core. And that the developers/admins have intentionally avoided the HT cores, even though it'd run perfectly fine with 28 more instances.
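
Easy enough to sanity-check from a shell, assuming the processes are literally called node:

code:
# how many node processes are running
pgrep -c node
# which logical CPUs each one is allowed to run on (if they've been pinned)
for pid in $(pgrep node); do taskset -cp "$pid"; done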



Methanar posted:

Stupid question: as CPU load increases, does the absolute time needed to complete a given task increase?

Why is it that as CPU load increases, performance may begin to degrade, sometimes very sharply? I understand that when htop says 70% load, it isn't saying the CPU is 30% idle, it's saying that over the sampled time the CPU was busy 70% of the time. This smooths out the peaks and troughs, and looking over enough differently sampled times gives you an idea of what the real load is.

What really happens as CPU load rises (yet stays below 100%) that can cause performance degradation? Is it that, hidden within that sampled time, the CPU really is hitting 100% periodically? The more load, the more L1/L2/L3 cache is presumably being used, and as those are overfilled data must be fetched from the far slower RAM instead?

If HT cores were being used, would that be a cause for concern that the box may be undersized for its load? Are there any rules of thumb for how to appropriately size hardware for a workload? Or is every workload completely different, with utilization/performance ratios and points of diminishing returns needing to be empirically measured?

HT cores aren't a sign it's undersized. Intel originally added them in P4 cores because the pipeline was hilariously long, and if there was a cache miss or branch prediction failure, the performance impact was severe. Adding HT mitigated that.

Some applications will scale almost linearly with cores (HT or not): generally heavily-threaded code, or goroutines that are just crunching "dumb" data.

Data is always fetched from memory. Even with huge caches. This isn't a sign of bad code. You're relying on the CPU to optimistically fetch what it needs from main memory, avoid cache flushes of necessary data where possible, and for the OS to keep cache locality. Almost no real application can survive on registers only.

You should look into different schedulers and how the scheduler actually works if you're very concerned about workload and CPU scaling. Deadline is still reasonable.
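
A starting point, if you just want to see what policy things are actually running under (chrt ships with util-linux):

code:
# scheduling policy and priority of a given process (your shell, here)
chrt -p $$
# valid priority ranges for each policy on this kernel
chrt -m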

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Methanar posted:

Stupid question: as CPU load increases, does the absolute time needed to complete a given task increase?

Why is it that as CPU load increases, performance may begin to degrade, sometimes very sharply? I understand that when htop says 70% load, it isn't saying the CPU is 30% idle, it's saying that over the sampled time the CPU was busy 70% of the time. This smooths out the peaks and troughs, and looking over enough differently sampled times gives you an idea of what the real load is.

What really happens as CPU load rises (yet stays below 100%) that can cause performance degradation? Is it that, hidden within that sampled time, the CPU really is hitting 100% periodically? The more load, the more L1/L2/L3 cache is presumably being used, and as those are overfilled data must be fetched from the far slower RAM instead?

If HT cores were being used, would that be a cause for concern that the box may be undersized for its load? Are there any rules of thumb for how to appropriately size hardware for a workload? Or is every workload completely different, with utilization/performance ratios and points of diminishing returns needing to be empirically measured?

You're viewing the high CPU as the cause of a slowdown in performance, but you may be inverting cause and effect somewhat. With a properly-tuned scheduler, even a fully-loaded system should retain some semblance of responsiveness. However, you can interpret the increase in CPU as a sign that the application is doing more of something, and eventually it's doing enough of that something that its performance bottoms out. Like others have said, that may be I/O wait, context switching, waiting on some kind of shared resource like a mutex or semaphore, or fighting a kernel that dumps all its IRQ handling onto one core.
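
Some things to look at while it's happening, to tell those apart (mpstat/pidstat come from the sysstat package):

code:
# per-CPU user/system/iowait/irq breakdown, once a second
mpstat -P ALL 1
# system-wide context switches per second (the "cs" column)
vmstat 1
# per-process voluntary/involuntary context switches
pidstat -w 1
# whether one core is eating all the interrupt handling
cat /proc/interrupts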

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
Is there a way I can program a hotkey which will open up a specific website in a new Firefox tab (or open a Firefox window if one isn't open)? I use 1Password to manage passwords on iOS, mac OS, and Windows and unfortunately they don't have a Linux client. Their new subscription plan has a webapp though so if I could just program (left)alt + \ to open up 1password.com then that would be workable. It doesn't have to log me in or anything, just open up the website.

e: Fedora 25 w/ GNOME so if I need to do this at the GNOME level that's fine too.

The Phlegmatist
Nov 24, 2003
Yeah, it's actually pretty easy.

Keyboard under Gnome settings, scroll to the bottom, add new keyboard shortcut. Command would be firefox 1password.com

That'll open it under a new tab and focus it if you have a current firefox instance running, or open a new one if you don't.
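
If you'd rather script it than click through Settings, the same thing can be done with gsettings -- something along these lines should work on GNOME 3.22-ish (the custom0 slot is just an example, and note the first command replaces any custom shortcuts you already have in that list):

code:
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings "['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']"
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/ name '1Password'
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/ command 'firefox 1password.com'
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/ binding '<Alt>backslash'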

Polygynous
Dec 13, 2006
welp
Looks like my UPS needs replacing, which gave me an excuse to test this again. Is there any good / obvious reason a crappy old TV card should work on a 3.16 kernel and fail on 4.9?

"fail" in this case means running mplayer / mpv / I forget if I tried anything else and getting the helpful repeated message "v4l2: select timeout"

Did a bit of googling and didn't find anything promising. dmesg / logs didn't seem any different from what I remember but I can double check.

(side note: even on 3.16 video seems to freeze unless pavucontrol is open if you want to tackle that :iiam:)

relevant lspci -vv on 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2 (2017-04-30) x86_64 GNU/Linux

code:
04:02.0 Multimedia video controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder (rev 05)
        Subsystem: Avermedia Technologies Inc CX23880/1/2/3 PCI Video and Audio Decoder
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 64 (5000ns min, 13750ns max), Cache Line Size: 32 bytes
        Interrupt: pin A routed to IRQ 17
        Region 0: Memory at fd000000 (32-bit, non-prefetchable) [size=16M]
        Capabilities: [44] Vital Product Data
                No end tag found
        Capabilities: [4c] Power Management version 2
                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
        Kernel driver in use: cx8800
        Kernel modules: cx8800

04:02.1 Multimedia controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder [Audio Port] (rev 05)
        Subsystem: Avermedia Technologies Inc CX23880/1/2/3 PCI Video and Audio Decoder [Audio Port]
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 64 (1000ns min, 63750ns max), Cache Line Size: 32 bytes
        Interrupt: pin A routed to IRQ 17
        Region 0: Memory at fc000000 (32-bit, non-prefetchable) [size=16M]
        Capabilities: [4c] Power Management version 2
                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
        Kernel driver in use: cx88_audio
        Kernel modules: cx88_alsa

04:02.2 Multimedia controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder [MPEG Port] (rev 05)
        Subsystem: Avermedia Technologies Inc CX23880/1/2/3 PCI Video and Audio Decoder [MPEG Port]
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 64 (1500ns min, 22000ns max), Cache Line Size: 32 bytes
        Interrupt: pin A routed to IRQ 17
        Region 0: Memory at fb000000 (32-bit, non-prefetchable) [size=16M]
        Capabilities: [4c] Power Management version 2
                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
        Kernel driver in use: cx88-mpeg driver manager
        Kernel modules: cx8802
and some dmesg
code:
[    4.580308] cx88[0]: subsystem: 1461:c111, board: ASUS PVR-416 [card=12,autodetected], frontend(s): 0
...
[    5.057396] cx88[0]/1: CX88x/0: ALSA support for cx2388x boards
[    5.058659] cx88[0]/2: cx2388x 8802 Driver Manager
[    5.058777] cx88[0]/2: found at 0000:04:02.2, rev: 5, irq: 17, latency: 64, mmio: 0xfb000000
[    5.059073] cx88[0]/0: found at 0000:04:02.0, rev: 5, irq: 17, latency: 64, mmio: 0xfd000000
[    5.061770] cx88[0]/0: registered device video0 [v4l2]
[    5.061952] cx88[0]/0: registered device vbi0
[    5.062107] cx88[0]/0: registered device radio0
[    5.070301] [drm] Initialized radeon 2.39.0 20080528 for 0000:01:00.0 on minor 0
[    5.076307] cx2388x blackbird driver version 0.0.9 loaded
[    5.076313] cx88/2: registering cx8802 driver, type: blackbird access: shared
[    5.076317] cx88[0]/2: subsystem: 1461:c111, board: ASUS PVR-416 [card=12]
[    5.076362] cx88[0]/2: cx23416 based mpeg encoder (blackbird reference design)
[    5.076583] cx88[0]/2-bb: Firmware and/or mailbox pointer not initialized or corrupted
[    5.098441] cx88-mpeg driver manager 0000:04:02.2: firmware: direct-loading firmware v4l-cx2341x-enc.fw
[    5.143455] iTCO_vendor_support: vendor-support=0
[    5.151686] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
[    5.151755] iTCO_wdt: Found a ICH7 or ICH7R TCO device (Version=2, TCOBASE=0x0860)
[    5.151969] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
[    5.777744] EXT4-fs (sdb2): mounting ext3 file system using the ext4 subsystem
[    5.822028] EXT4-fs (sdb2): mounted filesystem with ordered data mode. Opts: (null)
[    7.392034] floppy0: no floppy controllers found
[    7.392054] work still pending
[    7.757367] cx88[0]/2-bb: Firmware upload successful.
[    7.765878] cx88[0]/2-bb: Firmware version is 0x02060039
[    7.785123] cx88[0]/2: registered device video1 [mpeg]

Methanar
Sep 26, 2013

by the sex ghost
iptables question:

I've got a load balancer that's doing SSL termination and forwarding traffic to some backends. I want to mirror all traffic inbound and outbound on this load balancer to a remote host for analysis and I need a sanity check.

I was looking at using iptables' TEE for this http://ipset.netfilter.org/iptables-extensions.man.html.

iptables -t mangle -A PREROUTING -i eth0 -j TEE --gateway 10.0.0.100
iptables -t mangle -A PREROUTING -o eth0 -j TEE --gateway 10.0.0.100

This means that all traffic is going to be sent to 10.0.0.100 twice, once in its encrypted form and once in plaintext, right?

code:
#a user request comes in to the lb, matches -i  and is forwarded to the collector encrypted.
#a user request is sent out to a backend, matches -o and is forwarded to the collector unencrypted
backend <---plaintext-- LB <--encrypted-- Internet

#a backend response comes in to the lb, matches -i  and is forwarded to the collector unencrypted. 
#a backend response is sent out to a user, matches -o and is forwarded to the collector encrypted
backend ---plaintext--> LB --encrypted--> Internet

telcoM
Mar 21, 2009
Fallen Rib

Methanar posted:

I've got a load balancer that's doing SSL termination and forwarding traffic to some backends. I want to mirror all traffic inbound and outbound on this load balancer to a remote host for analysis and I need a sanity check.

I was looking at using iptables' TEE for this http://ipset.netfilter.org/iptables-extensions.man.html.

iptables -t mangle -A PREROUTING -i eth0 -j TEE --gateway 10.0.0.100
iptables -t mangle -A PREROUTING -o eth0 -j TEE --gateway 10.0.0.100

This means that all traffic is going to be sent to 10.0.0.100 twice, once in its encrypted form and once in plaintext, right?

I would really want to restrict the TEE rules to specific TCP ports, so that the system won't start resending to the collector any non-unicasts it might see. But hey, it's your network.

Incoming traffic should work as you expect. But outgoing traffic from the local LB process is not passed through the PREROUTING chain; it goes through OUTPUT instead.

Also, the -o <interface> option is only valid for FORWARD, OUTPUT and POSTROUTING chains, like the -i <interface> is only valid for INPUT, FORWARD and PREROUTING.

So I think you'd want
code:
iptables -t mangle -A PREROUTING -i eth0 -j TEE --gateway 10.0.0.100
iptables -t mangle -A OUTPUT -o eth0 -j TEE --gateway 10.0.0.100
instead.
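
And if you do restrict them to specific TCP ports like I suggested, the match just goes in front of the TEE target; for example (the listener and backend ports here are only examples):

code:
# client -> LB traffic on the SSL listener port
iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 443 -j TEE --gateway 10.0.0.100
# LB -> backend traffic on the backend port
iptables -t mangle -A OUTPUT -o eth0 -p tcp --dport 8080 -j TEE --gateway 10.0.0.100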

Here are a few diagrams that might be helpful:
https://www.adminsehow.com/2011/09/iptables-packet-traverse-map/

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
My FreeBSD installation is only showing "hw.ncpu=1" on my quad-core machine. I tried recompiling and installing the GENERIC kernel conf, which does include "options SMP", and the boot menu shows that I'm booting the new kernel. Any idea how to get it to use all my cores?

SamDabbers
May 26, 2003



Post your dmesg output?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SamDabbers posted:

Post your dmesg output?

# sysctl hw.model hw.machine hw.ncpu
hw.model: Intel(R) Xeon(R) CPU W3565 @ 3.20GHz
hw.machine: amd64
hw.ncpu: 1



# dmesg | grep -i cpu
CPU: Intel(R) Xeon(R) CPU W3565 @ 3.20GHz (3200.07-MHz K8-class CPU)
taskqgroup_adjust failed cnt: 1 stride: 1 mp_ncpus: 1 smp_started: 0
taskqgroup_adjust failed cnt: 1 stride: 1 mp_ncpus: 1 smp_started: 0
cpu0: <ACPI CPU> on acpi0
est0: <Enhanced SpeedStep Frequency Control> on cpu0

evol262
Nov 30, 2010
#!/usr/bin/perl

Paul MaudDib posted:

# sysctl hw.model hw.machine hw.ncpu
hw.model: Intel(R) Xeon(R) CPU W3565 @ 3.20GHz
hw.machine: amd64
hw.ncpu: 1



# dmesg | grep -i cpu
CPU: Intel(R) Xeon(R) CPU W3565 @ 3.20GHz (3200.07-MHz K8-class CPU)
taskqgroup_adjust failed cnt: 1 stride: 1 mp_ncpus: 1 smp_started: 0
taskqgroup_adjust failed cnt: 1 stride: 1 mp_ncpus: 1 smp_started: 0
cpu0: <ACPI CPU> on acpi0
est0: <Enhanced SpeedStep Frequency Control> on cpu0


What about kern.smp.cpus or dmesg | grep -i smp

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

evol262 posted:

What about kern.smp.cpus or dmesg | grep -i smp

kern.smp.cpus: 1

No other dmesgs with smp other than the two in the previous post.

Volguus
Mar 3, 2009
FreeBSD should boot an SMP kernel by default when the installer detects that you have SMP. Apparently it didn't, and neither did booting your own SMP kernel. Does the BIOS show multiple cores? Are they enabled? If you boot a Linux livecd/liveusb, does that see multiple cores (cat /proc/cpuinfo)?

evol262
Nov 30, 2010
#!/usr/bin/perl
You should also check acpidump, because it may be some odd setting in the firmware.

If you build your own kernel, ensure that MAXCPUS is correct (you shouldn't need to build a kernel to get SMP now, though)
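
Quick things to look at first (all stock FreeBSD):

code:
# everything the kernel knows about SMP
sysctl kern.smp
# what the boot-time probe said about extra CPUs / APICs
grep -iE 'smp|apic|cpu' /var/run/dmesg.boot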

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I must have disabled it in the BIOS at some point :negative:

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Paul MaudDib posted:

I must have disabled it in the BIOS at some point :negative:

I love it when this sort of thing happens to someone besides me.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Red Hat deprecated Btrfs in RHEL. That's gotta sting.

mystes
May 31, 2006

BTRFS sounds great on paper, but when I tried using it on a laptop around 2 years ago it seemed like if you lost power without shutting down cleanly you had like a 30% chance of losing all your data. After that I decided that I was quite happy with ext4.

Mao Zedong Thot
Oct 16, 2008


btrfs is the systemd of filesystems: cool poo poo implemented by naive idiots

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Is that the one with the guy who killed his wife? :chloe:

ext2/3/4 have been the default recommendation since forever; they're not the fanciest, but they do the filesystem thing fairly well. ext4 should perhaps not be as standard as it is, though: it doesn't necessarily guarantee a flushed iop after fsync, which makes it kinda unsafe on systems that may lose power (RPis, laptops, machines not on a UPS, etc). Odds are good that you probably won't be writing anything you care about too much, and hopefully the FS itself doesn't get trashed. But in the era of SSDs, do IOPS actually matter for a consumer machine vs. crash-safety?

ZFS has pretty much eclipsed most other "industrial-scale"/"full-featured" filesystems; there's very little else that's got as many hours of bulletproof service under huge corporate loads. Sorry HAMMER or whatever, it's not happening. LVM and/or softraid are probably the only other sensible logical layers.

Paul MaudDib fucked around with this message at 03:44 on Aug 2, 2017

mystes
May 31, 2006

Paul MaudDib posted:

Is that the one with the guy who killed his wife? :chloe:
Lol that was ReiserFS.

waffle iron
Jan 16, 2004
Facebook uses btrfs, but they can just spin up or redeploy thousands of redundant servers at the push of a button so they don't care if one dies a horrible death every 15 minutes.

Yaoi Gagarin
Feb 20, 2014

VOTE YES ON 69 posted:

btrfs is the systemd of filesystems: cool poo poo implemented by naive idiots

Systemd actually works tho

Volguus
Mar 3, 2009

VostokProgram posted:

Systemd actually works tho

When it does. When it doesn't ... god help you. Though, as a developer I must say that writing systemd service files is a shitton easier than init scripts.
As for filesystems, I was quite happy with XFS on a workstation that I had (at work) several years back. Until the inode bug bit me. I have a vague recollection of the specifics, but it meant that suddenly my project would not compile, with the weirdest and most cryptic error messages (can't find this, can't link to that, all things that were there, etc.). It turns out that XFS had 64-bit inodes and gcc didn't (or didn't know what to do with them, or something). Copy the project into another folder ... we're back in business. Wasted 2 days on that, days that I'll never get back.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Volguus posted:

When it does. When it doesn't ... god help you. Though, as a developer I must say that writing systemd service files is a shitton easier than init scripts.
I found systemd to patch a lot of shortcomings in SysV. It works, and it works very well. Wiring in pre/post-conditions that can operate independently of system scripts that may or may not be overwritten by packages: absolutely fantastic. Overriding OOM/nice scores? Beautiful. It's not perfect yet. It requires readaptation and still has some ways to go, but gently caress if it isn't a blessing over the previous pile of manure that persisted for 30+ years in most distros.
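
For anyone who hasn't tried it, those overrides are literally a couple of lines in a drop-in; something like this (the service name and pre-flight script are made up):

code:
# creates an override file without touching the packaged unit
sudo systemctl edit myapp.service
# ...and in the drop-in that opens, something like:
#   [Service]
#   Nice=-5
#   OOMScoreAdjust=-500
#   ExecStartPre=/usr/local/bin/preflight-check.sh
sudo systemctl restart myapp.service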

Edit: see also, sendmail -> Postfix.

nem fucked around with this message at 06:30 on Aug 2, 2017

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
On my laptop I quite often find myself trying to compile something or install a package and mess it up. Over time my system can often end up in a mess of partly installed packages and libraries.

I'd like to be able to take a snapshot, say every 30 minutes, on a cron job. Then, when I inevitably gently caress up installing something esoteric and experimental I can just undo the last half hour and leave my system exactly as before.

Could I achieve this using something like snapper.io on its own, or should I also use btrfs? The vast majority of my important stuff is rsync'ed to my home server, so the data on my laptop isn't critical. I run Fedora on the laptop but I'm willing to try something else if it has better integration with what I'd like to achieve.

evol262
Nov 30, 2010
#!/usr/bin/perl

Volguus posted:

When it does. When it doesn't ... god help you.

My experience is that systemd is much easier to diagnose, thanks to debug-shell, chvt, the journal, and the ability to generate trees of... everything.
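
For example (the unit name is just an example):

code:
# full journal for one unit from the current boot, with catalog explanations
journalctl -u nginx.service -b -x
# what a target actually pulls in, as a tree
systemctl list-dependencies multi-user.target
# which units dominated boot time, and the slowest chain
systemd-analyze blame
systemd-analyze critical-chain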

Volguus
Mar 3, 2009

nem posted:

I found systemd to patch a lot of shortcomings in SysV. It works. it works very well. Wiring in pre/post-conditions that can operate independent of system scripts that may or may not be overwritten by packages, absolutely fantastic. Overriding OOM/nice scores? Beautiful. It's not perfect yet. It requires readaptation and still has some ways to go, but gently caress if it isn't a blessing over the previous pile of manure that persisted for 30+ years in most distros.

Edit: see also, sendmail -> Postfix.

If only Lennart would be ... reasonable (or sane or ... just loving normal). The way bugs are dealt with in systemd is beyond abysmal. The most recent famous ones are the ones he got a Pwnie award for (https://www.theregister.co.uk/2017/07/28/black_hat_pwnie_awards/). To be so opposed to having a CVE filed? WTF? He's so defensive of his project that I'm not even sure it's healthy.
A lot of systemd criticism is actually directed at the developers and their behaviour. Being shoved down everyone's throat doesn't help either.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

The Phlegmatist posted:

Yeah, it's actually pretty easy.

Keyboard under Gnome settings, scroll to the bottom, add new keyboard shortcut. Command would be firefox 1password.com

That'll open it under a new tab and focus it if you have a current firefox instance running, or open a new one if you don't.

This is great, thanks!

xzzy
Mar 5, 2009

I was spitting nails last night trying to get an unencrypted adhoc network set up on my raspberry. Doing it with four iw commands? Easy, had it working in two minutes.

Doing it "the right way" with netctl and systemd? gently caress off forever. Options in the man pages don't do what they say, and there's no documentation anywhere for nonstandard configurations.

End result is I gave up and wrote a systemd unit to run a script.
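
(For reference, the bare-iw version is roughly this shape -- interface, SSID and frequency are placeholders, and you still have to put an address on it afterwards:)

code:
ip link set wlan0 down
iw dev wlan0 set type ibss
ip link set wlan0 up
iw dev wlan0 ibss join mynetwork 2437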

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I'll always have a soft spot for ext4 because Theodore Ts'o took a lot of time to answer a lot of naive questions I emailed him years ago about how ext4 worked.


mike12345
Jul 14, 2008

"Whether the Earth was created in 7 days, or 7 actual eras, I'm not sure we'll ever be able to answer that. It's one of the great mysteries."





apropos man posted:

On my laptop I quite often find myself trying to compile something or install a package and mess it up. Over time my system can often end up in a mess of partly installed packages and libraries.

I'd like to be able to take a snapshot, say every 30 minutes, on a cron job. Then, when I inevitably gently caress up installing something esoteric and experimental I can just undo the last half hour and leave my system exactly as before.

Could I achieve this using something like snapper.io on it's own, or should I also use btrfs? The vast majority of my important stuff is rsync'ed to my home server, so the data on my laptop isn't critical. I run fedora on the laptop but I'm willing to try something else if it has better integration with what I'd like to achieve.

What about containerizing everything and making a snapshot of those?

e: something like this https://blog.jessfraz.com/post/ultimate-linux-on-the-desktop/
