Cingulate
Oct 23, 2012

by Fluffdaddy

Longinus00 posted:

Imagine if any user could just install a version of a library/program with security vulnerabilities (old bash/openssl, for example) and have it apply system-wide.
I wasn't asking for the ability to write to restricted locations at all. If I had root, I could just use apt-get in the first place. The question was about local installations.


Longinus00
Dec 29, 2005
Ur-Quan

Cingulate posted:

I wasn't asking for the ability to write to restricted locations at all. If I had root, I could just use apt-get in the first place. The question was about local installations.

I believe the usual solution to this on Windows is to use completely self-contained application binaries/packages (e.g. http://portableapps.com/), since you can just as easily be denied the ability to install applications on Windows. I suppose this doesn't happen as much on Linux because there's not nearly as much demand, and the few people who want it can just do it themselves.

kujeger
Feb 19, 2004

OH YES HA HA
Some package managers support installing to a user's home directory. Apt does not.

evol262
Nov 30, 2010
#!/usr/bin/perl

kujeger posted:

Some package managers support installing to a user's home directory. Apt does not.

What are you talking about?

Go read the manpage. CTRL+F "--root" and/or "--instdir"

kujeger
Feb 19, 2004

OH YES HA HA

evol262 posted:

What are you talking about?

Go read the manpage. CTRL+F "--root" and/or "--instdir"

I was talking specifically about apt(-get), since that's what he was asking about. But reading the manpage for apt-get, I see now that you can pass options through to dpkg, so never mind me!

Cingulate
Oct 23, 2012

by Fluffdaddy
So should I actually try apt-get with --instdir=~/foo and --force-not-root?

kujeger
Feb 19, 2004

OH YES HA HA
Actually testing this stuff, I'm going to go back to claiming that apt-get will not let you install packages to your home directory without running as root. But I would love to be proven wrong (evol262?).


You can, however, first download the package with "apt download", and then install it with dpkg --force-not-root --root=$HOME -i foo.deb and so on, but you'll have to do a bit of wrangling to get dpkg to accept this.
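
A sketch of that, untested ("foo" stands in for the real package; part of the wrangling is that dpkg wants its database to exist under the new root):

  mkdir -p "$HOME/var/lib/dpkg"            # give dpkg an empty database under $HOME
  touch "$HOME/var/lib/dpkg/status"
  apt download foo                         # fetch the .deb without installing it
  dpkg --force-not-root --root="$HOME" -i foo_*.deb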

kujeger fucked around with this message at 22:02 on Jan 20, 2015

evol262
Nov 30, 2010
#!/usr/bin/perl

Cingulate posted:

So should I actually try apt-get with --instdir=~/foo and --force-not-root?

My honest answer is that you should requisition a container, open a ticket to have it installed, or build it from source. See below.

kujeger posted:

Actually testing this stuff, I'm going to go back to claiming that apt-get will not let you install packages to your home directory without running as root. But I would love to be proven wrong (evol262?).


You can, however, first download the package with "apt download", and then install it with dpkg --force-not-root --root=$HOME -i foo.deb and so on, but you'll have to do a bit of wrangling to get dpkg to accept this.

As noted in IRC, I'd probably file a bug against apt. If it doesn't support arbitrary dpkg options, it shouldn't give you the choice to pass them. :debian: (note: yum/dnf also don't support this, but at least we don't claim to)

I misinterpreted the earlier claim, in that apt isn't really a package manager at all (dpkg is). I figured that passing options through to dpkg (which does support it) would let apt do it. Guess not. Yay.

Cingulate, try wrapping a for loop around "apt-get install --dry-run", then "apt download"-ing those packages and installing them with the dpkg options kujeger gave; a rough sketch is below. This will sort of work, but there are a million better ways to do this. Like getting a VM/container you can play with, or a dev server where you have root, or whatever.
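
Untested, with "foo" standing in for the real package:

  # the dry run resolves the dependency set; "Inst" lines name what would be installed
  for pkg in $(apt-get install --dry-run foo | awk '/^Inst/ {print $2}'); do
      apt download "$pkg"
  done
  dpkg --force-not-root --root="$HOME" -i ./*.deb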

Cingulate
Oct 23, 2012

by Fluffdaddy

evol262 posted:

Like getting a VM/container you can play with, or a dev server where you have root, or whatever.
I'd like to avoid VMs for performance and complexity reasons; the servers I'm interested in are our expensive powerhouses (for scientific calculations), our admin's response times are a bit on the slow side, and I sometimes like to install arcane software.

Though VMs are probably actually a good idea. They look a bit scary (now I also need to take care of X, regular updates and drivers, just so I can execute a random one-shot science trick?), but it's a good suggestion.
I've also started putting together a script parasitic on apt to get dependencies and packages for local installations.

RFC2324
Jun 7, 2012

http 418

Cingulate posted:

I'd like to avoid VMs for performance and complexity reasons; the servers I'm interested in are our expensive powerhouses (for scientific calculations), our admin's response times are a bit on the slow side, and I sometimes like to install arcane software.

Though VMs are probably actually a good idea. They look a bit scary (now I also need to take care of X, regular updates and drivers, just so I can execute a random one-shot science trick?), but it's a good suggestion.
I've also started putting together a script parasitic on apt to get dependencies and packages for local installations.

Why do regular updates and stuff? Make a copy of the VM's disk image file, turn it on when you need it, and if you break it, copy the file back over the broken machine. You can keep exactly the same testbed for as long as you need this way.
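
In other words (file names made up):

  cp vm-disk.qcow2 vm-disk.golden.qcow2   # stash a known-good copy
  # ...use the VM, break it...
  cp vm-disk.golden.qcow2 vm-disk.qcow2   # copy it back over the broken one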

Cingulate
Oct 23, 2012

by Fluffdaddy

RFC2324 posted:

Why do regular updates and stuff? Make a copy of the VM's disk image file, turn it on when you need it, and if you break it, copy the file back over the broken machine. You can keep exactly the same testbed for as long as you need this way.
Woah, go easy on me. You're speaking to someone whose mouse has only one button.

evol262
Nov 30, 2010
#!/usr/bin/perl

Cingulate posted:

I'd like to avoid VMs for performance and complexity reasons; the servers I'm interested in are our expensive powerhouses (for scientific calculations), our admin's response times are a bit on the slow side, and I sometimes like to install arcane software.

Though VMs are probably actually a good idea. They look a bit scary (now I also need to take care of X, regular updates and drivers, just so I can execute a random one-shot science trick?), but it's a good suggestion.
I've also started putting together a script parasitic on apt to get dependencies and packages for local installations.

For one-shot "science tricks" with arcane software, you should look at something like Docker or Vagrant.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

What are you talking about?

Go read the manpage. CTRL+F "--root" and/or "--instdir"
You know full well this is maybe a 10% solution for anything shy of a full Debian install, because most packages are not trivially relocatable and can't find dependent libraries or command-line tools without major rejiggering of the user environment.

evol262 posted:

For one-shot "science tricks" with arcane software, you should look at something like Docker or Vagrant.
Vagrant is going to be an even larger overhead than just manually maintaining a VM for someone who's not maintaining the infrastructure and tooling to automatically provision their VMs. Docker's on the right track.

Vulture Culture fucked around with this message at 00:31 on Jan 21, 2015

evol262
Nov 30, 2010
#!/usr/bin/perl

Misogynist posted:

You know full well this is maybe a 10% solution for anything shy of a full Debian install, because most packages are not trivially relocatable and can't find dependent libraries or command-line tools without major rejiggering of the user environment.
To be fair, my only experience with this is pulling apart debootstrap, and at least the core packages handle it. Multiverse/universe may be a lot iffier. I'm not a Debian person, so I'm just trusting their manpage.

Misogynist posted:

Vagrant is going to be an even larger overhead than just manually maintaining a VM for someone who's not maintaining the infrastructure and tooling to automatically provision their VMs. Docker's on the right track.
I think Vagrant is significantly easier than manually installing/managing/maintaining a VM, and maybe better than Docker, which has a number of quirks and weird limitations for user software and isn't intended to be kept up to date without getting a new base image.

Vagrant's shell provisioner (config.vm.provision "shell", ...) is covered in the official docs, is plain shell, and can easily run an "apt-get update && ..." when you "vagrant up".
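
A minimal Vagrantfile in that spirit (a sketch, untested; the box name and package are made-up examples):

  Vagrant.configure("2") do |config|
    config.vm.box = "ubuntu/trusty64"
    config.vm.provision "shell",
      inline: "apt-get update && apt-get install -y foo"
  end

Then "vagrant up" boots the box and runs the provisioner, and "vagrant ssh" drops you into it.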

captkirk
Feb 5, 2010
Man, I'm horrified that I can't figure this out myself, but my team leader asked me if I knew what was being highlighted when you turn on row highlighting in top. Anyone know what the significance of row highlighting in top is?

EDIT: Never mind, I get it now. From the manpage:

y :Row-Highlight toggle
Changes highlighting for "running" tasks. For additional insight into this task state, see topic 3a.
DESCRIPTIONS of Fields, the 'S' field (Process Status).

Use of this provision provides important insight into your system's health. The only costs will be a
few additional tty escape sequences.

and


20. S -- Process Status
The status of the task which can be one of:
D = uninterruptible sleep
R = running
S = sleeping
T = traced or stopped
Z = zombie

captkirk fucked around with this message at 17:53 on Jan 21, 2015

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

How do I get Google Drive to sync my files, given that Google hates Linux?

Also, why does Google hate Linux?

SurgicalOntologist
Jun 17, 2004

It's not great, but...

https://github.com/astrada/google-drive-ocamlfuse

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
I had never considered OCaml as a consumer for services like that, but the matcher code is actually pretty terse and readable. Neat!

evol262
Nov 30, 2010
#!/usr/bin/perl

Misogynist posted:

I had never considered OCaml as a consumer for services like that, but the matcher code is actually pretty terse and readable. Neat!

Not a consumer for services like this, but rwmjones loves OCaml, and a shocking amount of the guestfish/libguestfs tools are in OCaml

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

Not a consumer for services like this, but rwmjones loves OCaml, and a shocking amount of the guestfish/libguestfs tools are in OCaml
When I interviewed at Jane Street a few years back, they were dangerously close to writing their own configuration management system in OCaml instead of using Puppet. (Thankfully, they didn't.)

waffle iron
Jan 16, 2004

peepsalot posted:

How do I get Google Drive to sync my files, given that Google hates Linux?

Also, why does Google hate Linux?

On the same day Google Drive was announced, multiple Google product managers said that a Linux client was coming soon. That was 2 years ago.

I did see recently that the next major version of GNOME 3 wants to have Google Drive support.

mod sassinator
Dec 13, 2006
I came here to Kick Ass and Chew Bubblegum,
and I'm All out of Ass
It sucks royally, but just install Dropbox. It's ridiculous that Google can't or won't make a Linux client for syncing Drive files. They have a stupid command-line tool, but it's a manually-run thing for sending files up and down to Drive.

Powered Descent
Jul 13, 2008

We haven't had that spirit here since 1969.

Dropbox works. Copy.com also has a pretty decent Linux client.

If you're paranoid about security, you can set up that folder as an encfs or ecryptfs volume. That way the cloud service only ever sees encrypted files, and even if they get completely compromised, your stuff is still safe. I prefer encfs because setup is easier: run encfs /path/to/dropboxfolder /path/to/plaintextview for either initial setup or mounting the volume later, then just use your files in /path/to/plaintextview normally. Your Dropbox folder gets only the scrambled files.
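
Concretely (paths are just examples):

  encfs ~/Dropbox/.encrypted ~/Private   # first run walks you through setup; later runs just mount
  # ...work on files in ~/Private; only ciphertext lands in ~/Dropbox/.encrypted...
  fusermount -u ~/Private                # unmount when done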

I'm sure the cloud storage people aren't fond of this because it wouldn't play well with whatever compression and de-duplication they do, but eh, whatever. There are probably about nine of us on the planet who bother to do this at all, and we don't take up much space.

The joy of :tinfoil:.

Celexi
Nov 25, 2006

Slava Ukraini!
I have been using Insync, as I have a bunch of space in Google Drive and not much in Dropbox. It does work fine; unfortunately, it's not free.

SamDabbers
May 26, 2003



Check out SpiderOak. The Linux client works very well, and everything gets encrypted and deduplicated before it's sent off to their servers.

ToxicFrog
Apr 26, 2008


Yeah, but those of us who have 1TB of space in Drive and 1GB in Dropbox or similar would rather use Drive. :(

I may check out that command line tool; I mostly just want Drive for offsite backups of stuff, so running it nightly in a cron job wouldn't be a big deal.
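
Something like this in the crontab, presumably ("drive push" is a stand-in for whatever the tool's actual upload command turns out to be):

  # nightly push of ~/backups up to Drive at 3am
  0 3 * * * cd $HOME/backups && drive push >> $HOME/.drive-sync.log 2>&1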

Death Vomit Wizard
May 8, 2006
Bottom Feeder
I need to install a program from source on my CentOS server. Isn't there something I can use that will track what 'make install' did and let me manage the installed program as though I had installed it with yum? I can't remember where I read about it or what it is called...

spankmeister
Jun 15, 2008

Yeah it's called writing a spec file and building an rpm. ;)

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Is there a way to see read/write bandwidth per file on a Samba server?

For example, my file server is seeing around 800 Mbit/s of traffic at the moment while several computers are reading and writing to it. I'd like to see, on the server, which files are being read/written to and how much of that 800 Mbit/s each file accounts for.

evol262
Nov 30, 2010
#!/usr/bin/perl

Thermopyle posted:

Is there a way to see read/write bandwidth per file on a Samba server?

For example, my file server is seeing around 800 Mbit/s of traffic at the moment while several computers are reading and writing to it. I'd like to see, on the server, which files are being read/written to and how much of that 800 Mbit/s each file accounts for.

If you're using in-kernel smbfs, and you're running on a system where you can put a systemtap-enabled kernel, then I may be able to put something together.

Otherwise, I/O accounting is handled per-process, and it's extremely likely that smbd has multiple open descriptors per file.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

evol262 posted:

If you're using in-kernel smbfs, and you're running on a system where you can put a systemtap-enabled kernel, then I may be able to put something together.

Otherwise, I/O accounting is handled per-process, and it's extremely likely that smbd has multiple open descriptors per file.

I don't know what most of that first sentence means. Well, I guess I know what it means, because I can read words; I just don't know if they apply to me. I'm running Ubuntu 14.04 Server with however that is set up by default.

Sounds like it's more trouble than it's worth, as it's really just idle curiosity at this point.

Longinus00
Dec 29, 2005
Ur-Quan

Death Vomit Wizard posted:

I need to install a program from source on my CentOS server. Isn't there something I can use that will track what 'make install' did and let me manage the installed program as though I had installed it with yum? I can't remember where I read about it or what it is called...

http://www.asic-linux.com.mx/~izto/checkinstall/

The best way is to do as spankmeister suggested however.
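
The usual checkinstall flow is roughly this (a sketch, untested):

  ./configure && make
  sudo checkinstall --type=rpm   # runs "make install", tracks the files, builds and installs an RPM

and you end up with a package that rpm/yum knows how to remove.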

Docjowles
Apr 9, 2009

FPM ("loving Package Management") is a nice wrapper around the package build process if you don't want to care about spec files. You can just point it at a directory and tell it "make this poo poo into an RPM (or deb, or gem, or whatever)" and it will do it.

fatherdog
Feb 16, 2005

spankmeister posted:

Yeah it's called writing a spec file and building an rpm. ;)

This is actually surprisingly easy once you get used to the macros.

evol262
Nov 30, 2010
#!/usr/bin/perl

fatherdog posted:

This is actually surprisingly easy once you get used to the macros.

You don't really need any macros to install from a source tarball, other than %{_bindir}, I guess.

This should practically be doable with a copy+paste and a little editing.
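
A minimal spec in that spirit, assuming a bog-standard autotools tarball (every name here is a placeholder):

  Name:           mytool
  Version:        1.0
  Release:        1%{?dist}
  Summary:        mytool, built from source
  License:        GPLv2
  Source0:        mytool-%{version}.tar.gz

  %description
  mytool built locally so yum/rpm can manage it.

  %prep
  %setup -q

  %build
  %configure
  make %{?_smp_mflags}

  %install
  make install DESTDIR=%{buildroot}

  %files
  %{_bindir}/mytool

Put it in ~/rpmbuild/SPECS, drop the tarball in ~/rpmbuild/SOURCES, and "rpmbuild -ba mytool.spec" does the rest.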

Thermopyle posted:

I don't know what most of that first sentence means. Well, I guess I know what it means, because I can read words, I just don't know if they apply to me. I'm running Ubuntu 14.04 Server with however that is set up by default.

Sounds like it's more trouble then its worth as it's really just idle curiosity at this point.

The kernel supports smbfs mounting natively, but the tools are mostly userspace. It doesn't need to be a kernel driver, though. So basically, if you're using Linux clients with the kernel driver (which is probably the default) on a distro which supports systemtap (Ubuntu/Debian probably doesn't, though I haven't checked in a long time), this is doable.

E: Ubuntu says they support it, though I'm not sure how well it works.

Systemtap probes are basically Dtrace-ish for Linux, and they're trivial to write. Find the part of the kernel you want to monitor. Add a little code (if it's not already there). Reload it. Then you have a probe which says "every time this function is called, do something" ("something" is usually updating a counter in a struct systemtap looks at). You can do some really, really neat things with it which are usually in the realm of "impossible", like tracking exactly what every NFS client is doing, what their IP address is, how much bandwidth they're consuming, when they issue reads/unlinks/etc to the server, et al.

This could be a basic skeleton, with a couple of tweaks, mostly doable from the systemtap examples page: hook a PID (or every smbd PID) and watch the I/O per file. Easily. Trivially, even. If systemtap works. Can anyone confirm whether this actually works on Ubuntu?
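
In that spirit, a rough skeleton cribbed from the shapes on the examples page (untested; it keys on device+inode because resolving full paths inside the vfs probes is fiddly; map an inode back to a path with "find /srv/share -inum N"):

  #!/usr/bin/stap
  global reads, writes

  probe vfs.read.return {
    if (execname() == "smbd" && $return > 0)
      reads[devname, ino] += $return
  }

  probe vfs.write.return {
    if (execname() == "smbd" && $return > 0)
      writes[devname, ino] += $return
  }

  # dump and reset the counters every 5 seconds
  probe timer.s(5) {
    printf("%-10s %-10s %12s %12s\n", "DEV", "INODE", "READ", "WRITTEN")
    foreach ([d, i] in reads- limit 20)
      printf("%-10s %-10d %12d %12d\n", d, i, reads[d, i], writes[d, i])
    delete reads
    delete writes
  }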

Death Vomit Wizard
May 8, 2006
Bottom Feeder

Death Vomit Wizard posted:

I need to install a program from source on my CentOS server. Isn't there something I can use that will track what 'make install' did and let me manage the installed program as though I had installed it with yum? I can't remember where I read about it or what it is called...

spankmeister posted:

Yeah it's called writing a spec file and building an rpm. ;)

Longinus00 posted:

http://www.asic-linux.com.mx/~izto/checkinstall/

The best way is to do as spankmeister suggested however.

Docjowles posted:

FPM ("loving Package Management") is a nice wrapper around the package build process if you don't want to care about spec files. You can just point it at a directory and tell it "make this poo poo into an RPM (or deb, or gem, or whatever)" and it will do it.

fatherdog posted:

This is actually surprisingly easy once you get used to the macros.

evol262 posted:

You don't really need any macros to install from a source tarball, other than %{_bindir}, I guess.

This should practically be doable with a copy+paste and a little editing.

Wow, thanks for the homework. I love you all!

xtal
Jan 9, 2011

by Fluffdaddy
I use a Raspberry Pi to back up my Google Drive. It uses gdrivefs, a FUSE driver for Google Drive, and syncs it to a btrfs filesystem every hour (which snapshots itself every day). You can use gdrivefs on your local computer and, if the ridiculous latency bothers you, mount it in a hidden directory and use rsync to keep it in sync with a directory on your storage drive. It's not as seamless as Dropbox, but it works well. Google Drive's superior integration with my Chromebook and Android phone makes it worthwhile.
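
The sync-and-snapshot half is just a couple of commands (a sketch; assumes the gdrivefs mount is at /mnt/gdrive and /data/gdrive is a btrfs subvolume, paths made up):

  rsync -a --delete /mnt/gdrive/ /data/gdrive/
  btrfs subvolume snapshot -r /data/gdrive /data/snapshots/gdrive-$(date +%F)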

Death Vomit Wizard
May 8, 2006
Bottom Feeder
I have successfully set up a bridge for my VM using NetworkManager in Fedora 21, following this guide. My question is: what do I do for my next VM? Add something to my bridge0? Follow the guide again and make a bridge1?

evol262
Nov 30, 2010
#!/usr/bin/perl

Death Vomit Wizard posted:

I have successfully set up a bridge for my VM using NetworkManager in Fedora 21, following this guide. My question is: what do I do for my next VM? Add something to my bridge0? Follow the guide again and make a bridge1?

Add it to br0 (or whatever you named it). Linux bridges are simple layer-2 things, but they can have practically unlimited slaves (practically unlimited in that you'll never hit the limit in practice).
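
If the VMs are libvirt-managed, attaching the new one to the existing bridge is one command (the domain name is an example):

  virsh attach-interface myvm2 bridge bridge0 --model virtio --config --live

or just pick the same bridge in virt-manager's NIC settings for the new VM.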


Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope
I'm not sure if this is a Linux question or an Ubuntu question, but here goes: I'm making a continuous integration server for our company using Jenkins. One of the jobs I'm doing is supposed to migrate our old MySQL database to a new structure. One of the steps is dumping the current contents to text files.

At first I tried using the Jenkins job workspace for the dumps, /var/lib/jenkins/jobs/migrate_production_db/workspace, but I got this error:
"mysqldump: Got error: 1: Can't create/write to file <path to file> (Errcode: 13) when executing 'SELECT INTO OUTFILE'"

Okay, I googled that and it's probably AppArmor's fault.

I managed to get the job working by using
sudo aa-complain /usr/sbin/mysqld
and by having the Jenkins job create a directory /tmp/migration_work and then using chmod 777 on the directory (because the directory is owned by jenkins:jenkins and mysql runs under mysql:mysql). This is not a huge problem as long as I'm messing around at home on my own server, but I'd like to put Jenkins and the jobs onto an internet-facing server, and I feel like disabling AppArmor would be pretty dumb in that case. Also, I'd like to fix the configuration on my own machine as well, since this was a "just loving work, goddammit" fix.

Anyway, what would be a less dumb way of doing this? I don't think using /tmp/migration_work is a huge problem, but I'd prefer to use the Jenkins workspace instead.
