|
Longinus00 posted:Imagine if any user could just install a version of a library/program with security vulnerabilities (old bash/openssl for example) and have it apply system wide.
|
# ? Jan 20, 2015 19:43 |
|
Cingulate posted:I wasn't asking for the ability to write to restricted locations at all. If I had root, I could just use apt-get in the first place. The question was about local installations. I believe the usual solution to this on windows use completely self contained application binaries/packages (e.g. http://portableapps.com/) since you can just as easily be denied the ability to install applications in windows. I suppose this doesn't happen as much in linux because there's not nearly as much demand and the few people who want it can just do it themselves.
|
# ? Jan 20, 2015 20:11 |
|
Some package managers support installing to a user home. Apt does not.
|
# ? Jan 20, 2015 20:15 |
|
kujeger posted:Some package managers support installing to a user home. Apt does not. What are you talking about? Go read the manpage. CTRL+F "--root" and/or "--instroot"
|
# ? Jan 20, 2015 20:23 |
|
evol262 posted:What are you talking about? I was talking specifically about apt(-get), since that's what he was asking about. But reading the manpage for apt-get, I see now that you can pass on options to dpkg so never mind me!
|
# ? Jan 20, 2015 20:42 |
|
So should I actually try apt-get with --instdir=~/foo and --force-not-root?
|
# ? Jan 20, 2015 20:48 |
|
Actually testing this stuff, I'm going to go back to claiming that apt-get will not let you, without running as root, install packages to your home directory. But I would love to be proven wrong (evol262?). You can, however, first download the package with "apt download" and then install it with dpkg --force-not-root --root=$HOME -i foo.deb and so on, but you'll have to do a bit of wrangling to get dpkg to accept this. kujeger fucked around with this message at 22:02 on Jan 20, 2015 |
# ? Jan 20, 2015 20:59 |
|
Cingulate posted:So should I actually try apt-get with --instdir=~/foo and --force-not-root? My honest answer is that you should requisition a container, open a ticket to have it installed, or build it from source. See below. kujeger posted:Actually testing this stuff, I'm going to go back to claiming that apt-get will not let you, without running as root, install packages to your home directory. But I would love to be proven wrong (evol262?). As noted in IRC, I'd probably file a bug against apt. If it doesn't support arbitrary dpkg options, it shouldn't give you the choice to pass them. :debian: (note: yum/dnf also don't support this, but at least we don't claim to) I misinterpreted the earlier claim, in that apt isn't a package manager at all. I figured that passing options to dpkg (which does support it) would let apt do it. Guess not. Yay. Cingulate, try wrapping a for loop around "apt-get install --dry-run", then "apt download"-ing those packages and installing them with the dpkg options kujeger gave. This will work, sort of, but there are a million better ways to do this. Like getting a VM/container you can play with, or a dev server where you have root, or whatever.
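Pieced together from kujeger's and evol262's posts, the whole workflow might look roughly like this. It's a hedged sketch, not a tested recipe: the package name foo and the ~/local target are placeholders, and many packages' maintainer scripts will still fail without root.

```shell
# Give dpkg a private database under the target root, since it refuses
# to run without one (this is the "wrangling" mentioned above):
mkdir -p ~/local/var/lib/dpkg/updates ~/local/var/lib/dpkg/info
touch ~/local/var/lib/dpkg/status

# Resolve the dependency list without touching the system:
apt-get install --dry-run foo | awk '/^Inst/ {print $2}' > /tmp/pkgs

# Fetch each .deb, then unpack them all into ~/local:
while read -r pkg; do apt-get download "$pkg"; done < /tmp/pkgs
for deb in ./*.deb; do
    dpkg --force-not-root --force-bad-path --root="$HOME/local" -i "$deb"
done
```

Even when this works, you still have to point PATH and LD_LIBRARY_PATH at ~/local/usr/bin and ~/local/usr/lib yourself, which is part of why the VM/container suggestions keep coming up.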
|
# ? Jan 20, 2015 22:51 |
|
evol262 posted:Like getting a VM/container you can play with, or a dev server where you have root, or whatever. Though VMs are probably actually a good idea. They look a bit scary (now I also need to take care of X, regular updates and drivers, just so I can execute a random one-shot science trick?), but it's a good suggestion. I've also started putting together a script parasitic on apt to get dependencies and packages for local installations.
|
# ? Jan 20, 2015 22:56 |
|
Cingulate posted:I'd like to avoid VMs for performance and complexity reasons, the servers I'm interested in are our expensive power houses (for scientific calculations), our admin's response times are a bit on the slow side and I sometimes like to install arcane software. Why do regular updates and stuff? Make a copy of the hdd file, turn it on when you need it, and if you break it copy the file back overwriting the broken machine. You can keep it exactly the same testbed for as long as you need this way.
|
# ? Jan 20, 2015 23:01 |
|
RFC2324 posted:Why do regular updates and stuff? Make a copy of the hdd file, turn it on when you need it, and if you break it copy the file back overwriting the broken machine. You can keep it exactly the same testbed for as long as you need this way.
|
# ? Jan 20, 2015 23:04 |
|
Cingulate posted:I'd like to avoid VMs for performance and complexity reasons, the servers I'm interested in are our expensive power houses (for scientific calculations), our admin's response times are a bit on the slow side and I sometimes like to install arcane software. For one-shot "science tricks" with arcane software, you should look at something like Docker or Vagrant
|
# ? Jan 20, 2015 23:46 |
|
evol262 posted:What are you talking about? evol262 posted:For one-shot "science tricks" with arcane software, you should look at something like Docker or Vagrant Vulture Culture fucked around with this message at 00:31 on Jan 21, 2015 |
# ? Jan 21, 2015 00:28 |
|
Misogynist posted:You know full well this is maybe a 10% solution for any system shy of a Debian full install, because most packages are not trivially relocatable and are not able to find dependent libraries or command-line tools without major rejiggering of the user environment. Misogynist posted:Vagrant is going to be an even larger overhead than just manually maintaining a VM for someone who's not maintaining the infrastructure and tooling to automatically provision their VMs. Docker's on the right track. Vagrant's "config.vm.provision = shell..." is covered in the official howto, is plain shell, and can easily manage an "apt-get update && ..." when you "vagrant up".
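For reference, the provisioning stanza being argued about is only a few lines of Vagrantfile; the box name and package list below are made-up placeholders:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  # Runs once on "vagrant up", so the guest updates itself and pulls in
  # whatever packages the job needs without any extra tooling:
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y build-essential
  SHELL
end
```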
|
# ? Jan 21, 2015 02:08 |
|
Man, I'm horrified I can't figure this out myself, but my team leader asked me if I knew what was being highlighted when you turned on row highlighting in top. Anyone know the significance of row highlighting in top? EDIT: Never mind, I get it now. From the manpage:

y :Row-Highlight toggle
    Changes highlighting for "running" tasks. For additional insight into this task state, see topic 3a. DESCRIPTIONS of Fields, the 'S' field (Process Status). Use of this provision provides important insight into your system's health. The only costs will be a few additional tty escape sequences.

and

20. S -- Process Status
    The status of the task which can be one of:
        D = uninterruptible sleep
        R = running
        S = sleeping
        T = traced or stopped
        Z = zombie

captkirk fucked around with this message at 17:53 on Jan 21, 2015 |
# ? Jan 21, 2015 17:49 |
|
How do I get Google Drive to sync my files, given that Google hates Linux? Also, why does Google hate Linux?
|
# ? Jan 22, 2015 21:56 |
|
It's not great, but... https://github.com/astrada/google-drive-ocamlfuse
|
# ? Jan 22, 2015 22:22 |
|
SurgicalOntologist posted:It's not great, but...
|
# ? Jan 22, 2015 22:28 |
|
Misogynist posted:I had never considered OCaml as a consumer for services like that, but the matcher code is actually pretty terse and readable. Neat! Not a consumer for services like this, but rwmjones loves OCaml, and a shocking amount of the guestfish/libguestfs tools are in OCaml
|
# ? Jan 22, 2015 23:08 |
|
evol262 posted:Not a consumer for services like this, but rwmjones loves OCaml, and a shocking amount of the guestfish/libguestfs tools are in OCaml
|
# ? Jan 22, 2015 23:20 |
|
peepsalot posted:How do I get google drive to sync my files because google hates linux. On the same day Google Drive was announced, multiple Google product managers said that a Linux client was coming soon. That was two years ago. I did see recently that the next major version of GNOME 3 wants to have Google Drive support.
|
# ? Jan 23, 2015 01:10 |
|
It sucks royally, but just install Dropbox. Ridiculous that Google can't or won't make a Linux client for syncing drive files. They have a stupid command line tool but it's a manually run thing to send files up and down to drive.
|
# ? Jan 23, 2015 03:47 |
|
Dropbox works. Copy.com also has a pretty decent Linux client. If you're security paranoid, you can set up that folder as an encfs or ecryptfs volume. That way the cloud service only ever sees encrypted files, and even if they get completely compromised, your stuff is still safe. I prefer encfs because setup is easier. encfs /path/to/dropboxfolder /path/to/plaintextview for either initial setup or mounting the volume later, then just use your files in /path/to/plaintextview normally. Your dropbox folder gets only the scrambled files. I'm sure the cloud storage people aren't fond of this because it wouldn't play well with whatever compression and de-duplication they do, but eh, whatever. There are probably about nine of us on the planet who bother to do this at all, and we don't take up much space. The joy of .
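The full encfs round trip is short; the folder names below are placeholders:

```shell
mkdir -p ~/Dropbox/.encrypted ~/private
# First run walks you through creating a config and password;
# later runs just prompt for the password and mount:
encfs ~/Dropbox/.encrypted ~/private

# ...work on files in ~/private as usual; Dropbox only ever syncs the
# ciphertext sitting in ~/Dropbox/.encrypted...

fusermount -u ~/private   # unmount the plaintext view when done
```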
|
# ? Jan 23, 2015 04:30 |
|
I have been using Insync, as I have a bunch of space in Google Drive and not much in Dropbox. It works fine; unfortunately, it's not free.
|
# ? Jan 23, 2015 06:53 |
|
Check out SpiderOak. The Linux client works very well, and everything gets encrypted and deduplicated before it's sent off to their servers.
|
# ? Jan 23, 2015 16:23 |
|
Yeah, but those of us who have 1TB of space in Drive and 1GB in Dropbox or similar would rather use Drive. I may check out that command line tool, I mostly just want Drive for offsite backups of stuff so running it nightly in a cron job wouldn't be a big deal.
|
# ? Jan 23, 2015 16:33 |
|
I need to install a program from source on my CentOS server. Isn't there something I can use that will track what 'make install' did and let me manage the installed program as though I had installed it with yum? I can't remember where I read about it or what it is called...
|
# ? Jan 24, 2015 14:53 |
|
Yeah it's called writing a spec file and building an rpm.
|
# ? Jan 24, 2015 16:01 |
|
Is there a way to see read/write bandwidth per file on a samba server? For example, my file server is seeing around 800 mbit/s of traffic at the moment while several computers are reading and writing to it. I'd like to see, on the server, which files are being read from or written to, and how much of that 800 mbit/s each file accounts for.
|
# ? Jan 24, 2015 17:24 |
|
Thermopyle posted:Is there a way to see read/write bandwidth per file on a samba server? If you're using in-kernel smbfs, and you're running on a system where you can put a systemtap-enabled kernel, then I may be able to put something together. Otherwise, I/O accounting is handled per-process, and it's extremely likely that smbd has multiple open descriptors per file
|
# ? Jan 24, 2015 18:20 |
|
evol262 posted:If you're using in-kernel smbfs, and you're running on a system where you can put a systemtap-enabled kernel, then I may be able to put something together. I don't know what most of that first sentence means. Well, I guess I know what it means, because I can read words, I just don't know if they apply to me. I'm running Ubuntu 14.04 Server with however that is set up by default. Sounds like it's more trouble than it's worth as it's really just idle curiosity at this point.
|
# ? Jan 24, 2015 18:23 |
|
Death Vomit Wizard posted:I need to install a program from source on my CentOS server. Isn't there something I can use that will track what 'make install' did and let me manage the installed program as though I had installed it with yum? I can't remember where I read about it or what it is called... http://www.asic-linux.com.mx/~izto/checkinstall/ The best way is to do as spankmeister suggested however.
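A typical checkinstall run just replaces the final "make install" step; the package name and version here are placeholders:

```shell
tar xf foo-1.0.tar.gz && cd foo-1.0
./configure --prefix=/usr
make
# Builds and installs a real package (an RPM on CentOS) that rpm/yum
# can later query and cleanly remove:
sudo checkinstall --pkgname=foo --pkgversion=1.0
```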
|
# ? Jan 24, 2015 19:04 |
|
FPM ("loving Package Management") is a nice wrapper around the package build process if you don't want to care about spec files. You can just point it at a directory and tell it "make this poo poo into an RPM (or deb, or gem, or whatever)" and it will do it.
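A hypothetical fpm invocation, with all of the names made up:

```shell
# Stage the install into a scratch root instead of the real filesystem:
make install DESTDIR=/tmp/foo-root

# Point fpm at the staged tree and let it emit an RPM (swap -t rpm for
# -t deb on Debian-family systems):
fpm -s dir -t rpm -n foo -v 1.0 -C /tmp/foo-root usr
```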
|
# ? Jan 24, 2015 20:01 |
|
spankmeister posted:Yeah it's called writing a spec file and building an rpm. This is actually surprisingly easy once you get used to the macros.
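For the curious, a minimal spec file for a plain autotools tarball really is mostly macros. Everything below is a generic sketch, not any particular package:

```
Name:           foo
Version:        1.0
Release:        1%{?dist}
Summary:        Example package
License:        MIT
Source0:        %{name}-%{version}.tar.gz

%description
Example package built from a source tarball.

%prep
%setup -q

%build
%configure
make %{?_smp_mflags}

%install
make install DESTDIR=%{buildroot}

%files
%{_bindir}/foo
```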
|
# ? Jan 24, 2015 20:29 |
|
fatherdog posted:This is actually surprisingly easy once you get used to the macros. You don't really need any macros to install from a source tarball, other than %bindir, I guess This should practically fulfill doing it with a copy+paste and a little editing. Thermopyle posted:I don't know what most of that first sentence means. Well, I guess I know what it means, because I can read words, I just don't know if they apply to me. I'm running Ubuntu 14.04 Server with however that is set up by default. The kernel supports smbfs mounting natively, but the tools are mostly userspace. It doesn't need to be a kernel driver, though. So basically, if you're using Linux clients with the kernel driver (which is probably the default) on a distro which supports systemtap (Ubuntu/Debian probably doesn't, though I haven't checked in a long time), this is doable. E: Ubuntu says they support it, though I'm not sure how well it works. Systemtap probes are basically Dtrace-ish for Linux, and they're trivial to write. Find the part of the kernel you want to monitor. Add a little code (if it's not already there). Reload it. Then you have a probe which says "every time this function is called, do something" ("something" is usually updating a counter in a struct systemtap looks at). You can do some really, really neat things with it which are usually in the realm of "impossible", like tracking exactly what every NFS client is doing, what their IP address is, how much bandwidth they're consuming, when they issue reads/unlinks/etc to the server, et al. This could be a basic skeleton, with a couple of tweaks, mostly doable from the systemtap examples page. Hook a PID (or every smbd PID) and watch the IO per file. Easily. Trivially, even. If systemtap works. Can anyone confirm whether this actually works on Ubuntu?
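To give a flavor of what those probes look like, here is a rough per-process I/O counter adapted from the published systemtap examples. It is untested here, counts per process rather than per file, and tapset names can vary between versions:

```
global reads, writes

probe vfs.read.return  { if ($return > 0) reads[pid(), execname()]  += $return }
probe vfs.write.return { if ($return > 0) writes[pid(), execname()] += $return }

# Every 5 seconds, print the top readers and reset the counters:
probe timer.s(5) {
  foreach ([p, name] in reads- limit 10)
    printf("%d %s read %d bytes\n", p, name, reads[p, name])
  delete reads
  delete writes
}
```

Getting down to per-file granularity means pulling the pathname out of the file structures in the probe context, which is where the "couple of tweaks" come in.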
|
# ? Jan 25, 2015 02:05 |
|
Death Vomit Wizard posted:I need to install a program from source on my CentOS server. Isn't there something I can use that will track what 'make install' did and let me manage the installed program as though I had installed it with yum? I can't remember where I read about it or what it is called... spankmeister posted:Yeah it's called writing a spec file and building an rpm. Longinus00 posted:http://www.asic-linux.com.mx/~izto/checkinstall/ Docjowles posted:FPM ("loving Package Management") is a nice wrapper around the package build process if you don't want to care about spec files. You can just point it at a directory and tell it "make this poo poo into an RPM (or deb, or gem, or whatever)" and it will do it. fatherdog posted:This is actually surprisingly easy once you get used to the macros. evol262 posted:You don't really need any macros to install from a source tarball, other than %bindir, I guess Wow, thanks for the homework. I love you all!
|
# ? Jan 25, 2015 03:41 |
|
I use a Raspberry Pi to back up my Google Drive. It uses gdrivefs, a FUSE driver for Google Drive, and syncs it to a btrfs filesystem every hour (which snapshots itself every day). You can use gdrivefs on your local computer and, if the ridiculous latency bothers you, mount it in a hidden directory and use rsync to keep it in sync with a directory on your storage drive. It's not as seamless as Dropbox, but it works well. Google Drive's superior integration with my Chromebook and Android phone makes it worthwhile.
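The rsync half of that setup is a one-liner; the mountpoint and mirror paths below are placeholders:

```shell
# Pull the (slow) FUSE view of Drive into a fast local mirror.
# --delete propagates removals too, so snapshot the mirror first if you
# care about surviving accidental deletions:
rsync -a --delete ~/.gdrive/ ~/drive-mirror/
```

Dropping that line into crontab with an `0 * * * *` schedule gives the hourly sync described above.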
|
# ? Jan 25, 2015 04:08 |
|
I have successfully set up a bridge for my VM using Network Manager in Fedora 21 using this guide. My question is, what do I do for my next VM? Add something to my bridge0? Follow the guide again and make a bridge1?
|
# ? Jan 25, 2015 09:08 |
|
Death Vomit Wizard posted:I have successfully set up a bridge for my VM using Network Manager in Fedora21 using this guide. My question is, what do I do for my next VM? Add something to my bridge0? Follow the guide again and make a bridge1? Add it to br0 or whatever you named it. Linux bridges are simple ARP things, but they can have practically unlimited slaves (practically unlimited in that you'll never hit the limit in practice)
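Concretely, with libvirt you just hand every new guest the same bridge name (br0 here; substitute whatever the guide had you call it). All the other values are placeholders:

```shell
# New guest attached to the existing bridge:
virt-install --name vm2 --ram 2048 --disk size=20 \
    --network bridge=br0 --cdrom /path/to/install.iso
```

An existing guest can be moved onto the bridge by changing its <interface> stanza with `virsh edit`.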
|
# ? Jan 25, 2015 17:37 |
|
|
I'm not sure if this is a Linux question or an Ubuntu question, but here goes: I'm making a continuous integration server for our company using Jenkins. One of the jobs I'm doing is supposed to migrate our old MySQL database to a new structure. One of the steps is dumping the current contents to text files. At first I tried using the jenkins job workspace for the dumps: /var/lib/jenkins/jobs/migrate_production_db/workspace but I got this error: "mysqldump: Got error: 1: Can't create/write to file <path to file> (Errcode: 13) when executing 'SELECT INTO OUTFILE'" Okay, I googled that and it's probably AppArmor's fault. I managed to get the job working by using sudo aa-complain /usr/sbin/mysqld and by having the jenkins job create a directory /tmp/migration_work and then using chmod 777 on the directory (because the directory is owned by jenkins:jenkins and mysql runs under mysql:mysql). This is not a huge problem as long as I'm messing around at home on my own server, but I'd like to put Jenkins and the jobs onto an internet-facing server and I feel like disabling AppArmor would be pretty dumb in that case. Also, I'd like to fix the configuration on my own machine as well, since this was a "just loving work, goddammit" fix. Anyway, what would be a less dumb way of doing this? I think using /tmp/migration_work is not a huge problem, but I think I'd prefer to use the jenkins workspace instead.
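One less drastic option than aa-complain'ing the whole profile, sketched here untested: Ubuntu's stock mysqld profile includes a local override file, so you can grant access to just the workspace tree and leave enforcement on everywhere else. The path pattern is an assumption about your Jenkins layout:

```shell
# Allow mysqld to write under the Jenkins workspace tree only:
echo '/var/lib/jenkins/jobs/**/workspace/** rw,' | \
    sudo tee -a /etc/apparmor.d/local/usr.sbin.mysqld

# Reload the profile so the change takes effect:
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld
```

The directory itself still has to be writable by the mysql user, since SELECT ... INTO OUTFILE writes as mysqld; a shared group between jenkins and mysql is a less drastic fix than chmod 777.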
|
# ? Jan 25, 2015 19:06 |