|
evol262 posted:Does this mean iMac G5s? Yes, it's an iMac G5. The link you posted was very insightful, but I still couldn't get it to work for me, even when I used Puppy Linux on another computer to follow the dd steps. I tried formatting the drive as HFS+ using GParted next, but the iMacs still won't recognize any boot drive I try. I guess it's not worth the effort, because this is taking way too much time.
|
# ? Jun 4, 2014 19:28 |
|
|
|
Why doesn't Chromium view PDFs "inline" like Chrome? edit: Solution found and it works: https://chrome.google.com/webstore/detail/pdf-viewer/oemmndcbldboiebfnladdacbdfmadadm/related the fucked around with this message at 19:49 on Jun 4, 2014 |
# ? Jun 4, 2014 19:47 |
|
The PDF viewer is licensed from Foxit Software. That said, it was recently open-sourced as PDFium, so you might see it in Chromium soon.
|
# ? Jun 4, 2014 19:49 |
|
I'm using Debian sid and KDE. How do I completely disable paste-on-middle-mouse-button, without disabling the MMB in general? I'm not sure whether it's in Xorg, GTK, Qt, several of those, or what; I just know I want it gone. (Apart from being horribly annoying, it's actually a security issue in my view. A webpage receives a paste event when I middle-click it, so I just have to mis-click a link by a pixel and Joe's Random Site receives whatever I last selected. What if it was a password?) I've heard that Wayland might put an end to this default behaviour, but until then, how do I kill it with fire?
|
# ? Jun 4, 2014 21:33 |
|
telcoM posted:I've seen something like that before, when dealing with Optimus. Thank you for your help. I tried "echo $DISPLAY" and it does give me :0. But "ls -lA /tmp/.X11-unix/" is giving me just this: code:
|
# ? Jun 5, 2014 00:32 |
|
Seven Round Things posted:I'm using Debian sid and KDE. How do I completely disable paste-on-middle-mouse-button, without disabling the MMB in general? I'm not sure whether it's in Xorg, GTK, Qt, several of those, or what; I just know I want it gone. It's done by every toolkit individually. To turn it off in GTK+ 3 you can disable the "enable-primary-paste" setting; the key needs to live under a [Settings] section, and the directory may not exist yet: $ mkdir -p ~/.config/gtk-3.0 && printf '[Settings]\ngtk-enable-primary-paste = false\n' >> ~/.config/gtk-3.0/settings.ini
|
# ? Jun 5, 2014 00:38 |
|
Seven Round Things posted:Wayland might put an end to this default behaviour I think I middle-click paste on average every 30 minutes or so, and I'd be extremely unhappy if it went away. Good to know that it's done by each toolkit. Lysidas fucked around with this message at 03:04 on Jun 5, 2014 |
# ? Jun 5, 2014 03:02 |
|
Lysidas posted:I think I middle-click paste on average every 30 minutes or so, and I'd be extremely unhappy if this is the case. Good to know that it's done by each toolkit. Wayland does not have multiple independent clipboards. We can support middle-click paste, but it will be the same as Ctrl+V.
|
# ? Jun 5, 2014 03:31 |
|
Suspicious Dish posted:Wayland does not have multiple independent clipboards. We can support middle-click paste, but it will be the same as Ctrl+V. Awww man. Am I the only person that actually likes independent clipboards as a feature??
|
# ? Jun 5, 2014 04:26 |
|
nitrogen posted:Awww man. Am I the only person that actually likes independent clipboards as a feature?? No. I use this all the time.
|
# ? Jun 5, 2014 04:36 |
|
nitrogen posted:Awww man. Am I the only person that actually likes independent clipboards as a feature?? No, I love them as well and use the highlight-copy, middle-click paste all the time. I feel a bit lost without it when on a Windows PC.
|
# ? Jun 5, 2014 05:05 |
|
It's such a controversial feature that we might add it back. We've talked about it before. We're not sure yet.
|
# ? Jun 5, 2014 09:50 |
|
Suspicious Dish posted:It's such a controversial feature that we might add it back. We've talked about it before. We're not sure yet. I hope so, it is something I use constantly and miss terribly whenever I'm on a system that does not have it.
|
# ? Jun 5, 2014 10:41 |
|
Say I'm working half the time on my PC and half the time on my laptop - what's the best way to keep things organised? It seems like I can either use all my devices to ssh into a single development server and do all my work on there, or I can recreate the same development environment on each device and work on them directly, then sync files between them. Is there a more standard approach? How feasible is it to sync my entire Home directory between my PC and laptop and VPS? Is that a thing people do? Basically I want the same .bashrc, .vimrc and .screenrc wherever I'm working.
|
# ? Jun 5, 2014 11:50 |
|
Suspicious Dish posted:It's such a controversial feature that we might add it back. We've talked about it before. We're not sure yet.
|
# ? Jun 5, 2014 12:03 |
|
Ema Nymton posted:Yes, it's an iMac G5. The link you posted was very insightful, but I still couldn't get it to work for me, even when I used Puppy Linux on another computer to follow the dd steps. I tried formatting the drive as HFS+ using GParted next, but the iMacs still won't recognize any boot drive I try. I'm relatively noobish when it comes to Linux, and I might be completely wrong here, but have you tried Yellow Dog Linux? I found a guide on my phone, but as it's a PDF, I can't read it. The blurb sounds like it might be what you're after. http://www.fixstars.com/files/linux/ydl6.0_apple_guide.pdf
|
# ? Jun 5, 2014 12:07 |
|
fuf posted:Say I'm working half the time on my PC and half the time on my laptop - what's the best way to keep things organised? Pick one of these: http://dotfiles.github.io
|
# ? Jun 5, 2014 14:43 |
|
fuf posted:Say I'm working half the time on my PC and half the time on my laptop - what's the best way to keep things organised? I use Unison to keep stuff in sync between my laptop and desktop. It's usually pretty smart about dealing with conflicts and works over ssh.
|
# ? Jun 5, 2014 17:20 |
|
Angelwolf posted:I'm relatively noobish when it comes to linux, and i might be completely wrong here, but have you tried Yellowdog Linux? I found a guide on my phone but as it's pdf, I can't read it. The blurb thing sounds like it might be what you're after. I heard about Yellow Dog in my searches, but it hasn't been updated in a while. I wanted to start with a distro which still has some support.
|
# ? Jun 5, 2014 17:36 |
|
fuf posted:Say I'm working half the time on my PC and half the time on my laptop - what's the best way to keep things organised? If by "entire home directory" you mean configuration files, you could just use git (or another DVCS). Stick the relevant files in, push them to a central location (the VPS would be good, but you can simply set up multiple remote repositories), pull from that location on the other systems. Done.

The advantage of using something like git is that it doesn't really matter on which system you change something, since the next pull on another system will grab all of the changes since your last pull there. This also scales to an arbitrary number of systems: if you were to get another laptop, another VPS, work on somebody else's computer etc., simply fetch everything from your remote repository and off you go. If you need access beyond that, you could also use Bitbucket (https://bitbucket.org/), which offers an unlimited number of free private repositories that are limited only in the number of collaborators, which isn't really an issue if you are the only person using the repository.

If you need to push around binary files as well, something like loose-fish's answer probably works (I don't know that program), or you could just use rsync, which diffs everything you send, only sends what actually changed, and works over ssh. edit: I just saw that Unison actually seems to use rsync, or at least rsync-like behaviour, so that's pretty much the same thing, just with a GUI. Hollow Talk fucked around with this message at 19:18 on Jun 5, 2014 |
# ? Jun 5, 2014 19:15 |
How do I escape this command correctly so it actually runs? code:
|
|
# ? Jun 5, 2014 19:34 |
|
Try wrapping the "debian-sys-maint@localhost" in double quotes...? Should work I'd think.
|
# ? Jun 5, 2014 19:38 |
revmoo posted:Try wrapping the "debian-sys-maint@localhost" in double quotes...? Should work I'd think. code:
|
|
# ? Jun 5, 2014 19:46 |
|
Suspicious Dish posted:It's such a controversial feature that we might add it back. We've talked about it before. We're not sure yet. I use this feature so much and want it on my Mac and PC. Please don't take it away.
|
# ? Jun 5, 2014 19:57 |
|
mysql -u root -p -e "GRANT ALL PRIVILEGES on *.* TO 'debian-sys-maint'@localhost IDENTIFIED BY 'tmp123' WITH GRANT OPTION; FLUSH PRIVILEGES;" ^^ This works for me
|
# ? Jun 5, 2014 20:02 |
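The failing command isn't shown above, but the usual culprit with mysql -e is the shell itself: an unquoted semicolon ends the shell command early, and unquoted *.* can be globbed against the current directory. Wrapping the whole statement in double quotes passes it to mysql as a single argument while still letting the inner single quotes through literally. A minimal illustration, with echo standing in for mysql:

```shell
# Double quotes make the whole statement one shell word; the single quotes,
# the semicolons, and the *.* inside are passed through literally.
echo "GRANT ALL on *.* TO 'debian-sys-maint'@localhost; FLUSH PRIVILEGES;"
```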
Woohoo! Thanks revmoo!
|
|
# ? Jun 5, 2014 20:05 |
|
Aquila posted:I use this feature so much and want it on my mac and pc. Please don't take it away Remember, middle-click paste will still work, it will just be the same as Ctrl+V.
|
# ? Jun 5, 2014 20:08 |
|
Illusive gently caress Man posted:Exactly, there's some tpm extension-like stuff going on. The system runs a modified u-boot which grabs the kernel/ramdisk over tftp. There is no persistent storage attached. At some later point, the system needs to be able to perform some attestation-like stuff in which hashes of what u-boot loaded are signed. pseudorandom name posted:Why transmit an ext2 filesystem when you could just use a cpio archive? So I've been trying this for a little while, and the problem now is that the cpio file format includes device and inode numbers of the input files (the documentation says "These are used by programs that read cpio archives to determine when two entries refer to the same file"). Hypothetically, I could make my own cpio generator (or a tool to parse/patch existing cpio files) which just sets the inode numbers in the archive incrementally or something... but this sounds like another one of my idiotic hacks.
|
# ? Jun 5, 2014 20:59 |
|
Illusive gently caress Man posted:So I've been trying this for a little while, and the problem now is that the cpio file format includes device and inode numbers of the input files (Documentation says "These are used by programs that read cpio archives to determine when two entries refer to the same file.") /dev should be dynamically populated. Is there a reason you can't just exclude /proc, /sys, /dev, /tmp, and /var/tmp?
|
# ? Jun 5, 2014 21:17 |
|
evol262 posted:/dev should be dynamically populated. Is there a reason you can't just exclude /proc, /sys, /dev, /tmp, and /var/tmp? That's already on the todo list when I get around to fixing up our init stuff, but I don't see how that helps when the actual files placed in the archive will have different inode numbers on different systems. edit: what I'm saying is, if I
mkdir asdf; ( cd asdf; find . | cpio -oc > ../asdf.cpio )
mkdir fdsa; ( cd fdsa; find . | cpio -oc > ../fdsa.cpio )
then asdf.cpio and fdsa.cpio will differ in the 'char c_ino[8];' field of the cpio header for the archived directory ".". If identical files were placed in these directories, this would also be true for all of the archived files' headers. Illusive Fuck Man fucked around with this message at 21:32 on Jun 5, 2014 |
# ? Jun 5, 2014 21:25 |
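One way to realize the parse-and-patch idea from the post above without writing a whole cpio generator: rewrite the inode and device fields of an existing archive to deterministic values, so identical input trees yield byte-identical archives. This is a sketch under assumptions, not a vetted tool: it assumes the SVR4 "newc" (magic 070701) format that cpio -oc emits, and it deliberately breaks hardlink detection, which only matters if the tree contains hardlinks.

```python
# Sketch: normalize c_ino/c_devmajor/c_devminor in a newc-format cpio
# archive so archives built from identical trees compare byte-identical.

NEWC_MAGIC = b"070701"
HDR_LEN = 110  # 6-byte magic + 13 ASCII-hex fields of 8 chars each


def _pad4(n):
    """Padding needed to round n up to a multiple of 4."""
    return (4 - n % 4) % 4


def normalize_newc(data):
    out = bytearray()
    pos = 0
    next_ino = 1
    while pos + HDR_LEN <= len(data):
        hdr = data[pos:pos + HDR_LEN]
        if hdr[:6] != NEWC_MAGIC:
            raise ValueError("not a newc header at offset %d" % pos)
        fields = [hdr[6 + i * 8:6 + (i + 1) * 8] for i in range(13)]
        filesize = int(fields[6], 16)
        namesize = int(fields[11], 16)
        # Deterministic replacements for inode and device numbers.
        fields[0] = b"%08X" % next_ino   # c_ino
        fields[7] = b"00000000"          # c_devmajor
        fields[8] = b"00000000"          # c_devminor
        next_ino += 1
        out += NEWC_MAGIC + b"".join(fields)
        # Copy the name (padded so file data starts 4-byte aligned)
        # and the file body (also padded to 4 bytes) unchanged.
        name_start = pos + HDR_LEN
        name_end = name_start + namesize
        name = data[name_start:name_end]
        name_pad = _pad4(HDR_LEN + namesize)
        out += data[name_start:name_end + name_pad]
        body_start = name_end + name_pad
        body_end = body_start + filesize
        out += data[body_start:body_end + _pad4(filesize)]
        pos = body_end + _pad4(filesize)
        if name.rstrip(b"\x00") == b"TRAILER!!!":
            out += data[pos:]  # keep any trailing block padding as-is
            break
    return bytes(out)
```

Running every freshly built archive through this before hashing should make the u-boot-side attestation hashes reproducible, at the cost of no longer round-tripping hardlinks.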
|
Illusive gently caress Man posted:That's already on the todo list when I get around to fixing up our init stuff, but I don't see how that helps when the actual files placed in the archive will have different inode numbers on different systems. That's intended behavior. You should be verifying your distribution's gpg signatures on the archives, not md5sums or other hashes of the archives. If you want end users to be able to modify and distribute verified images, use tar.
|
# ? Jun 5, 2014 21:40 |
|
Ema Nymton posted:I heard about Yellow Dog in my searches, but it hasn't been updated in a while. I wanted to start with a distro which still has some support. Looks like OpenSUSE has the current release available for PPC: http://download.opensuse.org/ports/ppc/ I haven't run a PPC install in several years, but I ran an 11.x OpenSUSE install on a G4 and it did pretty nicely. At the same time I also roadtested Xubuntu PPC on a similar machine, and it did fine, too. The main thing you'll be missing is going to be Flash support - I'm not sure if Chrome would be a workaround for that or not, although I suspect it wouldn't be. You should be able to burn the ISO to disk, and then option-boot from it, unless that is disabled by having an administrator user set up. To get around that, I'm not sure if zapping the PRAM would remove it - it's been about 5 years since I spent any time working with Macs, so my memory is rusty.
|
# ? Jun 5, 2014 21:56 |
|
evol262 posted:That's intended behavior. You should be verifying gpg keys from your distribution that the archives are signed with, not md5sums or other hashes of the archives. There is no distribution involved here. Or I guess we're the distributor, but our model is that the end user doesn't need to place any trust in us. It would probably make more sense if I described the whole boot / attestation system but I'm really not supposed to :/ For now I guess I'll attempt something stupid and hacky, then once the patents are filed I can post it in the coding horrors thread.
|
# ? Jun 5, 2014 22:08 |
|
evol262 posted:Pick one of these: Looks like these are just other people's config files? loose-fish posted:I use Unison to keep stuff in sync between my laptop and desktop. It's usually pretty smart about dealing with conflicts and works over ssh. Hollow Talk posted:If by "entire home directory" you mean configuration files, you could just use git (or another dvcs). Stick the relevant files in, push them to a central location (the VPS would be good, but you can simply set up multiple remote repositories), pull from that location on the systems. Done. The advantage of using something like git is that it doesn't really matter on which system you change something, since the next pull on another system will grab all of the changes since your pull there. Thanks for these responses. By home directory I mean config files, but also source files for projects I'm working on. I think my best option will be to use rsync and run it with cron or incrond when something changes.
|
# ? Jun 6, 2014 12:58 |
|
fuf posted:Looks like these are just other people's config files? They're also frameworks. Of course, a post-checkout hook and a simple git repo would work if you want to do it yourself.
|
# ? Jun 6, 2014 15:08 |
|
I got roped into some Linux support stuff and could use some tips on the best way to proceed. There are four sites (three remote and one local), each with a web server running SUSE on a RAID 10. If something fails, they want to be able to ship a replacement server. They say the IP maps are the same for each site, but the database and PHP are slightly different.

What they want is a backup server (probably not RAID 10) set up so that if one of the sites goes down they can ship it as a replacement until the main server is fixed, creating another backup first before they ship it. Each night the remote sites will send database backups to the local server. They only have one repo for the PHP, so I don't know how they are going to do version control on the site that's different (I think the plan is to update at some point), but there will be a backup of each site's code as well.

They want to basically have a replica of the local server which they can then set up with minimal effort before shipping to whichever site went down. I don't know the best way of getting the server configured with the specific site's backups. Just run a script to copy the correct PHP and database backup? Create partitions for each site's data that get updated when the nightly backups are done? Also, is the best way to clone the server to first install Linux on the destination server and copy the files, or is there a more automated way?

I hope this makes sense - I've been doing research, but I'm a Windows programmer with almost zero Linux experience in the past 15 years. My job will be to come up with a plan and scripts for disaster recovery that they can follow. I can write that up as soon as I confirm the process.
|
# ? Jun 6, 2014 16:07 |
|
This is where something like Puppet or Chef comes in, imho. It's more work setting it up, but you can then provision servers with the push of a button.
|
# ? Jun 6, 2014 16:20 |
|
Thanks, I'll look at those.
|
# ? Jun 6, 2014 16:26 |
|
Is anyone aware of a way to set up some logic in pxelinux (or ipxe, I have that working in our lab) to examine a host's local disks for an MBR, and then handle actions accordingly? I'd like to be able to have a freshly built VM automatically get a lease, boot from pxe, and then start anaconda and pull down a kickstart image if there's an unconfigured, unformatted disk present. If there's already a configured disk (and thus an MBR and partition table) present, then prompt for action, or boot from local disk. Or am I tackling this at the wrong angle and this is logic that should be handled elsewhere?
|
# ? Jun 6, 2014 20:07 |
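Since iPXE is already working in the lab: one common pattern is to try the local disk first and fall through to the installer when there is nothing bootable on it. Note the caveats: sanboot detects "no valid boot sector" rather than "unpartitioned disk" exactly, its failure behaviour varies by platform, and all URLs and filenames below are placeholders, so treat this as a sketch to verify in the lab rather than a known-good config:

```
#!ipxe
# Try to boot the first local disk; if there is no valid boot sector,
# sanboot fails and || sends us to the kickstart install instead.
sanboot --no-describe --drive 0x80 || goto install

:install
kernel http://pxe.example.com/vmlinuz ks=http://pxe.example.com/ks.cfg
initrd http://pxe.example.com/initrd.img
boot
```

For the "prompt for action" case, iPXE also has a prompt command that can wait for a keypress before choosing a branch, which keeps the decision logic in the boot script rather than in external tooling.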
|
|
|
opie posted:I got roped into some linux support stuff and could use some tips on the best way to proceed. This is some kind of scheme. Fixing a server is probably going to be a lot quicker than shipping anything anywhere for most providers. It sounds like what they really want is a high-availability active-passive setup using 2 servers and a DRBD volume syncing data between the two. You can use something like heartbeat with pacemaker to automate failovers if the site goes down. (hire me)
|
# ? Jun 6, 2014 21:32 |