Ema Nymton
Apr 26, 2008

the place where I come from
is a small town
Buglord

evol262 posted:

Does this mean iMac G5s?

If so, it's probably not worth the effort to keep it running. But read this. Even though there would be endianness problems trying to create an HFS+ partition on an x86 machine, dd does not have these problems. dd an ISO to a flash drive. Boot, install, be merry.

They're not going to be fast enough to do anything you want to do, really, and most of the software you have won't work at all (though stuff from repos will be fine), and you're generally better off replacing them with cheap Atoms.

Alternatively, boot them into single-user (it's still UNIX) and change the password for the admin account.

Yes, it's an iMac G5. The link you posted was very insightful, but I still couldn't get it to work for me, even when I used Puppy Linux on another computer to follow the dd steps. I tried formatting the drive as HFS+ using GParted next, but the iMacs still won't recognize any boot drive I try.

I guess it's not worth the effort, because this is taking way too long. :(
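For reference, the dd step evol262 describes is normally just this — the device name and ISO filename here are assumptions, so double-check with lsblk first, since dd will happily clobber whatever it's pointed at:

```shell
# List block devices first -- /dev/sdb below is an assumption, not a given.
lsblk
# Write the PowerPC install ISO straight to the flash drive, then flush caches.
sudo dd if=debian-powerpc-netinst.iso of=/dev/sdb bs=4M
sync
```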


the
Jul 18, 2004

by Cowcaster
Why doesn't Chromium view PDFs "inline" like Chrome?

edit: Solution found and it works: https://chrome.google.com/webstore/detail/pdf-viewer/oemmndcbldboiebfnladdacbdfmadadm/related

the fucked around with this message at 19:49 on Jun 4, 2014

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
The PDF viewer is licensed from Foxit Software. That said, it was recently open-sourced as PDFium, so you might see it in Chromium soon.

Seven Round Things
Mar 22, 2010
I'm using Debian sid and KDE. How do I completely disable paste-on-middle-mouse-button, without disabling the MMB in general? I'm not sure whether it's in Xorg, GTK, Qt, several of those, or what; I just know I want it gone.

(Apart from being horribly annoying it's actually a security issue in my view. A webpage receives a paste event when I middle-click it, so I just have to mis-click a link by a pixel and Joe's Random Site receives whatever I last selected. What if it was a password?)

I've heard that Wayland might put an end to this default behaviour, but until then, how do I kill it with fire?

Elias_Maluco
Aug 23, 2007
I need to sleep

telcoM posted:

I've seen something like that before, when dealing with Optimus.

You essentially have two displays in one: the "real" display that actually produces the output, and a virtual display that does the rendering and passes the completed bitmaps to the real display.

The virtual display has no real concept of "resolution", so any tools querying it are defaulting to 640x480.

Run "echo $DISPLAY" on a terminal window. If it says something other than ":0", your set-up is probably configured to use the virtual display as a default one. To adjust your display resolution, you'll need to access the real display instead.

To see your current resolution, make sure the x11-utils package has been installed.
Then run this command:
code:
DISPLAY=:0 xdpyinfo | less
It should display a lot of output about your display, including the resolution.

Find out how to start the settings utility (or the "Display Configuration" settings screen) from the command line, then start it the same way as the xdpyinfo command, prefixing it with the DISPLAY=:0 variable.

If "echo $DISPLAY" returns :0, your set-up is configured to use an alternative display number on the real display and :0 for the virtual one. So you'll first need to identify the real display number. In that case, try running "ls -lA /tmp/.X11-unix/". It should list two files: typically X0 and X<something else>. In that case, <something else> is the display number you'll want to use when starting any resolution configuration tools.

Thank you for your help.

I tried "echo $DISPLAY" and it does give me :0.

But "ls -lA /tmp/.X11-unix/" is giving me just this:
code:
total 0
srwxrwxrwx 1 root root 0 Jun  3 01:11 X0
No X<something else>.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Seven Round Things posted:

I'm using Debian sid and KDE. How do I completely disable paste-on-middle-mouse-button, without disabling the MMB in general? I'm not sure whether it's in Xorg, GTK, Qt, several of those, or what; I just know I want it gone.

(Apart from being horribly annoying it's actually a security issue in my view. A webpage receives a paste event when I middle-click it, so I just have to mis-click a link by a pixel and Joe's Random Site receives whatever I last selected. What if it was a password?)

I've heard that Wayland might put an end to this default behaviour, but until then, how do I kill it with fire?

It's done by every toolkit individually. To turn it off in GTK+ you can disable the "enable-primary-paste" setting.

$ echo 'gtk-enable-primary-paste = false' >> ~/.config/gtk-3.0/settings.ini
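One caveat with that one-liner: settings.ini is a GLib keyfile, so the key has to sit under a [Settings] group, and ~/.config/gtk-3.0 may not exist yet. A slightly more defensive version of the same idea:

```shell
# settings.ini is a keyfile; gtk-enable-primary-paste must live under [Settings].
mkdir -p ~/.config/gtk-3.0
printf '[Settings]\ngtk-enable-primary-paste = false\n' >> ~/.config/gtk-3.0/settings.ini
```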

Lysidas
Jul 26, 2002

John Diefenbaker is a madman who thinks he's John Diefenbaker.
Pillbug

Seven Round Things posted:

Wayland might put an end to this default behaviour

I think I middle-click paste on average every 30 minutes or so, and I'd be extremely unhappy if this is the case. Good to know that it's done by each toolkit.

Lysidas fucked around with this message at 03:04 on Jun 5, 2014

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Lysidas posted:

I think I middle-click paste on average every 30 minutes or so, and I'd be extremely unhappy if this is the case. Good to know that it's done by each toolkit.

Wayland does not have multiple independent clipboards. We can support middle-click paste, but it will be the same as Ctrl+V.

nitrogen
May 21, 2004

Oh, what's a 217°C difference between friends?

Suspicious Dish posted:

Wayland does not have multiple independent clipboards. We can support middle-click paste, but it will be the same as Ctrl+V.

Awww man. Am I the only person that actually likes independent clipboards as a feature??

evol262
Nov 30, 2010
#!/usr/bin/perl

nitrogen posted:

Awww man. Am I the only person that actually likes independent clipboards as a feature??

No. I use this all the time.

Varkk
Apr 17, 2004

nitrogen posted:

Awww man. Am I the only person that actually likes independent clipboards as a feature??

No, I love them as well and use highlight-copy, middle-click paste all the time. I feel a bit lost without it when on a Windows PC.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
It's such a controversial feature that we might add it back. We've talked about it before. We're not sure yet.

kujeger
Feb 19, 2004

OH YES HA HA

Suspicious Dish posted:

It's such a controversial feature that we might add it back. We've talked about it before. We're not sure yet.

I hope so, it is something I use constantly and miss terribly whenever I'm on a system that does not have it.

fuf
Sep 12, 2004

haha
Say I'm working half the time on my PC and half the time on my laptop - what's the best way to keep things organised?

It seems like I can either use all my devices to ssh into a single development server and do all my work on there, or I can recreate the same development environment on each device and work on them directly, then sync files between them. Is there a more standard approach?

How feasible is it to sync my entire Home directory between my PC and laptop and VPS? Is that a thing people do? Basically I want the same .bashrc, .vimrc and .screenrc wherever I'm working.

Seven Round Things
Mar 22, 2010

Suspicious Dish posted:

It's such a controversial feature that we might add it back. We've talked about it before. We're not sure yet.
Perhaps it could be made an easy global setting somewhere? It doesn't feel like something that should be wired in as deeply as it is at the moment. Either way, thanks for your work; really looking forward to Wayland!

a dmc delorean
Jul 2, 2006

Live the dream

Ema Nymton posted:

Yes, it's an iMac G5. The link you posted was very insightful, but I still couldn't get it to work for me, even when I used Puppy Linux on another computer to follow the dd steps. I tried formatting the drive as HFS+ using GParted next, but the iMacs still won't recognize any boot drive I try.

I guess it's not worth the effort, because this is taking way too long. :(

I'm relatively noobish when it comes to Linux, and I might be completely wrong here, but have you tried Yellow Dog Linux? I found a guide on my phone, but as it's a PDF, I can't read it. The blurb sounds like it might be what you're after.

http://www.fixstars.com/files/linux/ydl6.0_apple_guide.pdf

evol262
Nov 30, 2010
#!/usr/bin/perl

fuf posted:

Say I'm working half the time on my PC and half the time on my laptop - what's the best way to keep things organised?

It seems like I can either use all my devices to ssh into a single development server and do all my work on there, or I can recreate the same development environment on each device and work on them directly, then sync files between them. Is there a more standard approach?

How feasible is it to sync my entire Home directory between my PC and laptop and VPS? Is that a thing people do? Basically I want the same .bashrc, .vimrc and .screenrc wherever I'm working.

Pick one of these:

http://dotfiles.github.io

loose-fish
Apr 1, 2005

fuf posted:

Say I'm working half the time on my PC and half the time on my laptop - what's the best way to keep things organised?

It seems like I can either use all my devices to ssh into a single development server and do all my work on there, or I can recreate the same development environment on each device and work on them directly, then sync files between them. Is there a more standard approach?

How feasible is it to sync my entire Home directory between my PC and laptop and VPS? Is that a thing people do? Basically I want the same .bashrc, .vimrc and .screenrc wherever I'm working.

I use Unison to keep stuff in sync between my laptop and desktop. It's usually pretty smart about dealing with conflicts and works over ssh.

Ema Nymton
Apr 26, 2008

the place where I come from
is a small town
Buglord

Angelwolf posted:

I'm relatively noobish when it comes to Linux, and I might be completely wrong here, but have you tried Yellow Dog Linux? I found a guide on my phone, but as it's a PDF, I can't read it. The blurb sounds like it might be what you're after.

http://www.fixstars.com/files/linux/ydl6.0_apple_guide.pdf

I heard about Yellow Dog in my searches, but it hasn't been updated in a while. I wanted to start with a distro which still has some support.

Hollow Talk
Feb 2, 2014

fuf posted:

Say I'm working half the time on my PC and half the time on my laptop - what's the best way to keep things organised?

It seems like I can either use all my devices to ssh into a single development server and do all my work on there, or I can recreate the same development environment on each device and work on them directly, then sync files between them. Is there a more standard approach?

How feasible is it to sync my entire Home directory between my PC and laptop and VPS? Is that a thing people do? Basically I want the same .bashrc, .vimrc and .screenrc wherever I'm working.

If by "entire home directory" you mean configuration files, you could just use git (or another dvcs). Stick the relevant files in, push them to a central location (the VPS would be good, but you can simply set up multiple remote repositories), pull from that location on the systems. Done. The advantage of using something like git is that it doesn't really matter on which system you change something, since the next pull on another system will grab all of the changes since your pull there.

The advantage is that this scales to an arbitrary number of systems. If you were to get another laptop, another VPS, work on somebody else's computer, etc., simply fetch everything from your remote repository and off you go. If you need access beyond that, you could also use Bitbucket (https://bitbucket.org/), which offers unlimited free private repositories that are only limited in the number of collaborators, which isn't really an issue if you are the only person using the repository.

If you need to push around binary files as well, something like loose-fish's answer probably works (I don't know that program), or you could just use rsync, which diffs everything you send, only sends what actually changed and works over ssh.

edit: I just saw Unison actually seems to use rsync or at least rsync-like behaviour, so that's pretty much the same thing just with a gui.

Hollow Talk fucked around with this message at 19:18 on Jun 5, 2014
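A minimal sketch of the git workflow Hollow Talk describes — the paths, remote URL, and repo name are all assumptions, adjust to taste:

```shell
# One-time setup on the first machine: track the dotfiles in a repo.
mkdir -p ~/dotfiles && cd ~/dotfiles
git init
cp ~/.bashrc ~/.vimrc ~/.screenrc .
git add .bashrc .vimrc .screenrc
git commit -m "initial dotfiles"
# Push to a central repo (a VPS here, purely as an example).
git remote add origin user@vps.example.com:dotfiles.git
git push -u origin master

# On every other machine: clone once, then symlink and pull as needed.
# git clone user@vps.example.com:dotfiles.git ~/dotfiles
# ln -sf ~/dotfiles/.bashrc ~/.bashrc
```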

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
How do I escape this command correctly so it actually runs?

code:
mysql -u root -p -e 'GRANT ALL PRIVILEGES on *.* TO debian-sys-maint@localhost IDENTIFIED BY PASSWORD("abc123") WITH GRANT OPTION; FLUSH PRIVILEGES;'
I don't think it likes the hyphens in the username. I tried a bunch of different ways and couldn't get it to run.

revmoo
May 25, 2006

#basta
Try wrapping the "debian-sys-maint@localhost" in double quotes...? Should work I'd think.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

revmoo posted:

Try wrapping the "debian-sys-maint@localhost" in double quotes...? Should work I'd think.

code:
$ mysql -u root -p -e 'GRANT ALL PRIVILEGES on *.* TO "debian-sys-maint@localhost" IDENTIFIED BY PASSWORD("abc123") WITH GRANT OPTION; FLUSH PRIVILEGES;'
ERROR 1470 (HY000) at line 1: String 'debian-sys-maint@localhost' is too long for user name (should be no longer than 16)
$ mysql -u root -p -e 'GRANT ALL PRIVILEGES on *.* TO "debian-sys-maint"@localhost IDENTIFIED BY PASSWORD("abc123") WITH GRANT OPTION; FLUSH PRIVILEGES;'
ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for
   the right syntax to use near '("abc123") WITH GRANT OPTION' at line 1
$ mysql -u root -p -e 'GRANT ALL PRIVILEGES on *.* TO "debian-sys-maint"@"localhost" IDENTIFIED BY PASSWORD("abc123") WITH GRANT OPTION; FLUSH PRIVILEGES;'
ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for
   the right syntax to use near '("abc123") WITH GRANT OPTION' at line 1
:(

Aquila
Jan 24, 2003

Suspicious Dish posted:

It's such a controversial feature that we might add it back. We've talked about it before. We're not sure yet.

I use this feature so much and want it on my mac and pc. Please don't take it away :(

revmoo
May 25, 2006

#basta
mysql -u root -p -e "GRANT ALL PRIVILEGES on *.* TO 'debian-sys-maint'@localhost IDENTIFIED BY 'tmp123' WITH GRANT OPTION; FLUSH PRIVILEGES;"

^^ This works for me
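For anyone hitting this later: the quoting flip is probably what does it. With double quotes on the outside, the shell hands the single quotes through to MySQL, which wants them around the account name; and a plain IDENTIFIED BY 'literal' sidesteps the earlier syntax error, since IDENTIFIED BY PASSWORD expects a pre-hashed string, not a PASSWORD(...) call. The SQL that actually reaches the server ('tmp123' is a placeholder):

```sql
-- Single quotes here belong to SQL, not the shell.
GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost'
  IDENTIFIED BY 'tmp123' WITH GRANT OPTION;
FLUSH PRIVILEGES;
```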

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Woohoo! Thanks revmoo!

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Aquila posted:

I use this feature so much and want it on my mac and pc. Please don't take it away :(

Remember, right click paste will still work, it will just be the same as Ctrl+V.

Illusive Fuck Man
Jul 5, 2004
RIP John McCain feel better xoxo 💋 🙏
Taco Defender

Illusive Fuck Man posted:

Exactly, there's some tpm extension-like stuff going on. The system runs a modified u-boot which grabs the kernel/ramdisk over tftp. There is no persistent storage attached. At some later point, the system needs to be able to perform some attestation-like stuff in which hashes of what u-boot loaded are signed.

Ideally, I'd like for anybody to be able to use this tool I'm working on to build an identical kernel/ramdisk from source so they can verify what's running.


pseudorandom name posted:

Why transmit an ext2 filesystem when you could just use a cpio archive?

So I've been trying this for a little while, and the problem now is that the cpio file format includes device and inode numbers of the input files (Documentation says "These are used by programs that read cpio archives to determine when two entries refer to the same file.")

Hypothetically, I could make my own cpio generator (or a tool to parse/patch existing cpio files) which just sets the inode numbers in the archive incrementally or something.... but this sounds like another one of my idiotic hacks.

evol262
Nov 30, 2010
#!/usr/bin/perl

Illusive Fuck Man posted:

So I've been trying this for a little while, and the problem now is that the cpio file format includes device and inode numbers of the input files (Documentation says "These are used by programs that read cpio archives to determine when two entries refer to the same file.")

Hypothetically, I could make my own cpio generator (or a tool to parse/patch existing cpio files) which just sets the inode numbers in the archive incrementally or something.... but this sounds like another one of my idiotic hacks.

/dev should be dynamically populated. Is there a reason you can't just exclude /proc, /sys, /dev, /tmp, and /var/tmp?

Illusive Fuck Man
Jul 5, 2004
RIP John McCain feel better xoxo 💋 🙏
Taco Defender

evol262 posted:

/dev should be dynamically populated. Is there a reason you can't just exclude /proc, /sys, /dev, /tmp, and /var/tmp?

That's already on the todo list when I get around to fixing up our init stuff, but I don't see how that helps when the actual files placed in the archive will have different inode numbers on different systems.

edit: what I'm saying is, if I
mkdir asdf; ( cd asdf; find . | cpio -oc > ../asdf.cpio )
mkdir fdsa; ( cd fdsa; find . | cpio -oc > ../fdsa.cpio )

asdf.cpio and fdsa.cpio will differ in the 'char c_ino[8];' field of the cpio header for the archived directory "."

If identical files were placed in these directories, this would also be true for all of the archived files' headers.

Illusive Fuck Man fucked around with this message at 21:32 on Jun 5, 2014
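If you do end up patching the headers, here's a sketch of what that tool might look like for the newc format that `cpio -oc` emits (GNU cpio's `-c` is `-H newc`). This is hypothetical code, not an existing utility; the layout follows the kernel's initramfs buffer-format doc — a "070701" magic followed by 13 eight-digit ASCII-hex fields, c_ino first:

```python
NEWC_MAGIC = b"070701"
FIELDS = 13                 # 8-hex-digit ASCII fields after the magic
HDR_LEN = 6 + FIELDS * 8    # 110-byte fixed header

def _pad4(n):
    # newc pads names and file data to 4-byte boundaries
    return (4 - n % 4) % 4

def normalize_newc(archive):
    """Rewrite a newc-format cpio archive so the host-dependent c_ino and
    c_dev fields take deterministic values; two archives built from
    identical trees then come out byte-identical."""
    out = bytearray()
    pos, ino = 0, 1
    while pos < len(archive):
        if archive[pos:pos + 6] != NEWC_MAGIC:
            raise ValueError("not a newc archive at offset %d" % pos)
        hdr = archive[pos:pos + HDR_LEN]
        fields = [int(hdr[6 + i * 8:14 + i * 8], 16) for i in range(FIELDS)]
        filesize, namesize = fields[6], fields[11]
        name = archive[pos + HDR_LEN:pos + HDR_LEN + namesize]
        body_end = (pos + HDR_LEN + namesize + _pad4(HDR_LEN + namesize)
                    + filesize + _pad4(filesize))
        fields[0] = ino                 # c_ino: sequential, not host inode
        fields[7] = fields[8] = 0       # c_devmajor / c_devminor: zeroed
        ino += 1
        out += NEWC_MAGIC + b"".join(b"%08X" % f for f in fields)
        out += archive[pos + HDR_LEN:body_end]
        pos = body_end
        if name.rstrip(b"\0") == b"TRAILER!!!":
            out += archive[pos:]        # preserve any trailing padding
            break
    return bytes(out)
```

Run the same tree through this on two different machines and the `char c_ino[8]` fields stop mattering; note it deliberately leaves c_mtime alone, so you'd still need fixed timestamps on the input files.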

evol262
Nov 30, 2010
#!/usr/bin/perl

Illusive Fuck Man posted:

That's already on the todo list when I get around to fixing up our init stuff, but I don't see how that helps when the actual files placed in the archive will have different inode numbers on different systems.

edit: what I'm saying is, if I
mkdir asdf; ( cd asdf; find . | cpio -oc > ../asdf.cpio )
mkdir fdsa; ( cd fdsa; find . | cpio -oc > ../fdsa.cpio )

asdf.cpio and fdsa.cpio will differ in the 'char c_ino[8];' field of the cpio header for the archived directory "."

If identical files were placed in these directories, this would also be true for all of the archived files' headers.

That's intended behavior. You should be verifying the GPG signatures from your distribution that the archives are signed with, not md5sums or other hashes of the archives.

If you want end-users to be able to modify and distribute verified images, use tar.

CaptainSarcastic
Jul 6, 2013



Ema Nymton posted:

I heard about Yellow Dog in my searches, but it hasn't been updated in a while. I wanted to start with a distro which still has some support.

Looks like OpenSUSE has the current release available for PPC: http://download.opensuse.org/ports/ppc/

I haven't run a PPC install in several years, but I ran an 11.x OpenSUSE install on a G4 and it did pretty nicely. At the same time I also roadtested Xubuntu PPC on a similar machine, and it did fine, too.

The main thing you'll be missing is Flash support - I'm not sure if Chrome would be a workaround for that or not, although I suspect it wouldn't be.

You should be able to burn the ISO to disk, and then option-boot from it, unless that is disabled by having an administrator user set up. To get around that, I'm not sure if zapping the PRAM would remove it - it's been about 5 years since I spent any time working with Macs, so my memory is rusty.

Illusive Fuck Man
Jul 5, 2004
RIP John McCain feel better xoxo 💋 🙏
Taco Defender

evol262 posted:

That's intended behavior. You should be verifying the GPG signatures from your distribution that the archives are signed with, not md5sums or other hashes of the archives.

If you want end-users to be able to modify and distribute verified images, use tar.

There is no distribution involved here. Or I guess we're the distributor, but our model is that the end user doesn't need to place any trust in us. It would probably make more sense if I described the whole boot / attestation system but I'm really not supposed to :/

For now I guess I'll attempt something stupid and hacky, then once the patents are filed I can post it in the coding horrors thread.

fuf
Sep 12, 2004

haha

Looks like these are just other people's config files?

loose-fish posted:

I use Unison to keep stuff in sync between my laptop and desktop. It's usually pretty smart about dealing with conflicts and works over ssh.

Hollow Talk posted:

If by "entire home directory" you mean configuration files, you could just use git (or another dvcs). Stick the relevant files in, push them to a central location (the VPS would be good, but you can simply set up multiple remote repositories), pull from that location on the systems. Done. The advantage of using something like git is that it doesn't really matter on which system you change something, since the next pull on another system will grab all of the changes since your pull there.

The advantage is that this scales to an arbitrary number of systems. If you were to get another laptop, another VPS, work on somebody else's computer, etc., simply fetch everything from your remote repository and off you go. If you need access beyond that, you could also use Bitbucket (https://bitbucket.org/), which offers unlimited free private repositories that are only limited in the number of collaborators, which isn't really an issue if you are the only person using the repository.

If you need to push around binary files as well, something like loose-fish's answer probably works (I don't know that program), or you could just use rsync, which diffs everything you send, only sends what actually changed and works over ssh.

edit: I just saw Unison actually seems to use rsync or at least rsync-like behaviour, so that's pretty much the same thing just with a gui.

Thanks for these responses. By home directory I mean config files, but also source files for projects I'm working on. I think my best option will be to use rsync and run it with cron or incrond when something changes.

evol262
Nov 30, 2010
#!/usr/bin/perl

fuf posted:

Looks like these are just other people's config files?

They're also frameworks.

Of course, a post-checkout hook and a simple git repo would work if you want to do it yourself

opie
Nov 28, 2000
Check out my TFLC Excuse Log!
I got roped into some linux support stuff and could use some tips on the best way to proceed.

There are four sites (three remote and one local), each with a web server running SUSE on a RAID 10. If something fails, they want to be able to ship a replacement server. They say the IP maps are the same for each site, but the database and PHP are slightly different. What they want is to have a backup server set up (probably not RAID 10) so that if one of the sites goes down they can ship it and use it as a replacement until the main server is fixed, but create another backup first before they ship it.

Each night the remote sites will send database backups to the local server. They only have one repo for the PHP, so I don't know how they are going to do version control on the site that's different (I think the plan is to update at some point), but there will be a backup of each site's code as well.

They want to basically have a replica of the local server which they can then set up with minimal effort before they ship to whichever site went down. I don't know the best way of getting the server configured with the specific site's backups. Just run a script to copy the correct php and database backup? Create partitions for each site's data that gets updated when the nightly backups are done?

Also, is the best way to clone the server to first install linux on the destination server and copy the files, or is there a more automated way?

I hope this makes sense - I've been doing research but I'm a windows programmer with almost zero linux experience in the past 15 years. My job will be to come up with a plan and scripts for disaster recovery that they can follow. I can write that up as soon as I confirm the process.

spankmeister
Jun 15, 2008






This is where something like Puppet or Chef comes in, imho. It's more work setting it up, but you can then provision servers with the push of a button.

opie
Nov 28, 2000
Check out my TFLC Excuse Log!
Thanks, I'll look at those.

Cidrick
Jun 10, 2001

Praise the siamese
Is anyone aware of a way to set up some logic in pxelinux (or ipxe, I have that working in our lab) to examine a host's local disks for an MBR, and then handle actions accordingly? I'd like to be able to have a freshly built VM automatically get a lease, boot from pxe, and then start anaconda and pull down a kickstart image if there's an unconfigured, unformatted disk present. If there's already a configured disk (and thus an MBR and partition table) present, then prompt for action, or boot from local disk.

Or am I tackling this from the wrong angle, and is this logic that should be handled elsewhere?


Salt Fish
Sep 11, 2003

Cybernetic Crumb

opie posted:

I got roped into some linux support stuff and could use some tips on the best way to proceed.

There are four sites (three remote and one local), each with a web server running SUSE on a RAID 10. If something fails, they want to be able to ship a replacement server. They say the IP maps are the same for each site, but the database and PHP are slightly different. What they want is to have a backup server set up (probably not RAID 10) so that if one of the sites goes down they can ship it and use it as a replacement until the main server is fixed, but create another backup first before they ship it.

Each night the remote sites will send database backups to the local server. They only have one repo for the PHP, so I don't know how they are going to do version control on the site that's different (I think the plan is to update at some point), but there will be a backup of each site's code as well.

They want to basically have a replica of the local server which they can then set up with minimal effort before they ship to whichever site went down. I don't know the best way of getting the server configured with the specific site's backups. Just run a script to copy the correct php and database backup? Create partitions for each site's data that gets updated when the nightly backups are done?

Also, is the best way to clone the server to first install linux on the destination server and copy the files, or is there a more automated way?

I hope this makes sense - I've been doing research but I'm a windows programmer with almost zero linux experience in the past 15 years. My job will be to come up with a plan and scripts for disaster recovery that they can follow. I can write that up as soon as I confirm the process.

This is some kind of scheme. Fixing a server is probably going to be a lot quicker than shipping anything anywhere for most providers. It sounds like what they really want is a high-availability active-passive setup using 2 servers and a DRBD volume syncing data between the two. You can use something like heartbeat with pacemaker to automate failovers if the site goes down. (hire me)
