|
Megaman posted:Chromium is open source!
|
# ? May 26, 2014 14:30 |
|
|
spankmeister posted:Your big red text is apt. That it is. That. It. Is.
|
# ? May 26, 2014 14:47 |
|
wooger posted:RPM Fusion does a reasonable job for the most important proprietary / patented stuff on Fedora, but the much more worrisome bit for me is that the packaged version of Chromium is wwwaaayyy outdated, and the only popular, recent repo is much less reputable than RPMfusion. It has *Russian* in the name FFS. The project essentially creates a "cleaned" chromium version that compiles without some of the proprietary parts. The chromium-ffmpegsumo package is a thinned-out ffmpeg version that only includes free/open-source codecs, with the "full" chromium-ffmpeg coming from the external RPM Fusion equivalent (Packman) if the user so chooses. The builds also offer stable, beta, alpha, and dev channels for chromium, as is the case with chrome. I don't know how Fedora is set up for these kinds of splits, but this seems to be the best compromise that people have found so far, even after a short period of building chromium entirely on Packman, i.e. externally. edit: I think this build additionally excludes NaCl on the OBS and only pulls that in and builds it on Packman. Hollow Talk fucked around with this message at 15:01 on May 26, 2014 |
# ? May 26, 2014 14:58 |
|
wooger posted:RPM Fusion does a reasonable job for the most important proprietary / patented stuff on Fedora, but the much more worrisome bit for me is that the packaged version of Chromium is wwwaaayyy outdated, and the only popular, recent repo is much less reputable than RPMfusion. It has *Russian* in the name FFS. I don't necessarily agree with this policy (without going back into the "let applications include all their dependencies" bit from a few days ago), but Chromium is not packaged because it bundles libraries that are already present in Fedora and doesn't work well with upstream. RPM Fusion keeps the same packaging policies as Fedora, and this is a big no-no; they've actually rejected carrying Chromium. When the Fedora engineering manager builds it in his free time, you know there are problems with this policy.
|
# ? May 26, 2014 15:57 |
|
wooger posted:RPM Fusion does a reasonable job for the most important proprietary / patented stuff on Fedora, but the much more worrisome bit for me is that the packaged version of Chromium is wwwaaayyy outdated, and the only popular, recent repo is much less reputable than RPMfusion. It has *Russian* in the name FFS. Doesn't Google maintain their own yum repos for things like Chrome and Google-Earth? Just checked: if you install the official RPM from Google for Chrome, it does set up a new yum repo. Annoyingly, it also does the same for Google Earth, creating a new repo for each Google product you install rather than one central repo for all Google products. Varkk fucked around with this message at 22:35 on May 26, 2014 |
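For reference, the repo file the Chrome RPM drops into /etc/yum.repos.d/ looks roughly like this (a sketch from memory; verify against the file the package actually installs on your system):

```
[google-chrome]
name=google-chrome
baseurl=https://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl.google.com/linux/linux_signing_key.pub
```

Each Google product installs its own near-identical file, which is why you end up with one repo per product instead of one central Google repo.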
# ? May 26, 2014 22:28 |
|
Varkk posted:Doesn't Google maintain their own yum repos for things like Chrome and Google-Earth? Chrome isn't free software, which is a major issue for some
|
# ? May 26, 2014 23:57 |
|
Hollow Talk posted:openSUSE does the same thing. I presume (not a lawyer etc.) the point is that if the user decides to install these codecs, that's their decision and in many cases and depending on jurisdiction perfectly fine for personal use. Additionally, even if this is a legal grey area, going after individual users is a pain. If the company chooses to bundle them, however, they make for a better target for a lawsuit, especially if it's somebody like RH or SuSE. It's probably just a case of better safe than sorry. And since copyright laws differ vastly between countries, Fedora (based in the US) can't do a lot of things Ubuntu (based on the Isle of Man) can, like adding patented codecs with one click during installation.
|
# ? May 27, 2014 17:04 |
|
There's probably a really easy answer, but can anyone tell me why this command works at the command line but gives a different output when run as a script? $ echo $[100-$(vmstat -n 1 2|tail -1|awk '{print $15}')] 53 when I write a file called test with the following: #!/bin/bash echo $[100-$(vmstat -n 1 2|tail -1|awk '{print $15}')] and run it: $ sh test $[100-88] It doesn't do that last bit of math that it does when not run as a script. I'm just taking the idle CPU percentage output from the second line of vmstat and subtracting it from 100. edit: I fixed it by changing it to this: echo `expr 100 - $(vmstat -n 1 2|tail -1|awk '{print $15}')` But I still have no idea why it would give a different output when run as a script Naffer fucked around with this message at 19:18 on May 27, 2014 |
# ? May 27, 2014 19:04 |
|
Naffer posted:There's probably a really easy answer, but can anyone tell me why this command works at the command line but gives a different output when run as a script? Run your file as "bash test" and be amazed. The $[ .. ] structure is a bashism for arithmetic expansion. When using "sh" you're using a POSIX shell (which can be bash in a special POSIX compatibility mode, or a different shell altogether), and that syntax is not supported. The equivalent modern-POSIX syntax is $(( .. )), so you can write your test file as: code:
#!/bin/sh
echo $((100-$(vmstat -n 1 2|tail -1|awk '{print $15}')))
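The difference is easy to reproduce without vmstat (this assumes bash is installed; on systems where /bin/sh is bash rather than dash, both lines print 12):

```shell
# POSIX arithmetic expansion is understood by every sh:
sh -c 'echo $((100 - 88))'     # prints 12

# $[ ... ] is a deprecated bashism; bash evaluates it, but a strict
# POSIX shell such as dash passes the text through unevaluated:
bash -c 'echo $[100 - 88]'     # prints 12 in bash
```

This is also why `expr 100 - ...` fixed it: expr is an external program, so it works the same under any shell.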
|
# ? May 27, 2014 20:00 |
|
I have eth0 which is ethernet with its own IP and then [eth0:0, eth0:1, eth0:2, eth0:3, eth0:4] that all have their own IPs. Only eth0 comes up on boot. I have the auto keyword set in /etc/network/interfaces for each interface. I can bring up the interfaces manually with ifup but if I use ifconfig [eth0:1] up I get "SIOCSIFFLAGS: Cannot assign requested address" which I assume is the reason these interfaces don't come up on boot. What am I missing? I've got another box with basically the same config and I don't have these issues so I think I just missed a step somewhere.
|
# ? May 28, 2014 01:00 |
Is this the right way to create a user that is only allowed to tunnel their web browser traffic over SSH? code:
code:
code:
code:
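The commands above were lost in the scrape, but for comparison, one common recipe looks like this (a sketch, not a drop-in: the username is hypothetical and the paths assume OpenSSH on a Debian-ish system):

```
# 1) Give the user no usable login shell:
#      useradd -m -s /usr/sbin/nologin tunnel
#
# 2) Restrict that user to port forwarding only, in /etc/ssh/sshd_config:
Match User tunnel
    AllowTcpForwarding yes
    AllowAgentForwarding no
    X11Forwarding no
    PermitTTY no
    ForceCommand /usr/sbin/nologin
```

The browser then points at a SOCKS proxy opened with `ssh -N -D 1080 tunnel@host`; the -N matters because the forced command refuses an interactive session.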
|
|
# ? May 28, 2014 01:26 |
|
I done hosed up and shut down my computer, forgetting I had a Kazam Screencast running, and I'm left with a .movie.mux and a .movie file. It's encoded in h264/mp4. Anyone know how to recover that to an mp4 file using gstreamer? Tried googling but I just don't understand the series of command suggestions I'm seeing, and how they all compose together. http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-good-plugins/html/gst-plugins-good-plugins-mp4mux.html
|
# ? May 28, 2014 13:50 |
|
Maluco Marinero posted:I done hosed up and shut down my computer, forgetting I had a Kazam Screencast running, and I'm left with a .movie.mux and a .movie file. It's encoded in h264/mp4. Anyone know how to recover that to an mp4 file using gstreamer? Tried googling but I just don't understand the series of command suggestions I'm seeing, and how they all compose together. It may be as simple as this, if I'm reading the manpage right: gst-launch filesrc location=.movie.mux ! decodebin ! mp4mux ! filesink location=out.mp4 No idea if it will work with a broken file though, or if some other options are needed. vlc might play it if that's all you're interested in. I did find this which claims to be able to fix an mp4.
|
# ? May 28, 2014 16:02 |
|
I'd use ffmpeg personally but hey, whatever works.
|
# ? May 28, 2014 16:07 |
|
Or mplayer/mencoder. Unless I'm just failing to find a file that breaks when truncated.
|
# ? May 28, 2014 16:29 |
Is it wise to use nginx to cache npm, pypi, and aptitude packages? From my googling it seems doable, and it'd be nice to only have to rely on nginx for this. http://eng.yammer.com/a-private-npm-cache/ https://gist.github.com/dctrwatson/5785638 http://yeupou.wordpress.com/2014/01/28/caching-debianetc-apt-repositories-on-your-local-server-with-nginx-and-dsniff/
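It is doable; a minimal sketch of one such cache follows (the upstream, port, and sizes are placeholders, and real setups need per-repo tuning, e.g. apt's Release/InRelease files should not be cached for long):

```
# Hypothetical nginx caching proxy for PyPI; adapt per repository.
proxy_cache_path /var/cache/nginx/pypi levels=1:2 keys_zone=pypi:16m
                 max_size=10g inactive=30d;

server {
    listen 8081;
    location / {
        proxy_pass https://pypi.org;
        proxy_set_header Host pypi.org;
        proxy_cache pypi;
        proxy_cache_valid 200 1h;    # keep good responses for an hour
    }
}
```

Clients then use http://cachehost:8081/ as their index/mirror URL.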
|
|
# ? May 28, 2014 20:01 |
|
spoon0042 posted:It may be as simple as this, if I'm reading the manpage right: I used that untrunc program but ended up with an 'atoms not found' message. That led me to this perl script that can reconstitute a prematurely cut video's metadata, given you provide it the correct resolution and framerate. Since I knew those, it was trivial to alter the perl script to match the resolution and framerate I used in the screencast, so now all is well. Cheers for the suggestions.
|
# ? May 29, 2014 00:14 |
|
revmoo posted:I have eth0 which is ethernet with its own IP and then [eth0:0, eth0:1, eth0:2, eth0:3, eth0:4] that all have their own IPs. Only eth0 comes up on boot. I have the auto keyword set in /etc/network/interfaces for each interface. I can bring up the interfaces manually with ifup but if I use ifconfig [eth0:1] up I get "SIOCSIFFLAGS: Cannot assign requested address" which I assume is the reason these interfaces don't come up on boot. What am I missing? I've got another box with basically the same config and I don't have these issues so I think I just missed a step somewhere. Assuming this is Ubuntu/Debian, add the following (or post your whole aliases file) code:
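The pasted stanza didn't survive the scrape; for each alias, the /etc/network/interfaces entry typically looks like this (addresses are placeholders):

```
auto eth0:1
iface eth0:1 inet static
    address 192.0.2.11
    netmask 255.255.255.0
```

With an auto line per alias, ifup (run as `ifup -a` at boot) assigns the address and raises the interface in one step; a bare `ifconfig eth0:1 up` fails with SIOCSIFFLAGS because the alias has no address assigned yet.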
|
# ? May 29, 2014 04:54 |
|
How easy is it to swap distributions? Let's say I have a full install of Ubuntu and I wanted to try Linux Mint.
|
# ? May 29, 2014 20:44 |
|
Mint is pretty much Ubuntu with a different desktop environment and some different software installed by default. They probably also have some of their own repos but mostly pull from Ubuntu's. Rather than changing distros you could get a nearly identical experience by installing the Cinnamon desktop environment. That will involve a lot less work. Multiple desktop environments can happily coexist for the most part. You install those you want and select which to use on the login screen. Edit: To actually answer your question. Back up your home directory, then flatten and reinstall with a new distro the same way you installed your first. Mint uses the same installer Ubuntu does. Sauer fucked around with this message at 21:05 on May 29, 2014 |
# ? May 29, 2014 21:01 |
|
jre posted:As in ? Got it to work using code:
Turns out where I was going wrong was that kbps actually means kilobytes per second here, not kilobits
|
# ? May 30, 2014 18:12 |
|
I recently installed Mint 16 on my laptop and, as usual, there was some fighting involved to install and configure my display drivers and bumblebee (my laptop uses nvidia Optimus). It's all working now, even games in Steam, but I've just noticed that my screen resolution is slightly wrong and the "Display Configuration" section in the system settings is just not working anymore: the only resolution available for my screen is 640x480. It was working before, so I guess something got messed up when installing and reinstalling drivers. What can I do to fix it?
|
# ? Jun 2, 2014 22:24 |
|
If you've got a fairly recent Intel driver, there should be a shell command, intel-virtual-output, that you can use to properly bind things together when bumblebee is running. If you're stuck on 640x480 though, there might be something else going on, as surely the lower-end card would be outputting the native laptop resolution. I'd imagine your first port of call is the Intel drivers. Edit: I've got a Thinkpad W530 with all external outputs hooked up to the nvidia side of an Optimus card, so I've done a lot of this dance and it can get pretty frustrating. All set up now, but yeah, you'll spend a lot of time swapping around drivers and configs to get the right setup.
|
# ? Jun 2, 2014 22:40 |
|
Maluco Marinero posted:If you've got a fairly recent Intel driver, there should be a shell command, intel-virtual-output that you can use to properly bind things together when bumblebee is running. If you're stuck on 640 480 though, there might be something else going on though, as surely the lower end card would be outputting the native laptop resolution. I guess my explanation was a little confusing. I'm not at 640x480. The resolution is just a little stretched, so I know it's wrong. But it's certainly not 640x480. Actually I don't know what exact resolution I'm using right now because the "Display Configuration" settings screen is not working. It shows my monitor as if it were at 640x480 and does not allow me to change it (it's the only option available). Apart from that (the display settings screen), everything is fine. I can even play CK2 in Steam, and it runs fine. Probably some settings file got messed up or something. EDIT: that's what happens at the "Display Configuration" screen. The "Display Settings" screen is just blank, as if I didn't even have a screen at all. Elias_Maluco fucked around with this message at 00:31 on Jun 3, 2014 |
# ? Jun 3, 2014 00:12 |
|
Any downside to using ndiswrapper? I got an 802.11ac USB adapter with no Linux drivers. Installed the XP ones and it seems fine.
|
# ? Jun 3, 2014 05:31 |
|
Last night my hackintosh install died. I came here because I'm debating the purchase of a new computer when I'm not sure I have to. I primarily use an iPad Air and iPhone 5. I'm very comfortable using Linux, especially Tomato USB and bash shells. Can someone recommend me a distribution for a Dell Mini 9 with 7.3GB of drive space? It must do the following tasks: 1) Print from PDF files to a Brother HL-2250D network printer. Running an AirPrint server would be a nice bonus. 2) Run a browser compatible with Google Docs --- I could just get a Bluetooth keyboard 3) Because I had such a small drive and an Atom processor, I rarely put music on it. For the iPhone, iPod, and iPad this looks promising, anyone use it? http://www.libimobiledevice.org 4) Manage photo collection, I never used iPhoto... but I need to reclaim some space A few years back I was running Lubuntu, but one day the automatic update took up all the remaining space
|
# ? Jun 3, 2014 06:10 |
|
Elias_Maluco posted:I'm not at 640x480. The resolution is just a little stretched, so I know it's wrong. But it's certainly not 640x480. I've seen something like that before, when dealing with Optimus. You essentially have two displays in one: the "real" display that actually produces the output, and a virtual display that does the rendering and passes the completed bitmaps to the real display. The virtual display has no real concept of "resolution", so any tools querying it are defaulting to 640x480. Run "echo $DISPLAY" on a terminal window. If it says something other than ":0", your set-up is probably configured to use the virtual display as a default one. To adjust your display resolution, you'll need to access the real display instead. To see your current resolution, make sure the x11-utils package has been installed. Then run this command: code:
DISPLAY=:0 xdpyinfo | grep dimensions
Find out how to start the settings utility (or the "Display Configuration" settings screen) from the command line, then start it the same way as the xdpyinfo command, prefixing it with the DISPLAY=:0 variable. If "echo $DISPLAY" returns :0, your set-up is configured to use an alternative display number on the real display and :0 for the virtual one. So you'll first need to identify the real display number. In that case, try running "ls -lA /tmp/.X11-unix/". It should list two files: typically X0 and X<something else>. In that case, <something else> is the display number you'll want to use when starting any resolution configuration tools.
|
# ? Jun 3, 2014 07:15 |
|
Maybe this has been addressed, but with the forums search still broken I can't find much. My workplace has three 2005 iMacs which we don't have the money to replace. They are used for unimportant jobs, but I'd like to keep them, and in a way that's non-destructive to the original OSX. If they were PCs, booting a Linux live CD or USB would be super easy. But these are PowerPC iMacs, so it's hard. I've googled around for help, and tried a few solutions, but I can't get any to work. Yesterday I made a DVD with a PowerPC version of Lubuntu, and it even started to boot once! Then it failed, showed a bunch of error messages, and it never worked again. Now it won't even try to boot on any of the three, even with a new DVD. But why did it try that one time? Furthermore, the iMacs boot into a user which doesn't have privileges to run Terminal or disk utilities. I do not have a better account and password, so I only have Windows for burning ISOs. Getting a USB drive formatted in a way that a PowerPC can boot will be very difficult. I do, however, know how to get the iMacs to boot into a prompt I can type into. Not that I can do much with it. If I can't get some alternative OS working on these iMacs, they will have to be thrown away.
|
# ? Jun 3, 2014 14:59 |
|
Ema Nymton posted:Maybe this has been addressed, but with the forums search still broken I can't find much. Does this mean iMac G5s? If so, it's probably not worth the effort to keep them running. But read this. Even though there would be endianness problems trying to create an HFS+ partition on an x86 machine, dd does not have these problems. dd an ISO to a flash drive. Boot, install, be merry. They're not going to be fast enough to do anything you want to do, really, and most of the software you have won't work at all (though stuff from repos will be fine), and you're generally better off replacing them with cheap Atoms. Alternatively, boot them into single-user mode (it's still UNIX) and change the password for the admin account.
|
# ? Jun 3, 2014 16:02 |
|
Does anyone here have a strong understanding of the ext2 filesystem? I'm trying to create a script to build from source a root filesystem/ramdisk which will result in a byte-identical file every time. All my binaries are compiling nicely/identically now, but I'm discovering that compilation is the easy part and the actual file system is a little harder to build deterministically. I've managed to build an empty filesystem identically by faking some timestamps and overwriting the 'directory hash seed', but as soon as I mount it and actually copy files, everything goes to poo poo. Is there some kind of randomness to where blocks/inodes are allocated? Is this a hopeless/dumb struggle? I could, instead, just make a tool to verify that the hashes of the contents of a provided ramdisk match the contents of a ramdisk built from source, but I'm worried that a hypothetically malicious ramdisk builder could manipulate filesystem metadata in some kind of hypothetically evil way, while still satisfying this contents-checking tool.
|
# ? Jun 3, 2014 17:00 |
|
Illusive gently caress Man posted:Is this a hopeless/dumb struggle? Illusive gently caress Man posted:I could, instead, just make to a tool to verify that the hashes of the contents of a provided ramdisk match the contents of a ramdisk built from source, but I'm worried that a hypothetically malicious ramdisk builder could manipulate filesystem metadata in some kind of hypothetically evil way, while still satisfying this contents-checking tool. Let's take a step back, what's your actual scenario and what are your objectives? Sounds like you're trying to implement some kind of verified boot strategy.
|
# ? Jun 3, 2014 18:28 |
|
ExcessBLarg! posted:Let's take a step back, what's your actual scenario and what are your objectives? Sounds like you're trying to implement some kind of verified boot strategy. Exactly, there's some tpm extension-like stuff going on. The system runs a modified u-boot which grabs the kernel/ramdisk over tftp. There is no persistent storage attached. At some later point, the system needs to be able to perform some attestation-like stuff in which hashes of what u-boot loaded are signed. Ideally, I'd like for anybody to be able to use this tool I'm working on to build an identical kernel/ramdisk from source so they can verify what's running.
|
# ? Jun 3, 2014 19:10 |
|
Why does the filesystem have to be identical? Why is verifying the files alone not good enough?
|
# ? Jun 3, 2014 19:13 |
|
Why transmit an ext2 filesystem when you could just use a cpio archive?
|
# ? Jun 3, 2014 19:21 |
|
pseudorandom name posted:Why transmit an ext2 filesystem when you could just use a cpio archive? Because I'm an idiot and I didn't know you could do that. I think this will work. brb.
|
# ? Jun 3, 2014 19:27 |
|
Illusive gently caress Man posted:Exactly, there's some tpm extension-like stuff going on. The system runs a modified u-boot which grabs the kernel/ramdisk over tftp. There is no persistent storage attached. At some later point, the system needs to be able to perform some attestation-like stuff in which hashes of what u-boot loaded are signed.
|
# ? Jun 3, 2014 19:45 |
|
This is a bit of a weird question, but maybe someone has run into it before. I have been using zsh and oh-my-zsh for my shell, on 14.04 64-bit. I set up a chroot to do some builds with 12.04 32-bit. I enter the chroot using schroot. Basically, following this guide: https://wiki.ubuntu.com/DebootstrapChroot However when I use the chroot, the prompt gets screwed up, especially when I am using tab-completion or things like that. It's like it's sending too many backspaces or something. The chroot is using my home directory, so it's the same dotfiles. I manually upgraded the versions of zsh on each to be the same (5.0.3). Out of the chroot it's fine.
|
# ? Jun 4, 2014 00:51 |
|
evensevenone posted:This is a bit of a weird question, but maybe someone has run into it before. When you don't explicitly provide a command, chroot runs /bin/sh under the new root. Run chroot /whatever /bin/zsh.
|
# ? Jun 4, 2014 04:14 |
|
I'm using zsh in both shells. edit: actually, it seems to be unicode issue of some sort.
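A quick check for the usual culprit, a UTF-8 locale that exists on the host but was never generated inside the chroot (run this inside the chroot; locale-gen is the Debian/Ubuntu tool):

```shell
# zsh draws its prompt with multibyte-aware cursor movement, so if the
# chroot lacks the UTF-8 locale named in $LANG, redraws after tab
# completion come out garbled.
echo "requested: ${LANG:-unset}"
locale -a 2>/dev/null | grep -i utf \
    || echo "no UTF-8 locales in this root (try: locale-gen en_US.UTF-8)"
```

If the requested locale isn't in the list, generate it in the chroot (or export a matching LANG/LC_ALL in your schroot profile) and the prompt corruption should stop.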
|
# ? Jun 4, 2014 06:48 |
|
|
evensevenone posted:I'm using zsh in both shells. Take a screenshot
|
# ? Jun 4, 2014 14:23 |