|
spankmeister posted:Yeah, but do you really want to? Especially if your processor supports it anyhow. Besides, processors that lack virtualization support will be pretty old or low end.

Stealthgerbil posted:I am a total newbie to linux virtualization, does xen require a special processor for it to work? I have been trying to set up CentOS 6.4 with xen and cloudmin and I am getting some error about it not supporting virtualization.

spankmeister posted:KVM over Xen? How does that work? And what benefit does it give you?
|
# ? Jun 11, 2013 07:22 |
|
|
Suspicious Dish posted:How are you setting up VNC, and which variant are you using? I remember one version of VNC leaking a bunch of X windows all over the place, which would balloon memory as mutter tracked a new window and backing pixmap for every frame drawn. vncserver :1, using the tigervnc that's current with Fedora 18 (so I assume a reasonably up-to-date version). I can't recall ever having such problems with tigervnc before, either on CentOS or older Fedora/tigervnc builds.
|
# ? Jun 11, 2013 07:22 |
|
Doctor w-rw-rw- posted:KVM is integrated into the kernel, and has been for 5-6 years. I don't know the specific advantages, but in my personal experience I had a lot more difficulty configuring Xen than KVM. I do know that Red Hat switched from Xen to KVM, and I can only assume that their decision to build on KVM rather than Xen was a pragmatic one. Maybe Suspicious Dish can enlighten us?
|
# ? Jun 11, 2013 07:46 |
|
KVM also gives you Windows and the BSDs, all of which have virtio drivers of some sort (FreeBSD and Windows require extra installation, Net/OpenBSD work OOB)
|
# ? Jun 11, 2013 08:12 |
|
text editor posted:KVM also gives you Windows and the BSDs, all of which have virtio drivers of some sort (FreeBSD and Windows require extra installation, Net/OpenBSD work OOB)
|
# ? Jun 11, 2013 09:21 |
|
eXXon posted:Well, I killed the vncserver before I could do that. But curiously enough, for a fresh vncserver it's still using 200m, half of which is: But note that the Resident Set Size (RSS, the part that is actually in memory right now) of the locales is only 64 kB. The RSS is the third column in the pmap -x output. What is happening here is that the process has mapped the locale-archive into its virtual address space. This is essentially just a promise from the kernel to the process: "you wanted this file, now any parts of it you actually use will be there." The memory management subsystem flags that part of the process's private virtual address space. At this point, this part of the virtual address space does not have any real RAM associated with it. As soon as the process attempts to access some part of it, the processor jumps to the kernel's page fault handler, which identifies the part of the file the process needs and loads only that 4 kB page, then allows the process to access it. From the point of view of the process, the file is accessible in memory; it's just that the process seems to space out for a tiny while whenever it touches a particular 4 kB chunk of it for the first time. From the kernel POV, a memory hog process that would have loaded a 102580 kB file has been satisfied by using just 64 kB of real RAM so far.
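The VmSize/VmRSS split telcoM describes can be read out of /proc for any process. A minimal sketch (note that awk ends up reading its own status file here, since /proc/self resolves per process):

```shell
# VmSize counts every mapping in the virtual address space, including
# file pages that were mapped but never touched; VmRSS counts only the
# pages actually backed by real RAM right now.
vsz_kb=$(awk '/^VmSize:/ {print $2}' /proc/self/status)
rss_kb=$(awk '/^VmRSS:/ {print $2}' /proc/self/status)
echo "virtual: ${vsz_kb} kB  resident: ${rss_kb} kB"
```

On a process that has mapped a big locale-archive, the first number includes the whole file while the second only grows as pages are faulted in.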
|
# ? Jun 11, 2013 09:54 |
|
text editor posted:KVM also gives you Windows and the BSDs, all of which have virtio drivers of some sort (FreeBSD and Windows require extra installation, Net/OpenBSD work OOB) Xen HVM is virtually indistinguishable from KVM from a guest perspective other than QEMU device names, in that all the same guests mostly work. Solaris support is much better on HVM than KVM, though. This is not a reasonable argument for preferring KVM to Xen, unless you only use Xen PV guests.
|
# ? Jun 11, 2013 16:37 |
|
telcoM posted:As soon as the process attempts to access some part of it, the processor jumps to the kernel's page fault handler, which identifies the part of the file the process needs and loads only that 4 kB-sized page, then allows the process to access it. The other part of this is that since a lot of apps using gettext mmap the locale-archive as read-only, the kernel can share the actual memory block between all processes. Accurately counting memory "used by a process" is complex, as processes can share actual memory blocks, and processes can have memory that they think is available to them but actually isn't.
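One hedged way to see that accounting problem on Linux is to total the Rss and Pss columns of a process's smaps: Rss charges every shared page in full to each process that maps it, while Pss divides each shared page among its mappers, so Pss sums sensibly across the system (again, awk is reading its own smaps here):

```shell
# Sum resident (Rss) and proportional (Pss) sizes across all mappings.
# Pss <= Rss, because shared pages are split among their mappers.
totals=$(awk '/^Rss:/ {rss += $2} /^Pss:/ {pss += $2}
              END {print rss, pss}' /proc/self/smaps)
echo "Rss/Pss in kB: $totals"
```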
|
# ? Jun 11, 2013 17:07 |
|
Doctor w-rw-rw- posted:KVM is integrated into the kernel, and has been for 5-6 years. I don't know the specific advantages, but in my personal experience I had a lot more difficulty configuring Xen than KVM. I do know that Red Hat switched from Xen to KVM, and I can only assume that their decision to build on KVM rather than Xen was a pragmatic one. Maybe Suspicious Dish can enlighten us? Oh I thought you meant running a Xen Dom0 inside of KVM or some weird poo poo like that.
|
# ? Jun 11, 2013 17:27 |
|
Suspicious Dish posted:Accurately counting memory "used by a process" is complex, as processes can share actual memory blocks, and processes can have memory that they think is available to them but actually isn't.
|
# ? Jun 11, 2013 18:17 |
|
eXXon posted:vncserver :1, using the tigervnc that's current with Fedora 18 (so I assume a reasonably up to date version). I can't recall ever having such problems with tigervnc before, either on centos or older Fedora/tigervnc builds. OK, so VNC doesn't actually support hardware acceleration. What's happening is that it's like you turned off acceleration; your software rasterizer (on a recent F18, that would be llvmpipe) is kicking in and then the VNC server process is taking all changes on the root window and sending compressed pixel blocks across the wire. To composite the windows, gnome-shell uses an OpenGL extension called texture-from-pixmap, which basically allows gnome-shell to say "give me a GL texture that corresponds to this window". llvmpipe's implementation of TFP, unfortunately, isn't zero-copy: it will copy the backing pixmap into gnome-shell's memory space, and that means that if you have some windows open at 1920x1200, doing the math (1920 wide * 1200 tall * 4 windows * 32 bits per pixel) already gets close to 2G of memory used. I asked our resident llvmpipe expert to see if he can fix that if you're under memory pressure, but it's hard. We're dealing with a system from the 80s here, keep in mind.
|
# ? Jun 11, 2013 18:57 |
|
Not sure where you're getting the 2G from there, 1920*1200*4 bytes is 8.8 megabytes.
|
# ? Jun 12, 2013 10:22 |
|
So, sometimes when I open a video file from my server in VLC, VLC becomes unresponsive. I close the window via the "X" control and try again, but VLC remains unresponsive. The Videos application in Ubuntu (I think that's Totem, right?) won't play it at this point either. If I then copy the video file to my local desktop, VLC plays it fine. Other times the same video file will play just fine over the network. The biggest issue to me right now is that I can't kill the VLC process. After trying to kill it, ps shows its state code as Sl... interruptible sleep and multithreaded. The problem is that it prevents my screensaver from coming on and turning off the monitors. How do I get rid of this process short of rebooting? Anything I can do about the occasional network issue that leads to this happening?
|
# ? Jun 12, 2013 19:41 |
|
Did you try kill -9 <pid of VLC> ?
|
# ? Jun 12, 2013 19:49 |
|
Suspicious Dish posted:OK, so VNC doesn't actually support hardware acceleration. What's happening is that it's like you turned off acceleration; your software rasterizer (on a recent F18, that would be llvmpipe) is kicking in and then the VNC server process is taking all changes on the root window and sending compressed pixel blocks across the wire. Ok, thanks. I'm not sure I completely followed that, though - like pseudorandom name said, 1920x1200x32 bits x 2 VNC sessions should only be a few tens of MB. Is it that much per window in each VNC session or something? I thought all VNC did was send a few compressed images of the desktop every second, so why would it matter how many windows are open in the background? I'm not really memory limited, but I just found it odd to see such enormous memory usage. Right now it's back to 200-300 MB, which is still substantial but not ridiculous.
|
# ? Jun 12, 2013 19:56 |
|
spankmeister posted:Did you try kill -9 <pid of VLC> ? Yeah. Oh, I forgot to mention that in my post. Doing that turns it into a zombie, but screensaver/monitor-off still won't happen until after a reboot.
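Before rebooting over a stuck process, it can help to read its state column first: "Z" means zombie (already dead, just unreaped by its parent), and "D" means uninterruptible sleep, typically blocked on I/O to a hung network mount, which even SIGKILL can't interrupt until the I/O returns. A sketch using a throwaway sleep as a stand-in pid:

```shell
# Spawn a stand-in process and read its state letter with ps.
# For the real case you'd substitute VLC's pid for $pid.
sleep 60 &
pid=$!
state=$(ps -o stat= -p "$pid")   # e.g. "S" = interruptible sleep
echo "pid $pid is in state: $state"
kill "$pid"                      # clean up the stand-in
```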
|
# ? Jun 12, 2013 19:58 |
|
I'm trying to write a (supposedly) easy script to launch Serviio from my desktop. The bin file has serviio.sh (what starts the service) and serviio-console.sh (what launches the GUI). I haven't done this in a long time and it looks something like this at the moment:
cd /home/greenpuddin/.serviio/bin
./serviio.sh
./serviio-bin.sh
exit
(It's a bit rough because I'm not at home with the actual script in front of me) I think what happens is that when it launches serviio.sh, it hangs on that process. It can't seem to get to the -bin process. Ideally I'd like to have it execute ./serviio.sh first, wait a second or two, then execute ./serviio-bin.sh to get the console up. Then I can just kill the entire thing when need be via the GUI. Any ideas here?
|
# ? Jun 12, 2013 20:11 |
|
Well, does serviio.sh ever terminate? I mean does it run some other processes and finish in a second or two, or does the script keep running until you kill it? In the latter case, you probably want something like: ./serviio.sh > serviio.log 2>serviio.err & or nohup ./serviio.sh if you want it to keep running after you log out.
|
# ? Jun 12, 2013 20:16 |
|
eXXon posted:Well, does serviio.sh ever terminate? I mean does it run some other processes and finish in a second or two, or does the script keep running until you kill it? If you only want serviio-bin.sh to run if serviio.sh succeeds: code:
./serviio.sh && ./serviio-bin.sh
Or, since serviio.sh keeps running, background it and give it a moment first: code:
./serviio.sh &
sleep 2
./serviio-bin.sh
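A runnable sketch of that launch sequence, with placeholder functions standing in for serviio.sh and serviio-console.sh (the names and timings here are illustrative, nothing is Serviio-specific):

```shell
# Placeholder functions; a real script would call ./serviio.sh and
# ./serviio-console.sh from the Serviio bin directory instead.
server()  { sleep 30; }            # stands in for the DLNA server
console() { echo "console up"; }   # stands in for the GUI console

server &                 # background it so the script can continue
server_pid=$!
sleep 1                  # crude wait for the server to come up
out=$(console)           # the console runs while the server keeps going
echo "$out"
kill "$server_pid"       # stop the server once the console is done
```

Killing the backgrounded pid when the console exits matches the "kill the entire thing via the GUI" goal from the question.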
|
# ? Jun 12, 2013 20:25 |
|
eXXon posted:Ok, thanks. I'm not sure I completely followed that, though - like pseudorandom name said, 1920x1200x32 bits x 2 VNC sessions should only be a few tens of MB. Gah, I was misremembering. I was confusing it with how much data would be transmitted if all the windows streamed data out at 60 FPS, which is a similar number. eXXon posted:Is it that much per window in each VNC session or something? I thought all VNC did was send a few compressed images of the desktop every second, so why would it matter how many windows are open in the background? In order to do alpha compositing and other fancy features, we need the pixels drawn from every single window. Usually this is kept in the X server or in GPU memory, but llvmpipe pulls it across into gnome-shell's memory space as well. It's possible to get rid of this and make llvmpipe more like other hardware-accelerated drivers, but there just aren't enough people on llvmpipe, and our resident llvmpipe expert is working on PowerPC support at the moment. eXXon posted:I'm not really memory limited but I just found it odd to see such enormous memory usage. Right now it's back to 200-300 MB, which is still substantial but not ridiculous. 200-300MB is about normal for gnome-shell usage. We still try to profile and reduce the footprint, but there's a lot going on in there. gnome-shell doesn't just contain a window manager.
|
# ? Jun 12, 2013 21:38 |
|
Suspicious Dish posted:Gah, I was misremembering. I was confusing it with how much data would be transmitted it would be if all the windows stream data out at 60FPS, which is a similar number. Oh, right, four windows dirtying their entire contents at 60 FPS would be around 2 gigabytes per second. And since there's no way for X apps running under a compositor to determine if they're visible to the user and should continue rendering, this is actually possible.
|
# ? Jun 12, 2013 21:44 |
|
pseudorandom name posted:Oh, right, four windows dirtying their entire contents at 60 FPS would be around 2 gigabytes per second. Right, but that's data volume sent over the pipe. Almost none of that 2GB should be kept in resident memory, unless there's a pixmap leak somewhere. pseudorandom name posted:And since there's no way for X apps running under a compositor to determine if they're visible to the user and should continue rendering, this is actually possible. We've toyed around with letting windows know they were completely obscured so that they stopped drawing (especially with the new frame timing stuff), but computers are so fast nowadays that it's not worth the effort anymore.
|
# ? Jun 12, 2013 21:54 |
|
It isn't a matter of performance, it's a matter of battery life.
|
# ? Jun 12, 2013 22:00 |
|
Yeah, serviio.sh doesn't terminate, and I think the issue is that when you launch something like that, say in the terminal, it will only focus on that one application/process until that terminal is closed. It's a process that needs to be run first before the console, and it just hangs out in the background. It's a DLNA server if that helps make more sense. I'll have to try out what contrapants said when I get home from work tonight though so I'll check in later. E: Actually now I'm reading through some other forums and some people have gotten the process to start just by using "start serviio", will check it out. Green Puddin fucked around with this message at 23:29 on Jun 12, 2013 |
# ? Jun 12, 2013 22:58 |
|
Thermopyle posted:Yeah. Oh, I forgot to mention that in my post. Is the file you're trying to play on an smb share mounted over gvfs? There is something funky going on in gvfs and I find that I have to mount with cifs if I don't want random hangs over smb. See if disconnecting the network share takes care of those leftover VLC processes. Longinus00 fucked around with this message at 23:10 on Jun 12, 2013 |
# ? Jun 12, 2013 23:04 |
|
The gvfs smb bug has been fixed upstream. It's just that giosrc does a lot of seeking, which was sort of broken. https://bugzilla.gnome.org/show_bug.cgi?id=675181
|
# ? Jun 12, 2013 23:19 |
|
Longinus00 posted:Is the file you're trying to play on a smb share mounted over gvfs? There is something funky going on in gvfs and I find that I have to mount with cifs if I want don't want random hangs over smb. Yeah, this is it. Unmounting it got rid of the VLC process. Suspicious Dish posted:The gvfs smb bug has been fixed upstream. It's just that giosrc does a lot of seeking, which was sort of broken. Yeah, this looks to be the issue I was experiencing. It seems to happen if I seek in a video file immediately after opening it.
|
# ? Jun 12, 2013 23:41 |
|
Suspicious Dish posted:The gvfs smb bug has been fixed upstream. It's just that giosrc does a lot of seeking, which was sort of broken. Do you happen to know what version of gvfs has the fix applied to it?
|
# ? Jun 13, 2013 00:20 |
|
Longinus00 posted:Do you happen to know what version of gvfs has the fix applied to it? The fix first appears in gvfs 1.17.0
|
# ? Jun 13, 2013 00:37 |
|
disregard, it doesn't work either way.
Cole fucked around with this message at 02:24 on Jun 13, 2013 |
# ? Jun 13, 2013 00:53 |
|
Suspicious Dish posted:The fix first appears in gvfs 1.17.0 Actually, it looks like those also landed in 1.16.1, which is the version Ubuntu is shipping in raring (assuming those commits at the end of the bug report are what fixes it)... Thermopyle, what version of ubuntu are you running again? (I think I remember you saying you were using Ubuntu)
|
# ? Jun 13, 2013 01:30 |
|
Oh well, gently caress making that script work. I'll just keep a folder named serviio in my home folder and go there when I want to launch it. A lot of Serviio instructions talk about how to get it to start automatically, but this being a laptop that goes onto multiple networks a day, you can see how a DLNA server might piss off an admin or two.
|
# ? Jun 13, 2013 14:33 |
|
A question about open ports, with the massive caveat that I'm a bioinformatician who knows just enough Linux & sysadmin to get by. With that said: I'm developing a webapp for work (in Django, for interest's sake) and, as I've done many times before, during development I'm running it out of my account on port 8080. However, this time I'm doing it on a new server freshly delivered from IT, and it's set up differently to what I expect and 8080 isn't opened. Symptoms / clues: * I can tunnel to the server to see port 8080 from my PC * But it's not visible to the wider world, i.e. browsing to <server_addr>:8080 shows nothing * The standard port 80 works - i.e. browsing to <server_addr> shows the Apache setup page * I tested if the ports were open by telnetting from another machine - port 80 connects, 8080 does not. * The web software running on the machine never sees anything from the outside world - the logfile shows no requests. * iptables looks like this: code:
|
# ? Jun 13, 2013 15:47 |
|
Try this:code:
|
# ? Jun 13, 2013 16:01 |
|
It sounds like the webserver you are running on 8080 might only be binding to the localhost interface? That's the default behaviour of the django test server, which I'm assuming you are using.code:
python manage.py runserver 0.0.0.0:8080
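A quick way to check which address a server actually bound to (ss is from iproute2; older boxes would use netstat -tln instead). A line like 127.0.0.1:8080 means localhost-only, while 0.0.0.0:8080 or *:8080 means all interfaces, reachable from outside if the firewall allows it:

```shell
# List TCP listening sockets: -t TCP, -l listening only, -n numeric.
ss -tln 2>/dev/null || netstat -tln
```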
|
# ? Jun 13, 2013 16:08 |
|
Please don't use the django debugging server for anything other than local development. It is not meant for production use, and can and will crash and burn horribly in other deployments.
|
# ? Jun 13, 2013 16:39 |
|
Suspicious Dish posted:Please don't use the django debugging server for anything other than local development. It is not meant for production use, and can and will crash and burn horribly in other deployments. I've idly wondered in the past if it would be good enough for those "local" webapps that people have. Like something that runs on your machine and only has a web interface, or maybe runs on your home server and is only accessed locally by a person or two.
|
# ? Jun 13, 2013 17:52 |
|
eXXon posted:Ok, thanks. I'm not sure I completely followed that, though - like pseudorandom name said, 1920x1200x32 bits x 2 VNC sessions should only be a few tens of MB. Is it that much per window in each VNC session or something? I thought all VNC did was send a few compressed images of the desktop every second, so why would it matter how many windows are open in the background? Along these lines, is there any way to remotely play say a video game on a gigabit network with no discernible lag or image degradation, or is that still a pipe dream?
|
# ? Jun 13, 2013 20:24 |
|
Dead Inside Darwin posted:Along these lines, is there any way to remotely play say a video game on a gigabit network with no discernible lag or image degradation, or is that still a pipe dream? Short answer: no. You can sort-of do it with VMware View and a Quadro on your ESXi server, but the only games I've seen tested with it are pretty dated. If the "1920x1200x32" = 500MB/s vs. the ~125MB/s theoretical cap of GigE didn't clarify it, sending uncompressed, prerendered pixmaps at 60+FPS is plainly inefficient, and you need to compress the image in realtime to get it even sort-of working, which was OnLive's business model. Doing it at home is a pipe dream. RemoteFX (Microsoft) and HDX (Citrix) compete in the same space, but that space is pretty much "workstation level graphics cards and thin clients play a very limited subset of games" at the moment. For appropriately low input latency to a game, you'd be looking at something like VNC+IPoIB.
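The bandwidth arithmetic behind that: one uncompressed 1920x1200 stream at 4 bytes per pixel and 60 FPS, against gigabit Ethernet's theoretical ceiling:

```shell
stream_bps=$((1920 * 1200 * 4 * 60))   # bytes/s for one raw 60 FPS stream
gige_bps=$((1000 * 1000 * 1000 / 8))   # 1 Gbit/s expressed in bytes/s
echo "$stream_bps vs $gige_bps"        # 552960000 vs 125000000
```

Roughly 527 MiB/s of pixels against a ~119 MiB/s wire, which is why realtime compression (OnLive, RemoteFX, HDX) is the only way to make it fit.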
|
# ? Jun 13, 2013 21:36 |
|
|
I suppose you could use a second PC to encode to x264 on the fly, but the lag would be terrible. OnLive's main issue was just that, latency.
|
# ? Jun 13, 2013 21:41 |