|
cruft posted:X11 was designed before anybody knew what was going to unfold when it came to graphics on computers. Wayland has the benefit of seeing what decades of user interfaces had in common. I love reading about this kind of stuff. Thank you for sharing
|
# ? Jan 3, 2024 08:24 |
|
|
# ? May 19, 2024 17:48 |
|
The best part of X in the 90s was that security didn't exist yet, so you could telnet into your friend's workstation, run 'xhost +', and fire up xterms in a while loop to render his desktop useless.
|
# ? Jan 3, 2024 14:05 |
|
Even better, you could run xspy that my coworker wrote, and log every keystroke.
|
# ? Jan 3, 2024 15:52 |
|
X11 is clearly about radical transparency*

*As long as it's not true transparency
|
# ? Jan 3, 2024 15:55 |
|
cruft posted:the ultimate post / username combo

Oh yeah, that's the stuff. I very much agree with the perspective that it doesn't matter what the devs like if the user experience is worse. But my experience with Wayland has been that everything is improving at a pretty rapid pace, while my experience with X was that stuff that bugged me in 2003 was still annoying in 2023. So I'm firmly of the belief that Wayland will be a better user experience than X in all cases sometime soon-ish. Of course it helps that I don't have a lot of need for remote desktop or video capture, so I'm not dealing with permissions popups all the time.

cruft posted:It probably seems bonkers to modern desktop users that you'd want to launch an app on a remote system and have it see your local fonts and widget preferences (theme), but that's what X was designed to do. You could even run an X server on low-cost desktop hardware and run the window manager and everything else on a big beefy server.

It is bonkers, and I actually *did* that. When I was in college, in the year 2000, the compsci assignments all had to be done on Sun workstations in the lab rooms. I installed SuSE and figured out remote X desktop, so I could do them from my dorm. In related events, I became a computer toucher instead of a programmer.
|
# ? Jan 3, 2024 16:42 |
|
Tad Naff posted:Any Synergy users here? First day of work today and somehow it was fully broken on my main machine (Fedora 39). Luckily I had the previous version's RPM lying around but I spent a long time today trying to figure out the issue, never did. Broken version is 3.0.78.1, working version 3.0.77.2. The only clues I found were references to gdm. The packages from Symless say they're for Fedora 38, did something change around gdm for 39?
|
# ? Jan 3, 2024 17:31 |
|
Klyith posted:When I was in college, in the year 2000, the compsci assignments all had to be done on Sun workstations in the lab rooms. I installed SuSE and figured out remote X desktop, so I could do them from my dorm. In related events, I became a computer toucher instead of a programmer.

Around the same time my compsci department had Suns and SGI O2s in the classrooms. The O2s had Mozilla Thunderbird installed but were usually all in use, so I logged in to a Sun and then started TB remotely from a random O2.
|
# ? Jan 3, 2024 17:40 |
|
Saukkis posted:Around the same time my compsci department had Suns and SGI O2s in the class rooms. The O2s had Mozilla Thunderbird installed but were usually all in use. So I logged in to a Sun and then started TB remotely from random O2.

This sort of thing BLEW THE GRADUATE STUDENTS' MINDS
|
# ? Jan 3, 2024 17:55 |
|
Why are fonts like this? They've been like this for a long time. Why doesn't someone do something?? The font formats support Unicode!!! It makes it annoying to choose a font and often slows the font chooser to a crawl. An example, but it's not just Noto: quote:
|
# ? Jan 3, 2024 19:42 |
|
You don’t want a font that supports all of Unicode because loading the metadata for it would be huge and slow, and you don’t want every font to have to have every glyph in it in order to be used. Font selection for different glyphs (weights, sizes, ligatures, blah blah) should be picking the underlying fonts as needed, but I agree that font selection UI is a pile of butt generally. It should just let you say “use this whole family, ok?” and hide the component fonts unless you engage font-nerd mode.
|
# ? Jan 3, 2024 19:50 |
|
Noto was a good one to pick: the goal of that font is to have "No Tofu". They want to map every single glyph in Unicode. Maybe one day we'll have systems that can handle 3GB font files, but right now it's more space-efficient and easier on everybody to rely on the "glyph borrowing" built into basically everything. Then, if you're setting up a system and you're sure you will only need Latin, Klingon, and Tamil, you can drop in only those font files, and save cost by not needing a massive Micro SD card for your gigantic font when you ship units. Not to mention, embedding a 3GB font in a web page is going to make page loads slow as hell. Basically, it'll happen, but we're not there yet.

Back in college, everybody was struggling with color palette switching. The VGA chip could only show 256 colors at a time, so if you went into Netscape, it would be like "I don't care what the other windows are doing, I need all 256" and your paint program colors would go bonkers until you clicked that back into focus. At the time, we were all "this sucks, one day we're going to look back on this and be glad we don't have to deal with it any more". And I'm here in the future to tell you that we were right: that was the worst.

cruft fucked around with this message at 20:08 on Jan 3, 2024 |
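The "glyph borrowing" logic is conceptually tiny, by the way. Here's a toy sketch of the fallback idea (the font list and coverage sets are made up for illustration; real systems get coverage from each font's cmap via fontconfig or the platform equivalent):

```python
# Hypothetical fallback chain: each "font" is a name plus the set of
# codepoints it covers. Real coverage comes from the font's cmap table.
FALLBACK_CHAIN = [
    ("Noto Sans",       {ord(c) for c in "abcdefgh "}),
    ("Noto Sans Tamil", {0x0B85, 0x0B86, 0x0B87}),
]
TOFU = "Last Resort"  # what you see when nothing covers the codepoint

def pick_font(codepoint: int) -> str:
    """Walk the chain and 'borrow' the glyph from the first font that has it."""
    for name, coverage in FALLBACK_CHAIN:
        if codepoint in coverage:
            return name
    return TOFU

print(pick_font(ord("a")))   # -> Noto Sans
print(pick_font(0x0B85))     # -> Noto Sans Tamil (TAMIL LETTER A)
print(pick_font(0x1F600))    # -> Last Resort (tofu time)
```

That's why you only need to ship the script files you actually use: the chain just gets shorter.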
# ? Jan 3, 2024 20:04 |
|
There are obviously reasons for splitting the physical files, both in terms of size and in terms of being able to set whether Chinese or Japanese versions of characters are the default on a device. But yeah, there should clearly be some way to collapse them into a single entry when selecting a font from a UI perspective.
|
# ? Jan 3, 2024 20:06 |
|
I wrote some C to interpret and update tables within a TTF file before. There are a lot of tricks in the format to save as much memory as possible -- practically everything is specified in terms of offsets and lengths. It's been a long time, but I think it should be possible to avoid loading the entire font into memory (at worst, mmap to make it easier). The main thing is a UI issue, though. I just really wonder why it's like this; it's a very common PITA and I don't get why it was done this way.
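To give an idea of what I mean by offsets and lengths: the front of a TTF is just a table directory of (tag, offset, length) records. A from-memory sketch in Python rather than my old C (the two-table header here is fabricated for the demo):

```python
import struct

def parse_table_directory(data: bytes):
    """Parse the sfnt header and table directory of a TTF blob.

    Every table is located purely by (offset, length) pairs, so you
    never have to read the whole file to find one table.
    """
    # sfnt header: scaler type, then the binary-search helper fields
    scaler_type, num_tables, search_range, entry_selector, range_shift = \
        struct.unpack_from(">IHHHH", data, 0)  # big-endian, like all of TTF
    tables = {}
    for i in range(num_tables):
        # 16-byte directory records start right after the 12-byte header
        tag, checksum, offset, length = struct.unpack_from(">4sIII", data, 12 + 16 * i)
        tables[tag.decode("ascii")] = (offset, length)
    return tables

# Build a fake two-table header to demonstrate.
header = struct.pack(">IHHHH", 0x00010000, 2, 32, 1, 0)
header += struct.pack(">4sIII", b"cmap", 0, 44, 100)
header += struct.pack(">4sIII", b"glyf", 0, 144, 2000)

print(parse_table_directory(header))  # -> {'cmap': (44, 100), 'glyf': (144, 2000)}
```

From there you seek (or mmap) straight to whichever table you care about.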
|
# ? Jan 3, 2024 20:38 |
|
mawarannahr posted:I wrote some C to interpret and update tables within a TTF file before. There are a lot of tricks in the format to save as much memory as possible -- practically everything is specified in terms of offsets and lengths. It's been a long time but think it should be possible to avoid loading the entire font into memory (at worst, mmap to make it easier). The main thing is an UI issue though. I just really wonder why it's like this as it's a very common PITA and I don't get why it was done this way

I love the notion that mmapping a 2023 font would exhaust the addressable userspace memory on a 32-bit architecture.

I have this little LEGO Margaret Hamilton on my desk. She's standing there with her pile of books, containing printouts of the Apollo code. I like to imagine her judging my decisions. That this font exists is absolutely appalling to 1960s LEGO Margaret Hamilton.
|
# ? Jan 3, 2024 20:54 |
|
cruft posted:I love the notion that mmapping a 2023 font would exhaust the addressable userspace memory on a 32-bit architecture.

lol. I hadn't considered that the format might break due to integer overflow in offset/length fields, but I assume OTF fixed that / new tables could have fixed it. They do have some tricks though, like using offsets from powers of 2 to keep the format tight.

vv mmap isn't necessary, probably more convenient than what was kind of like reading from some sort of tape. But just grouping the fonts by metadata would fix the UI issue.

mawarannahr fucked around with this message at 21:01 on Jan 3, 2024 |
# ? Jan 3, 2024 20:56 |
|
having to shift mmap windows during text layout (and again for rendering!) to conserve address space would have really made the Linux desktop feel zippy
|
# ? Jan 3, 2024 20:57 |
|
Is there a VNC setup I can use in this day and age that will start up before a local login? I've particularly been trying Tiger VNC and KDE's own krfb. Both require that I have fully logged in to the actual machine on its local screen first. So the only solution I'd have is to enable auto-login, and I really don't want to do that.
|
# ? Jan 4, 2024 00:35 |
|
I haven't had to do that in a while, esp. since Wayland. If you're on X I think something like this should work? https://wiki.archlinux.org/title/TigerVNC#With_systemd No idea with Wayland; maybe nothing supports it yet?
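The unit on that wiki page is roughly this shape, going from memory (treat the user, paths, and display number as placeholders and check the wiki/package for the real file):

```ini
[Unit]
Description=TigerVNC server on display :1
After=network.target

[Service]
Type=forking
User=youruser
ExecStart=/usr/bin/vncserver :1 -geometry 1920x1080 -localhost
ExecStop=/usr/bin/vncserver -kill :1

[Install]
WantedBy=multi-user.target
```

Enable it and the VNC session comes up at boot, no local login needed.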
|
# ? Jan 4, 2024 01:08 |
Rocko Bonaparte posted:Is there a VNC setup I can use in this day and age that will start up before a local login? I've particularly been trying Tiger VNC and KDE's own krfb. Both require that I have fully logged in to the actual machine on its local screen first. So the only solution I'd have is to enable auto-login, and I really don't want to do that. Perhaps ssh into the account and a dbus command to start a wayland session (assuming that's your compositor)? code:
|
|
# ? Jan 4, 2024 01:16 |
|
mawarannahr posted:I wrote some C to interpret and update tables within a TTF file before. There are a lot of tricks in the format to save as much memory as possible -- practically everything is specified in terms of offsets and lengths. Can you share the code?
|
# ? Jan 4, 2024 16:10 |
|
Armauk posted:Can you share the code?

GitHub wasn't around yet, but here are Apple's docs. It was kind of fun to figure out once I got the idea. Lots of bit twiddling and big endian poo poo. A good way to learn C imo (I would also reimplement the same as a learning project for other languages, even Ruby). Unfortunately I feel like I'm not as smart as I was back then... misspent youth.

You can see more clever tricks like the stuff below everywhere in the format. Incidentally, someone posted a blog a few days ago on HN about their more recent experience working with the format: Writing a TrueType font renderer

Anyway: Character to Glyph Mapping Table - TrueType Reference Manual

developer.apple.com posted:…

mawarannahr fucked around with this message at 16:49 on Jan 4, 2024 |
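Here's an example of the power-of-2 stuff I mean, reconstructed from memory (check the spec before trusting it): the cmap format 4 header stores searchRange, entrySelector, and rangeShift precomputed from the segment count, so a renderer can binary search the segments without computing anything at load time:

```python
import math

def cmap_search_fields(seg_count: int):
    """Compute the precomputed binary-search fields of a cmap format 4
    subtable header, per the TrueType reference manual."""
    # largest power of 2 that fits in the segment count
    largest_pow2 = 2 ** int(math.floor(math.log2(seg_count)))
    search_range = 2 * largest_pow2                 # bytes covered by the pow2 part
    entry_selector = int(math.log2(largest_pow2))   # number of binary-search steps
    range_shift = 2 * seg_count - search_range      # leftover bytes past the pow2 part
    return search_range, entry_selector, range_shift

# e.g. a font with 39 segments (largest power of 2 below it is 32):
print(cmap_search_fields(39))  # -> (64, 5, 14)
```

Burning header bytes on values the reader could derive itself tells you a lot about how slow 1980s hardware was.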
# ? Jan 4, 2024 16:46 |
The best thing about X is that it's called that because it was "one better than" the windowing system called W, on an OS called V. Rules of pun also apply to how XFree86 got its name: it was the free reimplementation of X386, a proprietary port of X to the i386, back when the latter was still new.
|
|
# ? Jan 5, 2024 14:31 |
|
Wayland really should have been called Y.chaos or some other dumb pun on X.org .
|
# ? Jan 5, 2024 15:21 |
|
Unfortunately we no longer live in the era of pun names, we live in an era where it needs to be easy to get the top result in a google search.
|
# ? Jan 5, 2024 16:14 |
|
see also Xr =~ Cairo. Toshok, of lesstif and other fame, had a Y window system hobby project for a while too, IIRC
|
# ? Jan 5, 2024 16:17 |
|
Rocko Bonaparte posted:Is there a VNC setup I can use in this day and age that will start up before a local login? I've particularly been trying Tiger VNC and KDE's own krfb. Both require that I have fully logged in to the actual machine on its local screen first. So the only solution I'd have is to enable auto-login, and I really don't want to do that.
|
# ? Jan 5, 2024 16:21 |
On the topic of dumb names... One of Red Hat's pipeline projects, bootupd (which, among other things, should let the user reboot to apply non-kernel updates without doing a full back-to-the-BIOS/UEFI reboot), doesn't actually use a daemon despite the name's implication. quote:Why is bootupd a daemon?
|
|
# ? Jan 5, 2024 16:43 |
|
Subjunctive posted:see also Xr =~ Cairo

We could've been using Berlin on GGI.
|
# ? Jan 6, 2024 03:59 |
|
Rocko Bonaparte posted:Is there a VNC setup I can use in this day and age that will start up before a local login? I've particularly been trying Tiger VNC and KDE's own krfb. Both require that I have fully logged in to the actual machine on its local screen first. So the only solution I'd have is to enable auto-login, and I really don't want to do that.

Hmm, I have a VM that I VNC to, but only via an SSH-forwarded port (works from Windows and Linux). And now that you asked, I looked to see what I'm using and how I configured things:

- Display manager is lightdm (no VNC support configured for it)
- Session is openbox
- VNC server is TigerVNC

And I have no loving clue what is going on and how they're configured. Did this a decade ago, it's working, and I have no idea what's what. In /etc/tigervnc/vncserver.users I have my user mapped to a specific VNC display number: :1=user. I have the vncserver@:1 service enabled and started. The display-manager is enabled and started. I ssh to the computer, then I can nicely VNC to it via the local forwarded port. I ... think it automatically logs in but I'm not 100% sure. Just SSH-ing to the thing, it shows me the following processes:

pre:433 ? S 0:00 xinit /etc/lightdm/Xsession /usr/bin/openbox-session -- /usr/bin/Xvnc :1 -alwaysshared -geometry 1920x1080 -localhost -auth /home/user/.Xauthority -desktop VM:1 (user) -pn -rfbauth /home/user/.vnc/passwd -rfbport 5901
442 ? S 0:00 /usr/bin/Xvnc :1 -alwaysshared -geometry 1920x1080 -localhost -auth /home/user/.Xauthority -desktop VM:1 (user) -pn -rfbauth /home/user/.vnc/passwd -rfbport 5901
444 ? S 0:00 /usr/bin/openbox --startup /usr/lib/openbox/openbox-autostart OPENBOX
|
# ? Jan 6, 2024 04:27 |
|
Volguus posted:I hope this poo poo never breaks, cause I don't know how to fix it.

This post reminded me of LILO and X modelines.
|
# ? Jan 6, 2024 05:31 |
|
cruft posted:This post reminded me of Lilo and x modelines.

I think Mandrake used LILO instead of GRUB, back in the day. Also, was it Gnome that relied on Compiz for compositing, or was it more general? The reference to Cairo earlier made me remember Compiz for some reason.
|
# ? Jan 6, 2024 05:56 |
|
CaptainSarcastic posted:I think Mandrake used Lilo instead of GRUB, back in the day.

by default gnome 2/3 use metacity/mutter as their window manager respectively, but you could install compiz and use it instead for your wobbly window needs

also jesus, i never would have put the cairo pun together in 100 years
|
# ? Jan 6, 2024 06:14 |
|
I'm probably SOL but thought I'd ask anyway. I have an ancient Macbook Air running Fedora 39 and have just discovered the joys of PD. I got a nice USB4 (is that a thing? AliExpress says it is) cable rated for 120W and a power supply. It powers fine.

Anyway, I want it to suspend/hibernate at night but still have the power connected, because I want to destroy all rechargeable batteries and insist on having them powered all the time. OK not really, but I'm an old with Habits. The thing is, when I close the lid and put it on the floor by the bed at night, the magsafe connector inevitably joggles a bit because the cable is thick and stiff, and the Macbook helpfully fires up again, because I obviously want it to at 2AM when I'm going to sleep. Rinse, repeat until I give up and power down and disconnect the cable.

I've found how to disable "AutoBoot" on old pre-M(1|2) Macs ("sudo nvram AutoBoot=%00"), but that only works if you haven't blown away OSX and installed Fedora. I optimistically did a "dnf whatprovides */nvram" but that only gives me some stuff from QEMU and MAME. How do I disable this feature that it's hard to imagine anyone wanting? I'm not averse to surgery, if there's a sensor I can disconnect or something like that.
|
# ? Jan 7, 2024 07:45 |
|
That'll be an EFI variable, you can use the efivar command or /sys/firmware/efi to manipulate them.
|
# ? Jan 7, 2024 08:50 |
|
pseudorandom name posted:That'll be an EFI variable, you can use the efivar command or /sys/firmware/efi to manipulate them.

So there's hope! But I'm not seeing AutoBoot anywhere here: code:
|
# ? Jan 7, 2024 10:18 |
|
I ended up starting a TigerVNC server in a separate session in a virtual frame buffer. When I connected the machine to the lab equipment, the video forced the physical screen's frame buffer to 1280x1024. I have no idea if RDP can work on a separate buffer like that, but I'd want to try that too in case it's faster. I'd still much rather have it right next to me, of course.

How abnormal is it these days, if your day-to-day engineering work is Linux, that you have to use a Windows laptop for everything? I'm pretty sure I'm not being crazy on this being ridiculous.
|
# ? Jan 8, 2024 04:28 |
|
just want to say turbovnc is good and you should try it if you aren't committed to tigervnc. They very recently had a new release too. TurboVNC | About / What About TigerVNC?

turbovnc.org posted:The TigerVNC Project was founded by some of the former TightVNC developers, Red Hat, and The VirtualGL Project in early 2009, with the goal of providing a high-performance VNC solution based on the RealVNC 4 and X.org code bases. Throughout 2010 and 2011, The VirtualGL Project contributed many hours of labor (probably half of them pro bono) to the development of TigerVNC, in hopes of turning TigerVNC into "TurboVNC 2.0." Ultimately, however, it became apparent that, both from a technological and a political point of view, making TigerVNC into a TurboVNC work-alike was going to be like fitting a square peg into a round hole. Unlike TurboVNC, TigerVNC is not focused on 3D and video applications, so its developers were not generally very concerned with making such applications performant by default. Furthermore, there was resistance to including some of TurboVNC's 3D-specific features, such as automatic lossless refresh, in TigerVNC. In general, there was also just an irreconcilable clash of project management styles. Thus, with the release of TigerVNC 1.2.0, The VirtualGL Project stepped down as a contributor and supporter of TigerVNC in order to focus on moving TurboVNC forward.
|
# ? Jan 8, 2024 04:55 |
|
Hey! Sorry for the late reply, I've been busy with holiday/family stuff for the past two weeks so I haven't had time to sit and mess around with my PC and have only been online here and there. For reference to things I'll be replying to below, my original post:

Framboise posted:Dunno if this is the right thread for this since my question is probably really base level compared to most, but:

VictualSquid posted:Mounting windows partitions in linux works fairly well these days. And WSL should be able to mount a linux partition, though I never actually tried it.

Having a third drive/partition does work, the same as using a removable drive. So long as it works, I'm good. That way I can just redirect anything I save or want to access from either OS in one directory. Would that also work with Steam installs, or are those installed in completely different ways?

I'm content to stick with X11 for now; it's just that I don't really understand what distinguishes the two and why Wayland is so fussy with nvidia. I just see a lot of neat desktop designs in tiling managers like Hyprland and Sway posted on reddit and twitter and stuff, so I kinda had my eye on those as a "well, that must be what the cool kids are using!" kinda thing.

I've heard of EndeavourOS and have been meaning to try it! I haven't really had much issue with Arch so far, and I feel like most issues I've had are less about the OS or my lack of understanding and more just VM compatibility hiccups. But I could be wrong! (And probably am!)

One thing I've realized as I've been exploring Linux is that for every question I seek answers to, I generally find at least three different answers that give me more questions. It's hard to know what the gently caress you're doing as someone new, as far as what the "right answers" are.

cruft posted:I'm going to suggest you physically disconnect that Windows disk so you can feel free to drink around with distributions and reinstall a lot, without having to worry about accidentally nuking the OS you're familiar with.

Shouldn't be an issue then, really. I currently have Windows 10 installed on my old SSD, but the new SSD I just bought will only work on my PC's board if I disconnect the other one (I basically get the SATA SSD I've been using for the past 8 years or so, or the NVMe one I just bought, and it won't work if I've got both hooked up), so all I'd need to do is reconnect the old drive if I really gently caress something up. But I'm also not really all that scared of working with partitions and installing things anymore after doing a manual Arch install. The only thing I'm afraid of is losing all my files, and I can back all that stuff up.

mystes posted:The proprietary nvidia drivers suck with wayland and sway won't work with them at all. I'm not sure what the state of the open source drivers is. I would suggest sticking with x11 and i3 for now unless you want to switch to an amd gpu

It's a desktop, yeah. I've had it since 2016 and it's served me well, but I am looking to upgrade it this year. Currently running an NVIDIA GeForce 980 Ti, 16 GB of RAM, and an Intel i7-4790K CPU. Not really sure what integrated graphics means or how I'd know if I have it, or what GPU passthrough is, though I've read something about just using Windows in a VM on Linux with GPU passthrough rather than dual booting or something like that, which sounds nice too.

Mr. Crow posted:Wayland and nvidia arent there yet by all reports and sounds like your just confirming it. Maybe try it with the next big KDE release but I'm guessing you'll need to wait on the open source nvidia driver (NVK i think?) or some update by nvidia themselves.

All I really want to do is exclusively have a partition specifically for images/documents/videos/music/Steam stuff, etc. I'd hope most of that is generally friendly between both systems.

ziasquinn posted:yeah if you have Nvidia just use the proprietary drivers tbh and stick to X11 for now at least.

Can do. I do intend to get an AMD GPU when I get around to upgrading, so I won't have to worry about that anymore.

feedmegin posted:Windows cannot see Linux partitions (ext4 eg) natively so be aware of that. You'd want to have a Windows (NTFS) partition you mount from Linux not the other way around.

I've played with WSL a little bit before I just started exploring Linux more in VMs. I'll play around with it a bit more. And good to know, yeah. Hopefully, if what I heard about just using Windows in a VM with GPU passthrough or whatever works out, I won't need to rig up something that both systems play nice with and can just keep everything in my home partition, yeah?
|
# ? Jan 8, 2024 06:22 |
|
Framboise posted:Shouldn't be an issue then really. I currently have Windows 10 installed on my old SSD, but the new SSD I just bought will only work on my PC's board if I disconnect the other one (I basically get the SATA SSD I've been using for the past 8 years or so, or the NVMe one I just bought, and it won't work if I've got both hooked up), so all I'd need to do is reconnect the old drive if I really gently caress something up, but I'm also not really all that scared of working with partitions and installing things anymore after doing a manual Arch install. The only thing I'm afraid of is losing all my files, and I can back all that stuff up.

This typically means that your SATA SSD is connected to a port that's shared with the M.2 slot: the M.2 connector can supply both PCIe (for NVMe drives) and SATA, and I've seen several motherboards where a couple of the SATA ports are badly documented or just marked on the board as "only works when M.2_1 is not in use" or similar. There's typically a handful of SATA ports that do not have this issue.

Framboise posted:Not really sure what integrated graphics means or how I'd know if I do, or what gpu passthrough is, though I've read something about just using Windows on a VM in Linux using gpu passthrough rather than dual booting or something like that, which sounds nice too.

"Integrated graphics" means there's a small graphics card directly inside your CPU. It's how most laptops work, and they're also fairly common on desktop CPUs that could end up in an office PC - it saves the manufacturer from having to add a whole GPU just to render some spreadsheets. If you have an HDMI (or DisplayPort) connector on your motherboard, it's for this. Your i7 4790K comes with "Intel® HD Graphics 4600".

GPU forwarding needs a bit of background. I don't know if you've ever used a SNES emulator or anything like that? Virtual machines are the same kind of idea. You run a program that pretends to be a completely separate computer.
The biggest difference between this and a SNES emulator is that your "real" machine and the VM are basically the same. That means that instead of needing to run a lot of code to simulate e.g. a SNES, you can just fence off a bit of your CPU and run the VM directly, which is incredibly much faster; there's nearly zero overhead.

The problems come when you want to use any other devices. Storage is easy enough: just inject a fake device into the VM that looks like a hard drive, and on the outside we can redirect it to whatever we want, like a file or an actual disk. Graphics is harder, because it's much more demanding. A graphics card isn't meant to be driven by two operating systems at the same time: if both Linux and Windows tried to use it simultaneously, it would instantly mess up for both of them. To keep this from happening, you're just not allowed to give a VM control of a card that's already in use. There are three ways around this:

- You can create a fake graphics card in the VM, like we do with storage. That works, but it's not fast enough for gaming - it's comparable to Windows remote desktop.
- You can buy one of the stupid expensive server cards that actually support being driven by multiple OSes (by pretending to be multiple cards) - though I don't even know if those support graphical output or if they're only for accelerating AI things.
- You can put two graphics cards in your PC: one for Linux and one for the virtual machine. On the Linux side, you set it up so it reserves the second GPU and doesn't touch it, and then you let the Windows virtual machine get full control over that card. This is GPU forwarding - Linux just passes one GPU on to the virtual machine.

In your case, you actually have two GPUs: the integrated Intel graphics, and the nVidia 980. You probably need to find a BIOS setting for "activate the integrated GPU even if I have a separate GPU plugged in", though.

Computer viking fucked around with this message at 16:01 on Jan 8, 2024 |
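If you end up doing the two-card option with libvirt, the guts of it is a hostdev block in the VM's XML that hands the whole PCI device to the guest. Something like this (the bus/slot address here is made up; you'd take yours from lspci):

```xml
<!-- Pass the PCI GPU at 01:00.0 through to the guest. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

You'd typically add a second hostdev entry for the card's HDMI audio function (usually function 0x1 on the same slot) so sound over the monitor cable works too.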
# ? Jan 8, 2024 15:30 |
|
|
Framboise posted:I'm content to stick with X11 for now; it's just that I don't really understand what distinguishes the two and why wayland is so fussy with nvidia. I just see a lot of neat desktop designs in tiling managers like Hyprland and Sway posted on reddit and twitter and stuff so I kinda had my eye on those as a "well, that must be what the cool kids are using!" kinda thing.

X11 and Wayland are doing the same thing: they're protocols rather than software, setting out how an app communicates with the OS display software. X11 is ancient and has many problems due to how old and decrepit it is. Wayland is new and has many problems due to being new and unfinished.

Nvidia is fussy with Wayland because Nvidia doesn't give a poo poo about linux, particularly not consumer desktop linux. (CUDA and non-display server stuff works fine.) They waited until Wayland was actually being widely used before they got serious about support.

What the cool kids post in "check out my new desktop" on reddit isn't what most people are using. Most people use Gnome or KDE. The cool kids are the people who get entertainment from setting up the new FOTM desktop environment. Not that there's anything wrong with that, or the DE that's the current new hotness. If a tiling WM works well for you, go with it.

Framboise posted:

Linux definitely has a lot of that three-different-answers thing going on, and in most cases the correct choice is "go with your distro's default unless you have a strong reason otherwise". That's a lot of what a distro does -- pick from among sets (A,B,C) and (X,Y,Z) in ways that avoid problems between B and Y. This is one of the reasons arch isn't recommended for newbies: it has fewer defaults and makes you pick for yourself. And if B and Y have problems together, well, that's on the wiki, you read the wiki first didn't you?

Framboise posted:Currently running NVIDIA GeForce 980Ti, 16 GB of RAM, Intel i7-4790K CPU.

The integrated graphics is on your CPU; it's what comes out the video port (probably DVI at that vintage) on your mobo. It's good enough for desktop 2d stuff.

I was also intrigued by windows on a VM in linux with hardware passthrough. Then I discovered all my games work fine in linux. Since games were the only thing I needed that for, I never bothered getting a 2nd GPU and setting it up. A windows VM using spice video is good enough for non-game software.

Framboise posted:All I really want to do is exclusively have a partition specifically for images/documents/videos/music/Steam stuff, etc. I'd hope most of that is generally friendly between both systems.

Only fat32 and exfat are totally friendly between both systems, and fat isn't great. Linux can see NTFS fine, but running steam games in linux off an NTFS partition is not recommended. Also, having a single partition that is writable by two OSes at the same time (ie the linux host and a windows VM) is very very bad. It's only a matter of time until your data is corrupted: the two OSes will make changes that the other doesn't see, so they'll be working from incorrect state.

So tldr, it's kinda hard to have two co-equal OSes even with the VM method. If you are frustrated with Windows or MS, I would say just go whole hog and try to move over. If you're into linux as a fun project or to learn, do a dual-boot and keep windows as the main OS.
|
# ? Jan 8, 2024 15:51 |