reading
Jul 27, 2013

evol262 posted:


Bluntly, though, this is a total waste of time, especially if you're on an SSD. An incalculable amount of time has gone into caching and paging performance in the kernel, and readahead is awfully good at predicting this. Plus games are better about splitting, caching, and reusing resources. I get that this is some gamer thing which was also popular 20 years ago, but it's unlikely to make much of a difference for the effort you put into it.

Yeah, I got it downloaded onto my ramdisk and noticed zero improvement. My little brother had been raving about how great his ramdisks were under Win7, so I thought I'd try it, but I guess my SSD is quite up to the task (and/or the bottleneck to performance lies elsewhere, which is possible since I'm pushing this laptop's CPU to the max).

Time to copy everything back to where it was and consider this experiment a successful null hypothesis.


evol262
Nov 30, 2010
#!/usr/bin/perl

reading posted:

Yeah, I got it downloaded onto my ramdisk and noticed zero improvement. My little brother had been raving about how great his ramdisks were under Win7, so I thought I'd try it, but I guess my SSD is quite up to the task (and/or the bottleneck to performance lies elsewhere, which is possible since I'm pushing this laptop's CPU to the max).

Time to copy everything back to where it was and consider this experiment a successful null hypothesis.

Most of the time, the ridiculous things the gaming community does and raves about (overclocking, GPU overclocking, ramdisks, etc.) are pretty useless. A bunch of 16-year-olds on Tom's Hardware or Steam or whatever the cool forum is these days are not better at performance optimization than the people who write general-purpose operating systems, or the people paid by companies making billions of dollars a year to make those operating systems faster.

It's confirmation bias. Especially since the vast majority of time in multiplayer games is spent waiting on the network or other players to sync. Think of it this way: if your ping is 30ms, a 300MB/s SSD (which is pretty meh these days) can put about 9 minutes of music into memory (assuming 128kbps MP3s -- I don't really keep up on music sizes these days) in the time it takes you to reach the server. The network is the bottleneck, probably even more so than the processor in your laptop.
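
If you want to check that back-of-the-envelope math, it's a one-liner (the 30ms, 300MB/s, and 128kbps figures are just the assumptions above):

code:
# data a 300 MB/s SSD reads during one 30 ms round trip
$ echo '300 * 0.030' | bc            # 9.000 MB
# 128 kbps is 16 KB/s, so minutes of audio in 9 MB:
$ echo '9 * 1024 / 16 / 60' | bc     # 9 (9.6 before integer truncation)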

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

It can be useful. I've used tmpfs to great effect when doing large compiles involving lots of small files. 5-10x speedups. That system did not have an SSD, though. I definitely agree that gaming is not likely to be a good candidate for a ramdisk, unless you really, really need to load first for bragging rights or something.
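
If anyone wants to try it, the setup is trivial -- a sketch, with the size cap and paths just examples:

code:
# mount a tmpfs and build in it; everything here vanishes on reboot
$ sudo mkdir -p /mnt/build
$ sudo mount -t tmpfs -o size=4G tmpfs /mnt/build
# e.g. an autotools-style out-of-tree build:
$ cd /mnt/build && ~/src/project/configure && make -j$(nproc)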

reading
Jul 27, 2013

evol262 posted:

Most of the time, the ridiculous things the gaming community does and raves about (overclocking, GPU overclocking, ramdisks, etc.) are pretty useless. A bunch of 16-year-olds on Tom's Hardware or Steam or whatever the cool forum is these days are not better at performance optimization than the people who write general-purpose operating systems, or the people paid by companies making billions of dollars a year to make those operating systems faster.

It's confirmation bias. Especially since the vast majority of time in multiplayer games is spent waiting on the network or other players to sync. Think of it this way: if your ping is 30ms, a 300MB/s SSD (which is pretty meh these days) can put about 9 minutes of music into memory (assuming 128kbps MP3s -- I don't really keep up on music sizes these days) in the time it takes you to reach the server. The network is the bottleneck, probably even more so than the processor in your laptop.

This is for a single-player, offline game (Shadowrun Returns). It has tons of loading during which the player has to sit and stare at a loading screen, so I thought that having it all in RAM would surely be faster. Nonetheless, after using the ramdisk, my hard-drive-activity indicator showed that nothing was happening with my hard drive, but I still had long loading times. I'm not sure what was loading what into where... this laptop uses crummy integrated Intel graphics and the CPU gets heavily taxed during gameplay, but I don't get why having everything in RAM wasn't faster.

evol262
Nov 30, 2010
#!/usr/bin/perl

reading posted:

This is for a single-player, offline game (Shadowrun Returns). It has tons of loading during which the player has to sit and stare at a loading screen, so I thought that having it all in RAM would surely be faster. Nonetheless, after using the ramdisk, my hard-drive-activity indicator showed that nothing was happening with my hard drive, but I still had long loading times. I'm not sure what was loading what into where... this laptop uses crummy integrated Intel graphics and the CPU gets heavily taxed during gameplay, but I don't get why having everything in RAM wasn't faster.

I'm honestly surprised about this. I don't play a lot of games these days, but Shadowrun is one of them. I've never noticed load time being anywhere near problematic on a Yoga 2 Pro (also Intel graphics, laptop, SSD), but I've only really played the stock campaigns and Unlimited.

pseudorandom name
May 6, 2007

The bulk of the loading time is probably spent compiling shaders.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Well, it's a process of elimination, no? If it's not I/O, it has to be the CPU. Does perf top point at any culprits while the game's loading?
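
Something like this while you're staring at the loading screen (the process name is a guess -- use whatever pidof or top shows for the game):

code:
# live system-wide profile; see what's at the top during a load
$ sudo perf top
# or record just the game for 30 seconds and browse afterwards
$ sudo perf record -p "$(pidof ShadowrunReturns)" -- sleep 30
$ sudo perf report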

Longinus00
Dec 29, 2005
Ur-Quan

reading posted:

This is for a single-player, offline game (Shadowrun Returns). It has tons of loading during which the player has to sit and stare at a loading screen, so I thought that having it all in RAM would surely be faster. Nonetheless, after using the ramdisk, my hard-drive-activity indicator showed that nothing was happening with my hard drive, but I still had long loading times. I'm not sure what was loading what into where... this laptop uses crummy integrated Intel graphics and the CPU gets heavily taxed during gameplay, but I don't get why having everything in RAM wasn't faster.

Only part of the "loading" time of a game is physically reading information from your disk. There's a lot of other processing that goes on afterwards.

As Suspicious Dish alluded to, the best thing is to do some profiling instead of baseless premature optimization.

CaptainSarcastic
Jul 6, 2013

Longinus00 posted:

Only part of the "loading" time of a game is physically reading information from your disk. There's a lot of other processing that goes on afterwards.

As Suspicious Dish alluded to, the best thing is to do some profiling instead of baseless premature optimization.

I've been playing Shadowrun Returns quite a bit, and it is quite evident that the larger and more complex a map, the longer the load times. I haven't tried playing it on Windows yet, but I'm guessing that I would see the same thing happening on that side. My Steam library runs off a normal SATA HD, not an SSD, and the load times are only annoying when I am having to load new areas frequently (the Workshop DLC Darkness Falls has an area where you can end up having to go from a large to a small map and then back to the large map repeatedly). Generally, though, I don't consider the load times to be excessive.

JHVH-1
Jun 28, 2002
Seems to me a game might need RAM to store data that doesn't fit in VRAM (which may even be your bottleneck) when you load a level. So if you're taking up most of your RAM with the ramdisk holding the game itself, and the game then goes to load assets into memory, it would have to do all kinds of swapping back and forth between memory and disk to allocate RAM, or to fetch tmpfs data that isn't actually resident at that point. That could make performance even worse if you don't know what's going on internally.
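
Easy enough to check while it's loading, at least -- if the si/so columns light up, the ramdisk is being paid for with swap traffic:

code:
$ free -h       # how much RAM the tmpfs plus the game actually eat
$ vmstat 1      # watch si/so: memory swapped in/out per second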

ExcessBLarg!
Sep 1, 2001

taqueso posted:

I've used tmpfs to great effect when doing large compiles involving lots of small files. 5-10x speedups.
The main difference there is that when compiling, you're writing out a bunch of object files to stable storage. So yeah, you're introducing a considerable amount of iowait, since the filesystem will require that at least the journal transactions are written out between each file. Putting the build directory on a tmpfs will speed things up considerably, but at the cost of losing the entire build directory in a power event--which is generally OK.

However, with read-only workloads on a filesystem mounted noatime or relatime, there's nothing to write out, and generally once a file has been read it stays in the page cache unless you're RAM-starved. I suppose there are some seek patterns on mechanical disks that are bad enough that a sequential prefetch makes first-time loading faster, but in most cases it's not worth doing.
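
You can watch the page cache do the ramdisk's job for free, if you're curious -- a quick demo with any large file:

code:
# flush the page cache, then read the same file twice
$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
$ time cat /path/to/bigfile > /dev/null   # cold read: limited by the disk
$ time cat /path/to/bigfile > /dev/null   # warm read: served from RAM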

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


ExcessBLarg! posted:

The main difference there is that when compiling, you're writing out a bunch of object files to stable storage. So yeah, you're introducing a considerable amount of iowait, since the filesystem will require that at least the journal transactions are written out between each file. Putting the build directory on a tmpfs will speed things up considerably, but at the cost of losing the entire build directory in a power event--which is generally OK.

However, with read-only workloads on a filesystem mounted noatime or relatime, there's nothing to write out, and generally once a file has been read it stays in the page cache unless you're RAM-starved. I suppose there are some seek patterns on mechanical disks that are bad enough that a sequential prefetch makes first-time loading faster, but in most cases it's not worth doing.

I have distcc set up with my coworkers. We mostly program on our laptops and just shove the compile work onto our workstations since they're never in use.
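
Setup is about this much (hostnames made up, and distccd needs to be running on the workstations):

code:
# on the laptop: hosts that will accept compile jobs
$ export DISTCC_HOSTS='workstation1 workstation2 localhost'
# crank -j past the local core count since the work fans out
$ make -j12 CC='distcc gcc'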

Weird Uncle Dave
Sep 2, 2003

I could do this all day.

Buglord
What's the current state-of-the-art in "using Windows AD credentials to log into your Linux servers"?

We're a pretty big Windows shop, with a pretty large AD environment. It's large enough that we can't use PowerBroker. (If you have more than 512k objects in a domain, PBIS starts duplicating UIDs, which is obviously bad...) Right now we're using Centrify, which works well when it works, but sometimes it just stops randomly enumerating group members for a day or two, and no amount of restarting the service/flushing the AD cache/rebooting the server helps reliably. This is only a problem with multi-domain setups (server is joined to domain1, using credentials from domain2, the two domains have a two-way trust). If we only had one domain to worry about, we'd probably just use Samba, or I'd force everyone to upgrade to RHEL7 and use realmd.
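
For reference, the realmd flow on RHEL7 really is about two commands (domain and group names made up here):

code:
$ sudo realm join --user=adminuser domain1.example.com
$ sudo realm permit -g 'linux-admins@domain1.example.com'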

Unfortunately, I'm limited to "stuff that can run on Linux by itself," with no helpers. (Centrify wants to sell you stuff that snaps into your domain controllers, but nobody on the Linux team is allowed within a hundred feet of the DCs. The paid version of PowerBroker seems to require the DC components, whereas Centrify just recommends them.)

Is Centrify still the least-awful product for this?

Thalagyrt
Aug 10, 2006

Weird Uncle Dave posted:

What's the current state-of-the-art in "using Windows AD credentials to log into your Linux servers"?

We're a pretty big Windows shop, with a pretty large AD environment. It's large enough that we can't use PowerBroker. (If you have more than 512k objects in a domain, PBIS starts duplicating UIDs, which is obviously bad...) Right now we're using Centrify, which works well when it works, but sometimes it just stops randomly enumerating group members for a day or two, and no amount of restarting the service/flushing the AD cache/rebooting the server helps reliably. This is only a problem with multi-domain setups (server is joined to domain1, using credentials from domain2, the two domains have a two-way trust). If we only had one domain to worry about, we'd probably just use Samba, or I'd force everyone to upgrade to RHEL7 and use realmd.

Unfortunately, I'm limited to "stuff that can run on Linux by itself," with no helpers. (Centrify wants to sell you stuff that snaps into your domain controllers, but nobody on the Linux team is allowed within a hundred feet of the DCs. The paid version of PowerBroker seems to require the DC components, whereas Centrify just recommends them.)

Is Centrify still the least-awful product for this?

Well, from your first sentence I was going to recommend Centrify... but you're already doing that, so yeah. It really is the least awful product for this. I also tried Likewise a few years ago, and my experience was that it was slooooooow as hell. Centrify gets the job done for me.

the
Jul 18, 2004

by Cowcaster
Good program to rip audio CDs?

spankmeister
Jun 15, 2008

the posted:

Good program to rip audio CDs?

Yes?

the
Jul 18, 2004

by Cowcaster

What is a?

spankmeister
Jun 15, 2008

Asunder.

Earl of Lavender
Jul 29, 2007

This is not my beautiful house!!

This is not my beautiful wife!!!
Pillbug

the posted:

Good program to rip audio CDs?

If you like command line stuff, I'd use cdparanoia.
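
Something like this rips the whole disc, one WAV per track:

code:
# batch mode: every track to its own file in the current directory
$ cdparanoia -B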

Hollow Talk
Feb 2, 2014

the posted:

Good program to rip audio CDs?

I always liked Grip (http://sourceforge.net/projects/grip/), though I'm not sure whether it's actively developed anymore. It's quite nice since you can essentially set up the encoding stage with all the command-line parameters you might want, add a nice naming scheme, and still get CDDB lookups and a GUI to work with.

fuf
Sep 12, 2004

haha
Is there a way to make bash immediately list the possibilities when autocomplete matches more than one thing? Right now I press tab once and get a beep, then I have to press it again to show the list. I want to skip the beeping part.

Hollow Talk
Feb 2, 2014

fuf posted:

Is there a way to make bash immediately list the possibilities when autocomplete matches more than one thing? Right now I press tab once and get a beep, then I have to press it again to show the list. I want to skip the beeping part.

Have you tried the options mentioned in this Stack Exchange post yet? https://unix.stackexchange.com/questions/73672/how-to-turn-off-the-beep-only-in-bash-tab-complete

fuf
Sep 12, 2004

haha

uh, no, I didn't, and the first one works (putting "set show-all-if-ambiguous" in .inputrc). Sorry, that was obviously a searchable solution that didn't need a post. Sometimes I get lazy and slip into the habit of just asking questions as they occur to me!

especially when you are all such helpful fellows
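
For anyone skimming later, the exact line (the explicit "on" is optional but canonical):

code:
# ~/.inputrc -- list all matches on the first Tab instead of beeping
set show-all-if-ambiguous on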

Hollow Talk
Feb 2, 2014

fuf posted:

uh, no, I didn't, and the first one works (putting "set show-all-if-ambiguous" in .inputrc). Sorry, that was obviously a searchable solution that didn't need a post. Sometimes I get lazy and slip into the habit of just asking questions as they occur to me!

especially when you are all such helpful fellows

Oh, I didn't mean to be passive aggressive, sorry if it came across like that! Glad it worked! :)

Also: :justpost:

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I'm trying to puppetize something, and I just can't wrap my head around a "best practices" way to do this, and it's frustrating me and making me want to just configure it by hand because I can't figure out what the hell I'm "supposed" to do with it.

We have a Samba server. It has a samba config. Sometimes people request new samba shares, so we modify the smb.conf file to include the new share and restart the samba server. It's just one server doing this, and the config file changes sometimes. I have no idea where to begin figuring out the best way to manage this through puppet, or how much of it I should even be putting into puppet. There seems to be so much documentation on how to make puppet do things, but very little on what you should be doing with puppet to not create an awful rat's nest.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

I'm trying to puppetize something, and I just can't wrap my head around a "best practices" way to do this, and it's frustrating me and making me want to just configure it by hand because I can't figure out what the hell I'm "supposed" to do with it.

We have a Samba server. It has a samba config. Sometimes people request new samba shares, so we modify the smb.conf file to include the new share and restart the samba server. It's just one server doing this, and the config file changes sometimes. I have no idea where to begin figuring out the best way to manage this through puppet, or how much of it I should even be putting into puppet. There seems to be so much documentation on how to make puppet do things, but very little on what you should be doing with puppet to not create an awful rat's nest.
You probably want to manage file fragments with the concat module, pieced together into a master smb.conf.
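
Roughly like this with the puppetlabs-concat module (the fragment names and templates are made up for the sketch, and it assumes Service['smb'] is declared elsewhere):

code:
# smb.conf assembled from ordered fragments; samba reloads on any change
concat { '/etc/samba/smb.conf':
  notify => Service['smb'],
}
concat::fragment { 'smb-global':
  target  => '/etc/samba/smb.conf',
  content => template('samba/global.erb'),
  order   => '01',
}
# one of these per requested share
concat::fragment { 'smb-share-projects':
  target  => '/etc/samba/smb.conf',
  content => template('samba/share-projects.erb'),
  order   => '10',
}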

xdice
Feb 15, 2006

FISHMANPET posted:

I'm trying to puppetize something, and I just can't wrap my head around a "best practices" way to do this, and it's frustrating me and making me want to just configure it by hand because I can't figure out what the hell I'm "supposed" to do with it.

We have a Samba server. It has a samba config. Sometimes people request new samba shares, so we modify the smb.conf file to include the new share and restart the samba server. It's just one server doing this, and the config file changes sometimes. I have no idea where to begin figuring out the best way to manage this through puppet, or how much of it I should even be putting into puppet. There seems to be so much documentation on how to make puppet do things, but very little on what you should be doing with puppet to not create an awful rat's nest.

I've been working on something related, and found that smb.conf has a couple handy options that might help you with this.

From the samba.org docs you might check out the 'include' and 'copy' directives - they've allowed me to make my shares (and the config) a bit easier to deal with.
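
e.g. keeping the share list in its own file so the main config barely changes (paths assumed):

code:
# /etc/samba/smb.conf
[global]
   include = /etc/samba/shares.conf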

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
I'm trying to create a vagrant base box from a 17GB vdi file but vagrant package --base <vm> yields a package.box file that is only 20MB. Any idea what the issue might be?

edit: oh shite, it's because I'm out of disk space. I'm surprised the package command didn't throw an error.

fletcher fucked around with this message at 09:20 on Jul 16, 2014

jre
Sep 2, 2011

To the cloud ?

Not sure if this is the best thread for this but ....


We are looking at redoing our source control and deployment process for one of our LAMP web apps from scratch.

My initial thoughts were to use git for source control and have the changes automatically pushed to web servers when things are committed to the production branch.

I'm looking for suggestions on how to manage changes to the SQL schema, and how to manage syncing/rolling back files on the production web servers.

What do folks here use for this?

Thalagyrt
Aug 10, 2006

jre posted:

Not sure if this is the best thread for this but ....


We are looking at redoing our source control and deployment process for one of our LAMP web apps from scratch.

My initial thoughts were to use git for source control and have the changes automatically pushed to web servers when things are committed to the production branch.

I'm looking for suggestions on how to manage changes to the SQL schema, and how to manage syncing/rolling back files on the production web servers.

What do folks here use for this?

This is pretty much exactly what a CI system is for. Deployment is part of the whole process - build your software, run unit/integration tests, and deploy with a deploy script if everything passes. I use Bamboo and it's been excellent, but it's not free - if you want something free take a look at Jenkins.

evol262
Nov 30, 2010
#!/usr/bin/perl

jre posted:

Not sure if this is the best thread for this but ....


We are looking at redoing our source control and deployment process for one of our LAMP web apps from scratch.

My initial thoughts were to use git for source control and have the changes automatically pushed to web servers when things are committed to the production branch.

I'm looking for suggestions on how to manage changes to the SQL schema, and how to manage syncing/rolling back files on the production web servers.

What do folks here use for this?

We use Jenkins everywhere, for everything.

You should:
  • Set up a web-based code review system. Gerrit, Barkeep, whatever.
  • Developers push a commit to Gerrit. It automatically kicks off a Jenkins job which runs linters and unit tests, and spins up a new VM for functional tests (you may be able to get away without the VM, but you should definitely be testing).
  • If any of these fail, it automatically -1s the patch because it's broken.
  • Go through code review.
  • Once it passes code review and gets merged into the actual repo, Jenkins builds again. Tests. Functional tests are not optional here. Deploy to a dedicated test environment if you need to. Vagrant, CloudFormation, Heat, or some other tool that spins up multiple clean test servers with the same config every time is not optional either.
  • If it passes tests, Jenkins deploys. Ideally by tearing down the existing VMs and building new ones from puppet/salt/whatever. Again, you don't have to tear down, but the deployment should be automatic and go through config management -- a minimal sketch of the deploy step follows.
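
It can start out dumb and grow up later (hosts and paths here are placeholders; the "right" version tears down and rebuilds through config management):

code:
#!/bin/bash
# Jenkins 'Execute shell' build step; only runs if the test stages passed
set -euo pipefail
rsync -az --delete build/ deploy@web1:/srv/app/   # push the built tree
ssh deploy@web1 'sudo service httpd reload'       # pick up the new code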

dennyk
Jan 2, 2005

Cheese-Buyer's Remorse
I'm currently looking into options for a server management system for our department. I'm responsible for about 250 RHEL 5/6 servers (some physical, mostly VMware VMs) and a handful of CentOS systems, and right now I'm pretty much doing everything "manually" (mostly with a bunch of little half-assed bash/perl scripts I've written along the way).

- Our primary objectives are inventory collection and security patching functionality.

- Robust reporting capabilities for both of those areas are a must. A pretty dashboard would be good, but at the very least we need to be able to run and export reports from the thing, and they need to be somewhat customizable, particularly the security reports (i.e. we really need to be able to pull a report like "here are the unpatched packages with Important or higher security vulnerabilities on these servers" rather than just "hey server X has like three hundred packages with new versions available! :downs:").

- Some sort of integration with RHN's errata notifications or some similar security tracking system would be a huge plus.

- Some form of basic configuration management would also be nice to have, but isn't essential.

- Compatibility with AIX 6.x+ client machines would also be nice, if there's something out there with decent functionality that supports both platforms well, but I'm not looking to sacrifice functionality or usability for multi-platform support.

- The "master" application really needs to run on Linux, not Windows, since I'll have to manage the thing.

- The agents (or any agentless configuration processes) really need to be as simple as possible to install and configure; interactive installs or heavily customized agent-side configurations are no good, as we have way too many existing hosts to integrate into this system. I need something I can at least install/deploy with a script or something, because hell if I'm going to spend fifteen or twenty minutes setting up an agent manually on each of my 250 servers.

- Cost is not a huge deal, as long as it's not too unreasonable; we have the budget and are willing to pay for an application for this, especially if a commercial app saves us a lot of configuration and setup time or gets us better reporting and UI functions.


Right now I'm kind of leaning towards RHN Satellite 5.x, due to its integration with RHN for security vulnerability tracking. The downsides are that it doesn't support AIX or any other RHEL forks (CentOS, OEL, etc.), and it's fairly expensive for the number of servers we have. There's also the question of whether support for the legacy subscription system that Satellite 5.x uses will remain functional in RHEL 7 once Satellite 6 is released, or if it will suddenly disappear in a later point release to "encourage" users to move to Satellite 6.

There's also Spacewalk, of course, but I'm pretty sure using Spacewalk to manage RHEL servers, while technically possible, would violate our license agreements, sadly. It's also a bit more of a pain to set up and maintain without the direct RHN integration.

I've looked at a few of the popular config management tools out there like Puppet, Chef, and CFEngine; the customizability is nice, but they seem to be more geared towards, well, config management, which isn't our primary goal, and learning and setting them up and managing them would be a pretty big undertaking (since this is something that I'd probably be doing all by myself on top of my usual day-to-day work).

I've also looked at some other big-box products like Altiris and Tivoli Endpoint Manager, but it's hard to find useful information about their actual capabilities and functionality among the sea of marketing bullshit and CxO-targeted white papers.

Any products out there for this sort of application that I've overlooked? Or has anyone had any experience (good or bad) with any of the ones I'm looking at?

JHVH-1
Jun 28, 2002
We have some web apps where we get the deployment artifacts from Jenkins and then use puppet to point to the tar file based on version number. Our other app is a Java container managed by puppet; all the hiera config files, along with the artifacts, are put into git, and the git version is put into puppet. Then puppet pulls whatever version we set it to. It allows the same config to be used across pre-prod and prod, and you can make changes to the current branch without disrupting anything.

evol262
Nov 30, 2010
#!/usr/bin/perl

You're on the right track with Satellite. I can tell you that RHN classic's infrastructure is being maintained just for satellite even after it goes away.

I've never looked at AIX integration, and though AIX supports RPM, it's not great. I haven't found a good systems/software management tool which handles both Linux and AIX, but I'm sure there is one, probably somewhere under the Tivoli umbrella.

Anyway, Satellite works fine with CentOS and Fedora. Spacewalk won't violate your subscription agreement, either, but I'm not sure about the status of attaching it to RHN.

But Katello is next-gen Satellite. This is included in Cloudforms, I think, but don't quote me on that. Still, Katello's Candlepin component talks to our repos, too. This should also be fine, but talk to your TAM.

Satellite does exactly what you want.

Satellite is not a dead product and will keep working. Satellite 6 is Katello and will include foreman (which is also a puppetmaster), and more componentized parts (pulp, candlepin). Ask your TAM about using it now if you need/want those features (mostly puppet/foreman) now.

Otherwise, use Satellite/Spacewalk with your own foreman/chef/salt/whatever server.

Maluco Marinero
Jan 18, 2001

Damn that's a
fine elephant.
I'm using Bamboo at the moment on the super cheap self-hosted option ($10 license), and while I like it, I know that once we ramp up the product we're working on it's potentially going to get expensive. Does anyone with experience of both Bamboo and Jenkins have thoughts on how they compare?

Thanks for outlining your review/test/deploy process, evol262. Even though it's just me on backend/client development plus a front-end designer/developer, I want a good process in place that we can add to without massively changing how we do things, so it's good to see a template of what that looks like. I already use Vagrant and Ansible to set up local dev environments without too much hassle, and I'm just trying to flesh out the deploy process at the moment. We use Bitbucket for code storage, running CI through Bamboo on a staging server and then doing manually triggered script deploys if we're happy with the results, but obviously there are some gaps here I intend to address.

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!

Do you have a TAM with RedHat? Might wanna ask him.

evol262
Nov 30, 2010
#!/usr/bin/perl

jaegerx posted:

Do you have a TAM with RedHat? Might wanna ask him.

Also, if you don't have a TAM but want to use Satellite or Katello and have specific questions about the status of subscriptions when you use upstream instead of a product, etc., you can probably get them answered in the Spacewalk or Katello channels/lists, or by opening a case.

If all of these fail, ping me. I'm under the same umbrella as these products, and "I can't get a straight answer from anyone about X, can you help?" often gets a good response internally. But you should be able to get answers in the upstream channels.

fuf
Sep 12, 2004

haha
I started using my ubuntu server as a socks5 proxy server recently to bypass blocked sites. Is that activity logged somewhere so I can see what sites were visited etc?

evol262
Nov 30, 2010
#!/usr/bin/perl

fuf posted:

I started using my ubuntu server as a socks5 proxy server recently to bypass blocked sites. Is that activity logged somewhere so I can see what sites were visited etc?

Your web browser history.

What are you using to proxy? SSH or other?


fuf
Sep 12, 2004

haha
ssh, like this:

code:
# opens a SOCKS5 proxy on localhost:5555, tunnelled through the 'vps' host
$ ssh vps -D 5555
Is there a way of logging activity on the server side?
