Charles 2 of Spain
Nov 7, 2017

Try evaluating on a validation set after every epoch to see if you're maybe overfitting there.
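Something like this run after each epoch is what I mean. Just a sketch, assuming you already have a separate val_loader and a criterion set up; swap in your own names:

import torch

def evaluate(model, val_loader, criterion, device):
    model.eval()
    total_loss, total_count = 0.0, 0
    with torch.no_grad():
        for inputs, targets in val_loader:
            inputs, targets = inputs.to(device), targets.to(device)
            outputs = model(inputs)
            total_loss += criterion(outputs, targets).item() * inputs.size(0)
            total_count += inputs.size(0)
    model.train()
    return total_loss / total_count

# after each training epoch:
# val_loss = evaluate(model, val_loader, criterion, device)
# if val_loss starts climbing while training loss keeps dropping, you're overfitting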

Charles 2 of Spain
Nov 7, 2017

Not sure where to post this so I'll put it here.

I'm streaming audio data from a microphone and continuously running a PyTorch model on it (speech recognition and other stuff). On a Linux laptop with Ubuntu and a GeForce GPU the inference time is around 8ms, which is nice and fast. When I run the exact same code and model on a Windows desktop, also with a GeForce GPU, the inference time is around 20ms, more than twice as slow.

What could be the reason for this? The GPUs are the same on both systems, and both are being used as far as I can tell. I would understand a slight difference in performance depending on the operating system, but this is quite large. Is it something to do with how the model was trained?
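For context on the numbers: per-call GPU inference time is usually measured something like this. This is a minimal sketch, not my actual pipeline code (model, sample and device are stand-ins), and the synchronize calls matter because CUDA kernel launches are asynchronous, so the clock would otherwise stop before the GPU has finished:

import time
import torch

def time_inference(model, sample, device, n_runs=100):
    model.eval()
    sample = sample.to(device)
    with torch.no_grad():
        for _ in range(10):           # warm-up so CUDA init and allocations don't skew the numbers
            model(sample)
        torch.cuda.synchronize()      # wait for all queued GPU work before starting the clock
        start = time.perf_counter()
        for _ in range(n_runs):
            model(sample)
        torch.cuda.synchronize()      # wait for the GPU again before stopping the clock
    return (time.perf_counter() - start) / n_runs * 1000  # average ms per call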

Charles 2 of Spain
Nov 7, 2017

USB mic on both systems and I'm using PyAudio for streaming. The specific bottleneck seems to be the call to the model's encoder. Unfortunately I'm not familiar with the details of the transformer model, but that call appears to be doing the bulk of the inference work.
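The streaming loop looks roughly like this, heavily simplified (model, device and the exact input the encoder expects are placeholders here; the real preprocessing is more involved):

import numpy as np
import pyaudio
import torch

# model and device are set up elsewhere; placeholders for this sketch
CHUNK, RATE = 1024, 16000
pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)

model.eval()
with torch.no_grad():
    while True:
        data = stream.read(CHUNK)
        audio = np.frombuffer(data, dtype=np.int16).astype(np.float32) / 32768.0
        features = torch.from_numpy(audio).unsqueeze(0).to(device)
        encoded = model.encoder(features)   # <- this is the call that eats most of the time
        # ...decoding and the rest of the pipeline follow from here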

Charles 2 of Spain
Nov 7, 2017

Thanks, I'll check those out. I'm running Windows natively.

Charles 2 of Spain
Nov 7, 2017

According to the profiler, the Windows machine takes longer than the Linux one for basically every operation in the inference pass. So it does seem to confirm that there's an issue between this particular model and Windows. I've tried a couple of other Windows desktops as well, with the same results.
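For anyone who wants to compare on their own setup, PyTorch's built-in profiler can break the encoder call down per operation. A minimal sketch (features is a stand-in for whatever the encoder actually gets fed):

import torch
from torch.profiler import profile, ProfilerActivity

# model and features defined elsewhere; placeholders for this sketch
with torch.no_grad():
    with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
        model.encoder(features)   # the call that dominates inference time
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=15))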

Charles 2 of Spain
Nov 7, 2017

Charles 2 of Spain posted:

Not sure where to post this so I'll put it here.

I'm streaming audio data from a microphone and continuously running a PyTorch model on it (speech recognition and other stuff). On a Linux laptop with Ubuntu and a GeForce GPU the inference time is around 8ms, which is nice and fast. When I run the exact same code and model on a Windows desktop, also with a GeForce GPU, the inference time is around 20ms, more than twice as slow.

What could be the reason for this? The GPUs are the same on both systems, and both are being used as far as I can tell. I would understand a slight difference in performance depending on the operating system, but this is quite large. Is it something to do with how the model was trained?

OK so on Windows the model slows down if the console window used to launch the program isn't in focus. For example, if I minimize the window it grinds to a halt, but as soon as I bring it back up it starts running smoothly again.

:wtc:
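This is the kind of loop that makes it obvious: log the per-call latency with a timestamp and watch it jump the moment the window is minimized. A sketch, with model.encoder(features) standing in for the real inference call:

import time
import torch

# model and features defined elsewhere; placeholders for this sketch
with torch.no_grad():
    while True:
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        model.encoder(features)                      # stand-in for the real call
        torch.cuda.synchronize()
        ms = (time.perf_counter() - t0) * 1000
        print(f"{time.strftime('%H:%M:%S')}  {ms:.1f} ms")  # scroll back after restoring the window
        time.sleep(0.1)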

Charles 2 of Spain
Nov 7, 2017

Lol it appears that I fixed this by turning off Hardware-Accelerated GPU Scheduling.
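For anyone who wants to check that setting from a script: as far as I can tell the toggle under Settings > System > Display > Graphics settings is backed by the HwSchMode registry value (2 = on, 1 = off). Treat the key name and values as my assumption, and note that flipping the setting needs a reboot to take effect:

import winreg

try:
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                         r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers")
    value, _ = winreg.QueryValueEx(key, "HwSchMode")   # assumed: 2 = enabled, 1 = disabled
    print("HAGS enabled" if value == 2 else "HAGS disabled", f"(HwSchMode={value})")
except FileNotFoundError:
    print("HwSchMode value not present on this machine")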
