|
Try running a validation set after every epoch to see whether you're overfitting.
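Something like this, just as a rough sketch (toy model and random tensors so the loop actually runs; swap in your own model and loaders):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# toy model/data just so the loop is runnable; replace with your own
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
train_loader = DataLoader(TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,))), batch_size=32)
val_loader = DataLoader(TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))), batch_size=32)

for epoch in range(5):
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()

    # validation pass after every epoch
    model.eval()
    val_loss = 0.0
    with torch.no_grad():
        for x, y in val_loader:
            val_loss += criterion(model(x), y).item()
    print(f"epoch {epoch}: val loss {val_loss / len(val_loader):.4f}")
    # if val loss starts climbing while train loss keeps dropping, that's overfitting
```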
|
# ¿ Jun 24, 2022 06:33 |
|
Not sure where to post this so I'll put it here. I'm streaming audio data from a microphone and continuously running a PyTorch model on it (speech recognition and other stuff). On a Linux laptop with Ubuntu and a GeForce GPU the inference time is around 8ms, which is nice and fast. When I run the exact same code and model on a Windows desktop, also with a GeForce GPU, the inference time is around 20ms, more than twice as slow. What could be the reason for this? The GPUs are the same on both systems and both are being used as far as I can tell. I'd understand a slight difference in performance between operating systems, but this is quite large. Is it something to do with how the model was trained?
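For reference, a stripped-down sketch of the loop (the model here is a placeholder, the sample rate and chunk size are made up, and the real code does the speech recognition on the output):

```python
import time
import numpy as np
import pyaudio
import torch

RATE, CHUNK = 16000, 8000            # 16 kHz mono, 0.5 s per read (placeholder values)
model = torch.nn.Identity().cuda()   # stand-in for the actual speech model

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)

with torch.no_grad():
    while True:
        data = stream.read(CHUNK, exception_on_overflow=False)
        audio = torch.from_numpy(
            np.frombuffer(data, dtype=np.int16).astype(np.float32) / 32768.0
        ).unsqueeze(0).cuda()

        t0 = time.perf_counter()
        out = model(audio)
        torch.cuda.synchronize()     # make sure the queued GPU work has actually finished
        print(f"inference: {(time.perf_counter() - t0) * 1000:.1f} ms")
```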
|
# ¿ Mar 19, 2024 03:22 |
|
USB mic on both systems, and I'm using PyAudio for streaming. The bottleneck seems to be the call to the model encoder. Unfortunately I'm not familiar with the details of the transformer model, but that call appears to be doing the bulk of the inference work.
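For the timing itself I'm doing roughly this, heavily simplified (placeholder names throughout; `model.encoder` stands in for whatever the real encoder call is):

```python
import time
import torch

def time_call(fn, *args, n_iters=50, warmup=10):
    """Average wall-clock time of a GPU call, with warm-up and explicit syncs."""
    with torch.no_grad():
        for _ in range(warmup):
            fn(*args)
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(n_iters):
            fn(*args)
        torch.cuda.synchronize()
    return (time.perf_counter() - t0) / n_iters * 1000  # ms per call

# e.g. compare the whole forward pass against just the encoder call
# (attribute name and input are made up; substitute the real ones)
# print(time_call(model, features), time_call(model.encoder, features))
```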
|
# ¿ Mar 19, 2024 03:44 |
|
Thanks, I'll check those out. I'm running Windows natively.
|
# ¿ Mar 19, 2024 04:14 |
|
According to the profiler, the Windows machine is taking longer than the Linux one for basically all of the operations, so it does seem to confirm there's an issue with running this particular model on Windows. I've tried a couple of other desktops as well, with the same results.
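(For reference, a minimal torch.profiler sketch that produces the kind of per-op breakdown I'm talking about; the model and input here are placeholders:)

```python
import torch
from torch.profiler import profile, ProfilerActivity

# placeholders so this runs on its own; swap in the real model and a representative input
model = torch.nn.Linear(512, 512).cuda()
features = torch.randn(1, 512).cuda()

with torch.no_grad(), profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    record_shapes=True,
) as prof:
    model(features)

# per-operator breakdown sorted by total CUDA time; run it on both machines and compare
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=15))
```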
|
# ¿ Mar 19, 2024 05:09 |
|
Charles 2 of Spain posted: Not sure where to post this so I'll put it here.
|
# ¿ Apr 2, 2024 05:23 |
|
Lol it appears that I fixed this by turning off Hardware-Accelerated GPU Scheduling.
|
# ¿ Apr 4, 2024 07:58 |