Nidoking
Jan 27, 2009

I fought the lava, and the lava won.

GrandmaParty posted:

Grandma's Guide to Slightly More Than Minimum Effort LPs for Ugly Babies

Two things I'd recommend to improve your workflow:

Get rid of the FFIndex call. The first call to FFVideoSource or FFAudioSource will run FFIndex if the index file doesn't already exist, and if it does, they'll skip the indexing operation. I don't know whether calling FFIndex again forces it to re-index the file, but if it does, you're spending a few extra minutes every time you process the script. If not, then it's just an extra line that does you no good.
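For reference, a bare-bones source stanza (filename invented here) doesn't need FFIndex at all:

code:
# FFVideoSource/FFAudioSource create the .ffindex file on the first
# run and reuse it on every run after that
video = FFVideoSource("episode01.avi")
audio = FFAudioSource("episode01.avi")
AudioDub(video, audio)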

I find it a lot easier to do the AudioDub operation before you do your trims, so the Avisynth script handles the synchronization of your video and audio. Why do the work of trimming out the same frames twice when you can do it once? I open the script in VirtualDub and Save as WAV to get the audio file for Audacity, but you can also just MixAudio your final commentary track with the original audio and handle the leveling by adjusting the mix ratio until you like what you hear. If you're just exporting the original video for commentary, you can use the same Avisynth script for both video and audio input in MeGUI. In fact, I've been doing this:

code:
try
{
  # if the commentary WAV exists, resample it and dub it over the video
  soundclip = WavSource("..\Audio\" + clipname + "sound.wav").ResampleAudio(48000)
  mainclip = mainclip.AudioDub(soundclip)
}
catch (err_msg)
{
  # a missing file is fine (fall back to game audio only);
  # anything else is a real error, so re-raise it
  notfound = FindStr(err_msg, "couldn't open file")
  Assert(notfound != 0, err_msg)
}
to grab the commentary track (mixed with game audio in Audacity, in this case, but I could easily use MixAudio instead) if it exists and use that for the audio; otherwise, export just the game audio. Now, I can use the same script to export the edited video, and after I do the commentary, I export it to Audio\clipnamesound.wav and re-run the audio job. Avisynth and MeGUI pick up the file and replace the audio without me changing the script in any way. (clipname is the ScriptFile variable run through LeftStr to strip off the .avs part.)
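For anyone wondering what that last part looks like, it's something along these lines (exact function names may vary by Avisynth version, so treat this as a sketch):

code:
# strip the ".avs" extension (4 characters) off the script's filename
clipname = LeftStr(ScriptName(), StrLen(ScriptName()) - 4)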

Further, previewing the file in VirtualDub lets you grab exact frame numbers for your trims instead of having to do math. You could also just do the math in the script. floor((minutes * 60 + seconds) * video1.framerate) should do the trick, and you can even use fractional seconds that way. But exact frame numbers are much easier.
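For example (timestamps invented), cutting from 1:23.5 to 2:10 would look like:

code:
startframe = Floor((1 * 60 + 23.5) * video1.framerate)
endframe = Floor((2 * 60 + 10.0) * video1.framerate)
video1 = video1.Trim(startframe, endframe)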


Nidoking
Jan 27, 2009

Natural 20 posted:

I have literally no concept of what bitrate does or anything so I just used the default for ages.

Quick and dirty explainer time!

A "bit" is a binary digit, a 0 or a 1. It's handy for storing information in an electronic medium because you can represent the values with "on" and "off" in a circuit, but a single bit can only store one of two values. To store complex stuff like meaningful numbers, documents, pictures, or videos, you need to combine multiple bits into a single value. Two bits can represent four values: 00, 01, 10, and 11. The number of values increases exponentially with the number of bits used - 8 bits can store 256 distinct values, 32 bits gets you into the billions, and 64 bits can count more seconds than the expected lifespan of the universe.
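To put actual numbers on that:

code:
2^8  = 256
2^16 = 65,536
2^32 = 4,294,967,296 (about 4.3 billion)
2^64 = 18,446,744,073,709,551,616 (about 1.8 x 10^19; the universe is
       only roughly 4.4 x 10^17 seconds old)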

Of course, the bits alone are pretty meaningless until you assign some significance to them. Obviously, each pattern of bits is a binary representation of a number (and you can encode negative numbers in a variety of ways), but then you have encodings like ASCII or Unicode that assign each numerical value a character or symbol, so that you can turn text into bits. Graphical data, for the most part, is split into pixels, and each pixel will be represented by a number to indicate its color. There are lots of ways to do that, but I'm going to stick with RGB24, which assigns 24 bits to each pixel - 8 bits for the amount of red, 8 bits for the green, and 8 bits for the blue. A graphic file will also have a header to describe features of the image such as its dimensions, so that the graphics driver can assemble the pixels correctly into the image.

Video data, setting aside the audio portion, is a series of images, plus information like frame rate, dimensions, which color scheme it's encoded in, and so on. The video portion of the file is a gigantic string of bits that are split up into frames, each of which is a string of pixels, each of which is a string of bits. You really get to appreciate the power of graphics processors when you consider how rapidly they can read such a file and convert it into the video you see on your screen. But just how many bits are there in a video? Let's say it's a 1920x1080 video with RGB24 pixels and 30 frames per second. Without compression, that's 1920x1080x24x30 bits per second, or 1,492,992,000 bits per second.
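Scale that up (rounded figures) and you can see why nobody stores uncompressed video:

code:
1,492,992,000 bits/s / 8 = 186,624,000 bytes/s (about 187 MB/s)
187 MB/s * 60 s          = about 11.2 GB per minute
11.2 GB/min * 30 min     = about 336 GB for a half-hour video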

Fortunately, video encoders do things like store only part of some frames, or combine two pixels into fewer bits, to cut down on the size needed to store the video. But there's only so much you can decrease the size of a file before you start to lose some of the data. The fewer bits you have available, the fewer distinct files you can possibly store. Try to reduce the size too much, and the encoder can't possibly keep all of the detail in each frame. However, encoders have algorithms that try to determine, based on the content of the frames, which data will be least harmful to lose in representing the video. Pixels that don't change for a long time, or change to values that are very close to the previous values, won't affect the final product as much if they're skipped over for a frame or two, while if the whole image is changing rapidly, it's going to take a lot of bits to keep up. So going a bit below the minimum for lossless encoding might not be noticeable, but the fewer bits you make available, the worse the video will look. That's what the bit rate is - how many bits you make available per unit of video, usually seconds - and why it's important to how your videos look, as well as why different values are appropriate for different videos. But, as has been said, the more bits, the better it will look, until you hit the bit rate for uncompressed video.
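As a rule of thumb, bitrate times running time gives you the file size, so you can also work backwards from a size target (numbers invented):

code:
5,000 kbps * 10 minutes = 5,000,000 bits/s * 600 s = 3,000,000,000 bits
3,000,000,000 bits / 8  = 375,000,000 bytes (about 375 MB, video only)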

Nidoking fucked around with this message at 00:12 on Jun 13, 2019

Nidoking
Jan 27, 2009

Stabbey_the_Clown posted:

I need a replacement for my outdated MP4Box plugin for MeGUI. What should I be looking for to encode MP4 or MKV files?

I've just been using mkvmerge in MeGUI and haven't had any problems.

Nidoking
Jan 27, 2009

Jamesman posted:

Just eyeballing a video I'm working on right now, I would say my audio track is roughly 10 dB louder than the other audio track. If I adjust the volumes any more, it sounds way too unbalanced to me, where I either can't hear the other audio track, or my voice just sounds way too loud. If I can't hear the problem myself, how can I fix it? (Please don't say auto-ducking; we don't do that in this house)

I think the best thing to do would be to ask those people if they can give you some representative timestamps where your voice gets lost most noticeably to them, and listen specifically to those parts to try to get a better impression of the problem. My solution is usually to raise my own voice just in the places where the game gets louder - it's more work to manually find those parts of the track and amplify them, but it's much more precise than trying to find perfect levels for each track as a whole. I rarely go out of my way to adjust portions of the game audio that way, although I've done it a few times, particularly with games that route all their cutscene audio through the Music volume, so the cutscenes become inaudible.

Nidoking
Jan 27, 2009

BadMedic posted:

AviSynth being a scripting language feels so weird to me though. I watched my dad edit while growing up, and it is a very visual process. I can't imagine tweaking a special effect or even colours to be just right by editing a text file.

Doing it once may well be more tedious than doing it in a more immediately visual editor, although I find it very smooth just to have Editpad and VirtualDub open side by side and reload the video as I change the script. The real advantage to Avisynth comes from only having to do most of the edits once. For some series, I can just rename a script, change a couple of numbers, and turn the input files into a complete video in a matter of seconds (not counting encoding time). I can also set up a complicated sequence of edits using a variable and tweak the whole thing just by changing a number in one place, if I set it up properly.
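As a made-up illustration of that variable trick:

code:
# change splitpoint once and both trims and both fades follow
splitpoint = 5000
intro = mainclip.Trim(0, splitpoint - 1)
body = mainclip.Trim(splitpoint, 0)
intro.FadeOut(30) ++ body.FadeIn(30)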

Nidoking
Jan 27, 2009

GrandmaParty posted:

What's the best editor for this? Avisynth is making it look byzantine at this point.

Bah.

code:
# four quadrant clips into a 2x2 grid
left = StackVertical(upperleft, lowerleft)
right = StackVertical(upperright, lowerright)
final = StackHorizontal(left, right)

Nidoking
Jan 27, 2009

Supeerme posted:

I am trying to use VirtualDub to hard sub my videos but for a strange reason, the filesize is absurdly high. (600GB+) How do I stop it from doing that?
Here is my AVS:

The input is much less important than the output encoding. Are you setting an encoding type, or outputting the raw video? Likewise, are you switching the audio to Full Processing and choosing a compression, or just copying the raw audio?

You might find it easier to run the scripts through MeGUI for the final output. I only use VirtualDub for previews and editing.

Nidoking
Jan 27, 2009

What's the error? And did you leave out the quotes around the filenames when you pasted the script here, or is that maybe the source of the error?

Nidoking
Jan 27, 2009

Is that at the AviSource line, or the Trim ++ line? Either way, you just need to ChangeFPS one of the clips to match the other. I do that by passing the clip rather than a number.

clip1 = clip1 ++ clip2.ChangeFPS(clip1) is pretty much guaranteed to fix that problem.

Nidoking
Jan 27, 2009

No, that's just how the Dissolve filter works. It has to process twice as many clips per frame, so you get significantly less speed. The same sort of thing happens with filters like StackHorizontal. If you want to see the video at full speed, you'll have to render it.

Nidoking
Jan 27, 2009

That is a form of rendering it, yes. I do this all the time when I need to verify the results of an edit - I export just the relevant part of the video with some quick and dirty settings, tweak it as needed, and then go back to my normal method when it's time to render the complete video.

Nidoking
Jan 27, 2009

Have you tried opening the script in, say, VirtualDub, saving the audio as a Wav, and using that as the audio track (either directly in MeGUI, or in the script)?

Nidoking
Jan 27, 2009

Little swirling Batman scene change GIF, but Tidus's face instead of the Batman symbol.

Nidoking
Jan 27, 2009

This has been happening to me lately with my capture device too. As long as it's gradual desync, I've just fixed it in post. I adjust the audio delay until the beginning lines up, then adjust the video frame rate until the end lines up as well.
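In Avisynth terms, the fix looks something like this (the numbers here are invented; yours will differ):

code:
# shift the audio so the start lines up, then retime the video
# slightly so the end lines up too
audio = audio.DelayAudio(0.35)
video = video.AssumeFPS(29.982)
AudioDub(video, audio)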

Nidoking
Jan 27, 2009

Another option is to leave the video at its original resolution and pad it with black bars all around, so you get a tiny window of video in the middle of the screen somewhere. That's a lot closer to the way PCs were handling video back then.
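With Avisynth, that's just AddBorders (target and source sizes assumed here):

code:
# center a 640x480 clip in a 1920x1080 frame with black padding:
# left/right = (1920 - 640) / 2, top/bottom = (1080 - 480) / 2
padded = smallclip.AddBorders(640, 300, 640, 300)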

Nidoking
Jan 27, 2009

I've dealt with this sort of thing, in a way, just using Avisynth, but it's a colossal pain. Basically, if you can successfully split the video and audio into parts that should line up but don't, you can AssumeFPS and ChangeFPS the video portions to force them to the correct length and speed and then match them with the audio. If you've got visible hitches, that might help you line up the splits. It's up to you whether it's more effort to do that or to re-record the video. Of course, if the problem keeps happening, then you'd need to identify and fix the root cause.
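A rough sketch of what I mean (split points and rates invented):

code:
# retime each chunk so its length matches the audio, then force a
# constant output rate and splice them back together
part1 = video.Trim(0, 8999).AssumeFPS(29.95).ChangeFPS(30)
part2 = video.Trim(9000, 0).AssumeFPS(30.02).ChangeFPS(30)
fixed = part1 ++ part2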


Nidoking
Jan 27, 2009

My method for DOSBox 320x200 video was to PointResize to 640x400, then LanczosResize to 640x480. You could then PointResize again to 1280x960 to get something that should encode pretty well.
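Spelled out as a script (source filename invented):

code:
AviSource("dosbox.avi")     # 320x200 capture
PointResize(640, 400)       # double the pixels cleanly
LanczosResize(640, 480)     # fix the aspect ratio
PointResize(1280, 960)      # double again so it encodes well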
