nielsm
Jun 1, 2009



Artix posted:

Still doesn't help. It seems okay at first, but about four minutes in, when combat starts, you can tell that the audio is running about a quarter-second behind the video. No matter what framerate I change it to, VirtualDub always says that the framerate would have to be a little bit higher to make the audio/video lengths match, so I think one of them is just flat-out shorter than the other for some reason, but I don't know which one or by how much.

E: Audio cuts out before the video does. I don't know exactly how long, but it's about a quarter of a second pretty consistently once it starts desyncing.

This makes it sound more like you lost some video frames somewhere. So if the audio is perfectly synced for the first four or so minutes but then suddenly goes out of sync, you will need to either insert some extra video frames (without touching the audio) or remove a bit of the audio (without touching the video) at the appropriate location.
If it happens right when a combat scene opens, the frames were probably lost exactly at the point where the game cuts to combat, so maybe you can just insert some blank frames, or duplicates, there.

It should be relatively easy to do with Avisynth, something like:
code:
vid = AviSource("d:\capture\video.avi")
cut_location = 7200  # frame number where the video cuts to combat
lost_frames = 7      # about 1/4 second at 30 fps
# note Trim(cut_location + 1, 0) for the tail, so the cut frame isn't duplicated
respliced_vid = vid.Trim(0, cut_location) + BlankClip(vid, length=lost_frames) + vid.Trim(cut_location + 1, 0)
AudioDubEx(respliced_vid, vid)


nielsm
Jun 1, 2009



Strange Quark posted:

I got a Blue Yeti mic today, and after a few hours of using it, every recording I've made recently has turned crackly after about 25 seconds, and then goes in and out of that distortion every ten or so seconds after that. Here's an example.

Anyone know what might be causing this?

Perhaps a ring buffer not being read out fast enough, or something else driver- or bus-related.
It's a USB-connected mic, right? Try removing as many USB devices as possible (down to only keyboard and mouse, if you can) and see if the problem keeps occurring.
Check that your CPU usage isn't hitting 100% during recording.
Maybe try recording at different sample rates.


Also, re: screen recording, VirtualDub isn't terribly good at it, and configuring it properly can be clunky. OBS, however, can record to a local file instead of (or in addition to) broadcasting to an internet service. Try using OBS for recording instead.

nielsm
Jun 1, 2009



It's going to be pretty hard to really lower or eliminate delays when the sound has to go through the entire hardware and driver stack and loop through a software mixing system to go back out again; every component there introduces more delay.
You can look for buffer size settings in your drivers (perhaps check the device properties for your microphone in Device Manager), but it's unlikely to get much better without a shorter audio path. E.g. a microphone attached (with an analogue connection) to the same sound card the speakers are on could theoretically allow a shorter path.

My best suggestion would be to train your microphone skills to keep a consistent distance and speaking volume when using it, and be able to repeat it between sessions so you never have to adjust the gain. If you're recording post-commentary, always record with as much gain as you reasonably can, to get the best quality sound, and adjust the volume only for the final mix. For live commentary, just check with your audience if the volume is right.

nielsm
Jun 1, 2009



VALIS666 posted:

Ah, makes sense. I always assumed people who did simultaneous commentary while recording monitored their voice through their equipment as well, but I guess it's unnecessary with a few other considerations.

Ideally you should be monitoring yourself (professional broadcasters do), but if you aren't ready to invest in the equipment needed to do it with low enough delay, it's probably better to hold off on it.


The T posted:

So I have an issue where my Audacity recording will record at a slightly slower rate than the separate gameplay video recording.

It's probably because of sample rate skew in your sound card. A higher quality audio interface would probably lessen the problem, but that's more equipment to purchase.

Instead, make some sync points for yourself. Near the start of the recording, make a loud noise that also causes something to happen in the video, e.g. smack the space bar, and do the same near the end of the recording. Use those marks to sync on, and cut them out of the final video.

nielsm
Jun 1, 2009



The T posted:

That's a really good idea; now how do I do it for this video that's already recorded?

That's what I get for phoneposting.

The stretch factor for the audio is the same regardless of where you measure it, although the calculation will be less accurate if your sample is too short.
So check through the entire recording and look for something that can be used as a sync point. Or guess.

Guessing may be easier if, instead of adjusting the audio speed, you adjust the video speed. With Avisynth you can load the video, apply AssumeFPS(something), and try making small adjustments, checking whether it seems to sync up. Then calculate the audio stretch factor as (fps_that_matches / actual_fps_of_video). The change in fps should be in the range of +/- 0.025 if the skew is 3 seconds per hour and the video is originally 30 fps. (Double that if the video is 60 fps.)
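
As a rough illustration in Avisynth (a sketch only; the path and numbers are made up):
code:
vid = AviSource("d:\capture\video.avi")   # nominally a 30 fps recording
# nudge the fps until playback appears to sync; +0.025 matches ~3 s/hour skew
vid.AssumeFPS(30.025)
# audio stretch factor = 30.025 / 30.0 = 1.000833, applied in your audio editor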

nielsm
Jun 1, 2009



Touchfuzzy posted:

So, just wondering; does anyone know if the versions of x265 in MeGUI are worth using? Is h265 a thing that's worth using at this point, or still in its early stages and best left alone for now?

It's still rather experimental, and not useful at all for "web video". You could use it if you only wanted the videos to be for download, but don't expect it to be better than H.264 encoding yet. Also, it's still insanely slow at encoding.

nielsm
Jun 1, 2009



ritcheyz posted:

If I accidentally deleted the .aup Audacity file but still have the data folder, is there a way I can recover the project?

The data folder contains the raw audio clips, in no particular order, while the .aup file records which tracks exist and where in them each raw clip belongs.
You can't recover the project intact, but with manual work you have a chance of reconstructing it.

nielsm
Jun 1, 2009



Wikipedia says that the SNES outputs 512x448 (for NTSC) or 512x478 (for PAL), so ideally your initial captures should be at one of those resolutions. However, since the games are typically meant to be viewed at a 4:3 aspect ratio, you should then resize. Depending on the graphical style of the game, different resizers may work better, but 597x448 or 637x478 (correct pixel aspect while keeping the scanline count), or perhaps 800x600, could work well. Alternatively, scale all the way up to 1024x768, doubling the horizontal resolution and interpolating the vertical.
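
As an Avisynth sketch (assuming a 512x448 NTSC capture; pick the resizer that suits the game's art):
code:
vid = AviSource("d:\capture\snes.avi")
vid.Spline36Resize(597, 448)   # 4:3 pixel aspect, keeps the scanline count
# or scale up instead: vid.Spline36Resize(1024, 768)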

nielsm
Jun 1, 2009



Artix posted:

So I have a question regarding subtitles. I need to add them to a video, which I can do no problem. The thing is, I'd like to have them fade in and out, and I don't know how (or even if) I can do that in Avisynth. Is it possible, or do I need to use a program like Aegisub to get all fancy?

If you mean you're using the Subtitle() filter in Avisynth, then yes, it is sort-of possible to make them fade in and out, but it involves rendering the subtitles to a separate clip with a transparent background, doing some fading on that, and then overlaying that clip onto the main video. You could of course wrap all of that in a function, but it's not very nice and will probably slow down encoding measurably; a rough sketch follows below.
fe: ^^ or what he said
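
Something along these lines, perhaps (a rough sketch only: it cheats by using the rendered text clip as its own luma mask instead of true transparency, and the path, text and frame numbers are placeholders):
code:
function FadeSub(clip vid, string text, int start, int len) {
    pre  = vid.Trim(0, start - 1)
    body = vid.Trim(start, start + len - 1)
    post = vid.Trim(start + len, 0)
    # white text rendered on a black clip, faded, then used as its own luma mask
    txt  = BlankClip(body).Subtitle(text, align=2, text_color=$FFFFFF).FadeIn(10).FadeOut(10)
    return pre + Overlay(body, txt, mask=txt) + post
}
AviSource("d:\capture\video.avi").FadeSub("Example line", 300, 120)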

When you get used to a workflow with Aegisub and .ass subtitles you can do it quite easily, just by inserting the \fad tag in a line. (But beware that it tends to look really ugly if the text has a shadow, so don't combine \fad with shadowed text.) To add text subtitles to your video in Avisynth, get a recent VSFilter.dll (e.g. from the MPC-HC project), load it as an Avisynth plugin, and then use the TextSub(subtitle_file_name) filter.
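
That route looks roughly like this (paths are placeholders; assumes a VSFilter build that exports TextSub):
code:
LoadPlugin("C:\plugins\VSFilter.dll")   # a recent build, e.g. from MPC-HC
AviSource("d:\capture\video.avi")
TextSub("d:\capture\subs.ass")          # the .ass script authored in Aegisub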

nielsm
Jun 1, 2009



There's an extra quotation mark in the middle of the path. Right after the last \.

nielsm
Jun 1, 2009



Getsuya posted:

If you make your videos on Youtube private so that only people with the link can find them, will that circumvent the whole content-blocking thing? I ask because I really want to do a VLP of Final Fantasy Type-0 for my next LP and I've heard Squeenix is kind of defensive about music or certain cutscenes from their recent games. I'd much rather have a private video that only people from the LP thread can see than a public video I have to mute or mangle parts of to get it through censorship.

No, Content ID works equally on public, unlisted and private videos. Making a video unlisted means just that: it's still publicly viewable, it just doesn't get listed in search results.

You'd have to use your own hosting for the videos, instead of a public video upload site, to get around this.

nielsm
Jun 1, 2009



Tried following the advice given to the poster right above you? Use something other than VDub for recording, for instance OBS.

nielsm
Jun 1, 2009



Tendales posted:

Quick subtitling question that I just can't seem to find the answer to. I'm using Aegisub, and in a few spots I have continuous lines. Currently, it displays the first line, and then displays the second line over it. I'd like it to display the first line higher, and display the second line below it. What's the easiest way to accomplish that?

Continuous lines as in, continuous in time? The second line comes later in time than the first, but is a direct continuation? Or the lines are supposed to show simultaneously and are one logical sentence unit, but they show in wrong order?
Or are they lines from different speakers/characters spoken partially on top of each other?

If it's the first, continuous in time, I'd recommend just making them properly continuous by copying the end time of the first line to the start time of the second; then they won't overlap and the second will simply replace the first.
If it's the second, one logical sentence to be shown in a single go, then it should be a single subtitle and not two. Copy the text of the second to the end of the first one, then delete the second.

If the lines are from separate characters and do need to overlap, one thing you can try to do is create a blank line that will push the first line up initially, and then let the blank line disappear just as the second line is to be shown.
You can make a blank line by having the only text on it be the "\h" control code.

nielsm fucked around with this message at 10:16 on Jul 3, 2014

nielsm
Jun 1, 2009



Nidoking posted:

Youtube also has a new message when I upload MP4s telling me that the encoding will go faster if I encode to a streamable format. I haven't tested MKV to see whether it likes that better, since I'm in the middle of an LP and hate changing methods midstream, but from what I've been reading here, that's likely.

The "streamable format" thing is mostly just a feature of how the MP4 file is muxed. It should be possible to re-mux the file to have streaming hints, or better yet have the encoding software just produce these streaming hints in the first place.

nielsm
Jun 1, 2009



Also, do read some previous LPs of various JRPGs etc. that have VN-ish sections, to get an idea of how others have done it. There is definitely a balance to strike: how many full screenshots are too many or too few, how to handle characters speaking (mugshots, sizes, cropping), what to do about special effects/animations/sounds, how to add your own commentary (if any), and so on.

nielsm
Jun 1, 2009



That's not interlacing, that's frame blending or motion blur.

Motion blur is an intentional effect from the game engine's side, you can probably turn it off if you don't want it.

Frame blending is a side effect of frame rate conversion during encoding; whether it happens depends on the frame rate of your original recording, your target encoding frame rate, and the conversion method you use. If you want to avoid blending entirely, you will need a simple frame duplication/decimation conversion; the main drawback is that the result can appear choppier in playback.
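
In Avisynth the two conversion styles look like this (a sketch; the target rate is assumed):
code:
vid = AviSource("d:\capture\video.avi")
vid.ChangeFPS(30)    # duplicates/drops whole frames: no blending, may look choppy
# vid.ConvertFPS(30) # blends neighbouring frames: smoother, but ghosting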

nielsm
Jun 1, 2009



If your Foobar music is getting captured as part of the game audio, try switching Foobar to a different output plugin and/or device. If you're using DirectSound for Foobar, try WASAPI instead, and if you're using WASAPI, try DirectSound.

For getting rid of sound from a mechanical keyboard, I'd recommend getting a barrier to place between the keyboard and the microphone. Having the mic very close to your mouth also helps the signal-to-noise ratio, since the signal (your voice) will be louder than the noise (keyboard clacking). The only way a different microphone can help with this is if it has some sort of active noise cancelling: a second microphone pointing away from you captures the ambient noise, which is then cancelled out of the main signal. High-end headsets are the most likely to have this, but audio-quality-wise they will typically be intended for telephony rather than broadcast/production.

nielsm
Jun 1, 2009



If you're willing to throw more money at a microphone, I believe the Snowball uses a standard thread for mounting on the tripod it comes with, so you could buy a gooseneck with a desk clamp to hang it e.g. above or beside your monitor.

nielsm
Jun 1, 2009



Try running:

regsvr32 /i c:\windows\system32\quartz.dll

If you're on 64 bit Windows, also try this:

c:\windows\syswow64\regsvr32 /i c:\windows\syswow64\quartz.dll

nielsm
Jun 1, 2009



DirectX and DirectShow haven't had any relation for the last 10 years, and DirectShow is a fully integrated component of Windows, so there isn't really anything you can usefully reinstall. "Reinstalling" DirectShow essentially is regsvr32'ing quartz.dll.
I think the problem might be that the core DirectShow filters have been unregistered in some way, and those include some of the base AVI handling.
Alternatively it might be avifil32.dll, or something like that, which has gone haywire.

nielsm
Jun 1, 2009



Have you tried reading the OP?

There are lots of capture programs made specifically for games, and neither VDub nor HyperCam is among them.

nielsm
Jun 1, 2009



Mico posted:

if it's h264, then no there's not a better way i don't think.

Yep, H.264 makes everything harder. The video would have to be encoded with segment replacement in mind from the start.
The problem is that H.264 allows prediction references (P- and B-frames) across ordinary intra frames (only IDR frames break the chain), so if you blindly replace a segment, the preceding or following segment might suddenly be referencing frames that look different than expected, causing corruption. Then you'd also have to re-encode those segments, and the re-encoding eventually cascades all the way through.

nielsm
Jun 1, 2009



ZombieIsland posted:

Sorry, not the game audio. I mean my commentary is what I'm recording in audacity and I can't get it to sync up with the video I record.

You can't get it to sync up by timeshifting it? It's common for different capture devices to have different delays, so you will have to shift one of the signals if you want them to line up.

Or do you mean that one runs faster than the other, so the delay might be 0 at the start but then gradually increases?
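
If it's just a constant delay, timeshifting in Avisynth is one option (a sketch; the paths and the 0.25 s figure are made up):
code:
vid = AviSource("d:\capture\game.avi")
aud = WavSource("d:\capture\commentary.wav")
# positive values push the audio later, negative values pull it earlier
AudioDub(vid, aud).DelayAudio(0.25)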

nielsm
Jun 1, 2009



Xenoveritas posted:

As has been brought up elsewhere, SSDs have a limited number of writes before they croak. For both recording and encoding video, you're basically doing a large number of sequential writes, so this shouldn't be a giant issue. Unless you intend to re-encode the file multiple times for some reason.

Basically you may decrease the lifetime of the SSD a very small amount by recording/encoding to it, and you won't really get any sort of speedup from it, so if you have a giant HDD sitting around, you might as well use that.

By a very, very small amount, yes. In practice an SSD will not die from write wear unless you intentionally torture it.

nielsm
Jun 1, 2009



MoronsInc posted:

Hey, we're having a problem with interlacing on a video. We captured the video from a PS2, and the interlacing is very blocky. After spending a couple of hours using different deinterlacers in AviSynth, we've discovered that "Telecide" and "Yadif" make the interlaced lines smaller, but still present. Nothing else makes any noticeable effect. Any suggestions?

Be aware that there are two kinds of interlacing: true interlaced video and telecined video. Telecine is a process for converting 24 fps progressive-scan film to 29.97 fps interlaced video; you usually only see it on VHS and DVD home video of movies. Since video games actually run at the framerate of the video system (or an even divisor of it), they will basically never have telecined content: always true interlacing, or alternatively true progressive scan.

Telecide is a filter to do (semi-)automatic inverse telecine, meant for fixing film ripped off DVD or TV. Yadif is a deinterlacer meant to make true interlaced content look less bad on a progressive scan display (like most flatscreens.)

But yes, post a short, unprocessed video sample of your capture.

nielsm
Jun 1, 2009



Youtube compression ruins almost everything about that sample.

But check that your capture card doesn't do any resizing/scaling or attempts at deinterlacing of its own.

nielsm
Jun 1, 2009



berryjon posted:

Somehow, my Aegisub subtitle file, when applied through VirtualDub, was resizing my 800x600 video into 640x480.

I have no idea how, but while my computer is chugging away at the correct video versions, could someone help me figure out why this happened, or how?

PS, thank goodness I kept my raw files.

That sounds really strange, and certainly shouldn't happen by itself.

Try opening your subtitle file (should be a .ass file) in Notepad and copy out the [Script Info] section from it to here.
Also take a screenshot of your filter processing dialog in VirtualDub, if you have anything set there.
And a copy of your AVS file if you're using Avisynth.

nielsm
Jun 1, 2009



berryjon posted:

I fixed the problem by changing the PlayResX and PlayResY to the proper values.

As for the filter process, I couldn't see anything unusual there.

While the problem is fixed, I am still confused as to how it happened in the first place.

The PlayRes values should not cause any resizing to happen; they only control how positioning coordinates, font sizes etc. are interpreted.
I'd really like to know what was going on there, because if you didn't do anything wrong, it sounds like a bug in some of the software involved. (And I'm involved with some of that software.)

nielsm
Jun 1, 2009



Did you check whether the encoded file was actually in a different resolution? From your description it sounds like the video file did not change pixel resolution at any time.
However the script resolution (PlayRes) was working as intended: it gives the resolution of the "virtual canvas" the subtitles are produced for. If the actual video resolution happens to differ from the script resolution, coordinate scaling is performed before subtitle rasterization, so there isn't any loss of quality. It's only the numbers used for font sizes, positioning etc. that differ.

It's not entirely true that the script resolution can be anything and everything scales correctly, though. First, things get weird if the width/height ratios of the script and video resolutions differ. Second, perspective rotations with the \frx and \fry tags behave differently depending on the video resolution, especially when combined with the \org rotation-origin override. Unfortunately those things aren't fixable, but they are at least avoidable.

By the way, TextSub 2.33 is ancient, and you shouldn't be forcing H.264 video into AVI files.

You can change the script resolution in Aegisub from Properties in the File menu; you can also change the default wrapping mode there.
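
For reference, both live in the [Script Info] section at the top of the .ass file, something like:
code:
[Script Info]
PlayResX: 800
PlayResY: 600
WrapStyle: 0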

nielsm fucked around with this message at 01:40 on Jan 3, 2015

nielsm
Jun 1, 2009



You should probably be adding the subtitles after all the DS video juggling; otherwise you are putting subtitles on the raw video, and they will also be cut and moved around by the DS video functions.

code:
import("C:\Program Files (x86)\AviSynth 2.5\plugins\MastiDS.avs")
raw= avisource("C:\Users\paul\Desktop\4133 - Nanashi no Game Me (JP)(Independent)_14_31433.avi")
raw= raw.ChangeFPS(30)
trim(raw, 1454, 3022)

MDS_Set169WideScreen()

MDS_VertStack()

MDS_TopVSlide(raw, 0)

TextSub("C:\Users\paul\Desktop\NanaMeSubTest.rear end")

nielsm
Jun 1, 2009



ChaosArgate posted:

Also, why doesn't 5x resizing protect like 4x and 6x do? Seems a bit weird that the one integer between the two just doesn't. And for 3DS scaling, not all games are fully rendered 3D. Fire Emblem, for example, spends more time showing the player sprites than 3D models, whereas Ocarina of Time is entirely 3D.

Because of chroma subsampling. All video formats in common use store chroma (color nuance) at only one sample per 2x2 pixels; only the luma (brightness) is stored at full resolution.
Because the chroma is at half resolution in both directions, if you pixel-resize something by a non-even factor such as 5, you will get color bleeding at the edges of pixels.

There are also the codec macroblocks to take into consideration; for H.264 they can be 4, 8 or 16 pixels on each side. This means that 4x scaling will typically also look better than 6x.
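
To illustrate in Avisynth (a sketch; the path and source size are assumed):
code:
vid = AviSource("d:\capture\game.avi")
# even integer factors keep the half-resolution chroma aligned with pixel edges
vid.PointResize(vid.Width * 4, vid.Height * 4)   # 4x is clean; 5x would bleed chroma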

nielsm
Jun 1, 2009



Note that 1161 is 918 + (918-675), which is what you specify for "end" of the ImageSource.
What is the length of just clip2?
How does clipOverlay look by itself?

Try doing it a different way, perhaps:
code:
clip1 = Trim(FFVideoSource("S:\Elgato\Destiny - Came in like a destiny wrecking ball - 2015-01-10 21-13-44.mp4"),1507,2424)
imgclip1 = Overlay(clip1, ImageReader("D:\Pictures\reaction\1362441296642.jpg", end=FrameCount(clip1), fps=FrameRate(clip1)), 0, 0)
return Dissolve(Trim(clip1, 0, 774), Trim(imgclip1, 675, 0), 100)

nielsm
Jun 1, 2009



Avisynth just needs the right plugin to load whatever video you want.

But use FFMS2 for everything; it tends to work the best.
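
A minimal sketch (paths assumed; FFMS2 indexes the file automatically on first open):
code:
LoadPlugin("C:\plugins\ffms2.dll")
vid = FFVideoSource("d:\capture\video.mp4")
aud = FFAudioSource("d:\capture\video.mp4")
AudioDub(vid, aud)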

nielsm
Jun 1, 2009



Position, as in distance from bottom/top of screen?
You can change the margins in the Style Editor. You can also change the alignment to another edge, corner or the center.
And you can make multiple styles as appropriate.

nielsm
Jun 1, 2009



Niggurath posted:

It's more of a specific X,Y position rather than just general alignment, so the 9 positions they have in the styles manager isn't really too helpful.

I'll try to look into this to see if maybe there's something in there. Thanks.

There is no such thing as a "position profile", but there are the margins in styles. Those control the distance from the picture edges to the text, letting you (somewhat indirectly) position things.

nielsm
Jun 1, 2009



Analog TV signals have the great advantage that they always run at the same number of lines and the same scan frequencies. Regardless of what the game and console do internally for storage and rendering resolution of gameplay, FMVs etc., an analog US NTSC signal is always the same; digitized, it becomes 720x480 pixels at 59.94 fields per second, and you can process it from there.
A really good grabber might let you control the horizontal sample rate, which could improve quality somewhat if the console outputs at an uncommon rate, but the number of lines is fixed.

Can you post a sample of what you mean by it crapping out?

nielsm
Jun 1, 2009



Sounds like Macrovision or something. No idea why a game would produce a signal with that.

nielsm
Jun 1, 2009



Kaptain K posted:

I have a question semi-related to LPing. Is there any way for me to stream my gameplay just to one person without having to use Twitch or something else? The main concern is delay.

I've seen some videos like the Slowbeef & Woolie play Alien Soldier video where it's two people in different countries, the game is being played live and there's no 4 second delay.

Steam recently put its Broadcasting feature into general availability. It allows you to use Steam to stream your games, and you can choose who gets to watch: friends on request, friends always, or anyone.

I haven't actually tried it, don't know how well it works.

nielsm
Jun 1, 2009



FuriousAngle posted:

I don't know if this has been answered recently, but if it has I haven't seen it. What is the best and most cost-effective method for recording two commentators? My computer does not have multiple input jacks.

If two commentators are going to be in the same room, do I need to get a two-line mixer and two microphones? Or would one Snowball mic be able to pick us both up without too many issues?

Again, I have to stress cost-effective because I am currently poor.

Thanks in advance.

Assuming you don't want to just talk into a single microphone together (the major disadvantage being that you'll record a lot of room reverb), get two USB microphones, as good as you can afford.
Plug both in.
Start two instances of Audacity; configure one to record from the first microphone and the other to record from the second.
Push Record in both Audacity windows; it's okay to have a couple of seconds between them. You'll end up with two somewhat desynced audio files.
At the beginning of the recording, make a sync signal, e.g. clap loudly so both microphones pick it up.
When done, save the files, then import one recording into the other so you get them as two separate tracks. Line up the two tracks on your sync signal.
You've now got the two recordings ready, and can crop out the start with the sync signal.


nielsm
Jun 1, 2009



FuriousAngle posted:

Thanks! That all makes a lot of sense! The only problem is there's only one microphone IN jack on my computer. Since I don't have any expansion slots free (it's an mITX), I'm going to have to do something creative, right? Will using a headphone splitter work to join two microphones into one jack? It seems idiotic, but it's the only thing I can think of without venturing into line mixer territory.

That's why I'm suggesting you get USB microphones, or perhaps just a cheap USB audio interface.

USB microphones have a built-in audio interface and register as a complete separate sound card on your machine; they don't depend on your built-in sound hardware.
