01/29/23

Prevent Audio Desynchronization when recording via GeForce Experience

If you’ve ever tried to record gameplay footage using GeForce Experience, you may have noticed that the audio gradually desynchronizes over time. When the footage is under 5 minutes you probably won’t notice it, but above 20 minutes the audio delay can be as much as 2 seconds. (Technically, it’s not the audio that’s delayed; rather, it’s the video that isn’t synced properly.)

The following is an example of me trying to record gameplay at 60 fps while my machine could only produce 30-50 fps. The original recording was over 45 minutes long and I had to edit the footage by cutting and resyncing the audio multiple times. It was a pain to edit.

Valheim running at 30-50 fps while recording at 60 fps

The most common solution you will find online is to limit your recording to 5-10 minutes per clip. This way, the desynchronization resets at the start of each new clip. However, I found a way to keep the audio and video in sync. The following video’s original footage was over 20 minutes long, but I barely needed to edit it to resync the audio and video.

Valheim captured and recorded at 30 fps

The main cause of the issue is simple: video games are rendered at a variable frame rate, measured in frames per second (fps). Videos are played back at a constant frame rate, such as 29.97 fps (NTSC) or 25 fps (PAL), among many other standards. To minimize, if not eliminate, audio desynchronization, it is best to match the game’s frame rate to the recording frame rate. GeForce Experience has 2 recording frame rate options: 30 and 60 fps.

GeForce Experience recording settings

For the game, you must set a limit on your frame rate.

Frame Rate option in Overwatch

But note: limiting the frame rate in the game simply sets a maximum. It doesn’t guarantee the game will actually hit that frame rate. If your game is running below the desired frame rate, you’ll need to tweak some settings to increase it. The frame rate depends primarily on 2 things:

  • Graphics Settings
  • Graphics Processing Unit (GPU) or Video Card

Your graphics settings determine the computational power needed, while your GPU performs those computations. The higher the quality of your graphics settings, the more powerful a GPU you’ll need. If money isn’t a problem, then simply buy a more powerful graphics card and your problems are solved – no need to read the rest of this guide. Who said money can’t solve your problems? But if you’re not related to Richie Rich, then please continue reading.

Lowering the following graphics options will have the highest impact on improving your frame rate.

  • Screen Resolution
  • Shadow Quality
  • Reflections/Refractions
  • Particles
  • Post-Processing (Bloom, HDR, Depth of Field, etc)

The Screen Resolution setting is pretty straightforward – a lower resolution means fewer pixels to show, less work for the GPU, and a higher frame rate. 4K resolution has 4 times the number of pixels of 1080p, so dropping from 4K to 1080p should, in theory, quadruple your frame rate, but other factors prevent that much gain. Those factors are buried deep in the details of GPU architecture, which I do not plan to cover here. Nonetheless, the effect is immediately noticeable.

Shadow Quality affects how sharp the shadows are. Shadows are one of those graphics techniques that add a lot of immersion to video games simply because our brain intuitively uses shadows to judge things like depth and distance. Unfortunately, in video games, shadows require extra render passes: the scene is rendered once per shadow-casting light source to determine which objects occlude light. The shadow quality setting controls the quality of these render passes. You will see significant frame rate boosts when lowering shadow quality, or turning shadows off completely, in scenes with shadow-casting light sources.

Comparison of Shadow Quality Settings (Low, Medium, High, Ultra) in Overwatch

Side note: do not confuse Shadows with Lighting. Lighting is when a light source illuminates objects. When a light source is blocked by an opaque object, a shadow is cast behind it. In reality, these 2 work in conjunction; in video games, lighting is a separate computation from shadowing. In fact, there are advanced lighting techniques (e.g. Deferred Lighting) that allow hundreds of lights in the frame while barely affecting the frame rate. Shadows, on the other hand, are normally limited to 1-2 light sources since they’re computationally expensive. Games cheat around this limitation by capping the distance at which a light source casts shadows.

Reflections and Refractions are uncommon in video games for the simple reason that they’re computationally expensive with little benefit to the overall experience. Similar to shadows, these features add extra render passes, but unlike shadows, they have less effect on immersion. These options are normally a toggle, but sometimes they come as a resolution quality. To see the effect, you need to look at a reflective or refractive surface, which, again, is not commonly in front of you in many scenes.

Particles are a visual effect used to depict fluid-like motion: the flames of a campfire, a dynamite explosion, smoke from a burning bush, sparks from an exposed live wire, bubbles when exhaling underwater, the exhaust of jet engines, etc. The applications are endless and they add a lot of realism and signaling to video games. GPUs manufactured in the last 5 years can easily render thousands to tens of thousands of particles every frame. But in case your game renders millions of particles or your GPU is a bit on the aging side, lowering the particle quality or count could give a small boost to your frame rate.

Post-Processing effects are very popular nowadays. They add visual effects that mimic how the eye dilates depending on the amount of light present (HDR), how cameras feather out strong lights (Bloom), how the eyes focus on objects at varying distances (Depth of Field), etc. These techniques are pretty standard and can easily be added to video games. However, they are an additional computation which can lower your frame rate. Some effects cost more than others, and a bit of experimentation is necessary to measure your mileage.

It is possible for frame rates to drop for other reasons. For example, if you have hundreds of pigs roaming and path-finding on a 3D map (AI) or thousands of balls bouncing off one another in a confined space (Physics), you will see significant drops in frame rate. Ideally, the game won’t let you reach that point unless you did something crazy.

Unlimited lox, wolf, boar, and chicken farm in Valheim

Let’s do a recap on how to record your footage with minimal audio desynchronization:

  1. Leave your frame rate uncapped (for now)
  2. Lower your graphics settings until you get a higher-than-desired frame rate. For example, if you’re aiming for 60 fps, lower your graphics options until you reach about 70 fps.
  3. Cap your frame rate to either 30 or 60.
  4. Set your GeForce Experience recording target frame rate to your desired frame rate.
  5. Double check that your game can consistently run at your desired frame rate.
  6. Hit record!

I hope this helps and may you produce high quality epic footage of your gaming experiences!

Did you notice a mistake in my post? Or simply have the urge to chat with me? Either way, feel free to reach out in this twitter thread.

01/23/17

Experiment at the GGJ17

My biggest takeaway from my GGJ17 experience was implementing a sprite sheet animation selector based on a given angle. This is only applicable to 2D games with more than 2 views per animation. If you haven’t read about our game from my last post, you can get more info here.

Here are the requirements:

  • Surfers could be going right, down right, down, down left, or left
  • Dolphins could go left, up left, up, up right, or right
  • Tiki could point left, up left, up, up right, or right

Each of these directions is a sprite sheet/sequence animation. The solution:

  • Each direction is a game object that animates through the sprite sheet/sequence
  • These direction game objects are children of a selector game object
  • The selector’s criterion is an absolute direction (toward a target object, the mouse, etc)
  • The selector activates the direction closest to that criterion and deactivates all others

If the directions weren’t animations (like the Tiki pointing), then a simple sprite selector based on angle would be enough.
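
To make the idea concrete, here’s a minimal C# sketch of what such a selector could look like in Unity. The class and field names are illustrative assumptions; the actual implementation is in the linked repo.

using UnityEngine;

// Minimal sketch of the angle-based selector described above.
// Names are illustrative; the real implementation lives in the linked repo.
public class AnimationAngleSelector : MonoBehaviour
{
    // One child per direction, each animating its own sprite sheet/sequence.
    [SerializeField] private GameObject[] directionObjects;
    // The angle (in degrees) each child represents, e.g. 0, 45, 90, 135, 180.
    [SerializeField] private float[] directionAngles;

    // Activate the child whose angle is closest to the given direction
    // (toward a target object, the mouse, etc.) and deactivate all others.
    public void Select(Vector2 direction)
    {
        float angle = Mathf.Atan2(direction.y, direction.x) * Mathf.Rad2Deg;

        int closest = 0;
        for (int i = 1; i < directionObjects.Length; i++)
        {
            if (Mathf.Abs(Mathf.DeltaAngle(angle, directionAngles[i])) <
                Mathf.Abs(Mathf.DeltaAngle(angle, directionAngles[closest])))
            {
                closest = i;
            }
        }

        for (int i = 0; i < directionObjects.Length; i++)
        {
            directionObjects[i].SetActive(i == closest);
        }
    }
}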

A public repo of the project can also be found here. Here’s a UnityPackage of the demo:

Download: AnimationAngleSelector.unitypackage (1.5 MiB)

01/02/15

Recommended Facebook Privacy Settings

This post is not related to Game Development or Programming, but I find it worth spreading. I hope you find it useful for your online reputation.

Have you ever had a stranger like a picture on your Facebook? Or someone you don’t know suddenly commenting on your status? Or did you know that whenever you get tagged, it shows up on your wall without your permission? But most importantly, do you want to have more control over the privacy of your Facebook profile? Continue reading

10/12/14

Determining if an Integer is a power of 2

Using C/C++, what is the fastest way to determine if an integer is a power of 2?

Continue reading

09/19/14

C Operator Precedence

Given the statement below, what is pf?

  1. a pointer to an array of 10 arrays of 5 pointers to functions, with float and double parameters, that return an int
  2. an array of 10 arrays of 5 pointers to pointers to functions, with float and double parameters, that return an int
  3. a pointer to a pointer to an array of 10 arrays of 5 functions, with float and double parameters, that return an int
  4. an array of 10 arrays of pointers to an array of 5 pointers to functions, with float and double parameters, that return an int

Continue reading

12/31/13

Touch Events in Unity3D

I’ve been working on Crazy Bugz for the past few days to take advantage of the 2D physics and sprites brought in by the latest version of Unity3D. Many things have been updated, which deserves a post all by itself. For now, I want to discuss a discovery I made regarding touch events. I’m not sure if this is iOS- or Unity3D-specific, but I’ve built a workaround that seems to be working for now 😀

For a demonstration, here’s a Unity3D package (requires 4.3). Build and test it on touch devices; I’ve only tested it on iOS devices. You can test it with Unity Remote, but it has limitations in touch responsiveness, which is critical for this demonstration.

Download: TouchDemo.unitypackage (14.3 KiB)

First and foremost, I made a generic event handler for the different touch phases in TouchMonoBehavior.cs

The OnTouch* event handlers are all declared public virtual void and receive a Touch parameter. This means each event handler could be called multiple times per Update(), depending on how many touches there are (Input.touchCount). We can treat each touch separately by keeping track of the finger id that comes with each touch.
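
Since the original TouchMonoBehavior.cs isn’t embedded here, the following is a minimal sketch reconstructed from that description; treat the exact dispatch loop and handler names as assumptions rather than the actual file.

using UnityEngine;

// Sketch of a generic touch-phase dispatcher, reconstructed from the description above.
public class TouchMonoBehavior : MonoBehaviour
{
    protected virtual void Update()
    {
        // One callback per active touch, routed by its phase.
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.GetTouch(i);
            switch (touch.phase)
            {
                case TouchPhase.Began:      OnTouchBegan(touch);      break;
                case TouchPhase.Moved:      OnTouchMoved(touch);      break;
                case TouchPhase.Stationary: OnTouchStationary(touch); break;
                case TouchPhase.Ended:      OnTouchEnded(touch);      break;
                case TouchPhase.Canceled:   OnTouchCanceled(touch);   break;
            }
        }
    }

    // Subclasses override the phases they care about. Each handler receives the
    // Touch struct, so touch.fingerId can be used to track individual fingers.
    public virtual void OnTouchBegan(Touch touch) { }
    public virtual void OnTouchMoved(Touch touch) { }
    public virtual void OnTouchStationary(Touch touch) { }
    public virtual void OnTouchEnded(Touch touch) { }
    public virtual void OnTouchCanceled(Touch touch) { }
}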

All the while, I thought the touch phases followed this state diagram:

Touch Phases State Diagram

Unfortunately, after testing over and over again, it is POSSIBLE to start with Moved or Ended! I haven’t checked whether it can start with Stationary since I don’t use that phase in my projects at the moment. But my point here is that the Began phase CAN BE SKIPPED! I’m not sure if this is intentional, but it’s happening and it had me pulling my hair out for the past couple of days.

I’m using an object pool in my project where the objects react to touch. When the user touches, an object is created; let’s call that Object 0. When the user touches again, another object is created, Object 1. If Object 0 gets disabled (as part of the game mechanic) and the user touches again, Object 0 is re-initialized and treated as something new. Objects don’t get destroyed; rather, they become disabled. This is basically how object pooling works.
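
For readers unfamiliar with the pattern, here’s a bare-bones pool sketch; MirrorPool and its methods are illustrative, not the actual Crazy Bugz code.

using System.Collections.Generic;
using UnityEngine;

// Bare-bones object pool: objects are never destroyed, only disabled,
// and a disabled object is re-initialized when it gets reused.
public class MirrorPool : MonoBehaviour
{
    [SerializeField] private GameObject mirrorPrefab;
    private readonly List<GameObject> pool = new List<GameObject>();

    public GameObject Spawn(Vector3 position)
    {
        // Reuse a disabled object if one is available...
        foreach (GameObject mirror in pool)
        {
            if (!mirror.activeSelf)
            {
                mirror.transform.position = position;
                mirror.SetActive(true); // re-initialized and treated as new
                return mirror;
            }
        }
        // ...otherwise grow the pool.
        GameObject fresh = Instantiate(mirrorPrefab, position, Quaternion.identity);
        pool.Add(fresh);
        return fresh;
    }

    public void Despawn(GameObject mirror)
    {
        mirror.SetActive(false); // disabled, not destroyed
    }
}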

In theory, every time a Began phase is encountered, a mirror is re-initialized in Crazy Bugz. The user can rotate or stretch this mirror by moving their finger, which corresponds to the Moved phase. When the user releases their finger, an Ended phase is encountered, and that mirror remains enabled until it gets disabled (shattered) by the laser. However, on certain occasions I would touch and get a NullReferenceException. It turns out my game was trying to look up a mirror with a finger id that was never created. This means the Began phase was skipped!

As a workaround, if the Began phase has been skipped and the touch goes directly to the Moved phase, I treat that as a Began phase. If the Began phase has been skipped and the touch goes directly to Ended, I simply ignore it.
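
A sketch of that workaround, building on the TouchMonoBehavior sketch above (class and field names are illustrative):

using System.Collections.Generic;
using UnityEngine;

// Treat a Moved with an unknown finger id as a Began; ignore an Ended with an unknown finger id.
public class MirrorTouchHandler : TouchMonoBehavior
{
    // Finger ids for which we actually saw (or synthesized) a Began phase.
    private readonly HashSet<int> activeFingers = new HashSet<int>();

    public override void OnTouchBegan(Touch touch)
    {
        activeFingers.Add(touch.fingerId);
        // ... grab a mirror from the pool and bind it to touch.fingerId ...
    }

    public override void OnTouchMoved(Touch touch)
    {
        // Began was skipped: treat this Moved as the Began phase.
        if (!activeFingers.Contains(touch.fingerId))
        {
            OnTouchBegan(touch);
            return;
        }
        // ... rotate/stretch the mirror bound to touch.fingerId ...
    }

    public override void OnTouchEnded(Touch touch)
    {
        // Began was skipped and we went straight to Ended: ignore it.
        if (!activeFingers.Remove(touch.fingerId))
        {
            return;
        }
        // ... finalize the mirror bound to touch.fingerId ...
    }
}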

Not exactly the best solution but this will have to do.

The ideal case? Well, Began phase shouldn’t be skipped… ever 🙂