FiveM recurring freezes/hitching on clients


Some of the players on one of my servers are complaining about random freezes/hitching while moving in the area where most players are (Los Santos). My guess is that this hitching could be related to the game spawning/removing the player’s ped, but I may be wrong. I hope you can help me solve this issue.

I have attached a video and a trace of where this issue happened for a player.

No big/heavy events were triggered during the test; pmms:sync is a redundant event from a recently added public script and is not the issue here.



@Disquse maybe?

Seems to be GPU memory running out, and the NVIDIA driver being slow at paging these days (leading to the D3D11 query call in rage::grcSetup::BeginDraw being slow). What are the graphics settings here (and is ‘extended texture budget’ set to the default of 0% as it should never be any higher for 4 GB of VRAM)?

The player in the video has the slider set to ~30% on a 2060 with 6 GB of VRAM.

Another player with the issue has it set at 50%.

It shouldn’t ever be set above 0%, or worst case 20%, unless you have 12/24 GB of VRAM.

Is this on production, or beta/latest? There were some changes a while ago to try to mitigate this, and it’d be helpful to know if they had no effect (in case this is on beta or newer).

The video is on canary, and the user confirmed it is still happening there.

I can vouch for this one.

3080 12 GB version, and after you have been in a server for more than 2 hours (just speculation), it happens.

I have found out that this happens (mostly) when you are about to enter a loading zone for MLOs. Los Santos is a prime spot as there are MRPD maps and Pillbox hospitals nearby.

I have a lot of streamers in my city who have this issue. I will be sending some clips if that helps.

It is very rare that the freeze is so long that it just crashes your game, but this is one of many I found.

There are Luxart Vehicle Siren events and the pmms sound script running.

Instead of ‘vouching’, can you provide reproduction steps and/or another trace?

Mainly reproduction steps since traces just show opaque functions in the NVIDIA GPU driver, and a fix can’t be verified without reproduction steps.

Did ‘the user’ confirm it happens with extended texture budget disabled entirely?

One says that removing NVE (he did not tell me he had that) and reducing the extended texture budget fixed the issue.
The other one, with a 2060 with 6 GB of VRAM, says he still has small stutters with the texture budget set to

Gonna ask him to play a bit at 0%; the issue is that it sometimes leads to some texture loss despite little content being added.

I’m updating this thread as I’m getting the same issue right now on canary with Extended Texture Budget at around 40% (with a 3080), so it could be something else. Here are the traces: 535.79 MB file on MEGA

I downloaded them but I can’t analyze them yet as symbol server configuration for b2802 seems off. This is pending a change from the infra folks.

Do I need to make a new trace once the symbol server is ready?

Not sure; I would assume not, it’s just a redirection setting that is missing.


It’s again the game waiting for the GPU driver to complete the last frame. Seeing the amount of background processes (including ‘NVIDIA Broadcast’ and quite a few Electron apps that would be holding on to GPU allocations), and assuming an ‘RTX 3080’ is a card with still ‘only’ 10/12 GB VRAM, it’s likely the exact same issue again: running out of VRAM and paging stuff back in, since your budget setting (+ any NUI usage, perhaps?) still exceeds what the card can allocate.

Traces don’t seem to contain exact details in that regard, but the commit size for the game process is way over what it should be, and a large part of this could be GPU allocations. Similarly, in-process NUI GPU usage might trip an edge case in the NVIDIA driver that causes larger/longer hitches; the NVIDIA driver really does not appear to be tested with multiple GPU contexts.

For numbers’ sake: a 40% budget slider (vid_budgetScale being 8) seems to make the grcResourceCache cap for ‘very high’ texture quality around 6575 MB (strmem shows this). Add other things in the game that might use GPU memory (mainly NUI) and heavy background apps, such as the likely 1-2 GB consumed by ‘NVIDIA Broadcast’, and you’re likely to run into weird paging behavior.
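As a rough sketch of the arithmetic above: the 6575 MB cap is the figure quoted in this post, but the NUI, background-app, and "other game usage" numbers below are illustrative assumptions, not measured values.

```python
# Rough VRAM demand estimate using figures discussed in this thread.
texture_budget_mb = 6575   # grcResourceCache cap at a 40% slider (per strmem, from this post)
nui_mb = 500               # assumed: in-process NUI/CEF allocations
background_mb = 2000       # assumed: NVIDIA Broadcast, Electron apps, etc.
other_game_mb = 1500       # assumed: render targets, geometry, shaders

total_demand_mb = texture_budget_mb + nui_mb + background_mb + other_game_mb
card_vram_mb = 10240       # e.g. an RTX 3080 10 GB

print(f"Estimated demand: {total_demand_mb} MB of {card_vram_mb} MB VRAM")
if total_demand_mb > card_vram_mb:
    print("Over budget: driver paging and hitches become likely")
```

Even with conservative guesses for the assumed values, a 40% slider on a 10 GB card ends up over budget, which matches the paging behavior described above.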

cl_drawperf nowadays has a VRAM column that’ll show VRAM usage, for example on my system right now it’s as follows:


… which means ‘1.5 GB used by the game’s D3D11 device, 5.6 GB used on the entire system, 23.8 GB total’. This also indicates that the workload I’m running now is in fact using 4.1 GB of VRAM on background apps (+ NUI). Checking Task Manager (Performance → GPU) a bit after exiting the game confirms this: my idle non-game workload is using ~3 GB, with the top consumers (Details, with the ‘Dedicated GPU Memory’ column visible) being chrome.exe, dwm.exe, Code - Insiders.exe, mstsc.exe and WindowsTerminal.exe.

To reduce this lag we actually briefly had some behavior in-game that would make it not allocate anything over the available VRAM, but since some servers have so much oversized content that the required/pinned content (network players, weapons, and vehicles) would already fill the entire budget on low-end GPUs, this had to be reverted.


Thanks for the heads up, I was not expecting that.
I guess I don’t need a texture budget that high as I don’t even have in-game issues, and yeah, NVIDIA Broadcast is a bit too hungry.

I have an interesting note regarding a detail we found when trying to remedy ‘stutters’ for players on my server. We found that actually installing NVE basically eliminated all stutters, and other players were reporting that QuantV also had the same effect. Any idea why graphics modifications can contribute to basically eliminating these hard freezes?

These hitches were also an issue for me when running just vanilla GTA5, so possibly worth seeing if it’s FiveM specific.

Personally, I ended up running DDU and disabling a Windows feature called SuperFetch. I can finally run two clients at mid-high settings again, whereas I was struggling to run one at low settings for about a year.


I finally fixed it when I capped my game at 100 FPS. I never saw the issue again.

I can still recreate the issue if I disable that frame cap.