GTA5.exe!sub_ crashes and ped pool crashes

Before continuing, please make sure that your issue/crash has not been filed before on this forum. If it has, please provide additional information in the existing forum topic by filling out the template there.

To provide valuable feedback for OneSync issues or crashes, please fill out the following template as much as possible.


Using canary? No
Operating system: Windows Server 2019 Standard
Artifact version: 2967
System specifications: AMD Ryzen 7 (8-core) | 32 GB RAM


Summary: Many crashes including the errors:
GTA5.exe!sub_1405EFF9C (0xe8) | GTA5.exe!sub_1405104BC (0xd2) | GTA5.exe!sub_140F07288 (0x6a) | GTA5.exe!sub_1405A0DB4 (0x3d)
as well as some ped pool crashes.

Expected behavior: Not to crash
Actual behavior: Crashing at random times. Players can be doing anything from a job, driving or walking and they will crash.
Steps to reproduce: None found so far. We have been trying to look for a cause but can't seem to locate anything at the moment.
Server/Client? Client crashes
Files for repro (if any): n/a
Error screenshot (if any): No screenshots; the players are sending in their dumps instead.
.dmp files/report IDs: 15 .dmp files attached (83.1 KB – 1.8 MB each)

Ped pool dump:
a_m_m_business_01: 53 entries
a_m_y_business_03: 49 entries
a_f_y_business_01: 48 entries
a_m_y_busicas_01: 35 entries
mp_m_freemode_01: 24 entries
mp_m_shopkeep_01 (script loffe_robbery) (no netobj): 16 entries
a_m_y_business_01: 8 entries
a_m_y_business_02: 6 entries
a_f_m_business_02: 2 entries
a_c_cow (script prp-nonwljobs) (no netobj): 1 entries
a_c_coyote: 1 entries
a_c_crow: 1 entries
a_c_pigeon: 1 entries
a_m_m_hillbilly_01 (script prp-moonshine) (no netobj): 1 entries
a_m_m_hillbilly_01 (script prp-nonwljobs) (no netobj): 1 entries
a_m_y_ktown_01: 1 entries
a_m_y_stwhi_02: 1 entries
mp_f_freemode_01: 1 entries
s_f_m_shop_high (script prp-serverscripts) (no netobj): 1 entries
s_m_m_doctor_01 (script prp-pillboxheal) (no netobj): 1 entries
s_m_m_migrant_01 (script fishing) (no netobj): 1 entries
s_m_m_scientist_01 (script prp-pillboxheal) (no netobj): 1 entries
s_m_y_autopsy_01 (script prp-pillboxheal) (no netobj): 1 entries
s_m_y_dealer_01 (script prp-illegaljobs) (no netobj): 1 entries

Any additional info:
I have attached many different dump files that my community has sent in today and yesterday. I simply cannot find a solution for this at the moment, and I am still going through old commits from when the first GTA5.exe!sub_ crash occurred.

For the ped issue, I have gone through all scripts that use peds and made sure that any unwanted peds are released with “SetPedAsNoLongerNeeded”. However, this still seems to occur, just not as often.
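For reference, the cleanup pattern boils down to something like this minimal sketch (plain JavaScript, runnable outside the game; `trackPed` and the `release` callback are hypothetical stand-ins — in a real FiveM client script `release` would be the `SetPedAsNoLongerNeeded` native):

```javascript
// Minimal sketch: track every ped a script spawns and release them all
// once they are no longer needed, so the game can recycle their pool
// slots instead of leaking entries until the ped pool fills up.
const trackedPeds = [];

function trackPed(pedHandle) {
  trackedPeds.push(pedHandle);
}

function releaseTrackedPeds(release) {
  // In FiveM, `release` would be SetPedAsNoLongerNeeded.
  for (const ped of trackedPeds) {
    release(ped);
  }
  trackedPeds.length = 0;
}
```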

From the above pool dump, I had a look at what uses “a_m_m_business_01”, and it seems to be the clothing script where you can select a ped.

Please let me know if there is any additional information needed for this report.

Any advice or help would be greatly appreciated!

Has anyone got any info on this? I have since tried different artifacts, removed MLOs, and stopped scripts, but people still seem to get the same crash.

Until you can provide this, no, you won’t get any magical ‘info’ or ‘fix’.

Since we don’t have access to any server affected by this kind of issue, we can’t investigate it, or fix it, and as nobody provided any reproduction steps, we can’t make ourselves affected by this either.

So, can I ask what information is in the crash dumps?

I am not really sure what to tell you considering other people that have the same issue have no way of pinpointing the issue either.

I am hardly asking for magical info or a fix.

Nothing useful for investigating the cause of these, sadly, since it’s some server-side state leading to them. The best option (beyond client crash dumps) is to provide a Task Manager dump from the server side (right-click the right FXServer.exe -> Create dump file) at a time when a lot of players are having this kind of issue.

(these dumps get a bit big, but compress well using WinRAR/7-Zip)

Sure thing, I never actually knew I could do that.

One thing I might have noticed is that the crash reports happen when the player count is above 45? I could be wrong, but I will keep an eye on that today, and I’ll try to gather a few dumps like you mentioned.


So I have a dump taken the way you asked.

Hope you are able to spot something in this. Our server will be busy soon, so I’ll get another dump later.

Right - not directly related to the recent influx of these issues some other folks are having (I guess), but I did notice you’re on server build 2840, which I believe was from a bit of a nasty range in regard to replication issues.

That is correct; however, if I move to the latest build we seem to experience other issues, so I stuck with this build as at the time it seemed more stable.

I’ll update the artifacts and post a new dump tomorrow while I’m at it.

To avoid this type of ‘long list of peds’ crash you can just blacklist most of these with a very simple script. However, if you add that blacklist, you’ll get a new crash with the message “Ped Pool Full, Size == 180” - I fixed all of these errors by changing the artifacts to XXXX. I hope I helped :smile:
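As a rough illustration of the kind of blacklist meant here (a plain-JavaScript sketch, runnable standalone; the model names are examples taken from the pool dump earlier in the thread — in an actual FiveM script each name would be hashed and the matching population spawns suppressed or deleted):

```javascript
// Sketch of a ped model blacklist: models on the list get filtered out
// before (or right after) they spawn. In-game, the check would run
// against a spawned ped's model hash rather than a string.
const blacklistedModels = new Set([
  'a_m_m_business_01',
  'a_m_y_business_03',
  'a_f_y_business_01',
]);

function shouldSuppress(modelName) {
  return blacklistedModels.has(modelName);
}
```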

Are the people posting experiencing this running normal onesync, or legacy?

That’s not even a solution, for it would lead to basically no population spawning anymore at all.

The ped pool crash is minor compared to the other listed crashes; however, I included it to see if anything could be spotted within any of the dumps, or if there was any known information about it.

@deterministic_bubble I am hoping you could shed some light on this: does manually upping the ped pool values in the gameconfig.xml actually do anything? I’ll admit I have done this myself and I have not had any ped pool issues, and if I am honest I rarely get the crashes listed, but the same can’t be said for the rest of my players.

I will hopefully have an updated server dump later tonight, as I will be updating to 2967 at 5pm.

nope, the file likely even gets reverted on first launch :confused:

Can confirm, same problems!

I would like to provide a little bit of what I have noticed.

Running an older build we are not really seeing pool crashes; however, we do see a ton of nuts-ten crashes. To tackle that, we attempted getting onto a newer server build without Infinity. Here is what we attempted:

When trying 3071 linux legacy onesync

  • Ped pool crashes started showing up after around 40 players. It was probably timing, but it seemed like most of them were regional players. I restarted my client, and when rejoining I started getting the crash as well. From what I can see, it happens right when spawnmanager is ready to toss you in.

When trying 2911 linux legacy onesync

  • This is our current version, and for the most part we see fewer crashes than we had before with an older build (which makes sense). Peds start becoming glitchy around 40 players.

For both instances we are setting peds in a frame. We also noticed fewer nuts-ten crashes the higher up we went. We tested the recommended build as well, though I do not remember the test results.

It is my belief that migrating off of legacy will help solve most of our issues. There are some other stats that I have been gathering, watching the Discord, and generally keeping an eye on things. My assumption is that most of these issues are caused by people being on update channels that are not current, the biggest offender being OneSync legacy.

We are in the process of doing this now, and will be doing testing on onesync non-legacy soon.

I will also try to find some way to reproduce this problem without needing to get to higher player counts.


Alright, might have something. I created a client-side loop over all peds within range, and noticed some (maybe) abnormal activity with peds being recreated. These tests were conducted using Infinity, no convars besides population were used, and the only player online was me.

Edit: the found_ago value is off; I forgot I am updating it in the loop every time a ped is seen. It is showing last seen, which will always be around the same value because of the interval timer.

Reading these logs -

-- When a new ped is found
[ time:30.0s ] [ type:Female ] [ active_peds:10 ] Ped 93186 was found

-- When a ped is lost
[ time:30.0s ] [ type:Michael ] [ lost_count:7 ] [ found_ago:0.012 ] Ped 86018 was lost
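The loop behind these logs amounts to diffing ped snapshots between ticks; a minimal sketch of that diff (plain JavaScript, runnable standalone — in the actual client loop the snapshots would come from enumerating nearby peds each interval, which is an assumption about the setup, not the poster's exact code):

```javascript
// Compare two snapshots of ped ids taken on consecutive ticks and
// report which peds appeared ("found") and which vanished ("lost").
function diffPeds(previous, current) {
  const found = [...current].filter((id) => !previous.has(id));
  const lost = [...previous].filter((id) => !current.has(id));
  return { found, lost };
}
```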

Mission row (ymap loaded)

Directly after server was restarted, and client was restarted.

Moved to strawberry store/ car wash

After taking the above logs and moving away from Mission Row, it seems like normal behavior.

Moved back to mission row

Behavior seemed to turn abnormal again on returning to Mission Row

Flew from Mission Row to the middle of the forest, near the nude camp

I noticed (and have noticed for a while) that we don’t have animals, but I do see console logs of them spawning; a decent amount of seagulls. Besides animals, behavior here seems to be normal, even when flying quickly past non-NPC-spawning areas.

Out of curiosity…

  • There seems to be a “point” where this starts happening; the second I move into the “zone” it starts spamming the ped creation and lost prints
  • The only place I saw this happen is Mission Row
  • At one point I was able to stand at Mission Row without it spamming or showing abnormal activity, but I am unable to make that happen again

I hope this helps figure something out


Here is an updated dump on the latest recommended build

If you’re looking at peds in cars, that’s 100% expected behavior. To be more certain, you should be logging network IDs, not script handles: if the same network IDs constantly pop in and out with the exact same model for each instance, you can be sure of there being an issue.
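One way to act on that advice (a hypothetical sketch in plain JavaScript, runnable standalone; in the game, the key passed to `observe` would come from the `NetworkGetNetworkIdFromEntity` native, which is an assumption about how it would be wired up):

```javascript
// Count how often the same network id is re-observed as "newly found".
// Local script handles get reused, but network ids identify the
// replicated entity, so repeated churn of one net id with the same
// model points at a replication problem, not normal pool recycling.
function makeChurnTracker() {
  const seenCounts = new Map(); // netId -> times observed as "new"

  return {
    observe(netId) {
      const count = (seenCounts.get(netId) || 0) + 1;
      seenCounts.set(netId, count);
      return count;
    },
    // Net ids observed as "new" at least `threshold` times.
    suspicious(threshold) {
      return [...seenCounts]
        .filter(([, count]) => count >= threshold)
        .map(([netId]) => netId);
    },
  };
}
```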

Hmmm, alright; it was only doing this at Mission Row. It’s like it was trying to spawn 10-15 peds on my client, failing, and then retrying, which was the odd behavior I was pointing out. It felt almost like there was a bad generator or something. From watching the logs it was pretty easy to see about 15 peds get recreated every second, and then my client loses them on the next thread tick (at 10 ms).

I will switch to logging network IDs, and also log information from OneSync with entityCreated. Will get back to you soon with results.