Native audio hanging & lagging voices (b2189)

Hello folks. I started noticing this with native audio last weekend (2021-03-13), when both my pal and I were on Canary. At the time, stable did not have the same issue.

Details
Client version: Canary b2189 (specifically tested on 2021-03-18; issue first encountered on 2021-03-13)
Server artifact: #3679 and #3652 Windows ← The video below was taken on #3652
Voice parameters: voice_use3dAudio false, voice_useSendingRangeOnly true, voice_useNativeAudio true

Fully reproducible on an unmodified #3679 server. Only occurs with native audio enabled; with 3D audio, the issue does not occur.

Issue
It appears that if MumbleSetVolumeOverrideByServerId is run on a target a second time (within a relatively short time span? unconfirmed), the target’s voice starts lagging horribly.

Expected Behavior
I use MumbleSetVolumeOverrideByServerId to allow long-distance communication, like talking on a radio/phone. What should happen is that this sets the volume of the transmission and overrides distance calculations and 3D audio. When using -1.0, it should reset so that the player can be heard locally again.

Current Behaviour
When MumbleSetVolumeOverrideByServerId is run a second time on a player, that player’s voice becomes extremely laggy or hangs completely. (The scruffy voice seems to clear 15-30 seconds or so after MumbleSetVolumeOverrideByServerId has been set to -1.0.)

Repro
I ran this snippet in a command to simulate someone starting to speak on a radio:

MumbleSetVolumeOverrideByServerId(2, 0.5) -- Player started sending on radio
Wait(3000)
MumbleSetVolumeOverrideByServerId(2, -1.0) -- Player stopped sending on radio
print(1) -- Print so I can see in-game when we reset
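For context, the snippet above was run from a client-side command handler, roughly like this (a sketch; the command name is my own invention, and the server ID 2 is hardcoded just for this test):

```lua
-- Client-side repro sketch. MumbleSetVolumeOverrideByServerId, Wait, and
-- RegisterCommand are standard FiveM natives; 'radiotest' is an assumed
-- command name and server ID 2 is the test target.
RegisterCommand('radiotest', function()
    MumbleSetVolumeOverrideByServerId(2, 0.5)  -- Player started sending on radio
    Wait(3000)
    MumbleSetVolumeOverrideByServerId(2, -1.0) -- Player stopped sending on radio
    print(1) -- Print so I can see in-game when we reset
end, false)
```

Running `/radiotest` twice in a row on b2189 is enough to trigger the lagging voice for me.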

Video
Notes: You can see in the video when his voice should have been reset. He had completely stopped transmitting by the end of the video. After an unknown interval, I can hear him normally again.

https://streamable.com/e/7zgklf

Link in case the video doesn’t load: https://streamable.com/e/7zgklf

Sorry it took me a week to send this in, should have said something 7 days ago :upside_down_face:


Same video: 2021-03-20_13-23-42


We have strange lag with Mumble on our server that only happens when the player count is high. By high I mean over 64. Any suggestions?

Doesn’t seem related to this topic; use a channel-splitting resource.


So what you’re trying to say is that people should use another resource to make the native audio work? Fascinating.

How is that “fascinating”? We - as in the Cfx.re team - don’t run any server, nor do we play/understand RP, so we are unable to make any one-size-fits-all resource, let alone test one. The same goes for any potential routing optimizations to make routing faster with a crazy number of players in one channel.

Also, everyone has different needs: default voice works fine as it’s always “global” unless you use proximity config scripts, in which case it wouldn’t even make sense to do channel splitting, so that’s not offered by default either.

Even more so, this is entirely off-topic to the original bug report, which is about a build 2189-specific bug where the game code (written by R*, not us) behaves differently. It can be worked around by using 1604 or 2060, though.


Yes of course. God forbid you have to write some code!!!

It’s not a conversation about a one-size-fits-all; the point is that your artifacts can handle 500+ people per server, but the native voice is clearly struggling. The solution should be able to handle the capacity advertised.
I’ve seen other people suggest that server owners increase their bandwidth, which is a ridiculous statement; VoIP doesn’t need 1 Gb up/down as others have claimed here. So it seems that anyone running more than 64 people should either do channel splitting or use an external VoIP resource like TeamSpeak.


Which it can, as long as your code using proximity voice also splits by channels. Note that by default “the solution” doesn’t have any proximity voice; I doubt that with “500 players” all talking at once you wouldn’t already be using a proximity voice resource yourself.

No system can route 500×500×N voice packets per second cleanly, not even “teamspeak”; I suspect whatever plugins you use for that also do custom routing restriction, similar to what channel splitting (as in “pma-voice” and similar scripts) would do here.
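The channel splitting described above can be sketched client-side in a few lines. This is an illustrative sketch only, not pma-voice’s actual implementation: the grid size, update interval, and channel-numbering scheme are arbitrary assumptions of mine; MumbleSetVoiceChannel, CreateThread, GetEntityCoords, and PlayerPedId are standard FiveM natives.

```lua
-- Minimal grid-based channel splitting sketch (client-side).
-- GRID_SIZE and the cell-to-channel mapping are assumptions for illustration.
local GRID_SIZE = 250.0 -- metres per grid cell (assumed)

CreateThread(function()
    local lastChannel = -1
    while true do
        local pos = GetEntityCoords(PlayerPedId())
        -- Derive a deterministic channel ID from the player's grid cell.
        local cellX = math.floor(pos.x / GRID_SIZE)
        local cellY = math.floor(pos.y / GRID_SIZE)
        local channel = (cellX + 512) * 1024 + (cellY + 512)
        if channel ~= lastChannel then
            -- Only players in the same Mumble channel get voice routed to each other,
            -- so the server no longer fans every packet out to everyone.
            MumbleSetVoiceChannel(channel)
            lastChannel = channel
        end
        Wait(1000)
    end
end)
```

Real resources do more than this (e.g. also listening to neighbouring cells so players at a cell border can still hear each other, plus radio/phone voice targets), but this is the core idea that cuts the routing fan-out.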

I am experiencing the same with an external murmur server.
I noticed that the CPU usage of murmur is high when this starts happening.
But I hadn’t checked the CPU usage before the recent changes, so maybe it is unrelated.
Which is strange, because from what I understand murmur should not be CPU-intensive.


Apparently this issue is b2189-specific. Switching off b2189 to any other version should prevent it from occurring until, hopefully, one day someone with more brains than me can actually fix it.

My solution to the Mumble voice issues like stuttering etc.:
Follow what bubbles says about splitting channels.
I already have a modified mumble-voip that works with radio, calls, and voice proximity, where the channel is based on the player’s current location. It’s like the grid system from pichot.
This prevents some of the Mumble voice stutter/lag bugs, especially with a high number of users.

I would like to release it, but I doubt re-releasing the same script is allowed.

Could be fucking awesome if you did…
We have literally tried everything, and this stutter shit is so annoying.
If you find any fix for server version b2189, please let me know! :smiley:

What “everything” are you referring to?
A voice stutter issue can be a bandwidth problem, firewall, sysctl settings, number of players, server build, or lastly Mumble being bugged.
We don’t have any stutter issue anymore, but it’s still in the testing phase at the moment, so it’s not a 100% sure fix.

The only fix for us was to change the server build.
If we use build 2189 the voices lag/stutter, but any other build is good and doesn’t have problems.
Did you find any solution to fix this on server build 2189?

If you are talking about client build 2189 (Cayo Perico)?
It’s a client problem; you should wait for bubbles to assist you.

Server build = server fx version

Yeah, client build.

Confirming the issue still exists in artifact 3810, b2189 (b2060 worked fine). Same issues as OP video.

@nta Any statement on applying the native fix from the regular build to b2189?
Is applying the same native fix on b2189: possible? On the todo list? Impossible?

What “native fix” “applied in regular build”?

The “regular build” and the “2189 build” run exactly the same code. This is not a case of an issue having been fixed on only one of them.

Also, you’re not entitled to a “statement” at all especially for an issue I’m unable to reproduce and therefore unable to fix as well.
