I’m trying to set up nginx as a caching proxy with the fileserver option, following the Cookbook guide.
It’s working great with awesome loading times, except that sporadically assets fail to download, as shown in this picture:
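For reference, the relevant part of my config looks roughly like this (the cache path, zone name, domain, and backend address are placeholders rather than my exact setup):

```
# Inside the http { } block: on-disk cache (this directory is what gets wiped when “clearing the nginx cache”)
proxy_cache_path /srv/nginx-cache levels=1:2 keys_zone=assets:48m max_size=20g inactive=2h;

upstream fivem_backend {
    # FXServer box (placeholder address and port)
    server 203.0.113.10:30120;
}

server {
    listen 80;
    server_name files.example.com;

    location / {
        proxy_pass http://fivem_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;

        # Serve resource assets out of the on-disk cache
        proxy_cache assets;
        proxy_cache_key $request_uri;
        proxy_cache_valid 200 1y;
    }
}
```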
We’re also experiencing this. It seems to affect random assets (and random clients too).
This behavior appeared when we updated to FXServer build 5053 (somewhere between 4527 and 5053; my apologies that I can’t point to an exact artifact, we only jump between recommended builds).
Well, then I guess you have the repro setup above.
Repro recipe would be as follows:
Set up nginx (I’ve tried both 1.14.0 and 1.16.0) on Debian
Set up the fileserver as shown above
Move around to download assets until the error occurs; if it doesn’t occur, delete ‘server-cache-priv’ on the client to re-download the assets and provoke the error
Clear the nginx cache and the resource assets will download just fine again (see the note below)
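One thing that might help narrow it down (just an idea on my side, not something from the cookbook setup): exposing nginx’s cache status on each response, so you can tell whether a failing asset was served from the cache or pulled straight from the FXServer:

```
location / {
    # ... existing proxy_pass / proxy_cache directives ...

    # $upstream_cache_status reports HIT, MISS, EXPIRED, etc. for every response
    add_header X-Cache-Status $upstream_cache_status;
}
```

A failing download that reports HIT would point at a corrupt cached response, while MISS would point more towards the backend or the network.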
Have you tested this ‘repro setup’ yourself from first principles? (i.e. without already having a set of ‘move around’-type assets of one’s own, or a server with a lot of players to make this race condition or whatever more likely?)
As I don’t have either, I don’t ‘have the repro setup above’.
If it’s any help, pre-loading stream assets in the loading screen never fails; it’s always in-game.
EDIT: After further testing, the above statement is false.
When I was solo-testing this, everything seemed fine; when two of us joined simultaneously, the error happened already in the loading screen while pre-loading assets. Could this maybe hint at a race condition?
No, that wouldn’t help, since the issue isn’t going to be something induced client side.
I guess you don’t want a fix then, if you’re all like ‘lol i can’t share the creator’s work!!!’, even though without a fix that ‘creator’s work’ is useless.
You could try figuring out if you can induce it with any set of random assets that aren’t some ‘creator’s work’ and link those…?
The problem we encountered seems to have been network-related: we’ve moved our FiveM box away from OVH (though the cache/fileserver itself is still at OVH), and now we have no problems; everything is working perfectly.