Suggestion for CFX.RE & Server developers

Hi there, I recently encountered multiple issues that are a pain to resolve but could easily be addressed with a change in the FiveM client.

A lot of players use the Chrome developer tools to spoof requests, modify XHR calls or remove elements from the NUI, so I have a question / suggestion.

Is it possible to add a client agent ID, or any information indicating whether a player is using a release that enables developer tools? As a server owner, on my production server I could then simply drop players that don't run the base release, rather than checking every possible way to spoof things in every script.

I fully understand that server developers can resolve these issues with a lot of work, but a simple agent check would solve most of them in the blink of an eye.

BR

Any release ‘enables developer tools’. There’s no release without ‘developer tools’.

It wouldn’t, since if someone is out to be malicious, they can bypass said ‘simple agent check’ and stuff still wouldn’t be solved, unlike when you fix your scripts, where there’d actually be no means to bypass anything.

Again, the same which applies for normal web applications applies here: never trust user input. There is no ‘quick fix’ for security.

I agree about never trusting the client. From my point of view, we are trying to achieve the same experience across the server for all players, but when some remove NUI elements, etc., it's just annoying, since not all players are like-minded.

As far as I know, when using the base release I can't open dev tools or use debug commands like resmon, netgraph, …

Or are there other ways to enable them?

And about bypassing the agent check: why do you think it would be so easy to bypass if it's done at the native level? Similar to the user data that is passed when connecting, there would also be a record of the game build version the client is using, which I, as server maintainer, could use for anything.

I'm leaning more towards not checking whether a player has dev tools enabled, but simply not allowing players on the server that don't use the "base release".


Regarding the last point about security, I just did a small test:

  1. I found an event that does a pretty large query;
  2. Injected code into that plugin's NUI;
  3. Wrote a simple JavaScript loop;
  4. Executed it and saw that just by injecting JS I can make server usage go full throttle:
setInterval(function () {
        // "script" / "eventThatHasHighLoad" stand for the resource and NUI callback found in step 1
        $.post("https://script/eventThatHasHighLoad", JSON.stringify({}));
}, 20);

This is a very basic example, but I believe you understand that in an event chain that goes
NUI → client → server, where the server does any kind of SQL query or array calculation, this simple trick will create unbounded CPU overhead, because the looped requests can be sent faster than the server can handle the calculation. It's just a matter of finding a loop interval that doesn't cause a network reliability crash on the client.
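
For context, the chain being abused here typically looks something like the sketch below on the script side. This is only a minimal illustration in Lua; the resource, callback and event names mirror the made-up example above and are not from any real script:

-- client-side: the NUI callback that every $.post from the loop above ends up hitting
RegisterNUICallback('eventThatHasHighLoad', function(data, cb)
    -- nothing here throttles how often the NUI may call this
    TriggerServerEvent('script:server:eventThatHasHighLoad', data)
    cb('ok')
end)

-- server-side: one trigger per NUI post; any SQL query or heavy array
-- calculation placed here runs at whatever rate the client chooses
RegisterNetEvent('script:server:eventThatHasHighLoad')
AddEventHandler('script:server:eventThatHasHighLoad', function(data)
    -- imagine an expensive database query or array calculation here
end)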

Yes:

  1. Launching the game with +set moo 31337 as a launch option enables all console commands, even in production releases.
  2. Using http://localhost:13172/ in an external browser also leads to the Chrome dev tools, no matter the client configuration.
  3. Not to mention, client-side cheats, or even other less-detectable things (such as packet injection) screw with expectations no matter what, requiring anything server-side to be written to take into account abuse - see ‘there is no quick fix’ above.

That would still be data submitted by the client, which still is not a fix at all. Similarly, it’d not be indicative of any of the myriads of other ways clients can send “incorrect” data.

The same mitigation applies there that it would everywhere else on the web - rate limits, for example. Not ‘trying to detect tampered clients using signals that clients send and therefore can tamper with themselves’.
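
As a rough illustration, a server-side check of that kind might look like the following Lua sketch; the event name and the one-second threshold are arbitrary placeholders, not a recommendation:

local lastRequest = {}

RegisterNetEvent('eventThatHasHighLoad')
AddEventHandler('eventThatHasHighLoad', function()
    local src = source
    local now = GetGameTimer()
    -- allow at most one call per second per player; anything faster is ignored
    if lastRequest[src] and (now - lastRequest[src]) < 1000 then
        return
    end
    lastRequest[src] = now
    -- ... the expensive query / calculation only runs past this point ...
end)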


Makes sense. To summarize: do you see adding the ability to check whether a client has launch parameters added, and which build he is using, in order to extract a list of possible cheaters and spoofers, amateurs who don't know how to use Wireshark or similar tools, as too much effort just to minimize the issue?
I'm 100% on board that it's not a solution, just an extra step to make it a bit more complicated.

If we split the issue into two:
1 - Detecting whether launch params are added, or whether the build version has dev tools, wouldn't be that hard, but would allow dropping part of the abusers;
2 - Packet injection is more complicated; I'm on board that it requires rate limiting & well-looked-after scripts.

Aren't there effective ways to encode packet contents for initial requests, so sniffers don't get the option to decode and spoof them? Some sort of signatures, hashes, etc.? Sorry, that's not my field, just a wild guess.

The issue is, rather, that if you reduce awareness of (and ability to commit/check for/test) abuse to a select crowd that’s ‘aware’ of complicated means, you’ll then find people are less and less willing to mitigate abuse properly in server-side code, which leads to the little abuse that exists hitting way harder, which leads to a few curious ecosystem outcomes, which are curiously analogous to those in broader society: as the world has become safer, the little excesses that challenge our safety get overblown and lead to public demand of disproportionate responses, leading to a lose-lose situation for basically everyone.

To this extent, ‘quick fixes’ of any sorts, while they may increase the barrier of entry to ‘abuse’, have a long-term net-negative effect as they do not solve any root causes, nor do they properly fix the ‘broken’ systems lacking checks, rate limits, or the likes.

We’re already at a point where there’s a heavily abusive market for ‘quick fixes’ to such abuse over here, and the ecosystem is already made infinitely more complex by these competing dynamics that seem to ignore and often amplify a lot of problems (see people who are ‘blocked’ by built-in anticheat measures because they were ‘testing’ a third-party ‘anticheat’ which they were basically fooled into buying - testing it using a paid abusive cheat, again funding the abuse ecosystem from both sides - to ‘fix’ a barely-relevant problem that should be fixed properly by script authors, then leading into a six-way web of complexity between the server owner, players on their server, us, resource authors, cheat vendors and ‘anticheat’ vendors, each with conflicting interests, most of which not too motivated to further actual progress, wasting so much time on all sides).

The simplest fix is still to properly check things on the server side - any other attempt at a fix is insufficient and will lead to issues in the long run.

An additional note - if people can abuse, but said abuse will not be visible to anyone else, nor will it have any lasting effects that annoy anyone, this removes a lot of motivation for said abuse, and a lot of motivation for the anger at said abuse, since this particular abuse - online cheating - can only exist if the ecosystem lets it, and ‘getting mad’ and scrambling to fix stuff right away only fuels the powers at play for this. You don’t have to prevent all abuse or ‘punish’ people/take away abilities, just preventing lingering effects and reducing impact goes a very long way.

You’ve made a good point.

P.S.
As a developer I would love to see a native that can return all registered client/server events :), that would help a lot to start fixing all the holes in the ship.

grep is a good starting tool for a code audit - there doesn’t need to be any runtime support for that.

It’s a shame that we’re basically over 20 years into the public internet, and still people seem to learn to code client-server systems without taking into account this core principle of ‘do not trust client input’ and the likes, which goes well until it doesn’t. :stuck_out_tongue: In this sense, making “it doesn’t” a thing that happens as early as possible is perhaps the best education tool in this regard.

As to the original question - here’s a classic StackOverflow thread about a similar concern: javascript - How to disable browser developer tools? - Stack Overflow - showing again that this sort of mitigation does not make sense in the long run.

Also, since sv_puremode was added, maybe it's an idea that, if it's enabled, it also disables developer tools?
Or just another convar. It would be a small step in the direction of giving server owners the ability to buy some time for fixing the underlying issue. In scenarios where anyone needs to debug a production server, they can enable it and track the issue, but on a day-to-day basis developer tools would be unavailable in the build itself. That leaves us with only one problem to resolve → packet spoofers.

Ideally not. Pure mode is for game hardening primarily, not script hardening. Game hardening is a much harder issue to properly solve (to the extent of ‘kind of impossible, given the game runs client-side anyway’) than script hardening.


Also, going back to your earlier point:

A PC is still an open system and the user could still noninvasively get the encryption key from their game client.

I kinda agree with both sides:

I personally always secure my scripts to the maximum level; if a feature can cause problems, it won't be added.

So basically I agree with @nta.
BUT, I also agree with @kompots that blocking dev tools is something that is okay.

Developer tools, as the name states, are for developers. It's not elegant to expose this feature to the basic user (not an excuse not to secure your scripts). Users can delete elements, mess with positioning and trigger events, which is not something that should be available to everyone (of course cheaters can do it too, but it's more risky and can result in a global ban). And just as there isn't some kind of public "runcode" for the basic user, there shouldn't be dev tools. Just as you can't simply "delete" colors or images being drawn with natives, there shouldn't be an option to delete elements from the dev tools.

It's just easier to gain unfair advantages and mess with stuff when it is given to you so easily:
you wanna make the screen black through NUI so they wouldn't see anything → just delete the element…
And no one is ever gonna find out that you've done that.

I'm personally going to search for methods to kick players with open dev tools, EVEN THOUGH it isn't gonna matter at all (security-wise) as I made sure everything is secured, but still.

Well, look at it from a bit of a different perspective: most people running servers are beginners, or people who do this out of curiosity or as a hobby.

So, using FiveM, they expect a certain level of "pre-done" things, where security would be one of those in most cases. Suddenly they realize they have been thrown into a hell of abusers and try to find the quickest / easiest solution. Not to mention that Tebex, where you get encoded resources, is another problem …

Copyright of code vs. trusting a random guy that there are no memory leaks & the code is not exploitable.

Everyone who has worked in development for a while knows that in the real world, fixing things goes by the flow:
hot fix → bug fix → it's not a bug, it's a feature → release

So I'm with you that the final solution is fixing garbage code on the server + setting up proper data validation & rate limits. But that requires a lot of refactoring and takes time, and for that time we're looking for a somewhat "quick fix" to mitigate part of the problem. We have the capacity to trace in the logs the abusers that go crazy, but at some point we run out of capacity to trace hundreds of cheaters.

grep
  • With this I will get a single list. What I was thinking is not grepping every time I add a new event, but a native I could call on server startup that collects both event arrays; via AddEventHandler I would then inject additional protection like spam protection & logging, without tracking every single event, but doing this systematically. So my idea would be:

Get the current event list for server events; by the naming schema I could find the ones that, for example, use the DB:

script:server:db:checkForUsers

If the event string contains :db:, I would attach another listener to it which, when triggered, stores a log record of the event being triggered; and also, if a player requests this event before the "cooldown period" has passed, I would (for example) drop the player, or a similar solution (see the sketch below).

Now, I have 4,600 events from a quick search, so adding changes to them manually will turn me into an alcoholic.
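
A minimal sketch of that idea in Lua, assuming the event list is something you maintain yourself (for example, generated once via grep) rather than returned by a native, since no such native exists; the names, cooldown and reaction are placeholders only:

local serverEvents = {
    'script:server:db:checkForUsers',
    -- ... the rest of the generated event list ...
}

local lastDbCall = {}

for _, eventName in ipairs(serverEvents) do
    if eventName:find(':db:') then
        -- allow this resource to receive the client-triggered event as well
        RegisterNetEvent(eventName)
        -- extra listener: the original handler still runs; this one only
        -- logs the trigger and reacts when the cooldown is violated
        AddEventHandler(eventName, function()
            local src = source
            local now = GetGameTimer()
            print(('[audit] %s triggered by %s'):format(eventName, tostring(src)))
            if lastDbCall[src] and (now - lastDbCall[src]) < 1000 then
                DropPlayer(tostring(src), 'Exceeded event cooldown') -- example reaction only
            end
            lastDbCall[src] = now
        end)
    end
end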

For that reason - to prevent inadvertently breaking stuff using it - it’s ‘hidden’ (i.e. more difficult to access) in production mode.

‘Global bans’ are a bad solution in the first place and lead to a lot of work overload for us, both from adding detections, mitigating false positives, dealing with the trenches of the abuse ecosystem, but also a lot of human factor from dealing with people ‘being punished’ in the end for reasons that are not directly and entirely ‘their fault’, but are the fault of the broader abuse ecosystem and factors beyond.

In fact, a lot of these abusive anti-cheat systems in themselves are almost covert advertisements for cheats, reinforcing the cheating ecosystem and giving them more incentive, not less - not to begin about the anti-cheat systems that themselves have close and direct ties to cheat vendors, or those that act like cheats in themselves by exploiting vulnerabilities in platform code for profit and doing everything in their power to prevent us from fixing this.

The start of a proper fix to all of this would be to break the cycle of abuse → anti-abuse quick fixes → fear → more fear by promoting proper design and constructive, freely-available ways of ‘having fun’, rather than an ecosystem driven by perverse incentives to keep abuse alive that feeds off fear.

There should’ve been, in fact, and once the ecosystem allows, there will be. However, the ecosystem is aligned in such a way that this will likely not be the case, as there’ll always be some legacy insecure code running somewhere.

… and why would this be? If someone wants to inspect what you’re doing on their machine, why not let them? If your UI has a bug, why not let them investigate and be able to provide a proper informed report, and perhaps a workaround for themselves?

Why should you punish someone for behavior that has no direct correlation with abuse, when, as you said, no abuse is possible via such on your server, and as such, you’re basically punishing people for… being curious?!

Instead of seeing possible ‘unfair advantages’ and then trying to come up with preemptive mitigations from that viewpoint when no such ‘abuse’ has even been shown yet, why not… leave things be as they are, and deal with such?

For example, removing a blanked-out screen isn’t something with lasting effects, and as such not even worth trying to mitigate beforehand - and if someone does do such in your example, have a laugh, admit ‘heh, didn’t think of that’ or ‘hah, yeah, knew that wouldn’t last’, try to understand what happened, and try to think of a constructive fix to that, whether it be adapting your gameplay design to fit the scenario users expect, having a stern talking-to people who somehow end up seeing through your blanked out view, using the game’s screen fade-out commands, etc., rather than taking preemptive measures that way outweigh the potential damage done here, even if by ‘normalizing’ the ‘yeah just to be sure, got to block dev tools somehow’-style behavior.

(similarly, by the way, we already tend to mitigate ways that people ‘detect dev tools’ and when writing code to “kick players when detected”, you may as such find all players get kicked in platform updates as a result of this mitigation, since, as said, this is a non-fix - at most, log such so you can correlate this against future markers of abuse, don’t try to instantly act and punish)

And, again, sure - but at a certain point the hunt for such quick fixes seems to become a goal in itself, and I personally, as with most of us on ‘this side’, do not want to constantly have to weigh these interests and facilitate this constant hunt blindly, while constantly being pecked upon by all five other sides in this for ‘not doing things well enough’ and all.

Even adding ‘simple’ checks like this can and will have complex consequences way, way down the line, and eventually will come back to bite us as well.

At some point I’d hope the ecosystem, or maybe us, will find a way to get ‘rid’ of low-quality code being the standard (e.g. by having a properly-designed and flexible base set of gameplay code be available for people to use in common use cases), and finally solve this class of issues once and for all, especially since the ‘quick fix’ market seems to actually be dependent on the abuse itself, as well as low-quality scripts continuing to exist, indeed reinforcing the ‘a goal in itself’ state.

As to your other idea,

… you can do this even without platform support already - assuming you’re using JS/TS or Lua, in C# it’s perhaps a little different but by then you have other refactoring tools available due to static typing - by just wrapping AddEventHandler in a script included in your resources to add some baseline rate limiting.
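
For instance, something along these lines in Lua, loaded as the first server script of a resource; the 100 ms window and the drop-the-call reaction are placeholders, and this only covers handlers registered through the global AddEventHandler within that resource:

local originalAddEventHandler = AddEventHandler
local lastCall = {}

function AddEventHandler(eventName, handler)
    return originalAddEventHandler(eventName, function(...)
        local src = source
        -- only rate-limit triggers that come from a connected player
        if type(src) == 'number' and src > 0 then
            local key = eventName .. '#' .. src
            local now = GetGameTimer()
            -- ignore triggers arriving within 100 ms of the previous one
            if lastCall[key] and (now - lastCall[key]) < 100 then
                return
            end
            lastCall[key] = now
        end
        return handler(...)
    end)
end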

by just wrapping AddEventHandler in a script included in your resources to add some baseline rate limiting.

What would be the best way to do this, by the way? I've seen many attempts at it but am not entirely sure which ones would be best :smile:

As a general note, it should be said that I do not wish for this discussion to last for too much longer. I’ve stated my views in this regard, and have observed and taken the views expressed in this topic as well, but am noticing my own calmness and ability to remain considerate without ending up offensive/defensive dropping off steadily, mostly due to the complexity involved in integrating the views expressed by others with my own thoughts on this, and it seems to again be difficult to reach an agreement or steady state here, so I’d rather we leave it at this for the time being.

Since we are at it, may I ask you something about insecure mode? I should specify that I am trying to run FiveM on macOS 14 beta. GTA 5 runs perfectly, without using Parallels or CrossOver, or any emulator. I just can't get rid of insecure mode, which causes the app to crash.