Add high-FPS render pacing and telemetry

This commit is contained in:
server
2026-04-16 10:37:08 +02:00
parent 49e8eac809
commit bfe52a81f9
15 changed files with 1402 additions and 233 deletions

@@ -0,0 +1,307 @@
# Anti-Cheat Architecture 2026
This document proposes a practical anti-cheat design for the current Metin2
client and server stack.
Date of analysis: 2026-04-16
## Executive Summary
The current client contains only a weak integrity layer:
- `src/UserInterface/ProcessCRC.cpp` computes CRC values for the running binary.
- `src/UserInterface/PythonNetworkStream.cpp` calls `BuildProcessCRC()` when the
client enters select phase.
- `src/UserInterface/PythonNetworkStreamPhaseGame.cpp` injects two CRC bytes
into attack packets.
- `src/UserInterface/PythonNetworkStreamPhaseGame.cpp` contains a `CG::HACK`
reporting path.
- `src/UserInterface/ProcessScanner.cpp` exists, but it is not wired into the
live runtime.
- `src/UserInterface/PythonNetworkStreamPhaseGame.cpp::__SendCRCReportPacket()`
is currently a stub that returns `true`.
This is not a production-grade anti-cheat. It raises the bar slightly for basic
binary tampering, but it does not defend well against modern external tools,
automation, packet abuse, or VM-assisted cheats.
The recommended 2026 design is:
1. Make the server authoritative for critical combat and movement rules.
2. Add telemetry and delayed-ban workflows.
3. Add a commercial client anti-cheat as a hardening layer, not as the primary
trust model.
4. Treat botting and farming as a separate abuse problem with separate signals.
## Goals
- Stop speedhack, combohack, rangehack, teleport, packet spam, and most forms of
client-side state forgery.
- Raise the cost of memory editing, DLL injection, and automation.
- Reduce false positives by moving final judgment to server-side evidence.
- Keep the design compatible with a legacy custom client.
## Non-Goals
- Absolute cheat prevention.
- Fully trusting a third-party kernel driver to solve game logic abuse.
- Real-time automatic bans on a single weak signal.
## Current State
### What exists today
- Client self-CRC:
- `src/UserInterface/ProcessCRC.cpp`
- CRC pieces attached to outgoing attack packets:
- `src/UserInterface/PythonNetworkStreamPhaseGame.cpp`
- Hack message queue and packet transport:
- `src/UserInterface/PythonNetworkStreamPhaseGame.cpp`
### What is missing today
- No active process blacklist or scanner integration in the runtime path.
- No complete CRC reporting flow to the server.
- No server-authoritative verification model documented in the current client
source tree.
- No robust evidence pipeline for review, delayed bans, or behavioral scoring.
## Recommended Architecture
### Layer 1: Server-Authoritative Core
This is the most important layer. The server must become the source of truth for
combat and movement outcomes.
### Movement validation
The server should validate:
- maximum velocity by actor state
- acceleration spikes
- position delta against elapsed server time
- warp-only transitions
- path continuity through collision or map rules
- impossible Z transitions and fly transitions
Recommended outputs:
- hard reject for impossible moves
- suspicion score for repeated marginal violations
- session telemetry record
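The velocity and elapsed-time checks above can be combined into a single verdict function. The following is a minimal sketch under assumed names and thresholds; none of it comes from the actual server source, and real values would be driven by actor-state data.

```cpp
#include <cmath>

// Hypothetical movement verdict: Accept, feed the suspicion score, or hard reject.
enum class MoveVerdict { Accept, Suspicious, Reject };

struct Vec3 { float x, y, z; };

inline float Distance2D(const Vec3& a, const Vec3& b)
{
    const float dx = b.x - a.x;
    const float dy = b.y - a.y;
    return std::sqrt(dx * dx + dy * dy);
}

// maxSpeed: units per second for the actor's current state (walk, run, mount).
// tolerance: small multiplier so latency jitter scores suspicion instead of
// hard-rejecting; far beyond tolerance is treated as impossible.
MoveVerdict ValidateMove(const Vec3& from, const Vec3& to,
                         float elapsedSec, float maxSpeed,
                         float tolerance = 1.15f)
{
    if (elapsedSec <= 0.0f)
        return MoveVerdict::Reject;       // impossible: no server time elapsed

    const float speed = Distance2D(from, to) / elapsedSec;
    if (speed > maxSpeed * tolerance * 2.0f)
        return MoveVerdict::Reject;       // far beyond any legal state
    if (speed > maxSpeed * tolerance)
        return MoveVerdict::Suspicious;   // marginal: feed the suspicion score
    return MoveVerdict::Accept;
}
```

The `Suspicious` tier exists so that repeated marginal violations accumulate in the telemetry layer instead of producing false-positive rejects.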
### Combat validation
The server should validate:
- attack rate by skill and weapon class
- combo timing windows
- skill cooldowns
- victim distance and angle
- hit count per server tick
- target existence and visibility rules
- animation-independent skill gate timing
This removes most of the value of classic combohack, attack speed manipulation,
and packet replay.
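A server-side cadence gate for attack rate can be sketched as follows. The interval table and jitter slack are illustrative assumptions; real values would come from skill and weapon data, and rejected attacks would also feed the suspicion score.

```cpp
#include <cstdint>

// Hypothetical per-actor attack cadence gate.
struct CadenceGate {
    uint32_t lastAttackMs = 0;
    int violations = 0;

    // minIntervalMs: fastest legal interval for this skill/weapon class.
    // slackMs: small allowance for network jitter (assumed value).
    bool Allow(uint32_t nowMs, uint32_t minIntervalMs, uint32_t slackMs = 30)
    {
        if (lastAttackMs != 0 && nowMs < lastAttackMs + minIntervalMs - slackMs) {
            ++violations;   // rejected attack: count it for the evidence layer
            return false;
        }
        lastAttackMs = nowMs;   // only accepted attacks advance the window
        return true;
    }
};
```

Because the gate only advances on accepted attacks, replaying a burst of packets does not buy extra hits; everything inside the window is rejected and counted.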
### State validation
The server should own:
- HP / SP mutation rules
- buff durations
- item use success
- teleport authorization
- quest-critical transitions
- loot generation and pickup eligibility
### Layer 2: Telemetry and Evidence
Do not ban on one event unless the event is impossible by design.
Collect per-session evidence for:
- movement violations
- attack interval violations
- repeated rejected packets
- target distance anomalies
- client build mismatch
- CRC mismatch
- anti-cheat vendor verdict
- bot-like repetition and route loops
Recommended workflow:
- score each event
- decay score over time
- trigger thresholds for:
- silent logging
- shadow restrictions
- temporary action blocks
- GM review
- delayed ban wave
Delayed bans are important because instant bans teach attackers which checks
worked.
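The score-and-decay workflow above can be sketched with a simple exponential half-life. The half-life and event weights are illustrative policy values, not taken from any existing implementation.

```cpp
#include <cmath>

// Hypothetical decaying suspicion score for one session or account.
struct SuspicionScore {
    double value = 0.0;
    double lastUpdateSec = 0.0;
    double halfLifeSec = 3600.0;   // assumed: score halves each quiet hour

    // Apply time decay up to nowSec.
    void Decay(double nowSec)
    {
        const double dt = nowSec - lastUpdateSec;
        if (dt > 0.0)
            value *= std::pow(0.5, dt / halfLifeSec);
        lastUpdateSec = nowSec;
    }

    // Score one event: decay first, then add its weight.
    void AddEvent(double nowSec, double weight)
    {
        Decay(nowSec);
        value += weight;
    }
};
```

Thresholds for silent logging, shadow restrictions, and GM review would then be plain comparisons against `value`, which keeps the policy tunable without touching the validators.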
### Layer 3: Client Integrity
Use client integrity only as a supporting layer.
Recommended client responsibilities:
- verify binary and pack integrity
- report build identifier and signed manifest identity
- detect common injection or patching signals
- report tampering and environment metadata
- protect transport secrets and runtime config
Suggested upgrades to the current client:
- replace the current partial CRC path with a real signed build identity
- sign client packs and executable manifests
- complete `__SendCRCReportPacket()` and send useful integrity evidence
- remove dead anti-cheat code or wire it fully
- add secure telemetry batching instead of ad hoc string-only hack messages
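As a shape for the "typed evidence events" replacing free-form hack strings, something like the following could work. The event taxonomy and field names are assumptions; a real build would bind the report to the signed manifest, sign the payload, and version it by client build.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical typed evidence event, replacing ad hoc string-only hack messages.
enum class EvidenceType : uint8_t {
    CrcMismatch,
    PackHashMismatch,
    InjectionSignal,
    DebuggerSignal,
    EnvironmentAnomaly,
};

struct EvidenceEvent {
    EvidenceType type;
    uint32_t clientBuildId;   // versioned so the server can interpret it
    uint32_t detailCode;      // subsystem-specific detail, not free text
    uint64_t sessionId;
};

// Batched report, sent periodically instead of one packet per event.
struct IntegrityReport {
    uint32_t clientBuildId;
    std::string manifestId;            // identity from the signed manifest
    std::vector<EvidenceEvent> events;
};
```

Typed events are what make server-side scoring and decay possible; free-form strings cannot be weighted or aggregated reliably.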
### Layer 4: Commercial Anti-Cheat
### Recommended vendor order for this project
1. `XIGNCODE3`
2. `BattlEye`
3. `Denuvo Anti-Cheat`
4. `Easy Anti-Cheat`
### Why this order
`XIGNCODE3` looks like the best functional fit for a legacy MMO client:
- public positioning is strongly focused on online PC games
- public feature set includes macro detection, resource protection, and
real-time pattern updates
- market fit appears closer to older custom launchers and non-mainstream engine
stacks
`BattlEye` is a strong option if you want a more established premium PC
anti-cheat and can support tighter integration work.
`Denuvo Anti-Cheat` is technically strong, but it is likely the heaviest vendor
path in both integration and commercial terms.
`Easy Anti-Cheat` is attractive if budget is the main constraint, but it should
not change the core rule: the server must still be authoritative.
### Layer 5: Anti-Bot and Economy Abuse
Botting should be treated as a separate control plane.
Recommended controls:
- route repetition heuristics
- farming loop detection
- vendor / warehouse / trade anomaly scoring
- captcha or interaction challenge only after confidence threshold
- account graph analysis for mule and funnel behavior
- hardware and device clustering where legally appropriate
Do not rely on captcha alone. It is only a friction tool.
## Proposed Rollout
### Phase 1: Fix trust boundaries
- document which outcomes are currently trusted from the client
- move movement and attack legality fully server-side
- add structured telemetry records
Deliverable:
- server can reject impossible movement and impossible attack cadence without any
client anti-cheat dependency
### Phase 2: Replace weak integrity
- complete binary and pack integrity reporting
- version all reports by client build
- bind reports to account, session, and channel
Deliverable:
- reliable client integrity evidence reaches the server
### Phase 3: Vendor pilot
- integrate one commercial anti-cheat in observe mode first
- compare vendor verdicts with server suspicion score
- review false positives before enforcement
Deliverable:
- enforcement policy based on combined evidence
### Phase 4: Ban and response pipeline
- implement delayed ban waves
- implement silent risk tiers
- implement GM review tooling
Deliverable:
- repeatable response process instead of ad hoc action
## Concrete Changes For This Codebase
Client-side work:
- `src/UserInterface/ProcessCRC.cpp`
- replace simple CRC flow with signed manifest and richer integrity report
- `src/UserInterface/PythonNetworkStream.cpp`
- send integrity bootstrap at phase transition
- `src/UserInterface/PythonNetworkStreamPhaseGame.cpp`
- implement real CRC and integrity packet submission
- replace free-form hack strings with typed evidence events
- `src/UserInterface/ProcessScanner.cpp`
- either remove dead code or reintroduce it as a fully designed subsystem
Server-side work:
- movement validator
- combat cadence validator
- anomaly scoring service
- evidence storage
- review and ban tooling
## Design Rules
- Never trust timing sent by the client for legality.
- Never trust client-side cooldown completion.
- Never trust client position as final truth for hit validation.
- Never ban permanently on a single client-only signal.
- Never ship anti-cheat changes without telemetry first.
## Success Metrics
- drop in successful speedhack and combohack reports
- drop in impossible movement accepted by the server
- reduced farm bot session length
- reduced economy inflation from automated abuse
- acceptable false positive rate during observe mode
## Recommended Next Step
Implement Layer 1 before vendor integration.
If the server still accepts impossible combat or movement, a commercial
anti-cheat only increases attacker cost. It does not fix the trust model.
## External References
- Unity cheat prevention guidance
- Epic Easy Anti-Cheat public developer material
- BattlEye public developer material
- Wellbia XIGNCODE3 public product material
- Denuvo Anti-Cheat public product material
- Metin2Dev discussions around server-side validation, anti-bot workflows, and
weak community anti-cheat releases

@@ -0,0 +1,342 @@
# High-FPS Client Plan
This document describes how to move the current client from a hard 60 FPS model
to a safe high-FPS model without breaking gameplay timing.
Date of analysis: 2026-04-16
## Executive Summary
The current client is not limited to 60 FPS by a single config value. It is
limited by architecture.
Current hard constraints in the source:
- `src/UserInterface/PythonApplication.cpp`
- constructor switches `CTimer` into custom time mode
- main loop advances by a fixed `16/17 ms`
- main loop sleeps until the next tick
- `src/EterBase/Timer.cpp`
- custom mode returns `16 + (m_index & 1)` milliseconds per frame
- `src/GameLib/GameType.cpp`
- `g_fGameFPS = 60.0f`
- `src/GameLib/ActorInstanceMotion.cpp`
- motion frame math depends on `g_fGameFPS`
- `src/EterLib/GrpDevice.cpp`
- presentation interval is also involved, especially in fullscreen
Because of this, a simple FPS unlock would likely produce one or more of these
problems:
- broken combo timing
- incorrect animation frame stepping
- combat desync with the server
- accelerated or jittery local effects
- unstable camera or UI timing
The correct design is:
1. Keep simulation on a fixed tick.
2. Decouple rendering from simulation.
3. Add interpolation for render state.
4. Expose render cap and VSync as separate options.
## Current Timing Model
### Main loop
In `src/UserInterface/PythonApplication.cpp`:
- `CTimer::Instance().UseCustomTime()` is enabled in the constructor.
- `rkTimer.Advance()` advances simulated time.
- `GetElapsedMilliecond()` returns a fixed `16/17 ms`.
- `s_uiNextFrameTime += uiFrameTime` schedules the next frame.
- `Sleep(rest)` enforces the cap.
This means the current process loop is effectively built around a fixed 60 Hz
clock.
### Gameplay coupling
`src/GameLib/GameType.cpp` defines:
```cpp
extern float g_fGameFPS = 60.0f;
```
This value is used by motion and attack frame math in:
- `src/GameLib/ActorInstanceMotion.cpp`
- `src/GameLib/RaceMotionData.cpp`
The result is that update timing and motion timing are coupled to 60 FPS.
## Design Target
Target architecture:
- simulation update: fixed 60 Hz
- render: uncapped or capped to monitor / user value
- interpolation: enabled between fixed simulation states
- VSync: optional
- gameplay legality: unchanged from the server point of view
This lets the client render at:
- 120 FPS
- 144 FPS
- 165 FPS
- 240 FPS
- uncapped
while still keeping game logic deterministic.
## Recommended Implementation
### Step 1: Replace the custom fixed timer for frame scheduling
Do not delete all timer code immediately. First isolate responsibilities.
Split timing into:
- simulation accumulator time
- render frame delta
- presentation pacing
Recommended source touchpoints:
- `src/EterBase/Timer.cpp`
- `src/EterBase/Timer.h`
- `src/UserInterface/PythonApplication.cpp`
Preferred clock source:
- `QueryPerformanceCounter` / `QueryPerformanceFrequency`
- or a StepTimer-style wrapper with high-resolution real time
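One portable way to get the real frame delta is `std::chrono::steady_clock`, which on Windows is typically backed by `QueryPerformanceCounter`. A minimal sketch, with illustrative naming:

```cpp
#include <chrono>

// Minimal real-time frame clock for render pacing.
class FrameClock {
public:
    FrameClock() : m_last(Clock::now()) {}

    // Returns real elapsed seconds since the previous Tick() call.
    double Tick()
    {
        const auto now = Clock::now();
        const std::chrono::duration<double> dt = now - m_last;
        m_last = now;
        return dt.count();
    }

private:
    using Clock = std::chrono::steady_clock;
    Clock::time_point m_last;
};
```

This clock feeds only the accumulator and render delta; the simulation still advances by the fixed tick.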
### Step 2: Convert the main loop to fixed update + variable render
Current model:
- one process pass
- one fixed pseudo-frame
- optional sleep
Target model:
```text
read real delta
accumulate delta
while accumulator >= fixedTick:
run simulation update
accumulator -= fixedTick
alpha = accumulator / fixedTick
render(alpha)
present
optional sleep for render cap
```
Recommended fixed tick:
- `16.666666 ms`
Recommended initial render caps:
- 60
- 120
- 144
- 165
- 240
- unlimited
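The pseudocode loop above can be factored so the pacing math is testable on its own. This is a sketch with illustrative names, not the final main-loop shape:

```cpp
// Accumulator core for fixed update + variable render.
struct FixedStepper {
    double fixedTickSec = 1.0 / 60.0;   // the recommended 16.666666 ms tick
    double accumulator = 0.0;

    // Feed the real frame delta; returns how many simulation steps to run.
    int Advance(double realDeltaSec)
    {
        // Clamp huge deltas (alt-tab, debugger pauses) so we never spiral.
        if (realDeltaSec > 0.25)
            realDeltaSec = 0.25;
        accumulator += realDeltaSec;
        int steps = 0;
        while (accumulator >= fixedTickSec) {
            accumulator -= fixedTickSec;
            ++steps;
        }
        return steps;
    }

    // Interpolation alpha in [0, 1) for rendering between fixed states.
    double Alpha() const { return accumulator / fixedTickSec; }
};
```

The delta clamp matters in practice: without it, a long stall would queue hundreds of catch-up steps and stall the loop further.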
### Step 3: Keep gameplay update on fixed 60 Hz
Do not increase gameplay tick in the first rollout.
Keep these systems on the fixed update path:
- network processing that depends on gameplay state
- actor motion update
- attack state transitions
- player movement simulation
- effect simulation when tied to gameplay
- camera state that depends on actor state
This reduces risk dramatically.
### Step 4: Render from interpolated state
If you want smooth motion at high refresh rates, the renderer must not depend
solely on the last fixed update result.
Add previous and current renderable state for:
- actor transform
- mount transform
- camera transform
- optionally important effect anchors
At render time:
- interpolate position
- slerp or interpolate rotation where needed
- use `alpha` from the simulation accumulator
Without this step, high-FPS output will still look like 60 FPS motion with extra
duplicate frames.
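The interpolation itself is straightforward. A sketch with stand-in types (the engine's own vector and rotation types would be used in practice); normalized quaternion lerp is usually sufficient for the small per-tick deltas here, with full slerp as the safer fallback for large steps:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Linear interpolation between previous and current fixed-update positions.
inline Vec3 Lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

struct Quat { float w, x, y, z; };

// Normalized lerp for rotation, taking the shorter arc.
inline Quat Nlerp(const Quat& a, const Quat& b, float t)
{
    const float dot = a.w * b.w + a.x * b.x + a.y * b.y + a.z * b.z;
    const float sign = dot < 0.0f ? -1.0f : 1.0f;
    Quat r { a.w + (sign * b.w - a.w) * t,
             a.x + (sign * b.x - a.x) * t,
             a.y + (sign * b.y - a.y) * t,
             a.z + (sign * b.z - a.z) * t };
    const float len = std::sqrt(r.w * r.w + r.x * r.x + r.y * r.y + r.z * r.z);
    return { r.w / len, r.x / len, r.y / len, r.z / len };
}
```

At render time, `t` is the `alpha` from the simulation accumulator, so rendered transforms always sit between the two most recent fixed states.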
### Step 5: Separate render cap from VSync
The current D3D present settings are not enough on their own.
Recommended behavior:
- windowed:
- allow `immediate` plus optional software frame cap
- fullscreen:
- keep VSync configurable
- do not rely on fullscreen `PresentationInterval` as the only limiter
Source touchpoint:
- `src/EterLib/GrpDevice.cpp`
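A software cap independent of the present interval can be a small scheduler that hands back a sleep budget before each present. This is a sketch under assumed names; a real implementation would pair the sleep with a short spin for sub-millisecond precision:

```cpp
// Software frame cap, independent of the D3D PresentationInterval.
struct FrameCap {
    double nextPresentSec = 0.0;

    // Returns seconds to sleep before presenting and schedules the next slot.
    // capFps <= 0 means unlimited.
    double SleepBudget(double nowSec, int capFps)
    {
        if (capFps <= 0)
            return 0.0;
        const double interval = 1.0 / capFps;
        double wait = nextPresentSec - nowSec;
        if (wait < 0.0) {
            // We are behind schedule; resync instead of bursting to catch up.
            wait = 0.0;
            nextPresentSec = nowSec;
        }
        nextPresentSec += interval;
        return wait;
    }
};
```

Resyncing when behind (rather than accumulating missed slots) avoids frame bursts after stalls, matching the behavior of the delta clamp in the fixed-update loop.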
### Step 6: Make `SetFPS()` real or remove it
`app.SetFPS()` exists today but only stores `m_iFPS`. It does not drive the main
loop.
Source touchpoints:
- `src/UserInterface/PythonApplication.cpp`
- `src/UserInterface/PythonApplication.h`
- `src/UserInterface/PythonApplicationModule.cpp`
Required action:
- wire `m_iFPS` into the render pacing code
- or rename the API to reflect the real behavior
### Step 7: Audit all 60 FPS assumptions
Search and review every system that assumes 60 FPS timing.
Known touchpoints:
- `src/GameLib/GameType.cpp`
- `src/GameLib/ActorInstanceMotion.cpp`
- `src/GameLib/RaceMotionData.cpp`
- `src/AudioLib/MaSoundInstance.h`
- `src/AudioLib/SoundEngine.cpp`
- `src/AudioLib/Type.cpp`
Expected rule:
- gameplay timing stays fixed at 60 Hz for rollout 1
- render-only timing becomes variable
- audio smoothing constants may need review if they are implicitly tied to frame
rate instead of elapsed time
## Rollout Plan
### Phase 1: Mechanical decoupling
- introduce high-resolution real timer
- add accumulator-based fixed update loop
- keep simulation at 60 Hz
- render once per outer loop
Deliverable:
- no visible gameplay behavior change at 60 Hz cap
### Phase 2: Interpolation
- store previous and current render states
- render with interpolation alpha
- validate player, NPC, mount, and camera smoothness
Deliverable:
- motion looks smoother above 60 Hz
### Phase 3: Render settings
- implement `render_fps_limit`
- implement `vsync` toggle
- expose settings to Python and config
Deliverable:
- user-selectable 120 / 144 / 165 / 240 / unlimited
### Phase 4: Validation
Test matrix:
- 60 Hz monitor, VSync on and off
- 144 Hz monitor, cap 60 / 144 / unlimited
- minimized and alt-tab paths
- crowded city combat
- boss fight
- mounted combat
- loading transitions
- packet-heavy scenes
Metrics:
- stable attack cadence
- no combo timing regression
- no movement speed regression
- no animation stalls
- no camera jitter
- no CPU runaway when capped
## Risks
### High risk
- attack and combo logic tied to frame count
- animation transitions tied to frame count
- server-visible timing drift
### Medium risk
- effect systems that assume one render per update
- audio systems with frame-based smoothing
- UI code that assumes stable fixed frame cadence
### Lower risk
- D3D present configuration
- config plumbing for user settings
## Minimal Safe First Patch
If the goal is the fastest safe path, the first implementation should do only
this:
1. keep gameplay at fixed 60 Hz
2. render more often
3. interpolate actor and camera transforms
4. expose render cap option
Do not attempt in the first patch:
- 120 Hz gameplay simulation
- rewriting all motion logic to time-based math
- changing server combat timing
## Success Criteria
- client can render above 60 FPS on high refresh displays
- gameplay remains server-compatible
- no measurable change in combat legality
- 60 FPS mode still behaves like the current client
## Recommended Next Step
Implement Phase 1 and Phase 2 together in a branch.
That is the smallest meaningful change set that can produce visibly smoother
output without committing to a full time-based gameplay rewrite.