Add runtime validation gate and known issues baseline
2
.gitignore
vendored
@@ -2,6 +2,8 @@
/.idea/
/.vscode/
/node_modules/
__pycache__/
*.pyc
*.tmp
*.swp
keys/
14
README.md
@@ -98,6 +98,19 @@ Python smoke test:
python scripts/mcp_smoke_test.py
```

Linux headless end-to-end test:

```bash
python scripts/headless_e2e.py
```

Runtime scenario gate against the real client runtime:

```bash
python3 scripts/validate_runtime_gate.py \
  --runtime-root /tmp/m2dev-client-runtime-http
```

## Build

```bash
@@ -190,3 +203,4 @@ For the runtime key payload contract, see [docs/launcher-contract.md](docs/launc
For release steps, see [docs/release-workflow.md](docs/release-workflow.md).
For key rotation policy, see [docs/key-rotation.md](docs/key-rotation.md).
For legacy pack migration, see [docs/migration.md](docs/migration.md).
For Linux headless and real client runtime testing, see [docs/testing.md](docs/testing.md).
@@ -47,6 +47,8 @@ delivery gradually, not through a single risky cutover.
- decompression failures
- manifest signature failures
- key mismatch by `key_id`
- runtime smoke against the real client
- scenario gate with known-issues baseline

## Practical migration loop
@@ -55,8 +57,31 @@ delivery gradually, not through a single risky cutover.
3. Verify it.
4. Diff it against the source tree.
5. Boot the client with runtime keys.
6. Validate asset loads in logs and in-game.
7. Move to the next pack group.
6. Run the runtime scenario gate.
7. Validate asset loads in logs and in-game.
8. Move to the next pack group.

Recommended gate command:

```bash
python3 scripts/validate_runtime_gate.py \
  --runtime-root /tmp/m2dev-client-runtime-http
```

For release tightening:

```bash
python3 scripts/validate_runtime_gate.py \
  --runtime-root /tmp/m2dev-client-runtime-http \
  --strict-known-issues
```

The shared baseline lives in:

- `known_issues/runtime_known_issues.json`

This lets the migration fail only on new regressions while still tracking
historical client data problems explicitly.

## Confirmed startup-safe pack group
@@ -170,6 +195,13 @@ effect files and reports 12 concrete missing references:
- `effect/background/turtle_statue_tree_roof_light01.mse` missing `turtle_statue_tree_roof_light01.mde`
- `effect/etc/compete/ready.mse` missing `ready.DDS`

Those current actor and effect findings are also recorded in:

- `known_issues/runtime_known_issues.json`

That file is now the shared runtime baseline used by the validators and the
aggregated release gate.

Recommended next pack groups:

1. remaining startup-adjacent patch packs
@@ -229,3 +229,50 @@ This validator checks text-based effect assets in `Effect`:
- `.msf` `BombEffect`
- `.msf` `AttachFile`
- derived `.mss` sound scripts and their referenced `.wav` files

Runtime release gate:

```bash
python3 scripts/validate_runtime_gate.py \
  --runtime-root /tmp/m2dev-client-runtime-http
```

Strict runtime release gate:

```bash
python3 scripts/validate_runtime_gate.py \
  --runtime-root /tmp/m2dev-client-runtime-http \
  --strict-known-issues
```

The gate runs these validators together:

- `scripts/validate_runtime_scenarios.py`
- `scripts/validate_actor_scenarios.py`
- `scripts/validate_effect_scenarios.py`

By default they load the shared baseline:

- `known_issues/runtime_known_issues.json`

Result semantics:

- `known_issue_ids`: currently accepted historical content issues
- `unexpected_issue_ids`: new issues that fail the gate
- `stale_known_issue_ids`: baseline entries not observed anymore

Default behavior:

- known issues are reported but do not fail the gate
- only unexpected issues fail the gate

Strict behavior:

- unexpected issues fail the gate
- stale known-issue entries also fail the gate

Current baseline on the real runtime:

- `world`: `0`
- `actor`: `5`
- `effect`: `12`
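The default and strict results reduce to set operations over issue ids. A minimal sketch of that classification, using hypothetical ids rather than real baseline entries:

```python
# Hypothetical issue ids for illustration; the real ids live in
# known_issues/runtime_known_issues.json.
observed = {"actor:paired_model:ymir work/monster/thief2:03_1.gr2"}
known = {
    "actor:paired_model:ymir work/monster/thief2:03_1.gr2",
    "actor:paired_model:retired_entry",  # no longer reproduced -> stale
}

known_observed = observed & known  # accepted historical issues
unexpected = observed - known      # new regressions: always fail the gate
stale = known - observed           # baseline entries not observed anymore

ok_default = not unexpected                # default: only new regressions fail
ok_strict = not unexpected and not stale   # strict: stale entries also fail

print(ok_default, ok_strict)  # True False
```

Because `unexpected` is empty here, the default gate passes, while the strict gate fails on the stale baseline entry.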
24
known_issues/runtime_known_issues.json
Normal file
@@ -0,0 +1,24 @@
{
  "world": [],
  "actor": [
    "actor:paired_model:ymir work/monster/misterious_diseased_host:25.gr2",
    "actor:paired_model:ymir work/monster/skeleton_king:24.gr2",
    "actor:paired_model:ymir work/monster/thief2:03_1.gr2",
    "actor:paired_model:ymir work/npc/christmas_tree:wait.gr2",
    "actor:paired_model:ymir work/npc/guild_war_flag:wait.gr2"
  ],
  "effect": [
    "effect:reference:ymir work/effect/background/moonlight_eff_bat.mse:ymir work/effect/pet/halloween_2022_coffin_bat_01.dds",
    "effect:reference:ymir work/effect/background/moonlight_eff_bat.mse:ymir work/effect/pet/halloween_2022_coffin_bat_02.dds",
    "effect:reference:ymir work/effect/background/moonlight_eff_bat.mse:ymir work/effect/pet/halloween_2022_coffin_bat_03.dds",
    "effect:reference:ymir work/effect/background/moonlight_eff_bat_02_20s.mse:ymir work/effect/pet/halloween_2022_coffin_bat_01.dds",
    "effect:reference:ymir work/effect/background/moonlight_eff_bat_02_20s.mse:ymir work/effect/pet/halloween_2022_coffin_bat_02.dds",
    "effect:reference:ymir work/effect/background/moonlight_eff_bat_02_20s.mse:ymir work/effect/pet/halloween_2022_coffin_bat_03.dds",
    "effect:reference:ymir work/effect/background/mushrooma_01.mse:ymir work/effect/background/mushrooma_01.mde",
    "effect:reference:ymir work/effect/background/mushrooma_03.mse:ymir work/effect/background/mushrooma_03.mde",
    "effect:reference:ymir work/effect/background/mushrooma_04.mse:ymir work/effect/background/mushrooma_04.mde",
    "effect:reference:ymir work/effect/background/smh_gatetower01.mse:ymir work/effect/monster2/smoke_dust.dds",
    "effect:reference:ymir work/effect/background/turtle_statue_tree_roof_light01.mse:ymir work/effect/background/turtle_statue_tree_roof_light01.mde",
    "effect:reference:ymir work/effect/etc/compete/ready.mse:ymir work/effect/etc/compete/ready.dds"
  ]
}
@@ -5,7 +5,8 @@
  "type": "module",
  "scripts": {
    "mcp": "node mcp_server.mjs",
    "mcp:smoke": "node scripts/mcp_smoke_test.mjs"
    "mcp:smoke": "node scripts/mcp_smoke_test.mjs",
    "headless:e2e": "python3 scripts/headless_e2e.py"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.29.0",
243
scripts/headless_e2e.py
Executable file
@@ -0,0 +1,243 @@
#!/usr/bin/env python3
from __future__ import annotations

import filecmp
import json
import os
import shutil
import subprocess
import sys
import tempfile
from pathlib import Path


REPO_ROOT = Path(__file__).resolve().parent.parent
DEFAULT_BINARY = REPO_ROOT / "build" / "m2pack"
ENV_BINARY = "M2PACK_BINARY"


def resolve_binary() -> Path:
    env_value = os.environ.get(ENV_BINARY)
    if env_value:
        candidate = Path(env_value).expanduser()
        if candidate.is_file():
            return candidate

    if DEFAULT_BINARY.is_file():
        return DEFAULT_BINARY

    raise FileNotFoundError(
        f"m2pack binary not found. Build {DEFAULT_BINARY} or set {ENV_BINARY}."
    )


def run_json(binary: Path, *args: str) -> dict:
    proc = subprocess.run(
        [str(binary), *args, "--json"],
        cwd=REPO_ROOT,
        text=True,
        capture_output=True,
        check=False,
    )
    if proc.returncode != 0:
        detail = proc.stderr.strip() or proc.stdout.strip() or f"exit code {proc.returncode}"
        raise RuntimeError(f"command failed: {' '.join(args)}\n{detail}")

    stdout = proc.stdout.strip()
    if not stdout:
        return {"ok": True}
    return json.loads(stdout)


def write_asset_tree(root: Path) -> None:
    (root / "locale" / "de").mkdir(parents=True, exist_ok=True)
    (root / "icon").mkdir(parents=True, exist_ok=True)
    (root / "ui").mkdir(parents=True, exist_ok=True)

    (root / "locale" / "de" / "welcome.txt").write_text(
        "metin2 secure pack test\n",
        encoding="utf-8",
    )
    (root / "ui" / "layout.json").write_text(
        json.dumps(
            {
                "window": "inventory",
                "slots": 90,
                "theme": "headless-e2e",
            },
            indent=2,
        )
        + "\n",
        encoding="utf-8",
    )
    (root / "icon" / "item.bin").write_bytes(bytes((i * 13) % 256 for i in range(2048)))


def compare_trees(left: Path, right: Path) -> None:
    comparison = filecmp.dircmp(left, right)
    if comparison.left_only or comparison.right_only or comparison.diff_files or comparison.funny_files:
        raise RuntimeError(
            "tree mismatch: "
            f"left_only={comparison.left_only}, "
            f"right_only={comparison.right_only}, "
            f"diff_files={comparison.diff_files}, "
            f"funny_files={comparison.funny_files}"
        )

    for child in comparison.common_dirs:
        compare_trees(left / child, right / child)


def main() -> int:
    binary = resolve_binary()

    with tempfile.TemporaryDirectory(prefix="m2pack-headless-") as tmp_dir:
        tmp = Path(tmp_dir)
        assets = tmp / "assets"
        keys = tmp / "keys"
        out_dir = tmp / "out"
        extracted = tmp / "extracted"
        client_header = tmp / "M2PackKeys.h"
        runtime_json = tmp / "runtime-key.json"
        runtime_blob = tmp / "runtime-key.bin"
        archive = out_dir / "client.m2p"

        assets.mkdir(parents=True, exist_ok=True)
        out_dir.mkdir(parents=True, exist_ok=True)
        write_asset_tree(assets)

        keygen = run_json(binary, "keygen", "--out-dir", str(keys))
        build = run_json(
            binary,
            "build",
            "--input",
            str(assets),
            "--output",
            str(archive),
            "--key",
            str(keys / "master.key"),
            "--sign-secret-key",
            str(keys / "signing.key"),
            "--key-id",
            "7",
        )
        listed = run_json(binary, "list", "--archive", str(archive))
        verify = run_json(
            binary,
            "verify",
            "--archive",
            str(archive),
            "--public-key",
            str(keys / "signing.pub"),
            "--key",
            str(keys / "master.key"),
        )
        diff = run_json(
            binary,
            "diff",
            "--left",
            str(assets),
            "--right",
            str(archive),
        )
        export_client = run_json(
            binary,
            "export-client-config",
            "--key",
            str(keys / "master.key"),
            "--public-key",
            str(keys / "signing.pub"),
            "--key-id",
            "7",
            "--output",
            str(client_header),
        )
        export_runtime_json = run_json(
            binary,
            "export-runtime-key",
            "--key",
            str(keys / "master.key"),
            "--public-key",
            str(keys / "signing.pub"),
            "--key-id",
            "7",
            "--format",
            "json",
            "--output",
            str(runtime_json),
        )
        export_runtime_blob = run_json(
            binary,
            "export-runtime-key",
            "--key",
            str(keys / "master.key"),
            "--public-key",
            str(keys / "signing.pub"),
            "--key-id",
            "7",
            "--format",
            "blob",
            "--output",
            str(runtime_blob),
        )
        extract = run_json(
            binary,
            "extract",
            "--archive",
            str(archive),
            "--output",
            str(extracted),
            "--key",
            str(keys / "master.key"),
        )

        compare_trees(assets, extracted)

        runtime_obj = json.loads(runtime_json.read_text(encoding="utf-8"))
        if runtime_obj["key_id"] != 7 or runtime_obj["mapping_name"] != "Local\\M2PackSharedKeys":
            raise RuntimeError("runtime json payload mismatch")

        if runtime_blob.stat().st_size != 84:
            raise RuntimeError(f"runtime blob size mismatch: {runtime_blob.stat().st_size}")

        header_text = client_header.read_text(encoding="utf-8")
        if "M2PACK_RUNTIME_MASTER_KEY_REQUIRED = true" not in header_text:
            raise RuntimeError("client header missing runtime enforcement flag")
        if "M2PACK_SIGN_KEY_IDS = { 7 }" not in header_text:
            raise RuntimeError("client header missing key id slot")

        summary = {
            "ok": True,
            "binary": str(binary),
            "file_count": build["file_count"],
            "listed_entries": len(listed["entries"]),
            "verify_ok": verify["ok"],
            "diff_changed": len(diff["changed"]),
            "diff_added": len(diff["added"]),
            "diff_removed": len(diff["removed"]),
            "extract_entry_count": extract["entry_count"],
            "runtime_key_id": runtime_obj["key_id"],
            "runtime_blob_size": runtime_blob.stat().st_size,
            "artifacts_root": str(tmp),
            "steps": {
                "keygen": keygen["ok"],
                "build": build["ok"],
                "list": listed["ok"],
                "verify": verify["ok"],
                "diff": diff["ok"],
                "export_client_config": export_client["ok"],
                "export_runtime_json": export_runtime_json["ok"],
                "export_runtime_blob": export_runtime_blob["ok"],
                "extract": extract["ok"],
            },
        }
        print(json.dumps(summary, indent=2))
        return 0


if __name__ == "__main__":
    try:
        raise SystemExit(main())
    except Exception as exc:
        print(str(exc), file=sys.stderr)
        raise SystemExit(1)
35
scripts/known_issues.py
Normal file
@@ -0,0 +1,35 @@
from __future__ import annotations

import json
from pathlib import Path


def default_known_issues_path(script_file: str) -> Path:
    return Path(script_file).resolve().parent.parent / "known_issues" / "runtime_known_issues.json"


def load_known_issue_ids(script_file: str, category: str, configured_path: str | None) -> tuple[Path | None, set[str]]:
    if configured_path:
        path = Path(configured_path).resolve()
    else:
        candidate = default_known_issues_path(script_file)
        path = candidate if candidate.is_file() else None

    if path is None:
        return None, set()

    if not path.is_file():
        raise FileNotFoundError(f"known issues file not found: {path}")

    data = json.loads(path.read_text(encoding="utf-8"))
    raw_values = data.get(category, [])
    if not isinstance(raw_values, list):
        raise ValueError(f"known issues category '{category}' must be a list")
    return path, {str(value) for value in raw_values}


def classify_issue_ids(observed: set[str], known: set[str]) -> tuple[set[str], set[str], set[str]]:
    known_observed = observed & known
    unexpected = observed - known
    stale = known - observed
    return known_observed, unexpected, stale
@@ -8,6 +8,8 @@ import sys
from dataclasses import dataclass, asdict
from pathlib import Path

from known_issues import classify_issue_ids, load_known_issue_ids


BASE_MODEL_RE = re.compile(r'^BaseModelFileName\s+"([^"]+)"', re.IGNORECASE)
EFFECT_SCRIPT_RE = re.compile(r'^EffectScriptName\s+"([^"]+)"', re.IGNORECASE)
@@ -57,6 +59,17 @@ def parse_args() -> argparse.Namespace:
        action="store_true",
        help="Emit JSON output.",
    )
    parser.add_argument(
        "--known-issues",
        type=str,
        default=None,
        help="Optional known issues baseline JSON. Defaults to repo known_issues/runtime_known_issues.json if present.",
    )
    parser.add_argument(
        "--strict-known-issues",
        action="store_true",
        help="Also fail when the known-issues baseline contains stale entries not observed anymore.",
    )
    return parser.parse_args()
@@ -122,7 +135,7 @@ def parse_msm_references(path: Path) -> tuple[str | None, list[str], list[str],
    return base_model, effect_scripts, hit_effects, hit_sounds


def validate_actor_dir(pack: str, actor_dir: Path, asset_index: set[str]) -> ActorDirCheck:
def validate_actor_dir(pack: str, pack_dir: Path, actor_dir: Path, asset_index: set[str]) -> ActorDirCheck:
    motlist_path = actor_dir / "motlist.txt"
    motions = parse_motlist(motlist_path) if motlist_path.is_file() else []

@@ -161,7 +174,7 @@ def validate_actor_dir(pack: str, actor_dir: Path, asset_index: set[str]) -> Act

    return ActorDirCheck(
        pack=pack,
        actor_dir=actor_dir.as_posix(),
        actor_dir=actor_dir.relative_to(pack_dir).as_posix(),
        motlist_present=motlist_path.is_file(),
        motlist_entries=len(motions),
        missing_msa=missing_msa,
@@ -196,12 +209,16 @@ def main() -> int:
    packs = args.pack or ["Monster", "NPC", "PC"]
    checks: list[ActorDirCheck] = []
    failures: list[str] = []
    issue_map: dict[str, str] = {}
    asset_index = build_asset_index(runtime_assets)

    for pack in packs:
        pack_dir = runtime_assets / pack
        if not pack_dir.is_dir():
            failures.append(f"missing pack dir: {pack}")
            issue_id = f"actor:pack_dir:{pack}"
            message = f"missing pack dir: {pack}"
            failures.append(message)
            issue_map[issue_id] = message
            continue

        actor_dirs = collect_actor_dirs(pack_dir)
@@ -209,26 +226,55 @@ def main() -> int:
            actor_dirs = actor_dirs[: args.limit]

        for actor_dir in actor_dirs:
            check = validate_actor_dir(pack, actor_dir, asset_index)
            check = validate_actor_dir(pack, pack_dir, actor_dir, asset_index)
            checks.append(check)
            for motion in check.missing_msa:
                failures.append(f"{check.actor_dir}: missing motion file {motion}")
                issue_id = f"actor:motion_file:{check.actor_dir}:{motion.lower()}"
                message = f"{check.actor_dir}: missing motion file {motion}"
                failures.append(message)
                issue_map[issue_id] = message
            for gr2_name in check.missing_gr2_for_motions:
                failures.append(f"{check.actor_dir}: missing paired model {gr2_name}")
                issue_id = f"actor:paired_model:{check.actor_dir}:{gr2_name.lower()}"
                message = f"{check.actor_dir}: missing paired model {gr2_name}"
                failures.append(message)
                issue_map[issue_id] = message
            for base_model in check.missing_base_models:
                failures.append(f"{check.actor_dir}: missing base model {base_model}")
                issue_id = f"actor:base_model:{check.actor_dir}:{normalize_virtual_path(base_model)}"
                message = f"{check.actor_dir}: missing base model {base_model}"
                failures.append(message)
                issue_map[issue_id] = message
            for effect_script in check.missing_effect_scripts:
                failures.append(f"{check.actor_dir}: missing effect script {effect_script}")
                issue_id = f"actor:effect_script:{check.actor_dir}:{normalize_virtual_path(effect_script)}"
                message = f"{check.actor_dir}: missing effect script {effect_script}"
                failures.append(message)
                issue_map[issue_id] = message
            for hit_effect in check.missing_hit_effects:
                failures.append(f"{check.actor_dir}: missing hit effect {hit_effect}")
                issue_id = f"actor:hit_effect:{check.actor_dir}:{normalize_virtual_path(hit_effect)}"
                message = f"{check.actor_dir}: missing hit effect {hit_effect}"
                failures.append(message)
                issue_map[issue_id] = message
            for hit_sound in check.missing_hit_sounds:
                failures.append(f"{check.actor_dir}: missing hit sound {hit_sound}")
                issue_id = f"actor:hit_sound:{check.actor_dir}:{normalize_virtual_path(hit_sound)}"
                message = f"{check.actor_dir}: missing hit sound {hit_sound}"
                failures.append(message)
                issue_map[issue_id] = message

    observed_issue_ids = set(issue_map.keys())
    known_path, known_issue_ids = load_known_issue_ids(__file__, "actor", args.known_issues)
    known_observed, unexpected_issue_ids, stale_known_issue_ids = classify_issue_ids(observed_issue_ids, known_issue_ids)

    result = {
        "ok": not failures,
        "ok": not unexpected_issue_ids and (not args.strict_known_issues or not stale_known_issue_ids),
        "checked_actor_dirs": len(checks),
        "packs": packs,
        "failures": failures,
        "issue_ids": sorted(observed_issue_ids),
        "known_issue_ids": sorted(known_observed),
        "unexpected_issue_ids": sorted(unexpected_issue_ids),
        "stale_known_issue_ids": sorted(stale_known_issue_ids),
        "unexpected_failures": [issue_map[issue_id] for issue_id in sorted(unexpected_issue_ids)],
        "stale_known_failures": sorted(stale_known_issue_ids),
        "known_issues_path": str(known_path) if known_path else None,
        "checks": [asdict(check) for check in checks],
    }

@@ -236,8 +282,10 @@ def main() -> int:
        print(json.dumps(result, indent=2))
    else:
        print(f"ok={result['ok']} checked_actor_dirs={result['checked_actor_dirs']}")
        for failure in failures:
        for failure in result["unexpected_failures"]:
            print(f"FAIL: {failure}")
        for issue_id in result["stale_known_issue_ids"]:
            print(f"STALE: {issue_id}")

    return 0 if result["ok"] else 1
@@ -8,6 +8,8 @@ import sys
from dataclasses import dataclass, asdict
from pathlib import Path

from known_issues import classify_issue_ids, load_known_issue_ids


QUOTED_STRING_RE = re.compile(r'"([^"]+)"')
MESH_FILENAME_RE = re.compile(r'^meshfilename\s+"([^"]+)"', re.IGNORECASE)
@@ -39,6 +41,17 @@ def parse_args() -> argparse.Namespace:
        action="store_true",
        help="Emit JSON output.",
    )
    parser.add_argument(
        "--known-issues",
        type=str,
        default=None,
        help="Optional known issues baseline JSON. Defaults to repo known_issues/runtime_known_issues.json if present.",
    )
    parser.add_argument(
        "--strict-known-issues",
        action="store_true",
        help="Also fail when the known-issues baseline contains stale entries not observed anymore.",
    )
    return parser.parse_args()
@@ -180,6 +193,7 @@ def main() -> int:
    asset_lookup = build_asset_lookup(runtime_assets)
    checks: list[EffectCheck] = []
    failures: list[str] = []
    issue_map: dict[str, str] = {}

    effect_pack_dir = runtime_assets / "Effect"
    for path in sorted(effect_pack_dir.rglob("*")):
@@ -195,12 +209,27 @@
            continue
        checks.append(check)
        for missing in check.missing_references:
            failures.append(f"{check.file}: missing reference {missing}")
            normalized_missing = resolve_reference(missing, Path(check.file).parent.as_posix().lower())
            issue_id = f"effect:reference:{check.file}:{normalized_missing}"
            message = f"{check.file}: missing reference {missing}"
            failures.append(message)
            issue_map[issue_id] = message

    observed_issue_ids = set(issue_map.keys())
    known_path, known_issue_ids = load_known_issue_ids(__file__, "effect", args.known_issues)
    known_observed, unexpected_issue_ids, stale_known_issue_ids = classify_issue_ids(observed_issue_ids, known_issue_ids)

    result = {
        "ok": not failures,
        "ok": not unexpected_issue_ids and (not args.strict_known_issues or not stale_known_issue_ids),
        "checked_files": len(checks),
        "failures": failures,
        "issue_ids": sorted(observed_issue_ids),
        "known_issue_ids": sorted(known_observed),
        "unexpected_issue_ids": sorted(unexpected_issue_ids),
        "stale_known_issue_ids": sorted(stale_known_issue_ids),
        "unexpected_failures": [issue_map[issue_id] for issue_id in sorted(unexpected_issue_ids)],
        "stale_known_failures": sorted(stale_known_issue_ids),
        "known_issues_path": str(known_path) if known_path else None,
        "checks": [asdict(check) for check in checks],
    }

@@ -208,8 +237,10 @@ def main() -> int:
        print(json.dumps(result, indent=2))
    else:
        print(f"ok={result['ok']} checked_files={result['checked_files']}")
        for failure in failures:
        for failure in result["unexpected_failures"]:
            print(f"FAIL: {failure}")
        for issue_id in result["stale_known_issue_ids"]:
            print(f"STALE: {issue_id}")

    return 0 if result["ok"] else 1
101
scripts/validate_runtime_gate.py
Executable file
@@ -0,0 +1,101 @@
#!/usr/bin/env python3
from __future__ import annotations

import argparse
import json
import subprocess
import sys
from pathlib import Path

from known_issues import default_known_issues_path


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        description="Run the scenario validators with the shared runtime known-issues baseline."
    )
    parser.add_argument(
        "--runtime-root",
        type=Path,
        required=True,
        help="Client runtime root containing assets/.",
    )
    parser.add_argument(
        "--known-issues",
        type=Path,
        default=None,
        help="Optional known issues baseline JSON. Defaults to repo known_issues/runtime_known_issues.json.",
    )
    parser.add_argument(
        "--strict-known-issues",
        action="store_true",
        help="Fail when the known-issues baseline contains stale entries not observed anymore.",
    )
    parser.add_argument(
        "--json",
        action="store_true",
        help="Emit JSON output.",
    )
    return parser.parse_args()


def run_validator(script_path: Path, runtime_root: Path, known_issues: Path, strict_known_issues: bool) -> dict:
    cmd = [
        sys.executable,
        str(script_path),
        "--runtime-root",
        str(runtime_root),
        "--known-issues",
        str(known_issues),
        "--json",
    ]
    if strict_known_issues:
        cmd.append("--strict-known-issues")
    completed = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
    if completed.returncode not in (0, 1):
        raise RuntimeError(
            f"{script_path.name} failed unexpectedly with rc={completed.returncode}: {completed.stderr or completed.stdout}"
        )
    return json.loads(completed.stdout)


def main() -> int:
    args = parse_args()
    runtime_root = args.runtime_root.resolve()
    known_issues = (args.known_issues.resolve() if args.known_issues else default_known_issues_path(__file__))

    script_dir = Path(__file__).resolve().parent
    validators = {
        "world": script_dir / "validate_runtime_scenarios.py",
        "actor": script_dir / "validate_actor_scenarios.py",
        "effect": script_dir / "validate_effect_scenarios.py",
    }

    results = {
        name: run_validator(script_path, runtime_root, known_issues, args.strict_known_issues)
        for name, script_path in validators.items()
    }
    ok = all(result["ok"] for result in results.values())

    summary = {
        "ok": ok,
        "runtime_root": str(runtime_root),
        "known_issues_path": str(known_issues),
        "validators": results,
    }

    if args.json:
        print(json.dumps(summary, indent=2))
    else:
        print(f"ok={ok}")
        for name, result in results.items():
            print(
                f"{name}: ok={result['ok']} unexpected={len(result.get('unexpected_issue_ids', []))} "
                f"known={len(result.get('known_issue_ids', []))} stale={len(result.get('stale_known_issue_ids', []))}"
            )

    return 0 if ok else 1


if __name__ == "__main__":
    sys.exit(main())
@@ -8,6 +8,8 @@ import sys
from dataclasses import dataclass, asdict
from pathlib import Path

from known_issues import classify_issue_ids, load_known_issue_ids


SETTING_TEXTURESET_RE = re.compile(r"^TextureSet\s+(.+)$", re.IGNORECASE)
SETTING_ENV_RE = re.compile(r"^Environment\s+(.+)$", re.IGNORECASE)
@@ -46,6 +48,17 @@ def parse_args() -> argparse.Namespace:
        action="store_true",
        help="Emit JSON output.",
    )
    parser.add_argument(
        "--known-issues",
        type=str,
        default=None,
        help="Optional known issues baseline JSON. Defaults to repo known_issues/runtime_known_issues.json if present.",
    )
    parser.add_argument(
        "--strict-known-issues",
        action="store_true",
        help="Also fail when the known-issues baseline contains stale entries not observed anymore.",
    )
    return parser.parse_args()
@@ -132,35 +145,65 @@ def main() -> int:

    checks: list[MapCheck] = []
    failures: list[str] = []
    issue_map: dict[str, str] = {}

    for pack_name in packs:
        pack_dir = runtime_assets / pack_name
        if not pack_dir.is_dir():
            failures.append(f"missing pack dir: {pack_name}")
            issue_id = f"world:pack_dir:{pack_name}"
            message = f"missing pack dir: {pack_name}"
            failures.append(message)
            issue_map[issue_id] = message
            continue

        map_dirs = sorted([p for p in pack_dir.iterdir() if p.is_dir()])
        if not map_dirs:
            failures.append(f"no map dirs in pack: {pack_name}")
            issue_id = f"world:map_dirs:{pack_name}"
            message = f"no map dirs in pack: {pack_name}"
            failures.append(message)
            issue_map[issue_id] = message
            continue

        for map_dir in map_dirs:
            check = validate_map_dir(pack_name, map_dir, runtime_assets)
            checks.append(check)
            if not check.setting_ok:
                failures.append(f"{pack_name}/{map_dir.name}: missing setting.txt")
                issue_id = f"world:setting:{pack_name}/{map_dir.name}"
                message = f"{pack_name}/{map_dir.name}: missing setting.txt"
                failures.append(message)
                issue_map[issue_id] = message
            if not check.mapproperty_ok:
                failures.append(f"{pack_name}/{map_dir.name}: missing mapproperty.txt")
                issue_id = f"world:mapproperty:{pack_name}/{map_dir.name}"
                message = f"{pack_name}/{map_dir.name}: missing mapproperty.txt"
                failures.append(message)
                issue_map[issue_id] = message
            if not check.textureset_ref or not check.textureset_exists:
                failures.append(f"{pack_name}/{map_dir.name}: missing textureset target for {check.textureset_ref!r}")
                issue_id = f"world:textureset:{pack_name}/{map_dir.name}:{normalize_virtual_path(check.textureset_ref or '')}"
                message = f"{pack_name}/{map_dir.name}: missing textureset target for {check.textureset_ref!r}"
                failures.append(message)
                issue_map[issue_id] = message
            if not check.environment_ref or not check.environment_exists:
                failures.append(f"{pack_name}/{map_dir.name}: missing environment target for {check.environment_ref!r}")
                issue_id = f"world:environment:{pack_name}/{map_dir.name}:{normalize_virtual_path(check.environment_ref or '')}"
                message = f"{pack_name}/{map_dir.name}: missing environment target for {check.environment_ref!r}"
                failures.append(message)
                issue_map[issue_id] = message

    observed_issue_ids = set(issue_map.keys())
    known_path, known_issue_ids = load_known_issue_ids(__file__, "world", args.known_issues)
    known_observed, unexpected_issue_ids, stale_known_issue_ids = classify_issue_ids(observed_issue_ids, known_issue_ids)

    result = {
        "ok": not failures,
        "ok": not unexpected_issue_ids and (not args.strict_known_issues or not stale_known_issue_ids),
        "checked_map_dirs": len(checks),
        "packs": packs,
        "failures": failures,
        "issue_ids": sorted(observed_issue_ids),
        "known_issue_ids": sorted(known_observed),
        "unexpected_issue_ids": sorted(unexpected_issue_ids),
        "stale_known_issue_ids": sorted(stale_known_issue_ids),
        "unexpected_failures": [issue_map[issue_id] for issue_id in sorted(unexpected_issue_ids)],
        "stale_known_failures": sorted(stale_known_issue_ids),
        "known_issues_path": str(known_path) if known_path else None,
        "checks": [asdict(check) for check in checks],
    }

@@ -168,8 +211,10 @@ def main() -> int:
        print(json.dumps(result, indent=2))
    else:
        print(f"ok={result['ok']} checked_map_dirs={result['checked_map_dirs']}")
        for failure in failures:
        for failure in result["unexpected_failures"]:
            print(f"FAIL: {failure}")
        for issue_id in result["stale_known_issue_ids"]:
            print(f"STALE: {issue_id}")

    return 0 if result["ok"] else 1