fix(render #53): cache-hit fast path must fire per-entity, not per-tuple

Task 10 (commit 0cbef3c) called ApplyCacheHit inside the per-(entity, partIdx)
foreach loop, but cachedEntry.Batches is flat across all MeshRefs of the
entity. For a 3-MeshRef static building on frame 2: 3 tuples × 6 cached
batches per call = 18 instances drawn instead of 6. Result: severe
Z-fighting and a 3× perf hit on every multi-part static entity (buildings,
statues, multi-MeshRef NPCs).
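The amplification arithmetic can be reproduced with a tiny self-contained model of the buggy loop (the tuple list and the counting body are illustrative stand-ins for _walkScratch and ApplyCacheHit, not the production code):

```csharp
using System;
using System.Collections.Generic;

static class BuggyFastPathModel
{
    // Models Task 10's first attempt: the cache-hit body runs for EVERY
    // (entity, partIdx) tuple, but the cached batch list is already flat
    // across all MeshRefs, so each extra tuple re-emits the whole entity.
    public static int CountDrawn(int meshRefCount, int cachedBatchCount)
    {
        var tuples = new List<(uint EntityId, int PartIdx)>();
        for (int p = 0; p < meshRefCount; p++) tuples.Add((100u, p));

        int drawn = 0;
        foreach (var _ in tuples)
            drawn += cachedBatchCount; // fires per tuple: the bug
        return drawn;
    }

    public static void Main()
    {
        // 3 MeshRefs × 6 flat cached batches = 18 instances instead of 6.
        Console.WriteLine(CountDrawn(3, 6)); // prints 18
    }
}
```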

This is the symmetric mirror of the Task 9 bug fixed at 00fa8ae. Both
spec section 5.2 and the plan describe the foreach as per-entity, but
_walkScratch has been per-tuple since Task 6. The implementation
faithfully ported the buggy spec.

Fix: track lastHitEntityId; the cache-hit fast path now fires only on the
first tuple of each entity, and subsequent tuples of the same entity skip
the iteration body via continue. Also adds a regression test pinning the
per-entity amplification invariant.
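The fixed loop can be sketched with the same minimal model (the counting body stands in for ApplyCacheHit, and the tuple shape follows the commit message; this is an illustration, not the production code):

```csharp
using System;
using System.Collections.Generic;

static class FixedFastPathModel
{
    // Models the fix: lastHitEntityId gates the cache-hit fast path so it
    // fires exactly once per entity; later tuples of the same entity skip
    // the iteration body via continue.
    public static int CountDrawn(int meshRefCount, int cachedBatchCount)
    {
        var tuples = new List<(uint EntityId, int PartIdx)>();
        for (int p = 0; p < meshRefCount; p++) tuples.Add((100u, p));

        int drawn = 0;
        uint? lastHitEntityId = null;

        foreach (var (entityId, _) in tuples)
        {
            // The flat batch list was already emitted on this entity's
            // first tuple; do not re-emit it.
            if (lastHitEntityId == entityId) continue;

            drawn += cachedBatchCount; // stand-in for ApplyCacheHit
            lastHitEntityId = entityId;
        }
        return drawn;
    }

    public static void Main()
    {
        // 3 MeshRefs, 6 flat cached batches: must draw 6, not 18.
        Console.WriteLine(CountDrawn(3, 6)); // prints 6
    }
}
```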

Caught by code review (subagent-driven-development) before Phase 3 was
dispatched. The bug was invisible in the no-multi-frame-test 1702/8
baseline; it would have manifested as visible Z-fighting on every
multi-part building on second-and-subsequent frames once the Task 13
perf gate captured live runs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Erik 2026-05-10 19:15:20 +02:00
parent 0cbef3c8b3
commit f7e38c214d
2 changed files with 121 additions and 0 deletions

@@ -606,4 +606,83 @@ public sealed class WbDrawDispatcherBucketingTests
        // After flush, scratch is cleared so the next entity starts fresh.
        Assert.Empty(scratch);
    }

    [Fact]
    public void ApplyCacheHit_PerTupleAmplification_DoesNotOccur()
    {
        // Regression test for the bug fixed at the commit landing alongside
        // this test: Task 10's first attempt called ApplyCacheHit per-(entity,
        // MeshRefIndex) tuple in Draw's foreach, but cachedEntry.Batches is
        // flat across all MeshRefs of the entity. For a 3-MeshRef building on
        // frame 2: 3 tuples × 6 cached batches per call = 18 instances drawn
        // instead of 6. Severe Z-fighting and 3× perf hit on every multi-part
        // static entity (buildings, statues, multi-MeshRef NPCs).
        //
        // This is the symmetric mirror of the Task 9 bug fixed at 00fa8ae —
        // both came from spec §5.2 describing the foreach as per-entity when
        // _walkScratch is per-tuple.
        //
        // The fix: track lastHitEntityId; the cache-hit fast path fires only
        // on the FIRST tuple of each entity. Subsequent tuples of the same
        // entity skip the iteration body via continue.
        //
        // This test simulates the inner-loop logic by directly invoking
        // ApplyCacheHit + AppendInstanceToGroup the way Draw would, with N
        // tuples for the same entity, asserting that groups[key].Count equals
        // the cached batch count (6), NOT N × cached batch count (18).

        // Set up a synthetic cache entry with 6 batches (representing 3
        // MeshRefs × 2 batches each).
        const int CachedBatchCount = 6;
        var cache = new EntityClassificationCache();
        var batches = new CachedBatch[CachedBatchCount];
        for (int i = 0; i < CachedBatchCount; i++)
        {
            batches[i] = MakeCachedBatch(
                ibo: 1u, firstIndex: (uint)i, indexCount: 6, texHandle: (ulong)(0x100 + i));
        }
        cache.Populate(entityId: 100, landblockHint: 0xA9B40000u, batches);

        // Simulate Draw's per-entity loop: 3 tuples for the same entity.
        // Track which entity has already cache-hit (mirrors the production
        // lastHitEntityId pattern).
        var groups = new Dictionary<GroupKey, List<Matrix4x4>>();
        uint? lastHitEntityId = null;
        var entityWorld = Matrix4x4.Identity; // simplest case for assertion clarity
        const uint EntityId = 100;
        const int MeshRefCount = 3;

        void AppendInstance(GroupKey k, Matrix4x4 m)
        {
            if (!groups.TryGetValue(k, out var list))
            {
                list = new List<Matrix4x4>();
                groups[k] = list;
            }
            list.Add(m);
        }

        for (int partIdx = 0; partIdx < MeshRefCount; partIdx++)
        {
            // Skip subsequent tuples of an entity that cache-hit (the fix).
            if (lastHitEntityId == EntityId) continue;

            if (cache.TryGet(EntityId, out var entry))
            {
                Assert.NotNull(entry);
                WbDrawDispatcher.ApplyCacheHit(entry!, entityWorld, AppendInstance);
                lastHitEntityId = EntityId;
            }
        }

        // Assertion: each group's matrix count equals the cached batches matching
        // that key, NOT (cached batches × MeshRef count). Here each batch has a
        // unique key, so each group has exactly 1 matrix.
        int totalMatrices = 0;
        foreach (var (_, matrices) in groups) totalMatrices += matrices.Count;
        Assert.Equal(CachedBatchCount, totalMatrices); // 6, NOT 18

        // Sanity: 6 distinct keys (one per cached batch since FirstIndex differs).
        Assert.Equal(CachedBatchCount, groups.Count);
    }
}