fix(app): Phase A.1 — make LandblockStreamer synchronous (DatCollection isn't thread-safe)
Second hotfix attempt for the "ball of spikes" terrain corruption.
The previous _datLock fix was insufficient because dat reads happen
from many render-thread code paths I didn't enumerate (animation
tick, OnLiveMotionUpdated, OnLivePositionUpdated, the live spawn
hydration, ApplyLoadedTerrain) and locking each is invasive and
fragile.
DatReaderWriter's DatCollection is fundamentally not thread-safe:
DatBinReader's internal buffer position is shared per-database, so
two concurrent .Get<T> calls corrupt each other's read state. The
ArgumentOutOfRangeException at DatBinReader.ReadBytesInternal in
the failure log is the smoking gun — one read started reading a
LandBlock, another moved the reader's position, the first one
asked for the wrong number of bytes.
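The failure mode above can be sketched with a toy reader whose cursor is shared by every caller. This is an illustrative stand-in, not the actual DatReaderWriter API: SharedPositionReader, its Position field, and ReadBytes are hypothetical names that only mirror the general shape of a per-database buffer cursor.

```csharp
using System;

// Hypothetical sketch: one reader object, one shared mutable cursor.
// Any two concurrent ReadBytes calls race on Position, so one caller
// ends up slicing from the wrong offset or past the buffer's end,
// which surfaces as an ArgumentOutOfRangeException.
class SharedPositionReader
{
    private readonly byte[] _buffer;
    public int Position; // shared mutable cursor: the race lives here

    public SharedPositionReader(byte[] buffer) => _buffer = buffer;

    public byte[] ReadBytes(int count)
    {
        // A second thread can advance Position between this check and
        // the slice below, so the "validated" read is no longer valid.
        if (Position + count > _buffer.Length)
            throw new ArgumentOutOfRangeException(nameof(count));
        var slice = _buffer[Position..(Position + count)];
        Position += count;
        return slice;
    }
}
```

No amount of locking inside ReadBytes fixes this for multi-call reads (one .Get&lt;T&gt; is many ReadBytes calls), which is why the whole load path had to become single-threaded rather than lock-per-read.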
Until Phase A.3 introduces a thread-safe dat wrapper (or until we
preload all dats into pure in-memory dictionaries), the streamer
runs synchronously: EnqueueLoad invokes the load delegate inline
on the calling thread and writes the result to the outbox in a
single call. The render-thread DrainCompletions loop picks it up
on the same frame.
API surface unchanged — Channel-based outbox, EnqueueLoad/Unload,
DrainCompletions, Start (now a no-op), and Dispose are all preserved.
Moving back to async loading is a single-class change once dat thread
safety lands.
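The synchronous contract reads roughly like this minimal stand-in, assuming the Channel-based outbox described above. SyncStreamer, its string payload, and the delegate signature are illustrative only, not the real LandblockStreamer types:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Channels;

// Hypothetical sketch of the synchronous streamer contract.
sealed class SyncStreamer
{
    private readonly Func<uint, string> _load; // stand-in for the load delegate
    private readonly Channel<string> _outbox = Channel.CreateUnbounded<string>();

    public SyncStreamer(Func<uint, string> load) => _load = load;

    public void EnqueueLoad(uint landblockId)
    {
        // Runs inline on the calling (render) thread, so dat reads are
        // never concurrent; the result is already in the outbox when
        // this method returns, and the same frame's drain picks it up.
        _outbox.Writer.TryWrite(_load(landblockId));
    }

    public List<string> DrainCompletions(int maxBatch)
    {
        var drained = new List<string>();
        while (drained.Count < maxBatch && _outbox.Reader.TryRead(out var item))
            drained.Add(item);
        return drained;
    }
}
```

Because the Channel outbox and drain loop are untouched, switching back to a worker thread later only changes where the load delegate runs, not how callers consume results.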
Cost: a visible frame hitch when crossing landblock boundaries
(loading the new landblock now happens on the render thread). For
the default 5×5 grid the hitch is one landblock per cardinal step,
~50ms worst case. Acceptable for the MVP — correctness over hitches.
Updated the off-thread test to assert the new synchronous contract
(loader runs on the calling thread). The other 4 tests still pass
unchanged because their spin-drain pattern works with synchronous
delivery too.
The previous _datLock from commit c991fb2 stays in place as
defensive belt-and-suspenders — it's free in synchronous mode and
keeps the contract documented at every dat-reading entry point.
212 tests green.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
parent c991fb23ce
commit 531c9f9349
2 changed files with 52 additions and 31 deletions
@@ -104,8 +104,16 @@ public class LandblockStreamerTests
     }
 
     [Fact]
-    public async Task Load_ExecutesLoaderOnBackgroundThread()
+    public void Load_ExecutesLoaderSynchronously_OnCallingThread()
     {
+        // Streamer was made synchronous after Phase A.1 visual verification
+        // exposed concurrent dat reads as the cause of "ball of spikes"
+        // terrain corruption — DatReaderWriter's DatCollection isn't
+        // thread-safe and locking around every dat read on every render-
+        // thread code path was too invasive. Until Phase A.3 introduces a
+        // thread-safe dat wrapper, the load delegate runs on the calling
+        // thread and the result is in the outbox by the time EnqueueLoad
+        // returns. This test pins that contract.
         int testThreadId = System.Environment.CurrentManagedThreadId;
         int? loaderThreadId = null;
         var stubLandblock = new LoadedLandblock(
@@ -122,14 +130,11 @@ public class LandblockStreamerTests
         streamer.Start();
         streamer.EnqueueLoad(0x77770FFEu);
 
-        // Drain until we see the completion.
-        for (int i = 0; i < SpinMaxIterations && loaderThreadId is null; i++)
-        {
-            streamer.DrainCompletions(LandblockStreamer.DefaultDrainBatchSize);
-            if (loaderThreadId is null) await Task.Delay(SpinStepMs);
-        }
+        // Result is already in the outbox — no spinning needed.
+        var drained = streamer.DrainCompletions(LandblockStreamer.DefaultDrainBatchSize);
 
-        Assert.NotNull(loaderThreadId);
-        Assert.NotEqual(testThreadId, loaderThreadId.Value);
+        Assert.Single(drained);
+        Assert.IsType<LandblockStreamResult.Loaded>(drained[0]);
+        Assert.Equal(testThreadId, loaderThreadId);
     }
 }