Multiplayer & Local Architecture Patterns

Scope note: MoonBark.GridPlacement handles grid state and placement logic. It does not implement network transport. This guide explains where the plugin’s seams are and how each topology maps to them. Connecting a transport layer (Godot ENet, WebSocket, Steam Networking, etc.) is your responsibility and is intentionally outside the plugin’s scope.


The Core Idea: Placement State as the “Physics Server”

In Godot’s built-in multiplayer model, the host machine runs the authoritative physics server. Remote clients do not simulate physics independently — they send inputs to the authority and receive state updates back.

MoonBark.GridPlacement uses the same mental model:

  • The occupancy service is the authoritative state for placement. It knows what is placed and which cells are occupied.
  • The authority (host or dedicated server) is the only process that calls into the placement pipeline to commit placements.
  • Remote clients send a placement intent (placeableId + grid position) and receive a placement notification (accepted or rejected, with the confirmed position) back.

Nothing in the client’s local view is trusted. The authority’s occupancy service is the ground truth.
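In message terms, the round trip above is two small payloads. A sketch of their shape (field names are illustrative; the plugin does not prescribe a wire format):

```gdscript
# Client → authority: a placement intent.
# Field names are illustrative; the plugin does not define a wire format.
var intent := {
    "placeable_id": "wall_stone",     # must match a catalog ID on the authority
    "grid_position": Vector2i(4, 7),
}

# Authority → all peers: the placement notification.
var result := {
    "placeable_id": "wall_stone",
    "grid_position": Vector2i(4, 7),  # the confirmed position, not the requested one
    "accepted": true,
}
```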


Topology 1: Single-Player / Local

The simplest case. All systems run in one process.

Player Input
    ▼
PlacementInputBridge  ──►  PlacementValidationSystem
    │                             │
    │                      (occupancy check)
    │                             │
    ▼                             ▼
PlacementService  ──►  OccupancyService
    │
    ▼
PlacementService.EntityPlaced
    │
    ▼
PlacementSignalBus (Godot signals)
    │
    ▼
Visual nodes / UI

No seam overhead. This is the default demo setup.

Test reference: PlacementPipelineE2ETests, EventDrivenPlacementTests
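In this topology, the only glue you write is a listener on the signal bus. A sketch, assuming PlacementSignalBus is registered as an autoload and exposes an entity_placed signal (check the actual autoload path and signal names in your version):

```gdscript
# Single-player glue: react to committed placements via the plugin's
# GDScript-facing signal bus and spawn a visual node.
# The autoload path and the `entity_placed` signal name are assumptions.
func _ready() -> void:
    var bus := get_node("/root/PlacementSignalBus")
    bus.entity_placed.connect(_on_entity_placed)

func _on_entity_placed(placeable_id: String, grid_pos: Vector2i) -> void:
    var visual := preload("res://visuals/placed_entity.tscn").instantiate()  # your own scene
    visual.position = Vector2(grid_pos) * 32.0  # your own grid → world mapping
    add_child(visual)
```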


Topology 2: Listen-Server (Client-Host)

One player's process also acts as the authoritative host; all other players connect as remote clients. This is the most common topology for small multiplayer games.

[Host Process]                        [Client Process(es)]
──────────────────────────────────    ──────────────────────────────
 Local player input                    Player captures input
     ▼                                      │
 PlacementInputBridge            (Intent Seam: RPC → host)
     ▼                                      │
 Validation + PlacementService  ◄───────────┘
     ▼
 OccupancyService  ← ground truth
     ▼
 PlacementService.EntityPlaced
     ▼
 (Event Relay Seam: broadcast → all peers)
     ├─────────────────────────────►  Update client visuals
     │                                (spawn visual node,
     └──────► Host-player visuals      update minimap, etc.)

The Four Seams

Catalog Seam
  What happens: Before gameplay, all peers must load the same catalog of placeables. String IDs must match.
  Where your networking code goes: Lobby / loading screen: broadcast catalog IDs from the host so clients load identical assets.

Intent Seam
  What happens: Client captures input → sends (placeableId, gridPosition) to the host.
  Where your networking code goes: rpc_id(host_id, "receive_placement_intent", placeable_id, position)

Authority Seam
  What happens: Only the host calls InputBridge.ExecutePlacement() + PlacementService.Update(). Occupancy is never written by clients.
  Where your networking code goes: Guard the authority method: only execute if multiplayer.is_server().

Event Relay Seam
  What happens: After the authority resolves, broadcast the outcome to all peers.
  Where your networking code goes: Subscribe to PlacementService.EntityPlaced (success) or the return value of ExecutePlacement() (failure); call rpc(...) on each client.
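The Catalog Seam rarely needs more than an ID check at lobby time. A hedged sketch; local_catalog and _on_catalog_mismatch stand in for your own game layer and are not plugin API:

```gdscript
# Catalog Seam sketch: the host broadcasts its catalog IDs once at lobby time;
# each client verifies its locally loaded catalog matches before play begins.
# `local_catalog` and `_on_catalog_mismatch` are your game layer, not plugin API.
@rpc("authority", "reliable")
func receive_catalog_ids(host_ids: PackedStringArray) -> void:
    var local_ids := PackedStringArray(local_catalog.keys())
    local_ids.sort()
    host_ids.sort()
    if local_ids != host_ids:
        _on_catalog_mismatch()  # e.g. return to lobby with a version-mismatch message
```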

Minimal GDScript sketch (intent + relay)

This is the shape of the glue code. The plugin provides everything except the rpc calls:

# On CLIENT — capture input, send intent
func _on_player_wants_to_place(placeable_id: String, grid_pos: Vector2i):
    rpc_id(1, "receive_placement_intent", placeable_id, grid_pos)

# On HOST — receive intent, let authority decide
@rpc("any_peer", "call_local", "reliable")
func receive_placement_intent(placeable_id: String, grid_pos: Vector2i):
    if not multiplayer.is_server():
        return
    # plugin's authority pipeline runs here (C# side)
    var accepted = placement_context.execute_intent(placeable_id, grid_pos)
    # relay outcome to all peers
    broadcast_placement_result.rpc(placeable_id, grid_pos, accepted)

# On ALL PEERS — receive relay, update visuals
@rpc("authority", "call_local", "reliable")
func broadcast_placement_result(placeable_id: String, grid_pos: Vector2i, accepted: bool):
    if accepted:
        spawn_visual_for(placeable_id, grid_pos)
    else:
        show_rejection_feedback(grid_pos)

Test reference: ListenServerAuthorityTests — proves intent relay, authority isolation, conflict resolution, and event relay using the real Core API with mock network stubs.


Topology 3: Dedicated Server (Headless)

The authoritative host is a separate headless process with no player. It runs the placement pipeline but not Godot’s rendering.

The plugin’s PlacementEventBus and PlacementService are pure C# with no Godot dependencies, so they run headless without the Godot runtime.

[Dedicated Server (headless)]          [Client Process(es)]
──────────────────────────────────     ──────────────────────────────
 PlacementInputBridge                    Player captures input
 ValidationSystem                            │
 PlacementService                ◄───────────┘ (Intent Seam)
     ▼
 OccupancyService ← ground truth
     ▼
 PlacementEventBus.OnPlacementSuccess
     ▼
 (Event Relay Seam: broadcast → all clients)
     └─────────────────────────────►  Update client visuals

The difference from listen-server is that the server has no local player — it only processes incoming intents and broadcasts outcomes.

Test reference: DedicatedServerAuthorityTests — proves headless placement authority using only the Core library, with no Godot runtime required.
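If you run the authority as a headless Godot process (rather than a bare .NET host around the Core library), the server-side bootstrap is just a listen socket with no local input; intents then arrive through the same RPC handler as in the listen-server sketch. A minimal example, with port and peer count as placeholder values:

```gdscript
# Headless server bootstrap (run with: godot --headless).
# Port and max-client values are examples; placement intents arrive through
# the same `receive_placement_intent` RPC shown in the listen-server section.
func _ready() -> void:
    var peer := ENetMultiplayerPeer.new()
    var err := peer.create_server(9000, 32)
    if err != OK:
        push_error("Failed to start server: %s" % err)
        return
    multiplayer.multiplayer_peer = peer
```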


Topology 4: Local Multiplayer (Same Device, Split-Screen)

Multiple players on the same machine. Each player has a separate IInputSource registered with a TargetingService. The occupancy service handles all players in a single update because each player has a distinct OwnerKey.

No network seams are needed. The occupancy service natively handles multiple concurrent placers.

Test reference: MultiPlacerConcurrentTests — proves 100+ simultaneous placers with independent validation per owner.
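The per-player wiring can be sketched like this; register_input_source and GamepadInputSource are hypothetical names for your integration layer (the underlying IInputSource, TargetingService, and OwnerKey types live on the C# side):

```gdscript
# Split-screen wiring sketch: each player gets a distinct input source and
# owner key, so one occupancy service validates everyone independently.
# `register_input_source` and `GamepadInputSource` are hypothetical names.
func _setup_local_players(player_count: int) -> void:
    for i in range(player_count):
        var owner_key := "player_%d" % i          # maps to a distinct OwnerKey
        var source := GamepadInputSource.new(i)   # one IInputSource per gamepad
        targeting_service.register_input_source(source, owner_key)
```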


What the Plugin Gives You for Free

  • Authoritative occupancy check: OccupancyService, IGridOccupancy
  • Conflict detection (two clients, same cell): PlacementValidator
  • Headless server loop: PlacementEventBus, PlacementService (no Godot dependency)
  • Multiple concurrent placers: OwnerKey on SelectedPlaceable
  • GDScript-compatible event bridge: PlacementSignalBus
  • Save / restore placed state: PlacementSnapshot
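PlacementSnapshot is also useful in multiplayer for late joiners: the host serializes the current placed state and ships it to a newly connected peer. A sketch; the serialization helpers shown are assumptions about your integration layer, not plugin API:

```gdscript
# Late-joiner sync sketch: on connect, the host sends the current placed state
# so the new client can rebuild its visuals. `snapshot_to_dict()` and
# `spawn_visual_for()` are assumed helpers in your integration layer.
func _on_peer_connected(peer_id: int) -> void:
    if not multiplayer.is_server():
        return
    var data: Dictionary = placement_context.snapshot_to_dict()
    receive_world_snapshot.rpc_id(peer_id, data)

@rpc("authority", "reliable")
func receive_world_snapshot(data: Dictionary) -> void:
    for entry in data.get("placements", []):
        spawn_visual_for(entry["placeable_id"], entry["grid_position"])
```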

What the Plugin Does Not Provide

  • Network transport (ENet, WebSocket, Steam, etc.)
  • Lobby / matchmaking
  • Player authentication
  • Network clock synchronization
  • Lag compensation or rollback
  • Catalog content distribution (you must ensure all peers load the same assets)

These concerns belong to your game layer, not to a placement plugin.


Choosing Your Topology

  • Singleplayer builder / city sim: Single-player (PlacementPipelineE2ETests)
  • 2–8 players, one hosts: Listen-server (ListenServerAuthorityTests)
  • Competitive / authoritative server: Dedicated server (DedicatedServerAuthorityTests)
  • Couch co-op, same device: Local multiplayer (MultiPlacerConcurrentTests)
  • Custom engine / Unity / headless: Custom authority path (GridAuthorityTests, Path C)