Matchmaking Queue
Simulates skill-based matchmaking with adjustable buckets, skill delta and max wait.
Level: Intermediate
Skill-based matchmaking with queue time limits
This scenario models a competitive game's matchmaking service. Players arrive with a skill rating drawn from a normal distribution and wait for an opponent. The service balances two goals:
- Fair matches – pair players within `skill_delta` rating points.
- Fast matches – if a player waits longer than `max_wait` seconds, match them with the closest opponent regardless of rating.
Tweak the bucket width, allowed rating delta and maximum wait time to explore how queue length and match quality trade off.
import random
from dataclasses import dataclass
from typing import List
from tys import probe, progress
`Player` simply remembers a user's rating and when they entered the queue.
@dataclass
class Player:
    rating: float
    joined: float
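# For example (values purely illustrative), Player(rating=1200.0, joined=12.0)
# describes someone rated 1200 who entered the queue at simulation time 12.0;
# at time 30.0 that player has been waiting 18 seconds.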
def closest_opponent(index: int, players: List[Player]) -> int | None:
    """Return the index of the rating closest to players[index]."""
    p1 = players[index]
    best_j = None
    best_delta = float("inf")
    for j, p2 in enumerate(players):
        if j == index:
            continue
        delta = abs(p1.rating - p2.rating)
        if delta < best_delta:
            best_delta, best_j = delta, j
    return best_j
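# Worked example (hypothetical ratings): with queued ratings [900, 1000, 1310],
# closest_opponent(0, players) returns 1, since |900 - 1000| = 100 beats
# |900 - 1310| = 410. On a tie the earlier index wins, because only a strictly
# smaller delta replaces the current best.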
def bucket_match(index: int, players: List[Player], bucket_width: float, skill_delta: float) -> int | None:
    """Find a partner in the same rating bucket within the allowed delta."""
    p1 = players[index]
    bucket = int(p1.rating // bucket_width)
    for j in range(index + 1, len(players)):
        p2 = players[j]
        same_bucket = int(p2.rating // bucket_width) == bucket
        if same_bucket and abs(p1.rating - p2.rating) <= skill_delta:
            return j
    return None
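# Worked example (hypothetical ratings): with bucket_width=100 and skill_delta=50,
# ratings 1010 and 1060 can match (both land in bucket 10 and differ by 50), but
# 1099 and 1101 cannot, because they fall into buckets 10 and 11 despite differing
# by only 2. Note that only later entries (j > index) are scanned; the matchmaker
# below keeps the queue sorted, so that means higher-rated players.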
def record_probes(now: float, waiting: List[Player], wait_times: List[float], deltas: List[float]) -> None:
    """Emit probe metrics for queue length, wait time and rating delta."""
    probe("queue_len", now, len(waiting))
    if wait_times:
        probe("avg_wait", now, sum(wait_times) / len(wait_times))
    if deltas:
        probe("avg_delta", now, sum(deltas) / len(deltas))
Run the matchmaking simulation with the supplied configuration.
def simulate(cfg: dict):
    import simpy

    env = simpy.Environment()
    rng = random.Random(cfg.get("seed", 0))
Configuration parameters pulled from `config.yaml`.
    arrival_rate = cfg["arrival_rate"]   # players per second
    rating_mean = cfg["rating_mean"]     # average skill level
    rating_std = cfg["rating_std"]       # variation in skill
    bucket_width = cfg["rating_bucket"]  # size of rating buckets
    skill_delta = cfg["skill_delta"]     # allowed rating difference
    max_wait = cfg["max_wait"]           # seconds before we force a match
    sim_time = cfg["sim_time"]           # total simulation time
Queued players and metrics we want to report.
    waiting: List[Player] = []    # players waiting for a match
    wait_times: List[float] = []  # how long each successful match waited
    deltas: List[float] = []      # rating differences in matches
New players arrive with exponentially distributed inter-arrival times. Each gets a skill rating drawn from a normal distribution with mean `rating_mean` and standard deviation `rating_std`, so the queue spreads across a bell curve of skill levels.
    def spawner():
        while True:
            delay = rng.expovariate(arrival_rate)
            yield env.timeout(delay)
            rating = max(0.0, rng.gauss(rating_mean, rating_std))
            waiting.append(Player(rating, env.now))

    env.process(spawner())
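The mean gap produced by `expovariate` is `1 / arrival_rate`, so the default rate of 2 players per second works out to roughly 600 arrivals over a 300-second run. A quick standalone sanity check (illustrative only, not part of the scenario):

```python
import random

rng = random.Random(0)
gaps = [rng.expovariate(2.0) for _ in range(10_000)]
print(sum(gaps) / len(gaps))  # ≈ 0.5 seconds between arrivals on average
```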
Each second we scan the waiting list:

- Try to match players in the same rating bucket if the rating delta is within `skill_delta`.
- If no suitable match is found and the player has waited longer than `max_wait` seconds, pick the closest rating regardless of bucket.

Probes record the evolving queue length, average wait time and rating delta so they can be plotted in the UI.
    def matchmaker():
        while True:
            waiting.sort(key=lambda p: p.rating)
            i = 0
            while i < len(waiting) - 1:
                p1 = waiting[i]
                partner = bucket_match(i, waiting, bucket_width, skill_delta)
                if partner is None and env.now - p1.joined >= max_wait:
                    partner = closest_opponent(i, waiting)
                if partner is not None:
                    p2 = waiting[partner]
                    # Pop the higher index first so the lower one stays valid
                    # (a forced match via closest_opponent can sit before i).
                    for idx in sorted((i, partner), reverse=True):
                        waiting.pop(idx)
                    wait_times.append((env.now - p1.joined + env.now - p2.joined) / 2)
                    deltas.append(abs(p1.rating - p2.rating))
                else:
                    i += 1
            record_probes(env.now, waiting, wait_times, deltas)
            progress(min(int(100 * env.now / sim_time), 100))
            yield env.timeout(1)

    env.process(matchmaker())
    done = env.event()
After `sim_time` seconds, resolve the results event with summary stats.
    def finish():
        yield env.timeout(sim_time)
        done.succeed(
            {
                "avg_wait": sum(wait_times) / len(wait_times) if wait_times else 0.0,
                "avg_rating_delta": sum(deltas) / len(deltas) if deltas else 0.0,
            }
        )

    env.process(finish())
    env.run(until=done)
    return done.value
Declare the Python packages needed to run in the browser sandbox.
def requirements():
    return {
        "builtin": ["micropip", "pyyaml"],
        "external": ["simpy==4.1.1"],
    }
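How these requirements are consumed depends on the host page. As a rough sketch, a Pyodide-style sandbox could install the `external` entries with `micropip` before importing the scenario (the helper below is an assumption about the host, not part of the scenario itself):

```python
# Hypothetical host-side helper, assuming a Pyodide runtime where micropip is available.
import micropip

async def install_requirements(reqs: dict) -> None:
    # "builtin" packages ship with the runtime; only "external" ones need installing.
    for spec in reqs.get("external", []):
        await micropip.install(spec)
```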
arrival_rate: 2
rating_mean: 1000
rating_std: 300
rating_bucket: 100
skill_delta: 50
max_wait: 30
sim_time: 300
seed: 42
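Putting it together, a run might load the configuration above and hand it to `simulate` (a minimal sketch, assuming the YAML is saved as `config.yaml` alongside the script):

```python
import yaml

with open("config.yaml") as fh:
    cfg = yaml.safe_load(fh)

results = simulate(cfg)
print(results)  # {'avg_wait': ..., 'avg_rating_delta': ...}
```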
| Probe | Samples | min | mean | median | max | σ |
|---|---|---|---|---|---|---|
| queue_len | 301 @ 0.00–300.00 | 0.00 | 9.98 | 10.00 | 19.00 | 2.66 |
| avg_wait | 299 @ 2.00–300.00 | 0.81 | 4.79 | 5.22 | 5.88 | 1.14 |
| avg_delta | 299 @ 2.00–300.00 | 13.77 | 25.38 | 27.53 | 30.12 | 3.85 |