Dynamic Difficulty Adjustment
Adaptive encounters that track win rate to keep players challenged.
Level: Intermediate
simulation.py
This model keeps players in the "flow channel" by tuning encounter difficulty in response to recent outcomes. A sliding window of the most recent wins and losses drives the adjustment, so players stay challenged but not overwhelmed.
```python
from tys import probe, progress


def simulate(cfg: dict):
    """Simulate a sequence of encounters with adaptive difficulty."""
    import simpy
    import random

    env = simpy.Environment()

    # Parameters controlling player skill and difficulty.
    skill = cfg["initial_skill"]                # starting player skill level
    skill_gain = cfg["skill_gain"]              # practice boost each round
    difficulty = cfg["initial_difficulty"]      # challenge level of encounters
    adjust_rate = cfg["adjust_rate"]            # how quickly difficulty changes
    target_win = cfg["target_win_rate"]         # desired win rate to hit flow
    memory = cfg["memory"]                      # number of recent outcomes to track
    frustr_thresh = cfg["frustration_thresh"]   # time threshold before frustration
    encounters = cfg["encounters"]              # total encounters simulated

    random.seed(cfg.get("seed", 1))             # deterministic runs for comparison

    history = []        # recent win/loss record
    retention = 1.0     # probability the player keeps playing
    done = env.event()

    # Each tick represents one encounter.
    def play():
        nonlocal skill, difficulty, retention
        for n in range(encounters):
            p_win = skill / (skill + difficulty)
            win = random.random() < p_win
            time_to_victory = difficulty / skill

            # Frustration rises with long battles or a loss.
            frustration = max(0, time_to_victory - frustr_thresh)
            if not win:
                frustration += 1
            retention *= max(0.0, 1 - 0.1 * frustration)

            # Track recent results and adapt difficulty.
            history.append(1 if win else 0)
            if len(history) > memory:
                history.pop(0)
            win_rate = sum(history) / len(history)
            difficulty *= 1 + adjust_rate * (win_rate - target_win)
            skill += skill_gain

            # Record metrics for later analysis.
            probe("skill", env.now, skill)
            probe("difficulty", env.now, difficulty)
            probe("win_rate", env.now, win_rate)
            probe("frustration", env.now, frustration)
            probe("retention", env.now, retention)
            progress(int(100 * (n + 1) / encounters))
            yield env.timeout(1)

        done.succeed({
            "final_skill": skill,
            "final_difficulty": difficulty,
            "retention": retention
        })

    env.process(play())
    env.run(until=done)
    return done.value


def requirements():
    return {
        "builtin": ["micropip", "pyyaml"],
        "external": ["simpy==4.1.1"],
    }
```
Default.yaml
```yaml
initial_skill: 5
skill_gain: 0.5
initial_difficulty: 5
adjust_rate: 0.3
target_win_rate: 0.7
memory: 5
frustration_thresh: 1.0
encounters: 50
seed: 42
```
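To experiment with the model outside the hosted runner, a minimal sketch like the following can work, assuming `simulation.py` and `Default.yaml` are saved in the working directory, `simpy` and `pyyaml` are installed, and the platform-provided `tys` module is replaced with no-op stubs:

```python
# Minimal local-run sketch. Assumptions: simulation.py and Default.yaml sit in the
# working directory, simpy and pyyaml are installed, and the tys module (normally
# supplied by the hosting runner) is replaced with no-op stubs.
import sys
import types

import yaml

tys_stub = types.ModuleType("tys")
tys_stub.probe = lambda name, t, value: None   # discard probed time series
tys_stub.progress = lambda pct: None           # discard progress updates
sys.modules["tys"] = tys_stub

import simulation  # must be imported after the stub is registered

with open("Default.yaml") as fh:
    cfg = yaml.safe_load(fh)

result = simulation.simulate(cfg)
print(result)  # {'final_skill': ..., 'final_difficulty': ..., 'retention': ...}
```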
Charts (Default)
Final Results (Default)
| Metric | Value |
|---|---|
| final_skill | 30.00 |
| final_difficulty | 10.15 |
| retention | 0.31 |
FAQ
- How does the game adjust difficulty?
- After each encounter the difficulty is scaled by `1 + adjust_rate * (win_rate - target_win)`, where the win rate is computed over the last `memory` outcomes. Winning more often than the target makes encounters harder; winning less often makes them easier.
- What factors contribute to player frustration?
- Long battles (time to victory above `frustration_thresh`) and losses both increase the frustration metric, which in turn reduces retention.
- How is retention calculated?
- Retention is multiplied by `max(0, 1 - 0.1 * frustration)` each round to reflect players gradually quitting over time; one round is worked through in the sketch below.
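As a concrete illustration of the formulas above, the sketch below works through one hypothetical round with the default `adjust_rate`, `target_win_rate`, and `frustration_thresh`; the skill, difficulty, and win/loss history values are invented for the example.

```python
# One hypothetical round using the default parameters; the skill, difficulty,
# and history values below are invented purely for illustration.
adjust_rate, target_win, frustr_thresh = 0.3, 0.7, 1.0
skill, difficulty, retention = 5.0, 6.0, 1.0

# Frustration and retention (suppose this round ends in a loss).
time_to_victory = difficulty / skill                      # 1.2
frustration = max(0, time_to_victory - frustr_thresh)     # 0.2 from a drawn-out fight
frustration += 1                                          # +1 because the player lost
retention *= max(0.0, 1 - 0.1 * frustration)              # 1.0 * 0.88 = 0.88

# Difficulty adjustment from the recent win/loss window.
history = [1, 1, 0, 1, 0]                                 # 3 wins in the last 5 rounds
win_rate = sum(history) / len(history)                    # 0.6, below the 0.7 target
difficulty *= 1 + adjust_rate * (win_rate - target_win)   # 6.0 * 0.97 = 5.82, easier
print(f"difficulty={difficulty:.2f}, retention={retention:.2f}")
```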