Dynamic Difficulty Adjustment

Adaptive encounters that track win rate to keep players challenged.

Level: Intermediate

gameplay, reinforcing-loop, balancing-loop, adaptive, game

  • Stocks: skill, difficulty, retention
  • Flows: practice gains, difficulty adaptation
  • Feedback Loops: skill improvement (reinforcing), challenge tuning (balancing); both are sketched below
  • Probes: skill, difficulty, win_rate, frustration, retention
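
Stripped of the SimPy scaffolding, the two loops reduce to a pair of update rules. The sketch below mirrors the variable names in simulation.py and is illustrative only:

def step(skill, difficulty, win_rate, cfg):
    # Reinforcing loop: practice raises skill every encounter.
    skill += cfg["skill_gain"]
    # Balancing loop: difficulty chases the target win rate.
    difficulty *= 1 + cfg["adjust_rate"] * (win_rate - cfg["target_win_rate"])
    return skill, difficulty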
simulation.py

Dynamic Difficulty Adjustment

This model keeps players in the "flow channel" by tuning encounter difficulty in response to recent outcomes. A running history of wins guides the adjustment so players stay challenged but not overwhelmed.


from tys import probe, progress

Simulate a sequence of encounters with adaptive difficulty.

def simulate(cfg: dict):

    import simpy
    import random
    env = simpy.Environment()

Parameters controlling player skill and difficulty.

    skill          = cfg["initial_skill"]          # starting player skill level
    skill_gain     = cfg["skill_gain"]             # practice boost each round
    difficulty     = cfg["initial_difficulty"]     # challenge level of encounters
    adjust_rate    = cfg["adjust_rate"]            # how quickly difficulty changes
    target_win     = cfg["target_win_rate"]        # desired win rate to hit flow
    memory         = cfg["memory"]                 # number of recent outcomes to track
    frustr_thresh  = cfg["frustration_thresh"]     # time threshold before frustration
    encounters     = cfg["encounters"]             # total encounters simulated
    random.seed(cfg.get("seed", 1))                # deterministic runs for comparison

    history   = []    # recent win/loss record
    retention = 1.0   # probability the player keeps playing

    done = env.event()   # completion event that carries the final metrics

Each tick represents one encounter.

    def play():
        nonlocal skill, difficulty, retention
        for n in range(encounters):
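            # With the Default.yaml values the first encounter pits skill 5
            # against difficulty 5: p_win = 0.5 and time_to_victory = 1.0.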
            p_win = skill / (skill + difficulty)
            win   = random.random() < p_win
            time_to_victory = difficulty / skill

Frustration rises with long battles or a loss.

            frustration = max(0, time_to_victory - frustr_thresh)
            if not win:
                frustration += 1
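            # e.g. a quick loss (time_to_victory <= thresh) yields
            # frustration 1, multiplying retention by 0.9 this round.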
            retention *= max(0.0, 1 - 0.1 * frustration)

Track recent results and adapt difficulty.

            history.append(1 if win else 0)
            if len(history) > memory:
                history.pop(0)
            win_rate = sum(history) / len(history)
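            # e.g. win_rate 0.9 vs target 0.7 with adjust_rate 0.3 scales
            # difficulty by 1 + 0.3 * 0.2 = 1.06; below target it shrinks.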
            difficulty *= 1 + adjust_rate * (win_rate - target_win)

            skill += skill_gain

Record metrics for later analysis.

            probe("skill", env.now, skill)
            probe("difficulty", env.now, difficulty)
            probe("win_rate", env.now, win_rate)
            probe("frustration", env.now, frustration)
            probe("retention", env.now, retention)

            progress(int(100 * (n + 1) / encounters))
            yield env.timeout(1)

        done.succeed({
            "final_skill": skill,
            "final_difficulty": difficulty,
            "retention": retention
        })

    env.process(play())
    env.run(until=done)
    return done.value


def requirements():
    return {
        "builtin": ["micropip", "pyyaml"],
        "external": ["simpy==4.1.1"],
    }
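
Outside the hosting platform the tys module is unavailable, so it must be stubbed before importing simulation.py. The snippet below is a minimal local-run sketch, assuming simulation.py and Default.yaml sit in the working directory and that simpy and pyyaml are installed; the stub signatures simply match how probe and progress are called above.

import sys
import types

import yaml

# Stub the platform's tys module so `from tys import probe, progress` works.
tys_stub = types.ModuleType("tys")
tys_stub.probe = lambda name, t, value: None   # discard samples locally
tys_stub.progress = lambda pct: None           # ignore progress updates
sys.modules["tys"] = tys_stub

from simulation import simulate

with open("Default.yaml") as f:
    cfg = yaml.safe_load(f)

print(simulate(cfg))   # e.g. {'final_skill': 30.0, 'final_difficulty': ..., 'retention': ...}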
Default.yaml
initial_skill: 5
skill_gain: 0.5
initial_difficulty: 5
adjust_rate: 0.3
target_win_rate: 0.7
memory: 5
frustration_thresh: 1.0
encounters: 50
seed: 42
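
With the local stub above in place, a quick sweep over adjust_rate shows how strongly the tuning speed affects retention; the fixed seed keeps runs comparable. This is an exploratory sketch, not part of the model:

from copy import deepcopy

with open("Default.yaml") as f:
    base = yaml.safe_load(f)

for rate in (0.1, 0.3, 0.5):
    cfg = deepcopy(base)
    cfg["adjust_rate"] = rate
    result = simulate(cfg)
    print(f"adjust_rate={rate}: retention={result['retention']:.2f}")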
Charts (Default)

Probe        Samples           Min    Mean   Median  Max    σ
skill        50 @ 0.00–49.00   5.50   17.75  17.75   30.00  7.22
difficulty   50 @ 0.00–49.00   2.57   4.82   4.79    10.15  1.57
win_rate     50 @ 0.00–49.00   0.00   0.75   0.80    1.00   0.21
frustration  50 @ 0.00–49.00   0.00   0.22   0.00    1.00   0.41
retention    50 @ 0.00–49.00   0.31   0.52   0.48    0.90   0.18
Final Results (Default)

Metric            Value
final_skill       30.00
final_difficulty  10.15
retention          0.31
FAQ
How does the game adjust difficulty?
After each encounter, difficulty is multiplied by 1 + adjust_rate * (win_rate - target_win), where win_rate is the fraction of wins in the sliding window of recent outcomes (size memory).
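For example, with the default adjust_rate of 0.3 and target of 0.7, a recent win rate of 0.4 multiplies difficulty by 1 + 0.3 * (0.4 - 0.7) = 0.91, easing the challenge by 9%.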
What factors contribute to player frustration?
Battles that run past frustration_thresh and outright losses both raise the frustration metric, which in turn erodes retention.
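For instance, with the default frustration_thresh of 1.0, an encounter lasting 1.5 time units adds 0.5 frustration, and losing it adds 1 more, for 1.5 in total.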
How is retention calculated?
Retention is multiplied by max(0, 1 - 0.1 * frustration) each round to reflect players quitting over time.
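With frustration held at 1, retention is multiplied by 0.9 each round, so ten such rounds leave roughly 0.9^10 ≈ 0.35 of players still engaged.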