First-Price vs. Vickrey Auctions

Repeated sealed-bid auctions comparing first-price and Vickrey rules with learning agents.

Level: Intermediate

game-theory · auction · learning · incentives · simpy

  • Probes: revenue, allocative_efficiency, truthful_bidding
FAQ
How do bidders adapt their bids over time?
After every round, each agent computes the payoff it would have earned with each bid multiplier and adds any positive shortfall to that multiplier's accumulated regret; multipliers with higher accumulated regret are then played with higher probability in later rounds.
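A minimal sketch of the normalization step, mirroring the `_strategy` method in `simulation.py`: accumulated positive regrets are rescaled into a probability distribution, falling back to a uniform mix when no regret is positive.

```python
def regret_to_probs(regret):
    """Regret matching: play each action in proportion to its positive regret."""
    positives = [max(r, 0.0) for r in regret]
    total = sum(positives)
    if total > 0:
        return [r / total for r in positives]
    # No positive regret accumulated yet: mix uniformly over all actions.
    return [1.0 / len(regret)] * len(regret)

print(regret_to_probs([0.5, -1.0, 1.5]))  # → [0.25, 0.0, 0.75]
print(regret_to_probs([0.0, 0.0, 0.0]))   # → uniform over three actions
```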
What determines the auction winner?
The highest bid wins; if several bids tie, the winner is chosen uniformly at random among them.
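The selection rule can be sketched in a few lines, matching the argmax-with-tie-break loop in `simulation.py`:

```python
import random

def pick_winner(bids, rng):
    """Highest bid wins; ties are broken uniformly at random."""
    max_bid = max(bids)
    winners = [i for i, b in enumerate(bids) if b == max_bid]
    return rng.choice(winners)

rng = random.Random(42)
# Bidders 1 and 2 tie at 0.9, so each should win roughly half the time.
counts = [0, 0, 0]
for _ in range(1000):
    counts[pick_winner([0.3, 0.9, 0.9], rng)] += 1
print(counts[0], counts[1] > 0 and counts[2] > 0)  # 0 True
```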
How is payment computed in the Vickrey version?
The winner pays the second-highest bid, so bidding truthfully is a dominant strategy.
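A quick numeric check of why truthful bidding is weakly dominant under the second-price rule: with the rival's bid below your valuation, shading can only lose you the item, while the payment never depends on your own bid. The helper below is purely illustrative and not part of `simulation.py`.

```python
def vickrey_payoff(value, my_bid, rival_bid):
    """Payoff in a two-bidder second-price auction (rival wins ties)."""
    if my_bid > rival_bid:
        return value - rival_bid  # winner pays the second-highest bid
    return 0.0

value, rival = 0.9, 0.6
truthful = vickrey_payoff(value, value, rival)  # win, pay the rival's 0.6
shaded = vickrey_payoff(value, 0.45, rival)     # shaded below the rival: lose
overbid = vickrey_payoff(value, 1.2, rival)     # overbidding pays the same 0.6
print(round(truthful, 2), shaded, round(overbid, 2))  # 0.3 0.0 0.3
```

Overbidding only looks harmless here because the rival's bid is below the valuation; when a rival bids between your value and your inflated bid, overbidding wins at a loss, which is why truthfulness dominates.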
simulation.py

First-price vs. Vickrey sealed-bid auctions

This toy model pits the first-price auction against its Vickrey (second-price) counterpart to illustrate incentive compatibility and the idea of revenue equivalence.

  • Each round every bidder draws a private valuation from U(0,1).
  • They submit a sealed bid based on a simple regret-matching strategy.
  • The Auctioneer awards the item and charges according to the configured rule.

Over many repetitions we track average revenue, whether the highest valuation actually wins (allocative efficiency), and how often bidders play truthfully.


from tys import probe, progress


class Agent:
    """Bidder with regret-matching over a few shading options."""

    def __init__(self, idx: int, rng):
        self.idx = idx
        self.rng = rng

        # Possible bid multipliers: truthful 1.0, mild shade 0.8, heavy shade 0.6

        self.actions = [1.0, 0.8, 0.6]
        self.regret = [0.0] * len(self.actions)

    def _strategy(self):
        """Convert positive regrets into a probability distribution."""
        positives = [max(r, 0.0) for r in self.regret]
        total = sum(positives)
        if total > 0:
            return [r / total for r in positives]
        return [1.0 / len(self.actions)] * len(self.actions)

    def choose_action(self) -> int:
        probs = self._strategy()
        r = self.rng.random()
        cumulative = 0.0
        for i, p in enumerate(probs):
            cumulative += p
            if r <= cumulative:
                return i
        return len(self.actions) - 1

    def update(self, value: float, bids: list[float], winner: int, payment: float, rule: str, chosen: int):
        chosen_payoff = value - payment if winner == self.idx else 0.0
        for i, a in enumerate(self.actions):
            alt_bids = bids[:]
            alt_bids[self.idx] = value * a
            max_bid = max(alt_bids)
            winners = [j for j, b in enumerate(alt_bids) if b == max_bid]
            w = self.rng.choice(winners)
            if rule == "first_price":
                pay = alt_bids[w]
            else:
                sorted_bids = sorted(alt_bids, reverse=True)
                pay = sorted_bids[1] if len(sorted_bids) > 1 else 0.0
            payoff = value - pay if w == self.idx else 0.0
            regret = payoff - chosen_payoff
            if regret > 0:
                self.regret[i] += regret


def simulate(cfg: dict):
    """Run repeated auctions and record summary metrics."""

    import simpy
    import random

    env = simpy.Environment()

    num_agents = int(cfg["num_agents"])
    num_rounds = int(cfg["num_rounds"])
    rule = cfg["auction_rule"]  # ``first_price`` or ``vickrey``
    seed = int(cfg.get("seed", 1))

    rng = random.Random(seed)
    agents = [Agent(i, rng) for i in range(num_agents)]

    revenue_total = 0.0
    efficient_total = 0
    done = env.event()

    # Main auction loop: draw values, gather bids, determine payment.

    def run():
        nonlocal revenue_total, efficient_total
        for t in range(num_rounds):
            vals = [rng.random() for _ in range(num_agents)]
            bids = []
            choices = []
            for ag, val in zip(agents, vals):
                idx = ag.choose_action()
                choices.append(idx)
                bids.append(val * ag.actions[idx])

            max_bid = max(bids)
            winners = [i for i, b in enumerate(bids) if b == max_bid]
            winner = rng.choice(winners)

            if rule == "first_price":
                payment = bids[winner]
            else:  # Vickrey second-price
                sorted_bids = sorted(bids, reverse=True)
                payment = sorted_bids[1] if len(sorted_bids) > 1 else 0.0

            revenue_total += payment
            if vals[winner] == max(vals):
                efficient_total += 1

            # Every agent shares the same action list, so index into agents[0].
            truthful = sum(1 for idx in choices if agents[0].actions[idx] == 1.0) / num_agents

            probe("revenue", env.now, payment)
            probe("allocative_efficiency", env.now, efficient_total / (t + 1))
            probe("truthful_bidding", env.now, truthful)

            for i, ag in enumerate(agents):
                ag.update(vals[i], bids, winner, payment, rule, choices[i])

            progress(int(100 * (t + 1) / num_rounds))
            yield env.timeout(1)

        done.succeed({
            "avg_revenue": revenue_total / num_rounds,
            "efficiency": efficient_total / num_rounds,
        })

    env.process(run())
    env.run(until=done)
    return done.value


def requirements():
    return {
        "builtin": ["micropip", "pyyaml"],
        "external": ["simpy==4.1.1"],
    }

First Price.yaml
num_agents: 4
num_rounds: 100
auction_rule: first_price
seed: 42
Charts (First Price)

revenue

[revenue chart]
Samples: 100 @ 0.00–99.00
Values: min 0.20, mean 0.59, median 0.58, max 1.00, σ 0.15

allocative_efficiency

[allocative_efficiency chart]
Samples: 100 @ 0.00–99.00
Values: min 0.71, mean 0.86, median 0.85, max 1.00, σ 0.04

truthful_bidding

[truthful_bidding chart]
Samples: 100 @ 0.00–99.00
Values: min 0.00, mean 0.02, median 0.00, max 0.75, σ 0.10
Final Results (First Price)
Metric         Value
avg_revenue    0.59
efficiency     0.85
Vickrey.yaml
num_agents: 4
num_rounds: 100
auction_rule: vickrey
seed: 42
Charts (Vickrey)

revenue

[revenue chart]
Samples: 100 @ 0.00–99.00
Values: min 0.12, mean 0.58, median 0.58, max 0.98, σ 0.22

allocative_efficiency

[allocative_efficiency chart]
Samples: 100 @ 0.00–99.00
Values: min 0.83, mean 0.91, median 0.91, max 1.00, σ 0.03

truthful_bidding

[truthful_bidding chart]
Samples: 100 @ 0.00–99.00
Values: min 0.00, mean 0.84, median 0.88, max 1.00, σ 0.20
Final Results (Vickrey)
Metric         Value
avg_revenue    0.58
efficiency     0.91
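The near-identical average revenues (0.59 vs. 0.58) line up with the theoretical benchmark: with n i.i.d. U(0,1) valuations, the revenue-equivalence theorem gives expected revenue (n − 1)/(n + 1) under both rules, i.e. 0.6 for n = 4. A quick Monte Carlo check of that benchmark (my own sanity check, not part of the simulation):

```python
import random

def expected_second_highest(n, trials=200_000, seed=0):
    """Monte Carlo estimate of E[second-highest of n U(0,1) draws],
    which is the Vickrey revenue under truthful bidding."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        vals = sorted(rng.random() for _ in range(n))
        total += vals[-2]
    return total / trials

n = 4
analytic = (n - 1) / (n + 1)         # 0.6
estimate = expected_second_highest(n)
print(analytic, round(estimate, 3))  # both close to 0.6
```

That both simulated averages land slightly below 0.6 is expected: the learning agents spend early rounds experimenting with shaded bids rather than playing the equilibrium from the start.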