Dependent Types
Why bother with all this? The honest answer is that most working programmers will never need dependent types, the same way most drivers will never need to understand engine timing. The dishonest answer is that dependent types will make you more productive. The true answer is somewhere stranger: ordinary type systems cannot express the constraints that actually matter. You can say “this is a list” but not “this is a list of exactly five elements.” You can say “this function returns an integer” but not “this function returns a positive integer smaller than its input.” Every time you write a comment explaining what a function really does, every time you add a runtime check for an invariant the compiler cannot see, every time a bug slips through because the types were not precise enough, you are paying the cost of an insufficiently expressive type system. The comment is a wish. The runtime check is a prayer. Dependent types are a contract.
Dependent types are the solution. They let types talk about values. The type Vector α 5 denotes a list of exactly five elements. The type Fin n represents a natural number provably less than n. Array bounds checking happens at compile time. Protocol state machines live in the types. The invariants you used to hope were true become things the compiler verifies.
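For a first taste, here is the second of those types in action. This is a small sketch using core Lean's `Fin`; the name `idx` is ours:

```lean
-- Fin 5: a natural number carrying a proof that it is less than 5
def idx : Fin 5 := ⟨3, by decide⟩

-- ⟨7, by decide⟩ would be rejected: there is no proof of 7 < 5
#eval idx.val
```

The bound is part of the type, so an out-of-range index is a type error, not a runtime exception.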
Per Martin-Löf spent the 1970s in Sweden developing this type theory where types could depend on values, not just on other types. The idea seems simple enough: if List can be parameterized by an element type, why not also by its length? But this small step has profound consequences. Suddenly types become a specification language. A function returning Vector α n does not merely return a list; it returns a list of exactly n elements, verified at compile time. Polymorphic type systems like those in Haskell (built on System FC, an extension of System Fω with type equality coercions) and OCaml (an extended ML core with row polymorphism and first-class modules) stop at the water’s edge here. Dependent types let you wade in, expressing invariants that would otherwise live only in documentation or runtime checks. Lean’s type system, based on the Calculus of Inductive Constructions, provides the full machinery: types that compute, proofs that are programs, and specifications precise enough to replace testing with theorem proving.
Warning
This section is dense and notation-heavy. If the mathematical symbols start to blur together, that is not a personal failing. It is the appropriate response to notation that took logicians fifty years to stabilize and still varies between textbooks. Skip ahead and return later. Or don’t. Some people read this material linearly, some spiral in from the edges, some open to random pages and see what sticks. There is no wrong way to learn type theory except to believe you should have understood it faster. The notation table below is a reference, not a prerequisite.
Notation
| Concept | Mathematical Notation | Lean Syntax | Description |
|---|---|---|---|
| Function type | $\alpha \to \beta$ | α → β | Non-dependent function from α to β |
| Dependent function | $\Pi (x : \alpha), \beta(x)$ | (x : α) → β x | Function where return type depends on input |
| For all | $\forall x : \alpha, P(x)$ | ∀ x : α, P x | Universal quantification |
| Exists | $\exists x : \alpha, P(x)$ | ∃ x : α, P x | Existential quantification |
| Lambda abstraction | $\lambda x. t$ | fun x => t or λ x => t | Anonymous function |
| Equivalence | $a \equiv b$ | N/A | Definitional equality |
| Conjunction | $P \land Q$ | P ∧ Q | Logical AND |
| Disjunction | $P \lor Q$ | P ∨ Q | Logical OR |
| Negation | $\neg P$ | ¬P | Logical NOT |
| Type universe | $\text{Type}_n$ | Type n | Universe of types at level n |
| Proposition universe | $\text{Prop}$ | Prop | Universe of propositions |
| Sort hierarchy | $\text{Sort}_n$ | Sort n | Unified universe hierarchy |
| Quotient type | $\alpha/{\sim}$ | Quotient s | Type obtained by quotienting α by relation ∼ |
| Natural numbers | $\mathbb{N}$ | Nat or ℕ | Natural numbers type |
| Integers | $\mathbb{Z}$ | Int or ℤ | Integer type |
| Sigma type | $\Sigma (x : \alpha), \beta(x)$ | Σ x : α, β x | Dependent pair type |
| Product type | $\alpha \times \beta$ | α × β | Cartesian product |
Custom Notation
The notation command lets you extend Lean’s syntax with domain-specific operators:
-- Custom notation for domain-specific syntax
notation "⟪" a ", " b "⟫" => (a, b)
#eval ⟪1, 2⟫ -- (1, 2)
-- Prefix notation
prefix:max "∼" => fun (n : Nat) => n + 1
#eval ∼5 -- 6
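Infix operators work the same way, with an explicit precedence level. A sketch with a hypothetical operator:

```lean
-- infixl declares a left-associative infix operator at precedence 65
infixl:65 " ⊕' " => fun (a b : Nat) => a + b + 1

#eval 1 ⊕' 2   -- (fun a b => a + b + 1) 1 2 = 4
```

The number after the colon controls how tightly the operator binds relative to others such as `+` (65) and `*` (70).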
Type System Overview
Lean’s type system supports definitional equality through several reduction rules. Two terms are definitionally equal when one reduces to the other, and the type checker treats them as interchangeable without proof.
Beta reduction ($\beta$) is function application. When you apply $(\lambda x. t)$ to an argument $s$, the result is $t$ with $s$ substituted for $x$. This is the computational heart of the lambda calculus.
Delta reduction ($\delta$) unfolds definitions. When you define def f x := x + 1, any occurrence of f 3 can be replaced with 3 + 1. The type checker sees through your naming conventions.
Iota reduction ($\iota$) handles pattern matching on inductive types. When a recursor meets a constructor, the match fires and computation proceeds. This is how Nat.rec applied to Nat.succ n knows to take the successor branch.
Zeta reduction ($\zeta$) substitutes let-bound variables. The expression $\text{let } x := s \text{ in } t$ reduces to $t[s/x]$. Local definitions are just convenient names.
example : (fun x => x + 1) 2 = 3 := rfl -- β-reduction
def myConst := 42
example : myConst = 42 := rfl -- δ-reduction
example : (match 0 with | 0 => true | _ + 1 => false) = true := rfl -- ι-reduction
example : (let x := 5; x + x) = 10 := rfl -- ζ-reduction
Functions
Function types are a built-in feature of Lean. Functions map values from one type (the domain) to another type (the codomain). In Lean’s core language, all function types are dependent. Non-dependent function types are just a special case where the parameter does not appear in the codomain.
Non-Dependent vs Dependent Functions
Non-dependent functions have a fixed return type that does not vary based on the input value. These are the only kinds of functions available in languages like Haskell (not :: Bool -> Bool) and OCaml (let not : bool -> bool). In Lean, def not : Bool → Bool uses the arrow notation to indicate non-dependence. The return type is always Bool, regardless of which boolean you pass in.
Dependent functions have a return type that can depend on the actual value of the input. The type is written as $\Pi (x : \alpha), \beta(x)$ or (x : α) → β x in Lean syntax, where the parameter name x appears in the return type β x. This feature has no equivalent in Haskell or OCaml.
Key insight: Dependent functions can return values from completely different types based on their input! This is sometimes called a dependent product (or Pi type) because it corresponds to an indexed product of sets.
Tip
The name “dependent product” may seem backwards since we are building functions, not pairs. The terminology comes from set theory: a function $f : \Pi (x : A), B(x)$ assigns to each $x \in A$ an element of $B(x)$, which is precisely an element of the Cartesian product $\prod_{x \in A} B(x)$. The “product” is over all possible inputs.
Why Dependent Types Matter
Consider this function that cannot be typed in Haskell or OCaml:
-- Return type depends on runtime value - impossible in Haskell/OCaml
def dependentTwo (b : Bool) : if b then Unit × Unit else String :=
  match b with
  | true => ((), ())   -- Returns a pair when b is true
  | false => "two"     -- Returns a string when b is false
The return type literally changes based on the runtime value. Call dependentTwo true and you get a Unit × Unit. Call dependentTwo false and you get a String. This should feel slightly transgressive. A function that returns different types? In most languages, this is either impossible or requires erasing all type information and hoping for the best. Here, the type system tracks it precisely. The function is total, the types are known, the compiler is satisfied.
-- Dependent pattern matching: types change based on scrutinee
-- Return type depends on the boolean value
def boolToType (b : Bool) : Type :=
  if b then Nat else String

def boolExample (b : Bool) : boolToType b :=
  match b with
  | true => (42 : Nat)   -- returns Nat
  | false => "hello"     -- returns String

-- Pattern matching on Fin must handle all cases
def finToString : Fin 3 → String
  | 0 => "zero"
  | 1 => "one"
  | 2 => "two"
This enables encoding invariants directly in types. For example, Vector α n encodes the length n in the type itself, making it impossible to write functions that violate length constraints. Your off-by-one errors become compile-time errors. The compiler catches at build time what you would otherwise discover in production, at 3am, with the on-call phone ringing.
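A minimal sketch of such a length-indexed type (Mathlib ships a full-featured version; the `Vec` here is a hypothetical stand-in):

```lean
-- A length-indexed list: the length is part of the type
inductive Vec (α : Type) : Nat → Type where
  | nil : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- head only accepts vectors of length n + 1, so it is total:
-- there is no nil case to forget, and no runtime emptiness check
def Vec.head : Vec α (n + 1) → α
  | cons a _ => a
```

Because `Vec.head` demands the index `n + 1`, calling it on an empty vector is a type error at the call site, not a crash.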
Typing Rules for Functions
Two rules govern how types flow through function definitions and applications. The first is application: when you apply a function to an argument, the return type can mention the argument itself. If $f : \Pi (x : \alpha), \beta(x)$ and you apply it to some $a : \alpha$, the result $f \, a$ has type $\beta(a)$. The type of the output depends on the value of the input. This is the essence of dependent typing. A function Vector.head : (n : Nat) → Vector α (n + 1) → α applied to 3 yields a function expecting a Vector α 4. The 3 propagates into the type.
The second rule is abstraction: to construct a function, you assume a variable of the input type and produce a term of the output type. If $t : \beta$ under the assumption $x : \alpha$, then $\lambda x : \alpha. \, t$ has type $\Pi (x : \alpha), \beta$. The abstraction binds the variable and packages the assumption into the function type. When $\beta$ does not mention $x$, this collapses to the familiar non-dependent arrow $\alpha \to \beta$.
Beyond formation and elimination, functions satisfy eta-reduction: wrapping a function in a lambda that immediately applies it produces the same function. Formally, $\lambda x. \, f \, x \equiv f$ when $x$ does not appear free in $f$. This goes beyond simplification; it expresses extensionality: a function is determined by what it does to its arguments, not by how it is written.
Examples: Dependent and Non-Dependent Functions
-- Non-dependent function: return type is fixed
-- Similar to Haskell: not :: Bool -> Bool
-- or OCaml: let not : bool -> bool
def double (n : Nat) : Nat := n * 2
-- Another non-dependent example (like Haskell's const)
def constantFive (_ : Nat) : Nat := 5
-- Dependent function: return type depends on the input value
-- This has NO equivalent in Haskell or OCaml!
def largestFin (n : Nat) : Fin (n + 1) := ⟨n, Nat.lt_succ_self n⟩
-- A more dramatic dependent function example
-- The return TYPE changes based on the input VALUE
def two (b : Bool) : if b then Unit × Unit else String :=
  match b with
  | true => ((), ())   -- Returns a pair of units
  | false => "two"     -- Returns a string
-- This function returns different TYPES based on input:
-- two true : Unit × Unit
-- two false : String
-- Dependent pairs: the second type depends on the first value
def dependentPair : (n : Nat) × Fin n :=
  ⟨5, 3⟩ -- Second component must be less than first
-- Compare with non-dependent versions:
-- Haskell: (Int, Int) -- no constraint between components
-- OCaml: int * int -- no constraint between components
-- Lean enforces the invariant in the type!
-- Function composition (non-dependent)
def comp {α β γ : Type} (f : β → γ) (g : α → β) : α → γ :=
  fun x => f (g x)
-- This is like Haskell's (.); OCaml has no built-in composition
-- operator (its @@ is function application, not composition)
-- But Lean can also do DEPENDENT composition!
-- Dependent function composition
def depComp {α : Type} {β : α → Type} {γ : (x : α) → β x → Type}
    (f : (x : α) → (y : β x) → γ x y)
    (g : (x : α) → β x) :
    (x : α) → γ x (g x) :=
  fun x => f x (g x)
-- Multiple parameters via currying (named after Haskell B. Curry)
def curriedAdd : Nat → Nat → Nat := fun x y => x + y
-- Function extensionality: pointwise-equal functions are equal
theorem funext_example (f g : Nat → Nat) (h : ∀ x, f x = g x) : f = g :=
  funext h
Currying
Currying is the fundamental technique of transforming functions with multiple parameters into sequences of single-parameter functions. Named after logician Haskell Curry (yes, the programming language is also named after him), this approach is automatic in Lean. All multi-parameter functions are internally represented as curried functions. This enables partial application, where supplying fewer arguments than a function expects creates a new function waiting for the remaining arguments.
Note
The technique was actually discovered by Moses Schönfinkel in 1924, six years before Curry’s work. Academic naming conventions are not always fair. Schönfinkel’s life ended in obscurity in a Moscow hospital; Curry became a household name among programmers who have never heard of either.
The power of currying lies in its composability. You can create specialized functions by partially applying general ones, building up complex behavior from simple building blocks. While languages like Haskell make currying explicit, Lean handles it transparently, allowing you to work with multi-parameter functions naturally while still benefiting from the flexibility of curried representations.
/-!
### Currying
Currying is the transformation of functions with multiple parameters into
a sequence of functions, each taking a single parameter. In Lean, all
multi-parameter functions are automatically curried.
-/
-- Multi-parameter function (automatically curried)
def add3 (x y z : Nat) : Nat := x + y + z
-- Equivalent to nested lambdas
def add3' : Nat → Nat → Nat → Nat :=
  fun x => fun y => fun z => x + y + z
-- Partial application creates new functions
def add10 : Nat → Nat → Nat := add3 10
def add10And5 : Nat → Nat := add3 10 5
example : add10 3 7 = 20 := rfl
example : add10And5 2 = 17 := rfl
-- Function.curry: Convert uncurried to curried
def uncurriedAdd : Nat × Nat → Nat := fun p => p.1 + p.2
def curriedVer := Function.curry uncurriedAdd
example : curriedVer 3 4 = 7 := rfl
-- Function.uncurry: Convert curried to uncurried
def addPair := Function.uncurry Nat.add
example : addPair (3, 4) = 7 := rfl
-- Currying with dependent types
def depCurry {α : Type} {β : α → Type} {γ : (a : α) → β a → Type}
    (f : (p : (a : α) × β a) → γ p.1 p.2) :
    (a : α) → (b : β a) → γ a b :=
  fun a b => f ⟨a, b⟩
Function Extensionality
Function extensionality is a fundamental principle stating that two functions are equal if and only if they produce equal outputs for every input. This principle, while intuitively obvious, is not derivable in bare dependent type theory. Lean does not postulate it as a separate axiom, however: funext is a theorem, proved from the quotient type primitives (ultimately from Quot.sound). Without extensionality, we could only prove functions equal if they were definitionally identical.
The funext tactic in Lean implements this principle, allowing us to prove function equality by considering their behavior pointwise. This is essential for mathematical reasoning, where we often want to show that two different definitions actually describe the same function. The principle extends to dependent functions as well, where the output type can vary with the input.
/-!
### Extensionality
Function extensionality states that two functions are equal if they
produce equal outputs for all inputs. In Lean this is not an extra
axiom: it is proved from the quotient type primitives.
-/
-- funext: Basic function extensionality
theorem my_funext {α β : Type} (f g : α → β) :
    (∀ x, f x = g x) → f = g :=
  funext
-- Example: Proving function equality
def double' (n : Nat) : Nat := 2 * n
def double'' (n : Nat) : Nat := n + n
theorem doubles_equal : double' = double'' := by
  funext n
  unfold double' double''
  omega
-- Dependent function extensionality
theorem dep_funext {α : Type} {β : α → Type}
    (f g : (x : α) → β x) :
    (∀ x, f x = g x) → f = g :=
  funext
-- Eta reduction: λ x, f x = f
theorem eta_reduction (f : Nat → Nat) : (fun x => f x) = f :=
  funext fun _ => rfl
-- Functions equal by behavior, not syntax
def addOne : Nat → Nat := fun x => x + 1
def succFunc : Nat → Nat := Nat.succ
theorem addOne_eq_succ : addOne = succFunc := by
  funext x
  simp [addOne, succFunc]
Totality and Termination
Important
All functions in Lean must be total, meaning they must be defined for every possible input of the correct type. This requirement ensures logical consistency: a function that could fail or loop forever would make Lean’s logic unsound. Partiality is the enemy. The function that hangs on edge cases, the recursion that never terminates, the match that forgot a constructor: these are not just bugs but logical contradictions waiting to invalidate your theorems.
To achieve totality while allowing recursion, Lean uses well-founded recursion based on decreasing measures.
For structural recursion on inductive types, Lean automatically proves termination by observing that recursive calls operate on structurally smaller arguments. For more complex recursion patterns, you can specify custom termination measures using termination_by and provide proofs that these measures decrease with decreasing_by. This approach allows expressing any computable function while maintaining logical soundness. If you have ever written while (true) and hoped for the best, this is the universe collecting on that debt.
/-!
### Totality and Termination
All functions in Lean must be total (defined for all inputs) and
terminating. Lean uses well-founded recursion to ensure termination.
-/
-- Total function: defined for all natural numbers
def safeDivide (n : Nat) (m : Nat) : Nat :=
  if m = 0 then 0 else n / m -- Returns 0 for division by zero

-- Structural recursion (automatically proven terminating)
def fact : Nat → Nat
  | 0 => 1
  | n + 1 => (n + 1) * fact n

-- Well-founded recursion with explicit termination proof
def gcd (a b : Nat) : Nat :=
  if h : b = 0 then
    a
  else
    have : a % b < b := Nat.mod_lt _ (Nat.pos_of_ne_zero h)
    gcd b (a % b)
termination_by b

-- Mutual recursion with termination
mutual
  def isEvenMut : Nat → Bool
    | 0 => true
    | n + 1 => isOddMut n
  def isOddMut : Nat → Bool
    | 0 => false
    | n + 1 => isEvenMut n
end

-- Custom lexicographic termination measure
def ackermann : Nat → Nat → Nat
  | 0, n => n + 1
  | m + 1, 0 => ackermann m 1
  | m + 1, n + 1 => ackermann m (ackermann (m + 1) n)
termination_by m n => (m, n)
For non-structural recursion, you must provide a termination measure that decreases on each recursive call. The gcd and ackermann definitions above are the classic examples: for gcd the second argument decreases on every call, and for ackermann the pair (m, n) decreases in the lexicographic order.
Function API Reference
Lean’s standard library provides a rich collection of function combinators in the Function namespace. These combinators, familiar from functional programming, enable point-free style and function composition. The composition operator ∘ allows building complex functions from simpler ones, while combinators like const, flip, and id provide basic building blocks for function manipulation.
Function composition in Lean satisfies the expected mathematical properties: it is associative, and the identity function acts as a neutral element. These properties are not just theorems but computational facts. They hold definitionally, meaning Lean can verify them by pure computation without requiring proof steps.
/-!
### Function API Reference
Lean provides standard function combinators in the Function namespace.
-/
-- Function.comp: Function composition
def composed := Function.comp (· + 10) (· * 2)
example : composed 5 = 20 := rfl -- (5 * 2) + 10 = 20
-- Using ∘ notation for composition
open Function in
def composed' := (· + 10) ∘ (· * 2)
example : composed' 5 = 20 := rfl
-- Function.const: Constant function
def alwaysFive := Function.const Nat 5
example : alwaysFive 100 = 5 := rfl
example : alwaysFive 999 = 5 := rfl
-- id: Identity function
example : id 42 = 42 := rfl
example : (id ∘ id) = (id : Nat → Nat) := rfl
-- flip: Swap arguments
def subtract (a b : Nat) : Int := a - b
def flippedSubtract := flip subtract
example : subtract 10 3 = 7 := rfl
example : flippedSubtract 3 10 = 7 := rfl
-- Function composition laws
open Function in
theorem comp_assoc {α β γ δ : Type} (f : γ → δ) (g : β → γ) (h : α → β) :
    (f ∘ g) ∘ h = f ∘ (g ∘ h) := rfl

open Function in
theorem id_comp {α β : Type} (f : α → β) :
    id ∘ f = f := rfl

open Function in
theorem comp_id {α β : Type} (f : α → β) :
    f ∘ id = f := rfl
Function Properties
Mathematical properties of functions (injectivity, surjectivity, and bijectivity) play crucial roles in both mathematics and computer science. An injective function maps distinct inputs to distinct outputs, a surjective function reaches every possible output, and a bijective function is both injective and surjective, establishing a one-to-one correspondence between domain and codomain.
These properties connect to the concept of inverses. A function with a nonempty domain has a left inverse if and only if it’s injective, a right inverse if and only if it’s surjective (this direction uses choice), and a two-sided inverse if and only if it’s bijective. Lean provides definitions and theorems for reasoning about these properties, enabling formal verification of mathematical and algorithmic correctness.
/-!
### Function Properties
Important mathematical properties of functions.
-/
-- Injective: Different inputs give different outputs
def isInjective {α β : Type} (f : α → β) : Prop :=
  ∀ x y, f x = f y → x = y

theorem double_injective : isInjective (fun n : Nat => 2 * n) := by
  intro x y h
  -- If 2*x = 2*y, then x = y
  have h' : 2 * x = 2 * y := h
  omega

-- Using Lean's built-in Function.Injective
example : Function.Injective (fun n : Nat => 2 * n) := by
  intro x y h
  have h' : 2 * x = 2 * y := h
  omega

-- Surjective: Every output is reached by some input
def isSurjective {α β : Type} (f : α → β) : Prop :=
  ∀ b, ∃ a, f a = b

-- Not surjective: doubling doesn't produce odd numbers
theorem double_not_surjective :
    ¬Function.Surjective (fun n : Nat => 2 * n) := by
  intro h
  obtain ⟨n, hn⟩ := h 1
  -- 2*n is always even, but 1 is odd
  have h' : 2 * n = 1 := hn
  omega

-- Bijective: Both injective and surjective
def isBijective {α β : Type} (f : α → β) : Prop :=
  Function.Injective f ∧ Function.Surjective f
-- Left inverse: g ∘ f = id
theorem has_left_inverse {α β : Type} (f : α → β) (g : β → α) :
    Function.LeftInverse g f ↔ ∀ a, g (f a) = a := by
  rfl

-- Right inverse: f ∘ g = id
theorem has_right_inverse {α β : Type} (f : α → β) (g : β → α) :
    Function.RightInverse g f ↔ ∀ b, f (g b) = b := by
  rfl

-- Example: Successor and predecessor
def succInt : Int → Int := (· + 1)
def predInt : Int → Int := (· - 1)

theorem succ_pred_inverse :
    Function.LeftInverse predInt succInt ∧
    Function.RightInverse predInt succInt := by
  constructor <;> intro x <;> simp [succInt, predInt]

-- Inverse functions
structure IsInverse {α β : Type} (f : α → β) (g : β → α) : Prop where
  left : Function.LeftInverse g f
  right : Function.RightInverse g f

theorem inverse_bijective {α β : Type} (f : α → β) (g : β → α)
    (h : IsInverse f g) : isBijective f := by
  constructor
  · -- Injective
    intro x y hxy
    have : g (f x) = g (f y) := by rw [hxy]
    rw [h.left x, h.left y] at this
    exact this
  · -- Surjective
    intro b
    use g b
    exact h.right b
Implicit and Auto Parameters
While not part of core type theory, Lean’s function types record how each parameter is supplied; implicit and explicit function types are definitionally equal. Implicit parameters are inferred from context, strict implicit parameters are only synthesized once a later explicit argument is given, instance-implicit parameters (in square brackets) are filled by type class resolution, and auto params are filled by running a tactic.
-- Implicit parameters (inferred from usage)
def implicitId {α : Type} (x : α) : α := x
-- Strict implicit (must be inferrable at application site)
def strictImplicit ⦃α : Type⦄ (x : α) : α := x
-- Instance-implicit parameters (filled by type class resolution)
def defaultOf {α : Type} [Inhabited α] : α := default
-- Optional parameters with default values
def withDefault (n : Nat := 10) : Nat := n * 2
example : implicitId 5 = 5 := rfl
example : withDefault = 20 := rfl
example : withDefault 3 = 6 := rfl
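A genuine auto param uses := by tac, which runs a tactic at each call site to fill in the argument. A sketch with a hypothetical function:

```lean
-- Auto param: the proof obligation is discharged by omega at each call site
def subtractOne (n : Nat) (h : n > 0 := by omega) : Nat := n - 1

#eval subtractOne 5   -- omega silently proves 5 > 0
-- subtractOne 0 would fail to elaborate: omega cannot prove 0 > 0
```

This is the mechanism behind proof-carrying arguments that callers rarely have to write by hand.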
Propositions
Propositions (Prop) are types representing logical statements. They feature proof irrelevance: any two proofs of the same proposition are definitionally equal. This means the specific proof does not matter, only that one exists. We covered this in the Type Theory article.
The Curry-Howard Correspondence Revisited
The Curry-Howard correspondence we encountered in earlier articles now reveals its full depth. With dependent types, the correspondence extends beyond simple propositional logic. Universal quantification becomes dependent function types. Existential quantification becomes dependent pair types (sigma types). The slogan “propositions are types, proofs are programs” turns out to be a precise mathematical equivalence.
| Logic | Type Theory | Lean Syntax |
|---|---|---|
| $\forall x : \alpha, P(x)$ | Dependent function $\Pi (x : \alpha), P(x)$ | ∀ x : α, P x or (x : α) → P x |
| $\exists x : \alpha, P(x)$ | Dependent pair $\Sigma (x : \alpha), P(x)$ | ∃ x : α, P x or Σ x : α, P x |
| Induction principle | Recursor | Nat.rec, List.rec, etc. |
| Proof by cases | Pattern matching | match ... with |
The dependent versions unify what simpler type systems treat separately. A proof of “for all natural numbers n, P(n) holds” is literally a function that takes any n : Nat and returns a proof of P n. A proof of “there exists a natural number n such that P(n)” is literally a pair: the witness n together with a proof of P n. This unification is not philosophical hand-waving; it is the operational semantics of Lean.
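Two small examples make the correspondence concrete:

```lean
-- A proof of ∀ is literally a function: give it n, get back a proof
example : ∀ n : Nat, n ≤ n + 1 := fun n => Nat.le_succ n

-- A proof of ∃ is literally a pair: a witness together with a proof
example : ∃ n : Nat, n * n = 9 := ⟨3, rfl⟩
```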
-- Basic logical connectives
theorem and_intro (P Q : Prop) (hp : P) (hq : Q) : P ∧ Q := ⟨hp, hq⟩
theorem or_elim (P Q R : Prop) (h : P ∨ Q) (hp : P → R) (hq : Q → R) : R :=
h.elim hp hq
theorem iff_intro (P Q : Prop) (hpq : P → Q) (hqp : Q → P) : P ↔ Q :=
⟨hpq, hqp⟩
-- Proof irrelevance demonstration
theorem proof_irrel_demo (P : Prop) (p1 p2 : P) : p1 = p2 := rfl
-- Classical logic (via choice)
open Classical in
theorem excluded_middle (P : Prop) : P ∨ ¬P := Classical.em P
Key Properties of Propositions
- Run-time irrelevance: Propositions are erased during compilation
- Impredicativity: Propositions can quantify over types from any universe
- Restricted elimination: With limited exceptions (subsingletons), propositions cannot eliminate to non-proposition types
- Extensionality: The propext axiom enables proving logically equivalent propositions are equal
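The last property in this list is a one-liner in practice:

```lean
-- propext: logically equivalent propositions are equal as propositions
example (P Q : Prop) (h : P ↔ Q) : P = Q := propext h
```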
Decidability
Decidable propositions bridge logic and computation, allowing propositions to be computed. A proposition $P$ is decidable when we can algorithmically produce either a refutation or a proof of it. Informally:
$$\text{Decidable}(P) \triangleq \neg P \oplus P$$
Unlike the disjunction $P \lor \neg P$, which lives in Prop, Decidable is a data-carrying type with constructors isFalse and isTrue, so programs can branch on it.
This connects to constructive mathematics where decidability provides computational content:
-- Custom decidable instance
instance decidableEven (n : Nat) : Decidable (n % 2 = 0) :=
  if h : n % 2 = 0 then isTrue h else isFalse h
-- Using decidability for computation
def isEven (n : Nat) : Bool := decide (n % 2 = 0)
example : isEven 4 = true := rfl
example : isEven 5 = false := rfl
-- Subsingletons: types with at most one element
theorem subsingleton_prop (P : Prop) : Subsingleton P :=
  ⟨fun _ _ => rfl⟩
Universes
This section is optional. Universe levels rarely matter in day-to-day code. Feel free to skip and return if you encounter universe-related compiler errors.
Lean organizes types into a hierarchy of universes to prevent paradoxes, as we discussed in the Type Theory article. Every type belongs to exactly one universe level. The Sort operator constructs universes:
Tip
Universe levels rarely matter in practice. Lean infers them automatically in most cases, and you can write thousands of lines of code without thinking about them. They become relevant when you are doing highly polymorphic library design or when the compiler complains about universe inconsistencies. If that happens, the error messages will guide you. Until then, treat them as plumbing that handles itself.
- $\text{Sort } 0 = \text{Prop}$ (propositions)
- $\text{Sort } (u + 1) = \text{Type } u$ (data types)
- Every universe is an element of the next larger universe: $\text{Sort } u : \text{Sort } (u + 1)$
The universe of a Pi type $\Pi (x : \alpha), \beta$ depends on the universes of $\alpha$ and $\beta$:
- If $\beta : \text{Prop}$, then $\Pi (x : \alpha), \beta : \text{Prop}$ (impredicative)
- If $\beta : \text{Sort } v$, then $\Pi (x : \alpha), \beta : \text{Sort } (\max(u, v))$ where $\alpha : \text{Sort } u$
universe u v w
-- Universe polymorphism (explicit universe level)
def polyIdentity (α : Sort u) (a : α) : α := a
-- Universe level expressions
def maxLevel (α : Type u) (β : Type v) : Type (max u v) := α × β
-- Type 0 contains types, Type 1 contains Type 0, etc.
example : Type 0 = Sort 1 := rfl
example : Prop = Sort 0 := rfl
-- Impredicative Prop: functions into Prop stay in Prop
def propPredicate (P : Type u → Prop) : Prop := ∀ α, P α
-- Predicative Type: function types take maximum universe
def typePredicate (P : Type u → Type v) : Type (max (u+1) v) :=
  ∀ α, P α
Universe Polymorphism and Lifting
Definitions can take universe parameters, and universe levels support expressions with addition and maximum operations. Lean’s universes are non-cumulative: a type in Type u is not automatically in Type (u + 1).
-- PLift: lifts any type by exactly one level
def liftedFalse : Type := PLift False
-- ULift: lifts types by any amount
def liftedNat : Type u := ULift.{u} Nat
-- Lifting and unlifting values
def liftExample : ULift.{1} Nat := ⟨42⟩
example : liftExample.down = 42 := rfl
-- Non-cumulativity: types exist at exactly one level
def needsLifting (α : Type 1) : Type 2 := ULift.{2} α
Universe polymorphism lets definitions work at any universe level. The identity function, for instance, should work on values, on types, and on types of types. By parameterizing over universe levels, a single definition serves all purposes:
-- Universe polymorphism
-- Types live in universes: Type 0, Type 1, Type 2, ...
-- Type : Type would lead to paradoxes (Russell, Girard)
-- Universe-polymorphic identity function
universe u
def id' {α : Type u} (x : α) : α := x
-- Works at any universe level
example : id' 42 = 42 := rfl -- Type 0
example : id' Nat = Nat := rfl -- Type 1
example : id' (Type 0) = Type 0 := rfl -- Type 2
Two operators bridge universe gaps:
- PLift: Lifts any type (including propositions) by exactly one level
- ULift: Lifts non-proposition types by any number of levels
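A quick sketch of the difference between the two:

```lean
-- PLift: up exactly one level; works on propositions too
example : PLift (1 = 1) := ⟨rfl⟩

-- ULift: up an arbitrary number of levels (here to Type 3)
example : ULift.{3} Nat := ⟨7⟩
```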
Inductive Types
Inductive types are Lean’s primary mechanism for introducing new types. Every type is either inductive or built from universes, functions, and inductive types.
Warning
The recursor that Lean generates for each inductive type is the induction principle in computational form. If you find yourself writing a proof by induction and wondering where the induction hypothesis comes from, the answer is: the recursor. Understanding recursors deeply is optional for using Lean but essential for understanding why Lean works.
Each inductive type has:
- A single type constructor (may take parameters)
- Any number of constructors introducing new values
- A derived recursor representing an induction principle
The general form of an inductive type declaration: $$\begin{aligned} &\textbf{inductive } C\ (\vec{\alpha} : \vec{U}) : \Pi (\vec{\beta} : \vec{V}),\ s \textbf{ where} \\ &\quad \mid c_1 : \Pi (\vec{x_1} : \vec{T_1}),\ C\ \vec{\alpha}\ \vec{t_1} \\ &\quad \mid c_2 : \Pi (\vec{x_2} : \vec{T_2}),\ C\ \vec{\alpha}\ \vec{t_2} \\ &\quad \vdots \end{aligned}$$
Where $\vec{\alpha}$ are parameters (fixed) and $\vec{\beta}$ are indices (can vary).
-- Basic inductive with multiple constructors
inductive Color : Type where
| red | green | blue
deriving Repr, DecidableEq
-- Parameterized inductive type
inductive Result (ε α : Type) : Type where
| ok : α → Result ε α
| error : ε → Result ε α
-- Anonymous constructor syntax for single-constructor types
structure Point where
x : Float
y : Float
def origin : Point := ⟨0.0, 0.0⟩
Indexed Families
The distinction between parameters and indices is fundamental. Parameters are fixed across the entire definition: if you declare inductive Foo (α : Type), then every constructor must produce a Foo α with that same α. Indices can vary: each constructor can target a different index value. In Vector α n, the type α is a parameter (all elements have the same type) but n is an index (constructors produce vectors of different lengths). The nil constructor produces Vector α 0. The cons constructor takes a Vector α n and produces Vector α (n + 1). The index changes; the parameter does not.
This distinction affects how Lean generates recursors and what pattern matching can learn. When you match on a Vector α n, Lean learns the specific value of the index n in each branch. Matching on nil tells you n = 0. Matching on cons tells you n = m + 1 for some m. This index refinement is what makes length-indexed vectors useful: the type system tracks information that flows from pattern matching.
For vectors (length-indexed lists), the signature is: $$\text{Vector} : \text{Type} \to \mathbb{N} \to \text{Type}$$
The recursor for indexed families captures the dependency: $$\text{Vector.rec} : \Pi (P : \Pi n,\ \text{Vector }\alpha\ n \to \text{Sort } u),\ \ldots$$
-- Vector: length-indexed lists
inductive Vector (α : Type) : Nat → Type where
| nil : Vector α 0
| cons : ∀ {n}, α → Vector α n → Vector α (n + 1)
def vectorHead {α : Type} {n : Nat} : Vector α (n + 1) → α
| Vector.cons a _ => a
-- Even/odd indexed list (from manual)
inductive EvenOddList (α : Type) : Bool → Type where
| nil : EvenOddList α true
| cons : ∀ {isEven}, α → EvenOddList α isEven → EvenOddList α (!isEven)
def exampleEvenList : EvenOddList Nat true :=
EvenOddList.cons 1 (EvenOddList.cons 2 EvenOddList.nil)
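Index refinement also lets a function demand that two vectors have the same length. A sketch using the Vector type above: matching .nil on the first argument refines n to 0, so the second argument can only be .nil, and no mismatched cases need to be written.

```lean
def Vector.zip {α β : Type} {n : Nat} :
    Vector α n → Vector β n → Vector (α × β) n
  | .nil, .nil => .nil
  | .cons a as, .cons b bs => .cons (a, b) (as.zip bs)
-- Zipping a Vector α 2 with a Vector β 3 is a type error, not a runtime check
```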
The Fin n type represents natural numbers strictly less than n. It is perhaps the simplest useful indexed type: the index constrains which values can exist. A Fin 3 can only be 0, 1, or 2. Attempting to construct a Fin 3 with value 3 is a type error, not a runtime error.
-- Fin n: natural numbers less than n
-- Fin 3 has exactly three values: 0, 1, 2
example : Fin 3 := 0
example : Fin 3 := 1
example : Fin 3 := 2
-- example : Fin 3 := 3 -- Error: 3 is not less than 3
-- Fin carries a proof that the value is in bounds
def two : Fin 5 := ⟨2, by omega⟩
-- Safe array indexing using Fin
def safeIndex {α : Type} (arr : Array α) (i : Fin arr.size) : α :=
arr[i] -- No bounds check needed at runtime
Vectors generalize lists by tracking their length in the type. A Vec α n is a list of exactly n elements of type α. The head function can only be called on non-empty vectors because its type requires Vec α (n + 1). No runtime check needed; the type system enforces the precondition.
-- Vector: lists with length in the type
-- A vector is a list with its length tracked in the type
inductive Vec (α : Type) : Nat → Type where
| nil : Vec α 0
| cons {n : Nat} : α → Vec α n → Vec α (n + 1)
-- The length is known at compile time
def exampleVec : Vec Nat 3 := .cons 1 (.cons 2 (.cons 3 .nil))
-- Head is safe: can only call on non-empty vectors
def Vec.head {α : Type} {n : Nat} : Vec α (n + 1) → α
| .cons x _ => x
-- Tail preserves the length relationship
def Vec.tail {α : Type} {n : Nat} : Vec α (n + 1) → Vec α n
| .cons _ xs => xs
-- Map over a vector (preserves length)
def Vec.map {α β : Type} {n : Nat} (f : α → β) : Vec α n → Vec β n
| .nil => .nil
| .cons x xs => .cons (f x) (xs.map f)
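Appending two vectors shows the type doing arithmetic: the result length is the sum of the input lengths. A sketch; the result is written Vec α (n + m) rather than (m + n) so that each case checks definitionally, since Nat.add recurses on its second argument:

```lean
def Vec.append {α : Type} {m n : Nat} : Vec α m → Vec α n → Vec α (n + m)
  | .nil, ys => ys                           -- n + 0 reduces to n
  | .cons x xs, ys => .cons x (xs.append ys) -- n + (k + 1) reduces to (n + k) + 1

-- The compiler computes 3 + 3 = 6 in the type
def sixVec : Vec Nat 6 := exampleVec.append exampleVec
```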
Mutual and Nested Inductive Types
Multiple inductive types can be defined simultaneously when they reference each other. Nested inductive types are defined recursively through other type constructors.
mutual
inductive Tree (α : Type) : Type where
| leaf : α → Tree α
| node : Forest α → Tree α
inductive Forest (α : Type) : Type where
| nil : Forest α
| cons : Tree α → Forest α → Forest α
end
-- Nested inductive: recursive through other type constructors
inductive NestedTree (α : Type) : Type where
| leaf : α → NestedTree α
| node : List (NestedTree α) → NestedTree α
-- Recursors are automatically generated for pattern matching
def treeSize {α : Type} : Tree α → Nat
| Tree.leaf _ => 1
| Tree.node forest => forestSize forest
where forestSize : Forest α → Nat
| Forest.nil => 0
| Forest.cons t rest => treeSize t + forestSize rest
Structures
Structures are specialized single-constructor inductive types with no indices. They provide automatic projection functions, named-field syntax, update syntax, and inheritance:
structure Person where
name : String
age : Nat
email : Option String := none
deriving Repr
-- Inheritance
structure Student extends Person where
studentId : Nat
gpa : Float
-- Field access and update syntax
def alice : Person := { name := "Alice", age := 25 }
def olderAlice := { alice with age := 26 }
example : alice.name = "Alice" := rfl
example : olderAlice.age = 26 := rfl
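Inheritance in action: a Student literal supplies the parent's fields alongside its own, and parent fields remain directly accessible. A small sketch (bob is an illustrative value):

```lean
def bob : Student := { name := "Bob", age := 20, studentId := 1234, gpa := 3.8 }
example : bob.name = "Bob" := rfl       -- parent field, accessed directly
example : bob.toPerson.age = 20 := rfl  -- or through the generated projection
```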
Sigma Types and Subtypes
Sigma types (dependent pairs) package a value with data that depends on it. The notation Σ x : α, β x describes pairs where the second component’s type depends on the first component’s value. This is the dependent version of the product type α × β.
-- Sigma types: dependent pairs
-- A dependent pair where the second type depends on the first value
-- Σ (n : Nat), Fin n means "a natural number n paired with a Fin n"
def dependentPair : Σ n : Nat, Fin n := ⟨5, 3⟩
-- The second component's type depends on the first
example : dependentPair.fst = 5 := rfl
example : dependentPair.snd < dependentPair.fst := by decide
-- Contrast with non-dependent product
def regularPair : Nat × Nat := (5, 3) -- both components are Nat
Subtypes refine existing types with predicates. The type { x : α // P x } contains values of type α that satisfy predicate P. Each element bundles a value with a proof that it satisfies the constraint. This is how you express “positive integers” or “sorted lists” at the type level.
-- Subtypes: values with proofs
-- { x : Nat // x > 0 } is the type of positive naturals
def posNat : { n : Nat // n > 0 } := ⟨5, by omega⟩
-- Access the value and proof separately
example : posNat.val = 5 := rfl
-- The proof component is a real term, usable wherever 5 > 0 is needed
example : posNat.val > 0 := posNat.property
-- Functions can require positive inputs
def safeDivide (a : Nat) (b : { n : Nat // n > 0 }) : Nat :=
a / b.val
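At the call site the proof obligation is explicit; passing zero is not a runtime error but an unprovable goal:

```lean
example : safeDivide 10 ⟨2, by omega⟩ = 5 := rfl
-- safeDivide 10 ⟨0, by omega⟩  -- rejected: omega cannot prove 0 > 0
```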
Equality as a Type
The equality type a = b is itself a dependent type: it depends on the values a and b. The only constructor is rfl : a = a, which proves that any value equals itself. Proofs of equality can be used to substitute equal values, and equality satisfies the expected properties of symmetry and transitivity.
-- Equality as a dependent type
-- The equality type: a = b is a type that is inhabited iff a equals b
-- rfl : a = a is the only constructor
example : 2 + 2 = 4 := rfl
-- Proofs of equality can be used to substitute
theorem subst_example (a b : Nat) (h : a = b) (P : Nat → Prop) (pa : P a) : P b :=
h ▸ pa -- transport pa along h
-- Equality is symmetric and transitive
theorem eq_symm (a b : Nat) (h : a = b) : b = a := h.symm
theorem eq_trans (a b c : Nat) (h1 : a = b) (h2 : b = c) : a = c := h1.trans h2
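One more staple is congruence, which maps an equality through a function. Together with substitution, symmetry, and transitivity, it gives full equational rewriting:

```lean
-- congrArg: apply a function to both sides of an equality
example (f : Nat → Nat) (a b : Nat) (h : a = b) : f a = f b :=
  congrArg f h
-- Rewriting under an operation is just congruence with that operation
example (a b : Nat) (h : a = b) : a + 1 = b + 1 :=
  congrArg (· + 1) h
```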
Quotient Types
Quotient types create new types by identifying elements via equivalence relations. Given a type $\alpha$ and an equivalence relation $\sim$ on $\alpha$, the quotient $\alpha/\sim$ is a type where $a = b$ in $\alpha/\sim$ whenever $a \sim b$. Elements related by the relation become equal in the quotient type. Equality is respected universally, and nothing in Lean’s logic can observe differences between equal terms.
Note
Mathematicians write $\mathbb{Z} = (\mathbb{N} \times \mathbb{N})/\!\sim$ and software engineers write
type Int = Quotient (Nat × Nat) equiv. Same idea, different notation. The integer $-3$ is not any particular pair of naturals but the equivalence class of all pairs $(a, b)$ where $a + 3 = b$: so $(0, 3)$, $(1, 4)$, $(2, 5)$, and infinitely many others. Two fields, one concept, a century of mutual incomprehension that turns out to be largely notational.
For example, the integers can be constructed as $\mathbb{Z} = (\mathbb{N} \times \mathbb{N})/\sim$ where $(a,b) \sim (c,d)$ iff $a + d = b + c$.
-- Simple modulo equivalence relation
def ModRel (n : Nat) : Nat → Nat → Prop :=
fun a b => a % n = b % n
-- Prove it's an equivalence relation
theorem ModRel.refl (n : Nat) : ∀ x, ModRel n x x :=
fun _ => rfl
theorem ModRel.symm (n : Nat) : ∀ x y, ModRel n x y → ModRel n y x :=
fun _ _ h => h.symm
theorem ModRel.trans (n : Nat) : ∀ x y z, ModRel n x y → ModRel n y z → ModRel n x z :=
fun _ _ _ hxy hyz => Eq.trans hxy hyz
-- Create setoid instance
instance ModSetoid (n : Nat) : Setoid Nat where
r := ModRel n
iseqv := {
refl := ModRel.refl n
symm := @ModRel.symm n
trans := @ModRel.trans n
}
-- Define the quotient type (integers modulo n)
def ZMod (n : Nat) : Type := Quotient (ModSetoid n)
Working with Quotients
Operations on quotients must respect the equivalence relation. The Quotient.lift functions ensure operations are well-defined, while Quotient.sound asserts equality of related elements.
The quotient axioms provide:
- Quotient.mk: $\alpha \to \alpha/{\sim}$ (constructor)
- Quotient.lift: If $f : \alpha \to \beta$ respects $\sim$, then $f$ lifts to $\alpha/{\sim} \to \beta$
- Quotient.sound: If $a \sim b$, then $[a] = [b]$ in $\alpha/{\sim}$
- Quotient.exact: If $[a] = [b]$ in $\alpha/{\sim}$, then $a \sim b$
namespace ZMod
-- Constructor respecting equivalence
def mk (n : Nat) (a : Nat) : ZMod n :=
Quotient.mk (ModSetoid n) a
-- Addition operation via lifting
def add {n : Nat} [NeZero n] : ZMod n → ZMod n → ZMod n :=
Quotient.lift₂
(fun a b => mk n ((a + b) % n))
(fun a₁ a₂ b₁ b₂ h₁ h₂ => by
apply Quotient.sound
simp only [ModSetoid] at h₁ h₂ ⊢
unfold ModRel at h₁ h₂ ⊢
rw [Nat.add_mod, h₁, h₂, ← Nat.add_mod]
rfl)
-- Quotient.sound: related elements are equal
theorem mk_eq_of_rel {n : Nat} (a b : Nat) (h : ModRel n a b) :
mk n a = mk n b :=
Quotient.sound h
-- Quotient induction principle
theorem ind_on {n : Nat} {P : ZMod n → Prop} (q : ZMod n)
(h : ∀ a, P (mk n a)) : P q :=
Quotient.ind h q
end ZMod
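With the quotient in place, modular facts become genuine equalities: 14 and 2 denote the same element of ZMod 12. A small check using mk_eq_of_rel, where the relation holds by computation:

```lean
-- Clock arithmetic: 14 ≡ 2 (mod 12), so the equivalence classes are equal
example : ZMod.mk 12 14 = ZMod.mk 12 2 :=
  ZMod.mk_eq_of_rel 14 2 rfl  -- ModRel 12 14 2 reduces to 2 = 2
```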
Combined Examples
The following examples combine dependent functions, indexed families, and proof terms. Each demonstrates how types can enforce invariants that would be invisible to simpler type systems:
-- Dependent pairs (Sigma types)
def DependentPair := Σ n : Nat, Fin n
def examplePair : DependentPair := ⟨3, 2⟩
-- Type families and dependent pattern matching
def typeFamily : Nat → Type
| 0 => Unit
| 1 => Bool
| _ => Nat
def familyValue : (n : Nat) → typeFamily n
| 0 => ()
| 1 => true
| n@(_ + 2) => n * 2
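Each branch inhabits a different type, which we can check directly:

```lean
example : familyValue 0 = () := rfl    -- typeFamily 0 is Unit
example : familyValue 1 = true := rfl  -- typeFamily 1 is Bool
example : familyValue 4 = 8 := rfl     -- typeFamily 4 is Nat, so 4 * 2
```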
-- Structural recursion: each call is on a strictly smaller argument
def factorial : Nat → Nat
| 0 => 1
| n + 1 => (n + 1) * factorial n
-- Recursion that skips a level (n + 2 to n) is still accepted as structural
def div2 : Nat → Nat
| 0 => 0
| 1 => 0
| n + 2 => 1 + div2 n
example : factorial 5 = 120 := rfl
example : div2 10 = 5 := rfl
The machinery presented here forms the foundation of everything that follows. Dependent types are why Lean can serve simultaneously as a programming language and a proof assistant. When you write a type signature like Vector α n → Vector α (n + 1), you are making a mathematical claim that the compiler will verify. Specifications that the machine enforces, invariants that cannot be violated, programs that are correct by construction.
Type-Indexed State Machines
State machines appear everywhere in software: network protocols, UI workflows, resource management, authentication flows. The traditional approach represents state as a runtime value and scatters checks throughout the code. “Is the connection open? Is the user logged in? Has the transaction started?” Each check is a potential bug: forget one and you have undefined behavior, check the wrong condition and you have a security hole.
Type-indexed state machines take a different approach. Instead of tracking state at runtime and checking it manually, we encode state in the type itself. The type checker then verifies that operations happen in the correct order. Invalid sequences become type errors, caught at compile time rather than runtime.
Consider a vending machine. The naive implementation tracks balance as a runtime value, checking at each operation whether funds suffice. Bugs lurk: what if someone calls vend before inserting coins? What if returnChange is called twice? These are not type errors in conventional languages. They are runtime failures waiting to happen.
inductive Product where
| HoneyComb
| SalmonJerky
| BerryMix
| GrubBar
| AcornCrunch
deriving Repr, DecidableEq
def Product.price : Product → Nat
| .HoneyComb => 150
| .SalmonJerky => 200
| .BerryMix => 100
| .GrubBar => 125
| .AcornCrunch => 175
def Product.name : Product → String
| .HoneyComb => "Honey Comb"
| .SalmonJerky => "Salmon Jerky"
| .BerryMix => "Berry Mix"
| .GrubBar => "Grub Bar"
| .AcornCrunch => "Acorn Crunch"
The Machine type is indexed by cents inserted. This index exists only in the type system. At runtime, Machine 0 and Machine 200 are identical unit values with no data. The number is a phantom type parameter that the compiler tracks but that costs nothing at runtime.
structure Machine (cents : Nat) where mk ::
def insertCoin (coin : Nat) {n : Nat} (_m : Machine n) : Machine (n + coin) := ⟨⟩
def insertDollar {n : Nat} (m : Machine n) : Machine (n + 100) := insertCoin 100 m
def vend (p : Product) {n : Nat} (_m : Machine n) (_h : n ≥ p.price) :
Product × Machine (n - p.price) := (p, ⟨⟩)
def returnChange {n : Nat} (_m : Machine n) : Nat × Machine 0 := (n, ⟨⟩)
def empty : Machine 0 := ⟨⟩
Study the type signatures carefully. insertCoin takes a Machine n and returns a Machine (n + coin). The balance increases by exactly the inserted amount. vend requires a proof \(n \geq p.price\) and returns a Machine (n - p.price). You cannot call vend without providing this proof, and the compiler will reject any attempt to vend with insufficient funds. returnChange resets to Machine 0 regardless of the input balance, modeling the fact that all remaining money is returned.
The key insight is that each operation transforms the type index in a way that reflects its effect on the state. The compiler tracks these transformations and ensures they compose correctly. If you try to write code that vends without inserting money, the type checker will demand a proof of \(0 \geq 100\) (or whatever the price is), which is unprovable because it is false.
def exampleTransaction : Product × Nat := Id.run do
let m := empty
let m := insertDollar m
let m := insertDollar m
let (snack, m) := vend .BerryMix m (by native_decide)
let (change, _) := returnChange m
return (snack, change)
The example shows a complete transaction. We start with an empty machine, insert two dollars (200 cents), vend a berry mix for 100 cents, and return the remaining 100 cents as change. At each step, the type system knows exactly how much money is in the machine. The by native_decide proof discharge works because \(200 \geq 100\) is decidably true.
This pattern scales to real systems. A file handle can be indexed by whether it is open or closed: read requires Handle Open and returns Handle Open, while close takes Handle Open and returns Handle Closed. Calling read on a closed handle becomes a type error. A network socket can track connection state: you cannot send on an unconnected socket because the types forbid it.
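The file-handle version of the pattern can be sketched in the same phantom-index style as Machine (the names HState, openFile, read, and close are illustrative, not a real IO API):

```lean
inductive HState where
  | opened
  | closed

-- Like Machine, a Handle carries no runtime data; the state is a phantom index
structure Handle (s : HState) where mk ::

def openFile (_path : String) : Handle .opened := ⟨⟩
def read (h : Handle .opened) : String × Handle .opened := ("", h)
def close (_h : Handle .opened) : Handle .closed := ⟨⟩

-- def bad := read (close (openFile "log.txt"))
-- rejected: close returns Handle .closed, but read requires Handle .opened
```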
Authentication systems benefit particularly. A session token can be indexed by authentication level: Session Guest, Session User, Session Admin. Functions that require admin privileges take Session Admin and the compiler ensures you cannot access admin functionality without proper authentication. Privilege escalation bugs become impossible because the type system enforces the security policy.
The tradeoff is complexity. Type-indexed state machines require careful API design and more sophisticated type signatures. The proof obligations can become burdensome for complex protocols. But for systems where correctness matters (financial transactions, security boundaries, safety-critical code), the guarantee that invalid states are unrepresentable is worth the investment.
Constraint Satisfaction: N-Queens
The N-Queens puzzle asks: place N queens on an \(N \times N\) chessboard so that no two attack each other. Queens attack along rows, columns, and diagonals. The naive approach generates placements and filters invalid ones. The dependent type approach makes invalid placements unrepresentable.
A placement is a list of column positions, one per row. Two queens attack if they share a column or diagonal:
abbrev Placement := List Nat
def attacks (r1 c1 r2 c2 : Nat) : Bool :=
c1 == c2 || (if r1 ≥ r2 then r1 - r2 else r2 - r1) == (if c1 ≥ c2 then c1 - c2 else c2 - c1)
def isSafeAux (newRow col : Nat) : Placement → Nat → Bool
| [], _ => true
| qc :: rest, row => !attacks newRow col row qc && isSafeAux newRow col rest (row + 1)
def isSafe (col : Nat) (p : Placement) : Bool := isSafeAux p.length col p 0
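A few sanity checks on the geometry. The new queen sits at row p.length, so against the placement [0] it is compared with the queen at row 0, column 0:

```lean
example : isSafe 2 [0] = true := rfl   -- two columns over, one row up: no attack
example : isSafe 1 [0] = false := rfl  -- one over, one up: diagonal attack
example : isSafe 0 [0] = false := rfl  -- same column
```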
A valid placement has the right length, all columns in bounds, and no attacking pairs:
def Valid (n : Nat) (p : Placement) : Prop :=
p.length = n ∧ p.all (· < n) ∧
∀ i j, i < j → j < p.length → !attacks i p[i]! j p[j]!
structure Board (n : Nat) where
placement : Placement
valid : Valid n placement
The Board n type bundles a placement with its validity proof. You cannot construct a Board 8 with queens that attack each other; the proof obligation cannot be discharged. The constraint is not checked at runtime but enforced at compile time.
theorem board_length {n : Nat} (b : Board n) : b.placement.length = n := b.valid.1
The theorem board_length extracts the length invariant from a valid board. The proof is trivial projection because the invariant is baked into the type. This is the dependent types payoff: properties that would require defensive runtime checks become facts the type system guarantees.
Most software is written fast, tested hopefully, and debugged frantically. Dependent types offer a different mode: slower to write, harder to learn, guarantees that survive contact with production. Whether the tradeoff makes sense depends on how much a bug costs. For most code, the answer is “not much.” For some code, the answer is “careers” or “lives.” Know which kind you are writing.
From Theory to Practice
You now understand the type-theoretic machinery. The next article turns to strategy: how to approach proofs systematically, read goal states, choose tactics, and develop the intuition for what technique applies where. Less “what does this mean” and more “how do I make this red squiggle go away.”