Effects
“A monad is just a monoid in the category of endofunctors, what’s the problem?” This infamous quip became the ur-meme of functional programming, spawning a thousand blog posts explaining monads via burritos, boxes, and space suits. The tragedy is that the concept is not hard. It just got wrapped in mystique before anyone explained it clearly.
Here is what matters: programs have effects. They might fail, consult state, perform IO, branch nondeterministically, or launch the missiles. In languages without effect tracking, any function call might do any of these things. You call getUsername() and hope it only reads from a database rather than initiating thermonuclear war. The type signature offers no guarantees. The question is how to represent effects in a way that lets us reason about composition and know, from the types alone, what a function might do. Monads are one answer. They capture a pattern for sequencing operations where each step produces both a result and some context. The bind operation chains these operations, threading the context automatically. Do notation makes the sequencing readable. The interface is minimal, the applications broad.
But monads are not the only answer, and treating them as sacred obscures the deeper point. Algebraic effect systems, linear types, graded monads, and effect handlers all attack the same problem from different angles. What they share is the conviction that effects should be visible in types and that composition should be governed by laws. The specific mechanism matters less than the principle: make the structure explicit so that humans and machines can reason about it.
Lean uses monads because they work well and the ecosystem inherited them from functional programming research of the 1990s. They are a good tool. But the goal is to capture effects algebraically, whatever form that takes. When you understand monads, you understand one particularly elegant solution to sequencing effectful computations. You also understand a template for how programming abstractions should work: a minimal interface, a set of laws, and the discipline to respect both.
The Option Monad
The simplest monad handles computations that might fail. You already understand this pattern: look something up, and if it exists, do something with it. If not, propagate the absence. Every programmer has written this code a hundred times. The monad just gives it a name and a uniform interface.
def safeDivide (x y : Nat) : Option Nat :=
if y == 0 then none else some (x / y)
def safeHead {α : Type} (xs : List α) : Option α :=
match xs with
| [] => none
| x :: _ => some x
#eval safeDivide 10 2 -- some 5
#eval safeDivide 10 0 -- none
#eval safeHead [1, 2] -- some 1
#eval safeHead ([] : List Nat) -- none
Chaining Without Monads
Without the abstraction, chaining fallible operations produces the pyramid of doom: nested conditionals, each handling failure explicitly, the actual logic buried under boilerplate. This is not hypothetical. This is what error handling looks like in languages without monadic structure. It is also what early JavaScript looked like before Promises, which are, give or take some law violations, monads by another name.
def computation (xs : List Nat) : Option Nat :=
match safeHead xs with
| none => none
| some x =>
match safeDivide 100 x with
| none => none
| some y => some (y + 1)
#eval computation [5, 2, 3] -- some 21
#eval computation [0, 2, 3] -- none (division by zero)
#eval computation [] -- none (empty list)
The bind Operation
The bind operation (written >>=) is the heart of the monad. It takes a value in context and a function that produces a new value in context, and chains them together. For Option, this means: if the first computation succeeded, apply the function. If it failed, propagate the failure. The pattern generalizes far beyond failure, but failure is the clearest example.
def computation' (xs : List Nat) : Option Nat :=
safeHead xs >>= fun x =>
safeDivide 100 x >>= fun y =>
some (y + 1)
#eval computation' [5, 2, 3] -- some 21
#eval computation' [0, 2, 3] -- none
#eval computation' [] -- none
Do Notation
Do notation is syntactic sugar that makes monadic code look imperative. The left arrow ← desugars to bind, and the semicolon sequences operations. This is not a concession to programmers who cannot handle functional style. It is recognition that sequential composition is how humans think about processes, and fighting that serves no purpose. The abstraction remains while the syntax yields to ergonomics.
def computationDo (xs : List Nat) : Option Nat := do
let x ← safeHead xs
let y ← safeDivide 100 x
return y + 1
#eval computationDo [5, 2, 3] -- some 21
#eval computationDo [0, 2, 3] -- none
#eval computationDo [] -- none
def validateInput (name : String) (age : Nat) : Option (String × Nat) := do
if name.isEmpty then none
if age == 0 then none
return (name, age)
#eval validateInput "Alice" 30 -- some ("Alice", 30)
#eval validateInput "" 30 -- none
#eval validateInput "Bob" 0 -- none
In validateInput, the bare none on its own line short-circuits the computation. Within a do block for Option, writing none is equivalent to early return with failure. The remaining lines are not executed, and the whole expression evaluates to none.
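One way to see why the bare none short-circuits is to spell out the desugaring by hand. Here is a sketch (validateInput' is a name invented for this illustration): each guard is an Option Unit whose failure aborts the chain.

```lean
-- Hypothetical explicit version of validateInput: each guard binds Unit.
def validateInput' (name : String) (age : Nat) : Option (String × Nat) :=
  (if name.isEmpty then none else pure ()) >>= fun _ =>
  (if age == 0 then none else pure ()) >>= fun _ =>
  pure (name, age)

#eval validateInput' "" 30 -- none
#eval validateInput' "Alice" 30 -- some ("Alice", 30)
```

The missing else branches in the do version are filled in with pure () by the elaborator, which is exactly what this expansion makes visible.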
Do Notation Desugaring
Do notation is syntactic sugar for bind chains. The compiler transforms your imperative-looking code into applications of >>= and pure. Understanding the desugaring helps when types do not match or when you want to optimize.
The key distinction is between two forms of let:
let x ← e performs a monadic bind: it unwraps the value from the monadic context. If e : Option Nat, then x : Nat. If e evaluates to none, the computation short-circuits immediately.
let x := e is a pure let binding: no unwrapping occurs. The value is used exactly as-is. If e : Option Nat, then x : Option Nat.
The arrow ← is doing real work: it reaches into the monad and extracts the value, handling failure automatically. The walrus := just names a value.
A monadic bind extracts the value and passes it to the continuation:
do
  let x ← e1    ⟹    e1 >>= fun x =>
  es                  do es
A pure let binding has no monadic involvement:
do
  let x := e    ⟹    let x := e
  es                  do es
An action without binding discards the result:
do
  e1            ⟹    e1 >>= fun () =>
  es                  do es
Pattern matching with a fallback handles failure:
do
  let some x ← e1    ⟹    e1 >>= fun
    | fallback               | some x => do es
  es                         | _ => fallback
The return keyword is just pure:
return e ⟹ pure e
The ← operator can appear as a prefix within expressions. Each occurrence is hoisted to a fresh binding, processed left-to-right, inside-to-outside:
do                       do
  f (← e1) (← e2)  ⟹      let x ← e1
  es                       let y ← e2
                           f x y
                           es
This is not the same as f e1 e2. Consider the difference:
-- If e1 : Option Nat and e2 : Option Nat:
f e1 e2 -- f receives two Option Nat values
f (← e1) (← e2) -- f receives two Nat values (unwrapped)
Use ← when you want to extract the value from a monadic context within an expression. The arrow does the unwrapping. Without it, you pass the wrapped value.
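As a small illustration of nested arrows (addHeads is a name invented here), two lookups can be combined in a single expression, reusing safeHead from above:

```lean
def addHeads (xs ys : List Nat) : Option Nat := do
  return (← safeHead xs) + (← safeHead ys)

#eval addHeads [1, 2] [10] -- some 11
#eval addHeads [] [10]     -- none
```

Each ← is hoisted to its own bind, so either empty list fails the whole computation before the addition happens.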
Effects like early return, mutable state, and loops with break/continue transform the entire do block rather than desugaring locally, similar to monad transformers.
Note
Semicolons can replace newlines in do blocks:
do let x ← e1; let y ← e2; pure (x + y). This is rarely used since multiline format is “more readable.” Fifty years of programming language research and we still cannot agree on what makes syntax objectively good. Perhaps because syntax is more fashion and culture than science.
def withBind (xs : List Nat) : Option Nat :=
safeHead xs >>= fun x =>
safeDivide 100 x >>= fun y =>
pure (y + 1)
def withDoNotation (xs : List Nat) : Option Nat := do
let x ← safeHead xs
let y ← safeDivide 100 x
return y + 1
#eval withBind [5] -- some 21
#eval withDoNotation [5] -- some 21
def mixedBindings : Option Nat := do
let x ← some 10 -- monadic bind
let y := x + 5 -- pure let
let z ← some (y * 2) -- monadic bind
return z
#eval mixedBindings -- some 30
Mutable Variables in Do
The let mut syntax introduces mutable bindings that desugar to StateT, a monad transformer that adds mutable state to any monad. Assignment with := modifies the state. The compiler threads the state automatically, transforming imperative-looking code into pure functional operations. You do not need to understand StateT to use let mut because the desugaring is automatic.
def imperativeSum (xs : List Nat) : Nat := Id.run do
let mut total := 0
for x in xs do
total := total + x
return total
def functionalSum (xs : List Nat) : Nat :=
xs.foldl (· + ·) 0
#eval imperativeSum [1, 2, 3, 4, 5] -- 15
#eval functionalSum [1, 2, 3, 4, 5] -- 15
def countValid (xs : List Nat) : IO Nat := do
let mut count := 0
for x in xs do
if x > 0 then
count := count + 1
IO.println s!"Valid: {x}"
return count
When should you use Id.run do versus plain do?
do alone works when you are already inside a monad like IO or Option. The do block produces a monadic value.
Id.run do is needed when you want to use imperative syntax (let mut, for loops) but return a pure value. The Id monad is the “identity” monad: it adds no effects, just provides the machinery for state threading.
In imperativeSum, the return type is Nat, not IO Nat. Without Id.run, there would be no monad to thread the mutable state through. The Id monad provides exactly that scaffolding while adding nothing else. For IO operations, you work directly in the IO monad and the mutations interleave with side effects.
The Except Monad
Option tells you that something failed but not why. Except carries the reason. This is the difference between a function returning null and a function throwing an exception with a message. The monadic structure is identical, only the context changes. This uniformity is the point. Learn the pattern once, apply it to failure, to errors, to state, to nondeterminism, to parsing, to probability distributions. The shape is always the same.
inductive ValidationError where
| emptyName
| invalidAge (age : Nat)
| missingField (field : String)
deriving Repr
def validateName (name : String) : Except ValidationError String :=
if name.isEmpty then .error .emptyName
else .ok name
def validateAge (age : Nat) : Except ValidationError Nat :=
if age == 0 || age > 150 then .error (.invalidAge age)
else .ok age
def validatePerson (name : String) (age : Nat) : Except ValidationError (String × Nat) := do
let validName ← validateName name
let validAge ← validateAge age
return (validName, validAge)
#eval validatePerson "Alice" 30 -- Except.ok ("Alice", 30)
#eval validatePerson "" 30 -- Except.error ValidationError.emptyName
#eval validatePerson "Bob" 200 -- Except.error (ValidationError.invalidAge 200)
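Except also supports recovery, not just propagation. A sketch, using the standard tryCatch from the MonadExcept interface (withFallback is a name invented here): retry with a default name on emptyName, re-throw anything else.

```lean
-- Hypothetical recovery policy built on validatePerson from above.
def withFallback (name : String) (age : Nat) :
    Except ValidationError (String × Nat) :=
  tryCatch (validatePerson name age) fun
    | .emptyName => validatePerson "anonymous" age
    | e => throw e

#eval withFallback "" 30 -- Except.ok ("anonymous", 30)
#eval withFallback "Bob" 200 -- Except.error (ValidationError.invalidAge 200)
```

Because the error type is an inductive, the handler can pattern match on the reason, something Option's bare none can never offer.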
Combining Effects: Transformer Ordering
Real programs often need multiple effects at once: error handling and logging, state and failure. Monad transformers let you combine effects by stacking them. But the order of the stack matters: different orderings give different failure semantics.
Here is the minimal demonstration:
-- Two ways to combine State and Except - the order matters!
-- State outside Except: on error, state is LOST
abbrev Rollback := StateT Nat (Except Unit)
-- Except outside State: on error, state is PRESERVED
abbrev Audit := ExceptT Unit (StateM Nat)
def countThenFailRollback : Rollback Unit := do
modify (· + 1) -- count = 1
modify (· + 1) -- count = 2
throw () -- error!
modify (· + 1) -- never reached
def countThenFailAudit : Audit Unit := do
modify (· + 1) -- count = 1
modify (· + 1) -- count = 2
throw () -- error!
modify (· + 1) -- never reached
-- Rollback: error discards the state
#eval StateT.run countThenFailRollback 0
-- Except.error () ← count is gone!
-- Audit: error preserves the state
#eval StateT.run (ExceptT.run countThenFailAudit) 0
-- (Except.error (), 2) ← count = 2 preserved
With StateT on the outside (Rollback), an error discards the accumulated state. With ExceptT on the outside (Audit), the state persists even after failure. Same operations, different semantics.
To understand why, think about what each transformer does when you “run” it:
StateT.run takes a computation and an initial state and returns (result, finalState).
ExceptT.run takes a computation and returns Except Error Result.
The outer transformer determines what you get back. If ExceptT is outer, running yields (Except Error Result) × State: the state sits outside the error, so it is returned whether or not the computation failed. If StateT is outer, running yields Except Error (Result × State): the state lives inside the success branch, so on error it is never returned.
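These shapes can be checked directly, since the abbreviations unfold definitionally. A sketch: each rfl asks the kernel to compare the types up to unfolding StateT, ExceptT, StateM, and Id.

```lean
example : Rollback Unit = (Nat → Except Unit (Unit × Nat)) := rfl
example : Audit Unit = (Nat → (Except Unit Unit) × Nat) := rfl
```

In Rollback the state pair is wrapped inside Except; in Audit the Except value is one component of a pair that always comes back.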
ATM Example
Consider an ATM withdrawal, a pipeline of fallible operations that must be logged for compliance. Check the balance. Verify the daily limit. Dispense cash. Update the account. Each step can fail, and each step should be recorded. ATMs are where functional programming meets the brutal reality of mechanical cash dispensers.
inductive ATMError where
| insufficientFunds (requested available : Nat)
| dailyLimitExceeded (requested limit : Nat)
| dispenserJam (dispensedBeforeJam : Nat)
| cardRetained
deriving Repr
def ATMError.describe : ATMError → String
| .insufficientFunds req avail =>
s!"Insufficient funds: requested €{req}, available €{avail}"
| .dailyLimitExceeded req limit =>
s!"Daily limit exceeded: requested €{req}, limit €{limit}"
| .dispenserJam dispensed =>
s!"Dispenser jam after dispensing €{dispensed}"
| .cardRetained => "Card retained by machine"
structure AuditEntry where
timestamp : Nat
message : String
deriving Repr
structure AuditLog where
entries : List AuditEntry := []
nextTimestamp : Nat := 0
deriving Repr
def AuditLog.add (log : AuditLog) (msg : String) : AuditLog :=
{ entries := log.entries ++ [⟨log.nextTimestamp, msg⟩]
nextTimestamp := log.nextTimestamp + 1 }
def AuditLog.show (log : AuditLog) : List String :=
log.entries.map fun e => s!"[{e.timestamp}] {e.message}"
structure Account where
holder : String
balance : Nat
dailyWithdrawn : Nat := 0
dailyLimit : Nat := 500
deriving Repr
The withdrawal amount uses the subtype PosNat to ensure it is positive. You cannot withdraw zero euros (pointless) or negative euros (the bank frowns upon this):
def PosNat := { n : Nat // n > 0 }
def PosNat.mk? (n : Nat) : Option PosNat :=
if h : n > 0 then some ⟨n, h⟩ else none
instance : Repr PosNat where
reprPrec p _ := repr p.val
Two Transformer Stacks
We define two stacks with different failure semantics:
abbrev RollbackATM := StateT AuditLog (Except ATMError)
abbrev AuditATM := ExceptT ATMError (StateM AuditLog)
The operations are identical in both stacks. Here is the audit version:
def logA (msg : String) : AuditATM Unit :=
modify (·.add msg)
def checkBalanceA (acct : Account) (amount : PosNat) : AuditATM Unit := do
logA s!"Balance check: €{acct.balance} available"
if amount.val > acct.balance then
logA s!"DENIED: Insufficient funds for €{amount.val}"
throw (.insufficientFunds amount.val acct.balance)
def checkLimitA (acct : Account) (amount : PosNat) : AuditATM Unit := do
let remaining := acct.dailyLimit - acct.dailyWithdrawn
logA s!"Daily limit check: €{remaining} remaining of €{acct.dailyLimit}"
if amount.val > remaining then
logA s!"DENIED: Would exceed daily limit"
throw (.dailyLimitExceeded amount.val acct.dailyLimit)
def dispenseCashA (amount : PosNat) : AuditATM Nat := do
logA s!"Dispensing €{amount.val}..."
if amount.val > 200 then
let dispensed := 100
logA s!"ERROR: Dispenser jam after €{dispensed} dispensed"
throw (.dispenserJam dispensed)
logA s!"Cash dispensed: €{amount.val}"
pure amount.val
def updateBalanceA (acct : Account) (amount : Nat) : AuditATM Account := do
let newBalance := acct.balance - amount
logA s!"Balance updated: €{acct.balance} → €{newBalance}"
pure { acct with balance := newBalance, dailyWithdrawn := acct.dailyWithdrawn + amount }
The complete withdrawal combines all steps:
def withdrawAudit (acct : Account) (amount : PosNat) : AuditATM Account := do
logA s!"=== Withdrawal started: {acct.holder} ==="
logA s!"Requested amount: €{amount.val}"
checkBalanceA acct amount
checkLimitA acct amount
let dispensed ← dispenseCashA amount
let newAcct ← updateBalanceA acct dispensed
logA s!"=== Withdrawal complete ==="
pure newAcct
Partial Failure
Consider what happens when the dispenser jams after partially dispensing cash. Alice requests €300. The machine gives her €100, then the dispenser jams.
def runAudit (acct : Account) (amount : PosNat)
    : (Except ATMError Account) × AuditLog :=
  StateT.run (ExceptT.run (withdrawAudit acct amount)) {}
def runRollback (acct : Account) (amount : PosNat)
: Except ATMError (Account × AuditLog) :=
StateT.run (withdrawRollback acct amount) {}
With rollback semantics (RollbackATM), the audit log is lost. The bank’s records show nothing happened. But Alice has €100 in her hand, and there is no record of what occurred.
With audit semantics (AuditATM), the log is preserved:
[0] === Withdrawal started: Alice ===
[1] Requested amount: €300
[2] Balance check: €1000 available
[3] Daily limit check: €500 remaining of €500
[4] Dispensing €300...
[5] ERROR: Dispenser jam after €100 dispensed
Now compliance knows exactly what happened: Alice got €100, the machine jammed, and manual reconciliation is needed.
def demonstrateDifference : IO Unit := do
IO.println "=== Rollback Semantics (StateT outside Except) ==="
IO.println "On error, the audit log is LOST\n"
let amount : PosNat := ⟨300, by omega⟩
match runRollback alice amount with
| .ok (acct, log) =>
IO.println s!"Success! New balance: €{acct.balance}"
IO.println "Audit log:"
for entry in log.show do IO.println s!" {entry}"
| .error e =>
IO.println s!"FAILED: {e.describe}"
IO.println "Audit log: <LOST - we only have the error>"
IO.println "Compliance cannot determine what happened.\n"
IO.println "\n=== Audit Semantics (ExceptT outside StateM) ==="
IO.println "On error, the audit log is PRESERVED\n"
let (result, log) := runAudit alice amount
match result with
| .ok acct =>
IO.println s!"Success! New balance: €{acct.balance}"
| .error e =>
IO.println s!"FAILED: {e.describe}"
IO.println "Audit log (preserved!):"
for entry in log.show do IO.println s!" {entry}"
IO.println "\nCompliance can see exactly what happened."
Tip
Run from the repository:
lake exe atm
This is why banks use audit semantics for ATM transactions. Financial regulations require knowing what happened, including partial failures. The transformer ordering is a design decision with legal implications. Get it wrong and auditors will have questions. Get it right and the code is its own documentation.
The State Monad
The state monad threads mutable state through a pure computation. You get the ergonomics of mutation, the ability to read and write a value as you go, without actually mutating anything. Each computation takes a state and returns a new state alongside its result. The threading is automatic, hidden behind the monadic interface. This is not a trick. It is a different way of thinking about state: not as a mutable box but as a value that flows through your computation, transformed at each step.
Lean provides two related types: StateT σ m α is the general state transformer that adds state of type σ to any monad m. StateM σ α is defined as StateT σ Id α, state over the identity monad, for pure stateful computations. When you do not need other effects, StateM is simpler and sufficient. When you need state combined with IO, Option, or other monads, use StateT. They share the same interface (get, set, modify), and code written for one often works for the other.
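As a sketch of the StateT case (popStrict is a name invented here), state can be combined with Option: pop from a stack, failing on empty, with StateT.lift injecting the underlying monad's failure.

```lean
-- StateT over Option: the whole computation is none on an empty stack.
def popStrict : StateT (List Nat) Option Nat := do
  match ← get with
  | [] => StateT.lift none
  | x :: xs =>
    set xs
    pure x

#eval popStrict.run [3, 2, 1] -- some (3, [2, 1])
#eval popStrict.run []        -- none
```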
Under the hood, a stateful computation is just a function σ → (α × σ). The following shows how you would build the primitives yourself:
namespace ManualState
abbrev State (σ α : Type) := σ → (α × σ)
def get {σ : Type} : State σ σ := fun s => (s, s)
def set {σ : Type} (newState : σ) : State σ Unit := fun _ => ((), newState)
def modify {σ : Type} (f : σ → σ) : State σ Unit := fun s => ((), f s)
def run {σ α : Type} (init : σ) (m : State σ α) : α × σ :=
m init
def counter : State Nat Nat := fun n => (n, n + 1)
#eval run 0 counter -- (0, 1)
#eval run 10 counter -- (10, 11)
end ManualState
The ManualState namespace isolates these definitions from the standard library. Inside, we use natural names: get returns the current state as the result, set ignores the old state and installs a new one, modify applies a function to transform the state.
StateM in Practice
Lean’s Init namespace (see Basics) provides StateM, StateT, ExceptT, and other monad transformers without explicit imports. As noted above, StateM σ α equals StateT σ Id α, the pure-state specialization. The operations get, set, and modify work exactly like our manual versions. Combined with do notation, stateful code looks almost identical to imperative code, except that the state is explicit in the type and the purity is preserved. You can run the same computation with different initial states and get reproducible results. You can reason about what the code does without worrying about hidden mutation elsewhere.
def tick : StateM Nat Unit := modify (· + 1)
def getTicks : StateM Nat Nat := get
def countOperations : StateM Nat Nat := do
tick
tick
tick
let count ← getTicks
return count
#eval countOperations.run 0 -- (3, 3)
#eval countOperations.run 10 -- (13, 13)
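Richer state works the same way. A sketch of a small stack machine (push and pop? are names invented here): the state is a list, and get/set/modify manipulate it exactly as in the counter example.

```lean
def push (x : Nat) : StateM (List Nat) Unit := modify (x :: ·)

def pop? : StateM (List Nat) (Option Nat) := do
  match ← get with
  | [] => pure none
  | x :: xs =>
    set xs
    pure (some x)

#eval (do push 1; push 2; pop?).run [] -- (some 2, [1])
```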
The List Monad
Lists as a monad represent nondeterministic computation: a value that could be many things at once. Bind explores all combinations, like nested loops but without the nesting. This is how you generate permutations, enumerate possibilities, or implement backtracking search. The abstraction is the same, only the interpretation differs. A monad does not care whether its context is failure, state, or multiplicity. It only knows how to sequence.
def pairs (xs : List Nat) (ys : List Nat) : List (Nat × Nat) :=
xs.flatMap fun x => ys.map fun y => (x, y)
#eval pairs [1, 2] [10, 20] -- [(1, 10), (1, 20), (2, 10), (2, 20)]
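The same function reads naturally in do notation, relying on the Monad List instance (pairsDo is a name invented here): each ← ranges over every element of its list, and bind collects all combinations.

```lean
def pairsDo (xs ys : List Nat) : List (Nat × Nat) := do
  let x ← xs
  let y ← ys
  pure (x, y)

#eval pairsDo [1, 2] [10, 20] -- [(1, 10), (1, 20), (2, 10), (2, 20)]
```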
def pythagTriples (n : Nat) : List (Nat × Nat × Nat) :=
(List.range n).flatMap fun a =>
(List.range n).flatMap fun b =>
(List.range n).filterMap fun c =>
if a * a + b * b == c * c && a > 0 && b > 0 then
some (a, b, c)
else
none
#eval pythagTriples 15 -- [(3, 4, 5), (4, 3, 5), (5, 12, 13), ...]
Iteration Type Classes
The ForIn type class powers for loops. Any type with a ForIn instance can be iterated with for x in collection do. The mechanism is more flexible than it first appears: you can implement custom iteration patterns, control early exit, and work in any monad.
structure CountDown where
start : Nat
instance : ForIn Id CountDown Nat where
forIn cd init f := do
let mut acc := init
let mut i := cd.start
while i > 0 do
match ← f i acc with
| .done a => return a -- break
| .yield a => acc := a -- continue
i := i - 1
return acc
def sumCountDown (n : Nat) : Nat := Id.run do
let mut total := 0
for i in CountDown.mk n do
total := total + i
return total
#eval sumCountDown 5 -- 15 (5+4+3+2+1)
The ForInStep type controls loop flow. Returning .done value breaks out of the loop with the accumulated result. Returning .yield value continues to the next iteration. This desugars to monadic operations, so early return in a for loop is not a special case but an application of the general mechanism.
def printAll (xs : List String) : IO Unit := do
for x in xs do
IO.println x
def sumWithIndex (arr : Array Nat) : Nat := Id.run do
let mut total := 0
for h : i in [0:arr.size] do
total := total + arr[i]
return total
#eval sumWithIndex #[10, 20, 30] -- 60
def manualForIn (xs : List Nat) : Option Nat :=
ForIn.forIn xs 0 fun x acc =>
if x == 0 then some (.done acc) -- early exit
else some (.yield (acc + x)) -- continue
#eval manualForIn [1, 2, 3, 4] -- some 10
#eval manualForIn [1, 2, 0, 4] -- some 3 (stopped at 0)
The forIn function can be called directly when you need explicit control over the accumulator and continuation. The callback returns some (.done acc) to break or some (.yield acc) to continue. Returning none propagates failure in the Option monad. This is how Lean unifies iteration with monadic effects.
Collection Operations
Lists and arrays share a common vocabulary of operations. These functions compose naturally into data processing pipelines.
def langs : List String := ["Lean", "Haskell", "Rust", "OCaml"]
def types : List String := ["theorem", "lazy", "systems", "modules"]
-- zip: pair elements from two collections
#eval langs.zip types
-- [("Lean", "theorem"), ("Haskell", "lazy"), ...]
-- map: transform each element
#eval langs.map String.toUpper
-- ["LEAN", "HASKELL", "RUST", "OCAML"]
-- filter: keep elements matching predicate
#eval langs.filter (·.startsWith "R")
-- ["Rust"]
-- take/drop: slice prefix or suffix
#eval langs.take 2
-- ["Lean", "Haskell"]
#eval langs.drop 2
-- ["Rust", "OCaml"]
-- filterMap: filter and transform in one pass
#eval ["42", "bad", "17"].filterMap String.toNat?
-- [42, 17]
-- find?: first element matching predicate
#eval langs.find? (·.length > 4)
-- some "Haskell"
-- any/all: check predicates
#eval langs.any (·.startsWith "L") -- true
#eval langs.all (·.length > 3) -- true
-- zipIdx: pair with indices
#eval ["a", "b", "c"].zipIdx
-- [("a", 0), ("b", 1), ("c", 2)]
The operations compose cleanly: filter selects, map transforms, filterMap fuses both. find? returns Option because absence is a valid result, not an exception.
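One hypothetical pipeline over the data above, pairing, selecting, and formatting in a single chain:

```lean
#eval (langs.zip types).filter (fun (l, _) => l.length ≤ 5)
  |>.map fun (l, t) => s!"{l}: {t}"
-- ["Lean: theorem", "Rust: systems", "OCaml: modules"]
```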
Folds
Folds are the fundamental iteration pattern. Every loop, every accumulation, every reduction can be expressed as a fold. Understanding folds means understanding how computation flows through a data structure.
A left fold processes elements left-to-right, accumulating from the left:
\[ \text{foldl}(f, z, [a, b, c]) = f(f(f(z, a), b), c) = ((z \oplus a) \oplus b) \oplus c \]
A right fold processes elements right-to-left, accumulating from the right:
\[ \text{foldr}(f, z, [a, b, c]) = f(a, f(b, f(c, z))) = a \oplus (b \oplus (c \oplus z)) \]
Here \(\oplus\) is just \(f\) written infix: \(a \oplus b = f(a, b)\).
For associative operations like addition, both folds give the same result. For non-associative operations, the parenthesization matters:
-- foldl: left fold, accumulator on the left
#eval [1, 2, 3, 4].foldl (· + ·) 0 -- 10
#eval ["a","b","c"].foldl (· ++ ·) "" -- "abc"
-- foldr: right fold, accumulator on the right
#eval [1, 2, 3, 4].foldr (· + ·) 0 -- 10
#eval ["a","b","c"].foldr (· ++ ·) "" -- "abc"
-- the difference: subtraction is not commutative
-- foldl f z [a,b,c] = f(f(f(z,a),b),c) = ((z⊕a)⊕b)⊕c
-- foldr f z [a,b,c] = f(a,f(b,f(c,z))) = a⊕(b⊕(c⊕z))
#eval ([1, 2, 3, 4] : List Int).foldl (· - ·) 0 -- -10: ((((0-1)-2)-3)-4)
#eval ([1, 2, 3, 4] : List Int).foldr (· - ·) 0 -- -2: (1-(2-(3-(4-0))))
-- foldl builds left-to-right (tail recursive)
#eval [1, 2, 3].foldl (fun acc x => x :: acc) [] -- [3, 2, 1]
-- foldr builds right-to-left (preserves structure)
#eval [1, 2, 3].foldr (fun x acc => x :: acc) [] -- [1, 2, 3]
-- practical uses
#eval [10, 25, 8, 42, 3].foldl max 0 -- 42
#eval [2, 3, 4].foldl (· * ·) 1 -- 24
#eval [1, 2, 3].foldl (fun acc x => acc + x * x) 0 -- 14
The cons example reveals the structural difference. Building a list with foldl reverses order because each new element is prepended to the growing accumulator. Building with foldr preserves order because the accumulator grows from the right. This is why map is typically defined using foldr: map f xs = foldr (fun x acc => f x :: acc) [] xs.
Left folds are tail-recursive and run in constant stack space. Right folds are not tail-recursive but can work with lazy data structures since they do not need to traverse to the end before producing output. In strict languages like Lean, prefer foldl for efficiency unless you need the structural properties of foldr.
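The foldr definition of map mentioned above can be checked directly (mapViaFoldr is a name invented here):

```lean
-- map as a right fold: cons the transformed head onto the folded tail.
def mapViaFoldr (f : α → β) (xs : List α) : List β :=
  xs.foldr (fun x acc => f x :: acc) []

#eval mapViaFoldr (· * 2) [1, 2, 3] -- [2, 4, 6]
```

Because foldr accumulates from the right, the output preserves the order of the input, which is exactly the structural property map needs.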
The Monad Type Class
Under the hood, all monads implement the same interface. pure lifts a plain value into the monadic context. bind sequences two computations, passing the result of the first to the second. That is the entire interface. Everything else, the do notation, the specialized operations, the ergonomic helpers, builds on these two primitives. The simplicity is deliberate. A minimal interface means maximal generality.
class Monad' (M : Type → Type) extends Functor M where
pure' : {α : Type} → α → M α
bind' : {α β : Type} → M α → (α → M β) → M β
instance : Monad' Option where
map f
| none => none
| some x => some (f x)
pure' := some
bind' m f := match m with
| none => none
| some x => f x
instance : Monad' List where
map := List.map
pure' x := [x]
bind' m f := m.flatMap f
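The payoff of the minimal interface is generic code. A sketch (twice' is a name invented here): one definition, written only against pure' and bind', runs in any monad with a Monad' instance.

```lean
-- Generic over the Monad' interface defined above.
def twice' [Monad' M] (m : M Nat) : M Nat :=
  Monad'.bind' m fun x => Monad'.pure' (x + x)

#eval twice' (some 5)   -- some 10
#eval twice' [1, 2, 3]  -- [2, 4, 6]
```

For Option the context is possible failure; for List it is multiplicity. The code neither knows nor cares.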
Monad Laws
Here is where the algebra becomes essential. Monads must satisfy three laws: left identity, right identity, and associativity. These are not suggestions. They are the contract that makes generic programming possible.
In the traditional bind/return formulation:
| Law | Lean | Math |
|---|---|---|
| Left Identity | pure a >>= f = f a | $\eta(a) \star f = f(a)$ |
| Right Identity | m >>= pure = m | $m \star \eta = m$ |
| Associativity | (m >>= f) >>= g = m >>= (λ x => f x >>= g) | $(m \star f) \star g = m \star (\lambda x. f(x) \star g)$ |
Note
For those with category theory background: the same laws look cleaner in the Kleisli category, where we compose monadic functions directly. If $f : A \to M B$ and $g : B \to M C$, their Kleisli composition is $g \circ f : A \to M C$:
| Law | Lean | Math |
|---|---|---|
| Left Identity | pure >=> f = f | $f \circ \eta = f$ |
| Right Identity | f >=> pure = f | $\eta \circ f = f$ |
| Associativity | (f >=> g) >=> h = f >=> (g >=> h) | $(h \circ g) \circ f = h \circ (g \circ f)$ |

The Kleisli formulation reveals that monads give you a category where objects are types and morphisms are functions $A \to M B$. The laws say pure is the identity morphism and >=> is associative composition. A monad is a way of embedding effectful computation into the compositional structure of functions. You do not need this perspective to use monads effectively.
-- Left identity: pure a >>= f = f a
example (f : Nat → Option Nat) (a : Nat) :
(pure a >>= f) = f a := rfl
-- Right identity: m >>= pure = m
example (m : Option Nat) : (m >>= pure) = m := by
cases m <;> rfl
-- Associativity: (m >>= f) >>= g = m >>= (fun x => f x >>= g)
example (m : Option Nat) (f : Nat → Option Nat) (g : Nat → Option Nat) :
((m >>= f) >>= g) = (m >>= fun x => f x >>= g) := by
cases m <;> rfl
Tip
In Haskell (and most other languages), you can claim your monad follows the laws but the compiler takes your word for it. In Lean, you can prove it. The laws become theorems, and proving them means constructing values of the corresponding types. This is the Curry-Howard correspondence at work. The Proofs article shows how.
At this point someone usually asks what a monad “really is.” The answers have become a genre: a burrito, a spacesuit, a programmable semicolon, a monoid in the category of endofunctors. These metaphors are not wrong, but they are not enlightening either. A monad is the three laws above and nothing else. Everything follows from the laws. The metaphors are for people who want to feel like they understand before they do the work of understanding.
Note
For those who want the category theory (colloquially known as “abstract nonsense,” which is their term of endearment for their own field): a monad is a monoid object in the monoidal category of endofunctors under composition. Equivalently, it is a lax 2-functor from the terminal 2-category to Cat. The Kleisli category is the category of free algebras of the monad. some is the identity morphism in the Kleisli category of Option. In Haskell it is called Just, which humorously is Just an endomorphism in the Kleisli category of Option. If this clarified nothing, congratulations: you understood monads before and still do now. You do not need any of this to use monads effectively.
Early Return
Do notation supports early return, loops, and mutable references, all the imperative conveniences. Combined with monads, this gives you the syntax of imperative programming with the semantics of pure functions. You can write code that reads like Python and reasons like Haskell. This is not cheating. It is the whole point: capturing effects in types so that the compiler knows what your code might do, while letting you write in whatever style is clearest.
def findFirst {α : Type}
(p : α → Bool) (xs : List α) : Option α := do
for x in xs do
if p x then return x
none
#eval findFirst (· > 5) [1, 2, 3, 7, 4] -- some 7
#eval findFirst (· > 10) [1, 2, 3] -- none
def processUntilError (xs : List Nat) : Except String (List Nat) := do
let mut results := []
for x in xs do
if x == 0 then
throw "encountered zero"
results := results ++ [x * 2]
return results
#eval processUntilError [1, 2, 3] -- Except.ok [2, 4, 6]
#eval processUntilError [1, 0, 3] -- Except.error "encountered zero"
Combining Monadic Operations
Functions like mapM and filterMap combine monadic operations over collections. Map a fallible function over a list and either get all the results or the first failure. Filter a list with a predicate that consults external state. These combinators emerge naturally once you have the abstraction. They are not special cases but instances of a general pattern, composable because they respect the monad laws.
def mayFail (x : Nat) : Option Nat :=
if x == 0 then none else some (100 / x)
def processAll (xs : List Nat) : Option (List Nat) :=
xs.mapM mayFail
#eval processAll [1, 2, 4, 5] -- some [100, 50, 25, 20]
#eval processAll [1, 0, 4, 5] -- none
def filterValid (xs : List Nat) : List Nat :=
xs.filterMap mayFail
#eval filterValid [1, 0, 2, 0, 4] -- [100, 50, 25]
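The same pattern works unchanged with Except, where the first failure carries a reason (mayFailE is a name invented here):

```lean
def mayFailE (x : Nat) : Except String Nat :=
  if x == 0 then .error "division by zero" else .ok (100 / x)

#eval [1, 2, 4].mapM mayFailE -- Except.ok [100, 50, 25]
#eval [1, 0, 4].mapM mayFailE -- Except.error "division by zero"
```

Nothing about mapM mentions Option or Except; it is written once against the Monad interface and inherits each monad's failure semantics for free.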
The Larger Pattern
Monads are one algebraic structure among many. Functors capture mapping. Applicatives capture independent combination. Monads capture dependent sequencing. Comonads capture context extraction. Arrows generalize computation graphs. Algebraic effects decompose monads into composable pieces. Each abstraction comes with laws, and those laws are the actual content. The specific names matter less than the discipline: identify a pattern, find its laws, and build an interface that enforces them.
The trajectory of programming language research has been toward making this structure explicit. Effects that C programmers handle with conventions, functional programmers handle with types. Invariants that documentation describes, dependent types enforce. Properties that tests sample, proofs establish. Each step reduces the burden on human memory and attention, encoding knowledge in artifacts that machines can check.
This matters because the economics of software are changing. When code is cheap to generate, correctness becomes the bottleneck. A language model can produce plausible implementations faster than any human, but “plausible” is not “correct.” The leverage shifts to whoever can specify precisely what correct means. Types, laws, contracts, proofs: these are the tools for specifying. Monads are a small example, one worked case of a pattern made precise. The concept itself was always simple. Sequencing with context. The value was never in the mystery but in the laws that let us reason compositionally about programs we increasingly do not write ourselves and cannot fully understand. (For more on where this is heading, see Artificial Intelligence.)
From Abstract to Concrete
Monads describe effects abstractly. The next article makes them concrete: actual file I/O, process management, environment variables, and the runtime machinery that connects your pure descriptions to the impure world. This completes the programming half of our journey.