
Typechecker Zoo

This is a pet project of mine I’ve been working on for a while. We’re going to create minimal implementations of the most successful static type systems of the last 30 years. This will involve making toy implementations of programming languages and the core typechecking algorithms. These obviously have evolved a lot over the years, so we’ll start with the simple ones and proceed all the way up to modern dependent types.

We’re going to implement them all in Rust. For no particular reason, other than that it has a decent parser ecosystem and is easy to install. And I like the ironic synthesis of building pure functional languages in a language which is decidedly non-functional. It’s a bit of a heaven-and-hell thing going on, and I’ll leave it up to you to decide on the chirality of that metaphor.

This is going to be more of a fun weekend side project than a formal introduction to these systems. If you want the theory behind them, read TAPL and ATTAPL. We’re going for implementation, fun, and rock and roll, but I’ll link to the primary sources if you want the gory details.

The four little critters we’re going to build are:

Algorithm W (775 lines of code)

Robin Milner’s classic Hindley-Milner type inference algorithm from A Theory of Type Polymorphism in Programming. A toy polymorphic lambda calculus.

System F (1090 lines of code)

Second-order lambda calculus with parametric polymorphism using bidirectional type checking. A mini-OCaml.

An implementation of the DK algorithm from Complete and Easy Bidirectional Typechecking for Higher-rank Polymorphism by Dunfield and Krishnaswami.

System F-ω (3196 lines of code)

Complete implementation of System F-ω with higher-kinded types, DK bidirectional type checking, existential type variables, polymorphic constructor applications, pattern matching, and datatypes. A Haskell-lite.

Uses the method of A Mechanical Formalization of Higher-Ranked Polymorphic Type Inference by Zhao et al.

Calculus of Constructions (6000 lines of code)

The Calculus of Constructions with a countable hierarchy of non-cumulative universes and inductive types. A teeny Lean-inspired dependent type checker.

Uses a bidirectional dependent typechecker outlined in A Universe Polymorphic Type System by Vladimir Voevodsky.

Type Systems Overview

So why do we build type systems? The answer is obviously because they are awesome and intellectually interesting. But second to that, because they are genuinely useful.

Our exploration starts with the lambda calculus, a formal system developed by Alonzo Church in the 1930s to express computation. It is the minimal, universal programming language. Its syntax consists of just three elements: variables, function abstractions (a way to define an anonymous function), and function application (the act of calling a function). For instance, the identity function, which simply returns its input, is written as \( \lambda x. x \). Despite this simplicity, any computable problem can be expressed and solved within the lambda calculus, making it Turing complete.

Next, we introduce type systems. A type system is a set of rules that assigns a property, known as a type, to the constructs of a program, such as variables, expressions, and functions. The primary purpose is to reduce bugs by preventing operations that don’t make sense, like dividing a number by a string. This process of verifying that a program obeys its language’s type rules is called type checking. Type checking can be performed at compile-time, known as static typing, or during program execution, known as dynamic typing. By enforcing these rules, type systems help ensure that different parts of a program connect in a consistent and error-free way.

This leads to the principle of well-typed programs. A program is called well-typed if it conforms to the rules of its type system. This can be expressed formally with a typing judgment, \( E \vdash M : A \), which asserts that in a given context \( E \), the program expression \( M \) has the type \( A \). The foundational promise of this approach was articulated by Robin Milner in his 1978 paper, A Theory of Type Polymorphism in Programming, with the phrase “Well-typed programs cannot go wrong”. This means a program that passes the type checker is guaranteed to be free of a certain class of runtime errors. Later extensions of this idea, such as “Well-typed programs don’t get stuck” and “Well-typed programs can’t be blamed,” further underscore the safety and reliability that robust type systems provide to software development.

A System of Types

At the heart of any type system is a set of formal rules for making logical deductions about a program. These deductions are called judgments. A judgment is an assertion that a piece of code has a certain property. The most common kind of judgment you will encounter is a typing judgment, which asserts that a given expression has a particular type within a specific context. We write this formally using a “turnstile” symbol \( \vdash \). The general form of a typing judgment looks like this:

\[ \Gamma \vdash e : T \]

This statement is read as, “In the context \( \Gamma \), the expression \( e \) has the type \( T \).” The context, represented by the Greek letter Gamma (\( \Gamma \)), is crucial. It acts as an environment that keeps track of the types of all the variables that are currently in scope. Think of it as a giant dictionary that maps variable names to their types; that is basically all it is. Our goal when type checking a program is to construct a valid derivation that proves this judgment holds for the entire program.

The context \( \Gamma \) is essentially a map from variable names to their types. For example, \( x: \text{Int}, f: \text{Bool} \to \text{Int} \) is a context where the variable \( x \) has type \( \text{Int} \) and \( f \) has a function type from \( \text{Bool} \) to \( \text{Int} \). As we enter deeper scopes in a program, like inside a function body, we extend the context with new variable bindings. This is often written as \( \Gamma, x:T \), which means “the context \( \Gamma \) extended with a new binding stating that variable \( x \) has type \( T \).”

Typing judgments are derived using inference rules. An inference rule states that if you can prove a set of judgments, called the premises, then you can conclude another judgment, called the conclusion. Rules are written with the premises on top of a horizontal line and the conclusion below it. You should read them from top to bottom: “If all premises above the line are true, then the conclusion below the line is also true.” If a rule has no premises, it is an axiom, a self-evident truth that requires no prior proof to hold. These rules are the fundamental building blocks we use to reason about the types in a program.

When working with formal type systems, inference rules follow a consistent structural pattern that makes them easier to read and understand. Every rule has the form:

\[ \frac{\text{premises}}{\text{conclusion}} \text{(Rule-Name)} \]

This notation should be read as: “If all the premises above the line are true, then the conclusion below the line is also true.” The premises represent the conditions that must be satisfied, while the conclusion represents what we can deduce when those conditions hold. The rule name provides a convenient label for referencing the rule in discussions and proofs.

For example, an axiom rule that establishes the type of the literal zero might look like:

\[ \frac{}{\Gamma \vdash 0 : \text{Nat}} \text{(T-Zero)} \]

This rule has no premises above the line, making it an axiom. It simply states that in any context \( \Gamma \), the literal \( 0 \) has type \( \text{Nat} \). This is a fundamental fact that requires no further proof.

These axioms typically handle simple cases like variable lookups or literal values (sometimes called ground types). Rules with multiple premises, separated by spacing or explicit conjunction symbols, require all conditions to be satisfied simultaneously before the conclusion can be drawn.

A foundational rule in nearly every type system is the variable lookup rule, which lets us find the type of a variable from the context:

\[ \frac{x:T \in \Gamma}{\Gamma \vdash x : T} \]

Although something appears above the line, the premise \( x:T \in \Gamma \) is a side condition checked directly against the context rather than another typing judgment that needs its own derivation, so this rule acts as a leaf in any derivation tree. It reads: “If the type binding \( x:T \) is present in the context \( \Gamma \), then we can conclude that in context \( \Gamma \), the expression \( x \) has type \( T \).” It formally defines the action of looking up a variable’s type in the current environment. A true axiom, with nothing above the line at all, is the rule stating that the literal 1 has type \( \text{Int} \):

\[ \frac{}{\Gamma \vdash 1 : \text{Int}} \]

By defining a collection of these inference rules, we create a complete type system. Each rule defines how to determine the type of a specific kind of expression, like a function call, a literal value, or an if-then-else block. For instance, a rule for function application would require as its premises that we first prove the function itself has a function type \( T \to U \) and that its argument has the corresponding input type \( T \). If we can prove those premises, the rule allows us to conclude that the entire function application expression has the output type \( U \). By repeatedly applying these rules, we can build a derivation tree that starts from axioms about variables and literals and culminates in a single judgment about the type of our entire program, thereby proving it is well-typed.

Judgements

A common inference rule with multiple premises is the one for function application, which determines the type of a function call. In lambda calculus, this is simply an expression \( e_1 \ e_2 \), where \( e_1 \) is the function and \( e_2 \) is the argument. To assign a type to this expression, we must first determine the types of both the function and its argument. The rule, often called “application” or “elimination for functions” (\( \to E \)), is written as follows:

\[ \frac{\Gamma \vdash e_1 : T_1 \to T_2 \quad \Gamma \vdash e_2 : T_1}{\Gamma \vdash e_1 \ e_2 : T_2} \]

This rule has two premises above the line. It states that to conclude that the application \( e_1 \ e_2 \) has type \( T_2 \), we must first prove two things in the same context \( \Gamma \). First, we must show that \( e_1 \) has a function type, written as \( T_1 \to T_2 \), which means it takes an input of type \( T_1 \) and produces an output of type \( T_2 \). Second, we must show that the argument \( e_2 \) has the correct input type \( T_1 \). If both of these premises hold, the rule allows us to deduce that the entire expression \( e_1 \ e_2 \) results in the function’s output type, \( T_2 \).

Now, let’s look at an example that chains three judgments together to type check the expression \( f \ x \). We’ll work within a context \( \Gamma \) that contains bindings for both \( f \) and \( x \), specifically \( \Gamma = f:\text{Int} \to \text{Bool}, x:\text{Int} \). Our goal is to prove that \( \Gamma \vdash f \ x : \text{Bool} \).

Our derivation begins with two simple variable lookups, which are our axioms. These will form the premises of our application rule:

  1. First Judgment (Variable Lookup for \( f \)): We use the variable rule to find the type of \( f \). Since \( f:\text{Int} \to \text{Bool} \) is in our context \( \Gamma \), we can conclude: \[ \frac{f:\text{Int} \to \text{Bool} \in \Gamma}{\Gamma \vdash f : \text{Int} \to \text{Bool}} \]

  2. Second Judgment (Variable Lookup for \( x \)): Similarly, we look up the type of \( x \). The binding \( x:\text{Int} \) is in \( \Gamma \), so we can conclude: \[ \frac{x:\text{Int} \in \Gamma}{\Gamma \vdash x : \text{Int}} \]

  3. Third Judgment (Function Application): Now we have the necessary premises to use the function application rule. We substitute \( e_1 \) with \( f \), \( e_2 \) with \( x \), \( T_1 \) with \( \text{Int} \), and \( T_2 \) with \( \text{Bool} \). Since our first two judgments successfully proved the premises, we can now form the final conclusion: \[ \frac{\Gamma \vdash f : \text{Int} \to \text{Bool} \quad \Gamma \vdash x : \text{Int}}{\Gamma \vdash f \ x : \text{Bool}} \]

Putting all of this together, we can represent the full derivation as a tree of nested inference rules, showing how the final judgment is built from the axioms for variable lookup:

\[ \frac{ \frac{f:\text{Int} \to \text{Bool} \in \Gamma}{\Gamma \vdash f : \text{Int} \to \text{Bool}} \quad \frac{x:\text{Int} \in \Gamma}{\Gamma \vdash x : \text{Int}} }{ \Gamma \vdash f \ x : \text{Bool} } \]

This nested structure is called a derivation tree or inference tree. Each node in the tree corresponds to an application of an inference rule, and the leaves are axioms (variable lookups from the context). The tree visually demonstrates how the type checker starts from basic facts (the types of variables in the context) and applies rules step by step to reach a conclusion about the type of a complex expression. In this example, the root of the tree is the judgment \( \Gamma \vdash f \ x : \text{Bool} \), and its two children are the judgments for \( f \) and \( x \), each justified by their respective axioms. This process generalizes to larger programs, where the derivation tree grows to reflect the structure of the program and the logical flow of type information.

Terms and Types

Historically, many programming languages enforced a strict separation between different layers of abstraction, a concept known as stratification. In this model, you had a “term language” and a “type language” which were syntactically and conceptually distinct. The term language consists of the expressions that actually compute values and run at runtime, like \( 5 + 2 \) or \( \text{if } x \text{ then } y \text{ else } z \). The type language, on the other hand, consists of expressions that describe the terms, like \( \text{Int} \) or \( \text{Bool} \to \text{Bool} \). These two worlds were kept separate; a type could not appear where a term was expected, and vice versa.

This stratification could be extended further. To bring order to the type language itself, a third layer called the “kind language” was often introduced. A kind can be thought of as the “type of a type.” For example, a concrete type like \( \text{Int} \) has the simplest kind, \( * \) (often pronounced “type”). But a type constructor like \( \text{List} \) isn’t a type on its own; it’s something that takes a type and produces a new one. \( \text{List} \) takes \( \text{Int} \) to produce \( \text{List} \ \text{Int} \). Therefore, its kind is \( * \to * \). This creates a rigid hierarchy: terms are classified by types, and types are classified by kinds. This clear separation makes type checking simpler and more predictable.

However, a major trend in the design of modern, powerful type systems has been to move away from this strict stratification and instead unify the term and type languages into a single, consistent syntactic framework. In these unified systems, the line between what is a “value” (a term) and what is a “description” (a type) begins to blur. The language’s grammar allows expressions that can be interpreted at different “levels” or “universes.” For instance, you might have a universe \( \text{Type}_0 \) which contains simple types like \( \text{Bool} \). Then you would have a higher universe, \( \text{Type}_1 \), whose only member is \( \text{Type}_0 \). This allows you to write functions that operate on types themselves, a key feature of dependently typed languages. More on this later.

To make this concrete, let’s consider a simple, stratified language for arithmetic. We can define its term language (\( e \)) and its corresponding type language (\( \tau \)) separately. The terms are the expressions we can compute, and the types are the static labels we can assign to them. Their definitions might look like this:

\[ \begin{align*} \text{terms} \quad e &::= n \mid e_1 + e_2 \mid \text{iszero}(e) \mid \text{true} \mid \text{false} \mid \text{if } e_1 \text{ then } e_2 \text{ else } e_3 \\ \text{types} \quad \tau &::= \text{Nat} \ \mid \ \text{Bool} \end{align*} \]

Here, the term language \( e \) defines natural numbers (\( n \)), addition, a function to check for zero, boolean constants, and a conditional. The type language \( \tau \) is much simpler; it only contains the types \( \text{Nat} \) and \( \text{Bool} \). In this stratified system, an expression like \( \text{Nat} + 5 \) would be a syntax error because \( \text{Nat} \) belongs to the type language and cannot be used in a term-level operation like addition. In a more modern, unified system, this rigid distinction would be relaxed.

Type Checking and Type Reconstruction

While the process of verifying that a program adheres to its type rules is called type checking, a related and historically significant challenge is type reconstruction, more commonly known as type inference. The goal of type inference is to have the compiler automatically deduce the types of expressions without the programmer needing to write explicit type annotations. For many years, developing algorithms that could perform full type inference for increasingly expressive languages was an active and vital area of research. The promise was seductive: achieve all the safety guarantees of a static type system without the verbose, manual effort of annotating every variable and function.

In more recent decades, the focus on achieving complete type inference has diminished. The primary reason is that as type systems have grown more powerful and complex, full inference has become computationally intractable or, in many cases, fundamentally undecidable. Modern languages often include features like higher-rank polymorphism (passing polymorphic functions as arguments), GADTs, and various forms of type-level programming where types themselves can involve computation. For these systems, a general algorithm that can always infer the single “best” type for any given expression simply does not exist. Attempting to do so would lead to impossibly complex algorithms and can result in inferred types that are enormous and incomprehensible to the programmer.

As a result, many modern statically-typed languages have converged on a practical and elegant middle ground: bidirectional type checking. Instead of having a single mode that always tries to infer types, a bidirectional checker operates in two distinct modes: a “checking” mode and a “synthesis” mode.

  1. Checking Mode: In this mode, the algorithm verifies that an expression \( e \) conforms to a known, expected type \( \tau \). Information flows “down” from the context into the expression. We ask the question: “Can we prove that \( e \) has type \( \tau \)?”
  2. Synthesis Mode: In this mode, the algorithm computes, or “synthesizes,” a type for an expression \( e \) without any prior expectation. Information flows “up” from the expression’s components. Here, we ask: “What is the type of \( e \)?”

This duality provides a powerful framework. The language designer can specify which syntactic constructs require annotations and which do not. For example, the arguments of a top-level function might require explicit annotations (putting the checker in checking mode for the function body), but the types of local variables within that body can be inferred (synthesized). This approach neatly sidesteps the difficulties of full inference by requiring the programmer to provide annotations only at key boundaries where ambiguity would otherwise arise. It offers a “best of both worlds” scenario: the convenience of local inference with the clarity and power of explicit annotations for complex, polymorphic, or ambiguous parts of the code, representing a theoretical sweet spot that balances expressiveness, usability, and implementability. More on this later.

The Frontiers

A deep insight has been unfolding since the late 1970s, a revelation that three immense, seemingly distinct realms of human thought are echoes of a single, deeper reality. This “computational trinitarianism” reveals a profound interplay between different disciplines:

  • Computation
  • Formal logic
  • Category theory

Think of it as a three-way harmony between spaces, logic, and computation.

These are not merely analogous; they are three perspectives on one underlying phenomenon. The key to this unification is a powerful act of abstraction. While mathematics seems to be about many different things like sets and shapes, it fundamentally relies on the notion of a map. Set theory is about functions, topology is about continuous maps, algebra is about homomorphisms, and logic is about implication.

With the right framework, we can abstract away the specific details. What matters are not the objects themselves, but the maps between them. When you redefine a concept in this universal language, something incredible happens: you can translate it into any other category. A “group” defined in the category of sets is just a normal group. But translate that same definition to the category of topological spaces, and you get a topological group. In the category of manifolds, you get a Lie group. This is the Rosetta Stone for the foundations of abstraction itself.

This is why this is an exciting time in software. These concepts elevate types from simple data labels into a rich language for encoding deep truths about a program’s behavior. These are no longer just academic curiosities; they are the architectural principles for the next generation of powerful and provably correct software.

This grand unification is an extension of the famous Curry-Howard correspondence, which revealed a stunning duality: a type is a logical proposition, and a program is a constructive proof of that proposition. Trinitarianism expands this into a three-fold revelation:

  • A Proposition in logic corresponds to a Type in a programming language, which corresponds to an Object (like a space) in a category.
  • A Proof of a proposition corresponds to a Program of a type, which corresponds to a Map (or morphism) in a category.
  • The Simplification of a proof corresponds to the Computation (or evaluation) of a program.

Under this paradigm, programming becomes the act of discovering and articulating mathematical truth. Consider a simple example: a function to get the first element of a list. A naive type signature might be head(List<T>) -> T. This is a weak proposition: “Give me a list of things, and I’ll give you a thing.” This proposition is false, as the program will crash on an empty list.

An expressive type system lets us state a more honest proposition: safeHead(List<T>) -> Maybe<T>. The Maybe<T> type embodies the possibility of failure. This type is a stronger proposition: “Given any list, I can provide a context that maybe contains a thing.” A program satisfying this type is a proof that you have handled all possibilities, including the empty one. The compiler, by checking the type, verifies your proof and mathematically guarantees your program cannot crash from this error.

Now for a more profound example. Imagine a type for a provably sorted list: SortedList<T>. A function to merge two such lists has a type that is a powerful theorem: merge(SortedList<T>, SortedList<T>) -> SortedList<T>. This proposition states, “If you give me two proofs that lists are sorted, I will produce a new proof that the resulting merged list is also sorted.” The compiler will not accept a program for this type unless its logic correctly preserves the “sorted” property. The type system becomes a partner in reasoning, ensuring deep logical properties hold true.

This idea of structure-preserving transformations is a core engine of modern mathematics. In algebraic topology, a functor acts as a map between entire universes of mathematical objects. The homology functor, for example, translates topological spaces into algebraic groups, allowing geometric problems to be solved with algebra. The insight of trinitarianism is that a well-typed program is a functor: a concrete, computable map that translates a logical proof into a computational process while perfectly preserving the underlying structure.

This theoretical bedrock unleashes extraordinary practical power. When types are precise logical statements, composing programs is like composing theorems, allowing us to build massive systems with certainty. The ultimate ambition this enables is program synthesis. The programmer’s role shifts from mechanic to architect. They specify the what by writing a rich, descriptive type (a theorem), and the system discovers the how by searching for a valid proof, which is the program itself.

These frontiers are critical in the age of AI-generated code. As we task AI with generating vast software systems, the central challenge becomes one of trust. How do we know the code is correct? The answer is to demand more from the AI. We require it to produce not just code, but code whose very type is a formal proof of its correctness. The type checker then becomes our ultimate, infallible arbiter of truth.

This transforms programming from a manual craft into a discipline of rigorous specification, allowing us to build software whose correctness is not just hoped for, but mathematically assured. This is the awesome power of modern type systems: the journey from preventing simple errors to proving deep truths, turning the art of programming into a direct collaboration with logic itself.

And that’s why type systems are awesome, and hopefully you’ll come to see how much fun they can be!

The Lambda Calculus

The lambda calculus is computation in its purest form. It is not an exaggeration to say that this elegant, tiny system is the theoretical bedrock of nearly every functional programming language, and its influence is felt across the entire landscape of computer science. Developed by Alonzo Church in the 1930s as a tool for studying the foundations of mathematics, it was later understood to be a universal model of computation. At its core, the lambda calculus is shockingly minimal; its syntax defines only three kinds of expressions, or “terms.”

Formally, we can define the syntax of the pure lambda calculus as follows:

\[ e ::= x \mid \lambda x . e \mid e_1 \ e_2 \]

Let’s break this down:

  1. Variable (\(x\)): A name that acts as a placeholder for a value.
  2. Abstraction (\(\lambda x . e\)): This is an anonymous function definition. The \(\lambda x\) is the function’s parameter, and the expression \(e\) is its body. The \(\lambda\) is read as “lambda.”
  3. Application (\(e_1 \ e_2\)): This is the act of calling a function. The expression \(e_1\) is the function, and \(e_2\) is the argument being passed to it.

This formal definition maps almost directly onto a data structure in a language like Rust. We can represent these three core terms using an enum:

#[derive(Debug, Clone, PartialEq)]
pub enum Expr {
    Var(String),               // x
    Abs(String, Box<Expr>),    // λx. e — parameter name and body
    App(Box<Expr>, Box<Expr>), // e1 e2 — function and argument
}

Here, Var(name) corresponds to \(x\), Abs(param, body) corresponds to \(\lambda x . e\), and App(function, argument) corresponds to \(e_1 \ e_2\). The Box is a Rust detail that allows us to have recursive types of a known size.

The power of the lambda calculus emerges from a few fundamental concepts. The first is variable binding. In an abstraction like \(\lambda x . x + 1\), the variable \(x\) is said to be bound within the body of the lambda. Any variable that is not bound by an enclosing lambda is a free variable. This distinction is critical for understanding scope. This leads directly to the idea of alpha equivalence, which states that the name of a bound variable is irrelevant. The function \(\lambda x . x\) is semantically identical to \(\lambda y . y\); both are the identity function. An implementation must be able to recognize this equivalence to correctly handle variable naming.

The second core concept is beta reduction, which is the computational engine of the lambda calculus. It formally defines how function application works. When an abstraction is applied to an argument, we reduce the expression by substituting the argument for every free occurrence of the bound variable within the function’s body. For example, applying the function \(\lambda x . x + 1\) to the argument \(5\) is written as \((\lambda x . x + 1) \ 5\). The beta reduction rule tells us to replace \(x\) with \(5\) in the body \(x + 1\), yielding the result \(5 + 1\). This process of substitution is the fundamental mechanism of computation in this system. Implementations often avoid direct string substitution due to its complexity and risk of name collisions, instead using techniques like de Bruijn indices, which replace variable names with numbers representing their lexical distance to their binder.

While the pure lambda calculus is Turing complete, it is also notoriously difficult to use directly for practical programming. For example, representing the number \(3\) requires a complex expression like \(\lambda f . \lambda x . f (f (f x))\). To make programming more convenient, we almost always extend the core calculus with additional expression types. These can include let bindings for local variables, literals for concrete values like numbers and strings, and data structures like tuples.

#![allow(unused)]
fn main() {
#[derive(Debug, Clone, PartialEq)]
pub enum Lit {
    Int(i64),
    Bool(bool),
}
}

This raises a crucial design question: should these new constructs be “wired in” as primitives with their own semantic rules, or should they be defined in terms of the pure calculus? This represents a fundamental tradeoff in language design. For example, a let expression, \(\text{let } x = e_1 \text{ in } e_2\), can be treated as syntactic sugar that desugars directly into a pure lambda calculus expression: \((\lambda x . e_2) \ e_1\). This approach keeps the core language minimal and elegant.

The alternative is to make let a primitive construct. This means the type checker and evaluator must have specific logic to handle it. This “wired-in” approach often leads to better performance and more precise error messages, as the compiler has more specific knowledge about the construct’s intent. However, it increases the complexity of the core system. Many languages strike a balance: fundamental and performance-critical features like numeric literals are often wired in, while higher-level patterns like let bindings might be treated as syntactic sugar, offering a convenient syntax that ultimately translates back to the foundational concepts of lambda, application, and variable.
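The desugaring direction can be sketched in a few lines. This is a self-contained illustration, not the chapter's code: it mirrors just enough of the Expr type to rewrite every let into an application of a lambda.

```rust
// Minimal mirror of the chapter's Expr type, enough to express the desugaring.
#[derive(Debug, Clone, PartialEq)]
enum Expr {
    Var(String),
    App(Box<Expr>, Box<Expr>),
    Abs(String, Box<Expr>),
    Let(String, Box<Expr>, Box<Expr>),
}

// let x = e1 in e2  ≡  (λx. e2) e1, applied recursively through the term.
fn desugar(e: Expr) -> Expr {
    match e {
        Expr::Let(x, e1, e2) => Expr::App(
            Box::new(Expr::Abs(x, Box::new(desugar(*e2)))),
            Box::new(desugar(*e1)),
        ),
        Expr::App(f, a) => Expr::App(Box::new(desugar(*f)), Box::new(desugar(*a))),
        Expr::Abs(x, b) => Expr::Abs(x, Box::new(desugar(*b))),
        v => v,
    }
}

fn main() {
    // let id = λx. x in id  ⇒  (λid. id) (λx. x)
    let prog = Expr::Let(
        "id".into(),
        Box::new(Expr::Abs("x".into(), Box::new(Expr::Var("x".into())))),
        Box::new(Expr::Var("id".into())),
    );
    let expected = Expr::App(
        Box::new(Expr::Abs("id".into(), Box::new(Expr::Var("id".into())))),
        Box::new(Expr::Abs("x".into(), Box::new(Expr::Var("x".into())))),
    );
    assert_eq!(desugar(prog), expected);
    println!("desugared");
}
```

Note that for Hindley-Milner specifically this translation is not type-preserving: the T-Let rule introduced below generalizes the bound value's type, while a lambda-bound variable stays monomorphic, which is one reason this chapter keeps let as a primitive.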

Core Implementation

Algorithm W represents one of the most elegant solutions to the type inference problem in functional programming languages. Developed by Robin Milner in 1978, it provides a sound and complete method for inferring the most general types in the Hindley-Milner type system. This chapter explores our Rust implementation, examining how the mathematical foundations translate into practical code that can handle lambda abstractions, function applications, let-polymorphism, and complex unification scenarios.

The core insight of Algorithm W lies in its systematic approach to type inference through constraint generation and unification. Rather than attempting to determine types through local analysis, the algorithm builds a global picture by generating type variables, collecting constraints, and then solving these constraints through unification. This approach ensures that we always find the most general type possible, a property crucial for supporting polymorphism in functional languages.

You’ll often see this algorithm (or the type system, confusingly enough) referred to by many names:

  • Hindley-Milner
  • Hindley-Damas-Milner
  • Damas-Milner
  • HM
  • Algorithm W

Typing Rules

Before diving into the implementation details, let’s establish the formal typing rules that govern the Hindley-Milner type system. We’ll be introducing mathematical symbols that capture the essence of type inference, but don’t worry: each symbol has a precise and intuitive meaning once you dive into the details.

  • \( \Gamma \) (Gamma) - The type environment, which maps variables to their types. It’s like a dictionary that remembers what we know about each variable’s type.

  • \( \vdash \) (Turnstile) - The “entails” or “proves” symbol. When we write \( \Gamma \vdash e : \tau \), we’re saying “in environment \( \Gamma \), expression \( e \) has type \( \tau \).”

  • \( \tau \) (Tau) - Represents monomorphic types like \( \text{Int} \), \( \text{Bool} \), or \( \text{Int} \to \text{Bool} \). These are concrete, fully-determined types.

  • \( \sigma \) (Sigma) - Represents polymorphic type schemes like \( \forall \alpha. \alpha \to \alpha \). These can be instantiated with different concrete types.

  • \( \forall \alpha \) (Forall Alpha) - Universal quantification over type variables. It means “for any type \( \alpha \).” This is how we express polymorphism.

  • \( \alpha, \beta, \gamma \) (Greek Letters) - Type variables that stand for unknown types during inference. Think of them as type-level unknowns that get solved.

  • \( [\tau/\alpha]\sigma \) - Type substitution, replacing all occurrences of type variable \( \alpha \) with type \( \tau \) in scheme \( \sigma \). This is how we instantiate polymorphic types.

  • \( S \) (Substitution) - A mapping from type variables to types, representing the solutions found by unification.

  • \( \text{gen}(\Gamma, \tau) \) - Generalization, which turns a monotype into a polytype by quantifying over type variables not present in the environment.

  • \( \text{inst}(\sigma) \) - Instantiation, which creates a fresh monotype from a polytype by replacing quantified variables with fresh type variables.

  • \( \text{ftv}(\tau) \) - Free type variables, the set of unbound type variables appearing in type \( \tau \).

  • \( \emptyset \) (Empty Set) - The empty substitution, representing no changes to types.

  • \( [\alpha \mapsto \tau] \) - A substitution that maps type variable \( \alpha \) to type \( \tau \).

  • \( S_1 \circ S_2 \) - Composition of substitutions, applying \( S_2 \) first, then \( S_1 \).

  • \( \notin \) (Not In) - Set membership negation, used in the occurs check to prevent infinite types.

Now that we have our symbolic toolkit, let’s see how these pieces work together to create the elegant machinery of Algorithm W.

Core Typing Rules

The variable rule looks up types from the environment: \[ \frac{x : σ \in Γ \quad τ = \text{inst}(σ)}{Γ ⊢ x : τ} \text{(T-Var)} \]

Lambda abstraction introduces new variable bindings: \[ \frac{Γ, x : α ⊢ e : τ \quad α \text{ fresh}}{Γ ⊢ λx. e : α → τ} \text{(T-Lam)} \]

Function application combines types through unification: \[ \frac{Γ ⊢ e₁ : τ₁ \quad Γ ⊢ e₂ : τ₂ \quad α \text{ fresh} \quad S = \text{unify}(τ₁, τ₂ → α)}{Γ ⊢ e₁ \, e₂ : S(α)} \text{(T-App)} \]

Let-polymorphism allows generalization: \[ \frac{Γ ⊢ e₁ : τ₁ \quad σ = \text{gen}(Γ, τ₁) \quad Γ, x : σ ⊢ e₂ : τ₂}{Γ ⊢ \text{let } x = e₁ \text{ in } e₂ : τ₂} \text{(T-Let)} \]

Literals have their corresponding base types: \[ \frac{}{Γ ⊢ n : \text{Int}} \text{(T-LitInt)} \]

\[ \frac{}{Γ ⊢ b : \text{Bool}} \text{(T-LitBool)} \]

These rules capture the essence of the Hindley-Milner type system, where we infer the most general types while supporting true polymorphism through let-generalization.
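As a quick sanity check, here is how the rules combine on the tiny program \(\text{let } id = \lambda x . x \text{ in } id \ 5\). The fresh names \(t_0, t_1, t_2\) are arbitrary choices, matching the numbering scheme the implementation uses below.

  • T-Lam: with fresh \(\alpha = t_0\), the body \(x\) has type \(t_0\), so \(\lambda x . x : t_0 \to t_0\).

  • T-Let: \(\text{gen}(\emptyset, t_0 \to t_0) = \forall t_0 . \ t_0 \to t_0\), which is bound to \(id\) in the environment.

  • T-Var: the use of \(id\) instantiates a fresh copy, \(t_1 \to t_1\).

  • T-App: with fresh \(\alpha = t_2\), we unify \(t_1 \to t_1\) with \(\text{Int} \to t_2\), giving \(S = [t_1 \mapsto \text{Int}, t_2 \mapsto \text{Int}]\), so \(id \ 5 : S(t_2) = \text{Int}\).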

Abstract Syntax Trees

Our implementation begins with a careful modeling of both expressions and types as algebraic data types. The expression language extends the pure lambda calculus with practical constructs while maintaining the theoretical foundation.

#![allow(unused)]
fn main() {
#[derive(Debug, Clone, PartialEq)]
pub enum Expr {
    Var(String),
    App(Box<Expr>, Box<Expr>),
    Abs(String, Box<Expr>),
    Let(String, Box<Expr>, Box<Expr>),
    Lit(Lit),
    Tuple(Vec<Expr>),
}
impl std::fmt::Display for Expr {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Expr::Var(name) => write!(f, "{}", name),
            Expr::Lit(Lit::Int(n)) => write!(f, "{}", n),
            Expr::Lit(Lit::Bool(b)) => write!(f, "{}", b),
            Expr::Abs(param, body) => write!(f, "λ{}.{}", param, body),
            Expr::App(func, arg) => match (func.as_ref(), arg.as_ref()) {
                (Expr::Abs(_, _), _) => write!(f, "({}) {}", func, arg),
                (_, Expr::App(_, _)) => write!(f, "{} ({})", func, arg),
                (_, Expr::Abs(_, _)) => write!(f, "{} ({})", func, arg),
                _ => write!(f, "{} {}", func, arg),
            },
            Expr::Let(var, value, body) => {
                write!(f, "let {} = {} in {}", var, value, body)
            }
            Expr::Tuple(exprs) => {
                write!(f, "(")?;
                for (i, expr) in exprs.iter().enumerate() {
                    if i > 0 {
                        write!(f, ", ")?;
                    }
                    write!(f, "{}", expr)?;
                }
                write!(f, ")")
            }
        }
    }
}
}

The expression AST captures the essential constructs of our language. Variables (Var) and function abstractions (Abs) correspond directly to the lambda calculus. Function application (App) drives computation through beta reduction. The Let construct introduces local bindings with potential for polymorphic generalization, while literals (Lit) and tuples (Tuple) provide concrete data types that make the language practical for real programming tasks.
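To see how these constructors compose, here is a hypothetical construction of the term \(\text{let } id = \lambda x . x \text{ in } id \ 5\) by hand (using a minimal mirror of the chapter's types, so the snippet stands alone):

```rust
// Mirrors the chapter's Expr/Lit just enough to build a term by hand.
#[derive(Debug, Clone, PartialEq)]
enum Lit {
    Int(i64),
}

#[derive(Debug, Clone, PartialEq)]
enum Expr {
    Var(String),
    App(Box<Expr>, Box<Expr>),
    Abs(String, Box<Expr>),
    Let(String, Box<Expr>, Box<Expr>),
    Lit(Lit),
}

fn main() {
    // let id = λx. x in id 5
    let term = Expr::Let(
        "id".into(),
        Box::new(Expr::Abs("x".into(), Box::new(Expr::Var("x".into())))),
        Box::new(Expr::App(
            Box::new(Expr::Var("id".into())),
            Box::new(Expr::Lit(Lit::Int(5))),
        )),
    );
    // The nesting follows the grammar directly: a Let whose body is an App.
    match &term {
        Expr::Let(name, _, body) => {
            assert_eq!(name.as_str(), "id");
            assert!(matches!(body.as_ref(), Expr::App(_, _)));
        }
        _ => unreachable!(),
    }
    println!("constructed");
}
```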

The type system mirrors this structure with its own AST that represents the types these expressions can have.

#![allow(unused)]
fn main() {
#[derive(Debug, Clone, PartialEq)]
pub enum Type {
    Var(String),
    Arrow(Box<Type>, Box<Type>),
    Int,
    Bool,
    Tuple(Vec<Type>),
}
#[derive(Debug, Clone, PartialEq)]
pub struct Scheme {
    pub vars: Vec<String>, // Quantified type variables
    pub ty: Type,          // The type being quantified over
}
impl std::fmt::Display for Type {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Type::Var(name) => write!(f, "{}", name),
            Type::Int => write!(f, "Int"),
            Type::Bool => write!(f, "Bool"),
            Type::Arrow(t1, t2) => {
                // Add parentheses around left side if it's an arrow to avoid ambiguity
                match t1.as_ref() {
                    Type::Arrow(_, _) => write!(f, "({}) → {}", t1, t2),
                    _ => write!(f, "{} → {}", t1, t2),
                }
            }
            Type::Tuple(types) => {
                write!(f, "(")?;
                for (i, ty) in types.iter().enumerate() {
                    if i > 0 {
                        write!(f, ", ")?;
                    }
                    write!(f, "{}", ty)?;
                }
                write!(f, ")")
            }
        }
    }
}
impl std::fmt::Display for Scheme {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        if self.vars.is_empty() {
            write!(f, "{}", self.ty)
        } else {
            write!(f, "forall {}. {}", self.vars.join(" "), self.ty)
        }
    }
}
}

Type variables (Type::Var) serve as placeholders during inference, eventually getting instantiated to concrete types through unification. Arrow types (Type::Arrow) represent function types, encoding both parameter and return types. Base types like Int and Bool provide the foundation, while tuple types (Type::Tuple) support structured data. The recursive nature of these types allows us to express arbitrarily complex type structures, from simple integers to higher-order functions that manipulate other functions.

Type Inference Algorithm

Algorithm W operates on several fundamental data structures that capture the essential concepts of type inference. These type aliases provide names for the core abstractions and make the algorithm’s implementation more readable.

The type variable abstraction represents unknown types that will be resolved during inference. Term variables represent program variables that appear in expressions. The type environment maps term variables to their types, while substitutions map type variables to concrete types.

#![allow(unused)]
fn main() {
pub type TyVar = String;
pub type TmVar = String;
pub type Env = BTreeMap<TmVar, Scheme>;  // Now stores schemes, not types
pub type Subst = HashMap<TyVar, Type>;
}

These aliases encapsulate the fundamental data flow in Algorithm W. Type variables like t0, t1, and t2 serve as placeholders that get unified with concrete types as inference progresses. Term variables represent the actual identifiers in source programs. The environment now tracks polymorphic type schemes rather than just types, enabling proper let-polymorphism, while substitutions record the solutions discovered by unification.

The choice of String for both type and term variables reflects the simplicity of our implementation. Production systems often use more complex representations, such as de Bruijn indices for type variables or interned strings for performance, but plain strings keep the fundamental algorithm clear.
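Substitution composition is the one operation here that is easy to get backwards. The sketch below, a standalone simplification of the chapter's apply_subst and compose_subst (with a cut-down Ty type invented for the example), checks the convention that \(S_1 \circ S_2\) applies \(S_2\) first, which forces us to push \(S_1\) into \(S_2\)'s range:

```rust
use std::collections::HashMap;

// A cut-down type language, just enough to exercise composition.
#[derive(Debug, Clone, PartialEq)]
enum Ty {
    Var(String),
    Int,
    Arrow(Box<Ty>, Box<Ty>),
}

type Subst = HashMap<String, Ty>;

// Apply a substitution to a type, leaving unmapped variables alone.
fn apply(s: &Subst, t: &Ty) -> Ty {
    match t {
        Ty::Var(v) => s.get(v).cloned().unwrap_or_else(|| t.clone()),
        Ty::Arrow(a, b) => Ty::Arrow(Box::new(apply(s, a)), Box::new(apply(s, b))),
        Ty::Int => Ty::Int,
    }
}

// S1 ∘ S2: apply S2 first, then S1, so S1 is applied to S2's range.
fn compose(s1: &Subst, s2: &Subst) -> Subst {
    let mut out = s1.clone();
    for (k, v) in s2 {
        out.insert(k.clone(), apply(s1, v));
    }
    out
}

fn main() {
    // S2 = [a ↦ b → Int], S1 = [b ↦ Int]
    let mut s2 = Subst::new();
    s2.insert(
        "a".into(),
        Ty::Arrow(Box::new(Ty::Var("b".into())), Box::new(Ty::Int)),
    );
    let mut s1 = Subst::new();
    s1.insert("b".into(), Ty::Int);

    let s = compose(&s1, &s2);
    // (S1 ∘ S2)(a) = Int → Int: S1 has been pushed into S2's range.
    assert_eq!(
        apply(&s, &Ty::Var("a".into())),
        Ty::Arrow(Box::new(Ty::Int), Box::new(Ty::Int))
    );
    println!("ok");
}
```

If compose skipped the apply step, the stale variable b would survive inside a's binding and later unifications would see an unsolved variable that was in fact already determined.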

The heart of our Algorithm W implementation lies in the TypeInference struct, which maintains the state necessary for sound type inference across an entire program.

#![allow(unused)]
fn main() {
use std::collections::{BTreeMap, HashMap, HashSet};
use std::fmt;
use crate::ast::{Expr, Lit, Scheme, Type};
use crate::errors::{InferenceError, Result};
pub type TyVar = String;
pub type TmVar = String;
pub type Env = BTreeMap<TmVar, Scheme>;
pub type Subst = HashMap<TyVar, Type>;
#[derive(Debug)]
pub struct InferenceTree {
    pub rule: String,
    pub input: String,
    pub output: String,
    pub children: Vec<InferenceTree>,
}
impl InferenceTree {
    fn new(rule: &str, input: &str, output: &str, children: Vec<InferenceTree>) -> Self {
        Self {
            rule: rule.to_string(),
            input: input.to_string(),
            output: output.to_string(),
            children,
        }
    }
}
impl fmt::Display for InferenceTree {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        self.display_with_indent(f, 0)
    }
}
impl InferenceTree {
    fn display_with_indent(&self, f: &mut fmt::Formatter, indent: usize) -> fmt::Result {
        let prefix = "  ".repeat(indent);
        writeln!(
            f,
            "{}{}: {} => {}",
            prefix, self.rule, self.input, self.output
        )?;
        for child in &self.children {
            child.display_with_indent(f, indent + 1)?;
        }
        Ok(())
    }
}
/// Inference state: a counter for generating fresh type variables (t0, t1, ...).
#[derive(Debug)]
pub struct TypeInference {
    counter: usize,
}
impl Default for TypeInference {
    fn default() -> Self {
        Self::new()
    }
}
#[allow(clippy::only_used_in_recursion)]
impl TypeInference {
    pub fn new() -> Self {
        Self { counter: 0 }
    }

    fn fresh_tyvar(&mut self) -> TyVar {
        let var = format!("t{}", self.counter);
        self.counter += 1;
        var
    }

    fn pretty_env(&self, env: &Env) -> String {
        if env.is_empty() {
            "{}".to_string()
        } else {
            let entries: Vec<String> = env.iter().map(|(k, v)| format!("{}: {}", k, v)).collect();
            format!("{{{}}}", entries.join(", "))
        }
    }

    fn pretty_subst(&self, subst: &Subst) -> String {
        if subst.is_empty() {
            "{}".to_string()
        } else {
            let entries: Vec<String> = subst.iter().map(|(k, v)| format!("{}/{}", v, k)).collect();
            format!("{{{}}}", entries.join(", "))
        }
    }

    fn apply_subst(&self, subst: &Subst, ty: &Type) -> Type {
        match ty {
            Type::Var(name) => subst.get(name).cloned().unwrap_or_else(|| ty.clone()),
            Type::Arrow(t1, t2) => Type::Arrow(
                Box::new(self.apply_subst(subst, t1)),
                Box::new(self.apply_subst(subst, t2)),
            ),
            Type::Tuple(types) => {
                Type::Tuple(types.iter().map(|t| self.apply_subst(subst, t)).collect())
            }
            Type::Int | Type::Bool => ty.clone(),
        }
    }

    fn apply_subst_scheme(&self, subst: &Subst, scheme: &Scheme) -> Scheme {
        // Remove bindings for quantified variables to avoid capture
        let mut filtered_subst = subst.clone();
        for var in &scheme.vars {
            filtered_subst.remove(var);
        }
        Scheme {
            vars: scheme.vars.clone(),
            ty: self.apply_subst(&filtered_subst, &scheme.ty),
        }
    }

    fn apply_subst_env(&self, subst: &Subst, env: &Env) -> Env {
        env.iter()
            .map(|(k, v)| (k.clone(), self.apply_subst_scheme(subst, v)))
            .collect()
    }

    fn compose_subst(&self, s1: &Subst, s2: &Subst) -> Subst {
        let mut result = s1.clone();
        for (k, v) in s2 {
            result.insert(k.clone(), self.apply_subst(s1, v));
        }
        result
    }

    fn free_type_vars(&self, ty: &Type) -> HashSet<TyVar> {
        match ty {
            Type::Var(name) => {
                let mut set = HashSet::new();
                set.insert(name.clone());
                set
            }
            Type::Arrow(t1, t2) => {
                let mut set = self.free_type_vars(t1);
                set.extend(self.free_type_vars(t2));
                set
            }
            Type::Tuple(types) => {
                let mut set = HashSet::new();
                for t in types {
                    set.extend(self.free_type_vars(t));
                }
                set
            }
            Type::Int | Type::Bool => HashSet::new(),
        }
    }

    fn free_type_vars_scheme(&self, scheme: &Scheme) -> HashSet<TyVar> {
        let mut set = self.free_type_vars(&scheme.ty);
        // Remove quantified variables
        for var in &scheme.vars {
            set.remove(var);
        }
        set
    }

    fn free_type_vars_env(&self, env: &Env) -> HashSet<TyVar> {
        let mut set = HashSet::new();
        for scheme in env.values() {
            set.extend(self.free_type_vars_scheme(scheme));
        }
        set
    }

    fn generalize(&self, env: &Env, ty: &Type) -> Scheme {
        let type_vars = self.free_type_vars(ty);
        let env_vars = self.free_type_vars_env(env);
        let mut free_vars: Vec<_> = type_vars.difference(&env_vars).cloned().collect();
        free_vars.sort(); // Sort for deterministic behavior

        Scheme {
            vars: free_vars,
            ty: ty.clone(),
        }
    }

    fn instantiate(&mut self, scheme: &Scheme) -> Type {
        // Create fresh type variables for each quantified variable
        let mut subst = HashMap::new();
        for var in &scheme.vars {
            let fresh = self.fresh_tyvar();
            subst.insert(var.clone(), Type::Var(fresh));
        }

        self.apply_subst(&subst, &scheme.ty)
    }

    fn occurs_check(&self, var: &TyVar, ty: &Type) -> bool {
        match ty {
            Type::Var(name) => name == var,
            Type::Arrow(t1, t2) => self.occurs_check(var, t1) || self.occurs_check(var, t2),
            Type::Tuple(types) => types.iter().any(|t| self.occurs_check(var, t)),
            Type::Int | Type::Bool => false,
        }
    }

    fn unify(&self, t1: &Type, t2: &Type) -> Result<(Subst, InferenceTree)> {
        let input = format!("{} ~ {}", t1, t2);

        match (t1, t2) {
            (Type::Int, Type::Int) | (Type::Bool, Type::Bool) => {
                let tree = InferenceTree::new("Unify-Base", &input, "{}", vec![]);
                Ok((HashMap::new(), tree))
            }
            (Type::Var(v), ty) | (ty, Type::Var(v)) => {
                if ty == &Type::Var(v.clone()) {
                    let tree = InferenceTree::new("Unify-Var-Same", &input, "{}", vec![]);
                    Ok((HashMap::new(), tree))
                } else if self.occurs_check(v, ty) {
                    Err(InferenceError::OccursCheck {
                        var: v.clone(),
                        ty: ty.clone(),
                    })
                } else {
                    let mut subst = HashMap::new();
                    subst.insert(v.clone(), ty.clone());
                    let output = format!("{{{}/{}}}", ty, v);
                    let tree = InferenceTree::new("Unify-Var", &input, &output, vec![]);
                    Ok((subst, tree))
                }
            }
            (Type::Arrow(a1, a2), Type::Arrow(b1, b2)) => {
                let (s1, tree1) = self.unify(a1, b1)?;
                let a2_subst = self.apply_subst(&s1, a2);
                let b2_subst = self.apply_subst(&s1, b2);
                let (s2, tree2) = self.unify(&a2_subst, &b2_subst)?;
                let final_subst = self.compose_subst(&s2, &s1);
                let output = self.pretty_subst(&final_subst);
                let tree = InferenceTree::new("Unify-Arrow", &input, &output, vec![tree1, tree2]);
                Ok((final_subst, tree))
            }
            (Type::Tuple(ts1), Type::Tuple(ts2)) => {
                if ts1.len() != ts2.len() {
                    return Err(InferenceError::TupleLengthMismatch {
                        left_len: ts1.len(),
                        right_len: ts2.len(),
                    });
                }

                let mut subst = HashMap::new();
                let mut trees = Vec::new();

                for (t1, t2) in ts1.iter().zip(ts2.iter()) {
                    let t1_subst = self.apply_subst(&subst, t1);
                    let t2_subst = self.apply_subst(&subst, t2);
                    let (s, tree) = self.unify(&t1_subst, &t2_subst)?;
                    subst = self.compose_subst(&s, &subst);
                    trees.push(tree);
                }

                let output = self.pretty_subst(&subst);
                let tree = InferenceTree::new("Unify-Tuple", &input, &output, trees);
                Ok((subst, tree))
            }
            _ => Err(InferenceError::UnificationFailure {
                expected: t1.clone(),
                actual: t2.clone(),
            }),
        }
    }

    pub fn infer(&mut self, env: &Env, expr: &Expr) -> Result<(Subst, Type, InferenceTree)> {
        match expr {
            Expr::Lit(Lit::Int(_)) => self.infer_lit_int(env, expr),
            Expr::Lit(Lit::Bool(_)) => self.infer_lit_bool(env, expr),
            Expr::Var(name) => self.infer_var(env, expr, name),
            Expr::Abs(param, body) => self.infer_abs(env, expr, param, body),
            Expr::App(func, arg) => self.infer_app(env, expr, func, arg),
            Expr::Let(var, value, body) => self.infer_let(env, expr, var, value, body),
            Expr::Tuple(exprs) => self.infer_tuple(env, expr, exprs),
        }
    }

    /// T-LitInt: ─────────────────
    ///           Γ ⊢ n : Int
    fn infer_lit_int(&mut self, env: &Env, expr: &Expr) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);
        let tree = InferenceTree::new("T-Int", &input, "Int", vec![]);
        Ok((HashMap::new(), Type::Int, tree))
    }

    /// T-LitBool: ─────────────────
    ///            Γ ⊢ b : Bool
    fn infer_lit_bool(&mut self, env: &Env, expr: &Expr) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);
        let tree = InferenceTree::new("T-Bool", &input, "Bool", vec![]);
        Ok((HashMap::new(), Type::Bool, tree))
    }

    /// T-Var: x : σ ∈ Γ    τ = inst(σ)
    ///        ─────────────────────────
    ///               Γ ⊢ x : τ
    fn infer_var(
        &mut self,
        env: &Env,
        expr: &Expr,
        name: &str,
    ) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

        match env.get(name) {
            Some(scheme) => {
                let instantiated = self.instantiate(scheme);
                let output = format!("{}", instantiated);
                let tree = InferenceTree::new("T-Var", &input, &output, vec![]);
                Ok((HashMap::new(), instantiated, tree))
            }
            None => Err(InferenceError::UnboundVariable {
                name: name.to_string(),
            }),
        }
    }

    /// T-Lam: Γ, x : α ⊢ e : τ    α fresh
    ///        ─────────────────────────────
    ///           Γ ⊢ λx. e : α → τ
    fn infer_abs(
        &mut self,
        env: &Env,
        expr: &Expr,
        param: &str,
        body: &Expr,
    ) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

        let param_type = Type::Var(self.fresh_tyvar());
        let mut new_env = env.clone();
        // Insert a monomorphic scheme for the parameter
        let param_scheme = Scheme {
            vars: vec![],
            ty: param_type.clone(),
        };
        new_env.insert(param.to_string(), param_scheme);

        let (s1, body_type, tree1) = self.infer(&new_env, body)?;
        let param_type_subst = self.apply_subst(&s1, &param_type);
        let result_type = Type::Arrow(Box::new(param_type_subst), Box::new(body_type));

        let output = format!("{}", result_type);
        let tree = InferenceTree::new("T-Abs", &input, &output, vec![tree1]);
        Ok((s1, result_type, tree))
    }

    /// T-App: Γ ⊢ e₁ : τ₁    Γ ⊢ e₂ : τ₂    α fresh    S = unify(τ₁, τ₂ → α)
    ///        ──────────────────────────────────────────────────────────────
    ///                            Γ ⊢ e₁ e₂ : S(α)
    fn infer_app(
        &mut self,
        env: &Env,
        expr: &Expr,
        func: &Expr,
        arg: &Expr,
    ) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

        let result_type = Type::Var(self.fresh_tyvar());

        let (s1, func_type, tree1) = self.infer(env, func)?;
        let env_subst = self.apply_subst_env(&s1, env);
        let (s2, arg_type, tree2) = self.infer(&env_subst, arg)?;

        let func_type_subst = self.apply_subst(&s2, &func_type);
        let expected_func_type = Type::Arrow(Box::new(arg_type), Box::new(result_type.clone()));

        let (s3, tree3) = self.unify(&func_type_subst, &expected_func_type)?;

        let final_subst = self.compose_subst(&s3, &self.compose_subst(&s2, &s1));
        let final_type = self.apply_subst(&s3, &result_type);

        let output = format!("{}", final_type);
        let tree = InferenceTree::new("T-App", &input, &output, vec![tree1, tree2, tree3]);
        Ok((final_subst, final_type, tree))
    }

    /// T-Let: Γ ⊢ e₁ : τ₁    σ = gen(Γ, τ₁)    Γ, x : σ ⊢ e₂ : τ₂
    ///        ──────────────────────────────────────────────────────
    ///                     Γ ⊢ let x = e₁ in e₂ : τ₂
    fn infer_let(
        &mut self,
        env: &Env,
        expr: &Expr,
        var: &str,
        value: &Expr,
        body: &Expr,
    ) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

        let (s1, value_type, tree1) = self.infer(env, value)?;
        let env_subst = self.apply_subst_env(&s1, env);
        let generalized_type = self.generalize(&env_subst, &value_type);

        let mut new_env = env_subst;
        new_env.insert(var.to_string(), generalized_type);

        let (s2, body_type, tree2) = self.infer(&new_env, body)?;

        let final_subst = self.compose_subst(&s2, &s1);
        let output = format!("{}", body_type);
        let tree = InferenceTree::new("T-Let", &input, &output, vec![tree1, tree2]);
        Ok((final_subst, body_type, tree))
    }

    /// T-Tuple: Γ ⊢ e₁ : τ₁    ...    Γ ⊢ eₙ : τₙ
    ///          ─────────────────────────────────────
    ///              Γ ⊢ (e₁, ..., eₙ) : (τ₁, ..., τₙ)
    fn infer_tuple(
        &mut self,
        env: &Env,
        expr: &Expr,
        exprs: &[Expr],
    ) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

        let mut subst = HashMap::new();
        let mut types = Vec::new();
        let mut trees = Vec::new();
        let mut current_env = env.clone();

        for expr in exprs {
            let (s, ty, tree) = self.infer(&current_env, expr)?;
            subst = self.compose_subst(&s, &subst);
            current_env = self.apply_subst_env(&s, &current_env);
            types.push(ty);
            trees.push(tree);
        }

        let result_type = Type::Tuple(types);
        let output = format!("{}", result_type);
        let tree = InferenceTree::new("T-Tuple", &input, &output, trees);
        Ok((subst, result_type, tree))
    }
}
pub fn run_inference(expr: &Expr) -> Result<InferenceTree> {
    let mut inference = TypeInference::new();
    let env = BTreeMap::new();
    let (_, _, tree) = inference.infer(&env, expr)?;
    Ok(tree)
}
pub fn infer_type_only(expr: &Expr) -> Result<Type> {
    let mut inference = TypeInference::new();
    let env = BTreeMap::new();
    let (_, ty, _) = inference.infer(&env, expr)?;
    Ok(ty)
}
pub struct TypeInference {
    counter: usize,
}
}

The inference engine’s one piece of mutable state is a counter for generating fresh type variables, ensuring each unknown type gets a unique identifier. This counter-based approach is a simple but effective way to avoid naming collisions during inference.

#![allow(unused)]
fn main() {
fn fresh_tyvar(&mut self) -> TyVar {
    let var = format!("t{}", self.counter);
    self.counter += 1;
    var
}
}

Fresh variable generation forms the foundation for Algorithm W’s systematic approach to handling unknowns. Each time we encounter an expression whose type we don’t yet know, we assign it a fresh type variable. These variables later get unified with concrete types as we discover more information about the program’s structure.

Substitution and Unification

Type substitutions represent the core computational mechanism of Algorithm W. A substitution maps type variables to concrete types, effectively “solving” part of our type inference puzzle.

The application of substitutions must handle the recursive structure of types correctly, ensuring that substitutions propagate through compound types like arrows and tuples.

#![allow(unused)]
fn main() {
fn apply_subst(&self, subst: &Subst, ty: &Type) -> Type {
    match ty {
        Type::Var(name) => subst.get(name).cloned().unwrap_or_else(|| ty.clone()),
        Type::Arrow(t1, t2) => Type::Arrow(
            Box::new(self.apply_subst(subst, t1)),
            Box::new(self.apply_subst(subst, t2)),
        ),
        Type::Tuple(types) => {
            Type::Tuple(types.iter().map(|t| self.apply_subst(subst, t)).collect())
        }
        Type::Int | Type::Bool => ty.clone(),
    }
}
}

Substitution application demonstrates how type information flows through our system. When we apply a substitution to an arrow type, we must apply it recursively to both the parameter and return types. This ensures that type information discovered in one part of a program correctly influences other parts.

Composition of substitutions allows us to combine multiple partial solutions into a more complete understanding of our program’s types.

#![allow(unused)]
fn main() {
fn compose_subst(&self, s1: &Subst, s2: &Subst) -> Subst {
    let mut result = s1.clone();
    for (k, v) in s2 {
        result.insert(k.clone(), self.apply_subst(s1, v));
    }
    result
}
}

The composition operation ensures that when we have multiple substitutions from different parts of our inference process, we can combine them into a single, consistent substitution that represents our cumulative knowledge about the program’s types.

Substitutions must also be applied to entire type environments when we discover new type information. This operation updates all the types in the environment according to the current substitution.

#![allow(unused)]
fn main() {
fn apply_subst_env(&self, subst: &Subst, env: &Env) -> Env {
    env.iter()
        .map(|(k, v)| (k.clone(), self.apply_subst_scheme(subst, v)))
        .collect()
}
}

Environment substitution is crucial for maintaining consistency as inference progresses. When we discover that a type variable should be instantiated to a concrete type, we must update not just individual types but entire environments to reflect this new knowledge.

Unification

Unification is the heart of type inference, solving constraints between types. The unification algorithm produces substitutions that make two types equivalent:

Reflexivity - identical types unify trivially: \[ \frac{}{\text{unify}(τ, τ) = \emptyset} \text{(U-Refl)} \]

Variable unification with occurs check: \[ \frac{α \notin \text{ftv}(τ)}{\text{unify}(α, τ) = [α ↦ τ]} \text{(U-VarL)} \]

\[ \frac{α \notin \text{ftv}(τ)}{\text{unify}(τ, α) = [α ↦ τ]} \text{(U-VarR)} \]

Arrow type unification decomposes into domain and codomain: \[ \frac{S₁ = \text{unify}(τ₁, τ₃) \quad S₂ = \text{unify}(S₁(τ₂), S₁(τ₄))}{\text{unify}(τ₁ → τ₂, τ₃ → τ₄) = S₂ ∘ S₁} \text{(U-Arrow)} \]

Tuple unification requires component-wise unification: \[ \frac{S₁ = \text{unify}(τ₁, τ₃) \quad S₂ = \text{unify}(S₁(τ₂), S₁(τ₄))}{\text{unify}((τ₁, τ₂), (τ₃, τ₄)) = S₂ ∘ S₁} \text{(U-Tuple)} \]

Base type unification succeeds only for identical types: \[ \frac{}{\text{unify}(\text{Int}, \text{Int}) = \emptyset} \text{(U-Int)} \]

\[ \frac{}{\text{unify}(\text{Bool}, \text{Bool}) = \emptyset} \text{(U-Bool)} \]

These unification rules ensure that type constraints are solved systematically while maintaining soundness through the occurs check.

When we have two types that must be equal (such as the parameter type of a function and the type of an argument being passed to it), unification determines whether this constraint can be satisfied and, if so, what substitution makes them equal.

#![allow(unused)]
fn main() {
fn unify(&self, t1: &Type, t2: &Type) -> Result<(Subst, InferenceTree)> {
    let input = format!("{} ~ {}", t1, t2);

    match (t1, t2) {
        (Type::Int, Type::Int) | (Type::Bool, Type::Bool) => {
            let tree = InferenceTree::new("Unify-Base", &input, "{}", vec![]);
            Ok((HashMap::new(), tree))
        }
        (Type::Var(v), ty) | (ty, Type::Var(v)) => {
            if ty == &Type::Var(v.clone()) {
                let tree = InferenceTree::new("Unify-Var-Same", &input, "{}", vec![]);
                Ok((HashMap::new(), tree))
            } else if self.occurs_check(v, ty) {
                Err(InferenceError::OccursCheck {
                    var: v.clone(),
                    ty: ty.clone(),
                })
            } else {
                let mut subst = HashMap::new();
                subst.insert(v.clone(), ty.clone());
                let output = format!("{{{}/{}}}", ty, v);
                let tree = InferenceTree::new("Unify-Var", &input, &output, vec![]);
                Ok((subst, tree))
            }
        }
        (Type::Arrow(a1, a2), Type::Arrow(b1, b2)) => {
            let (s1, tree1) = self.unify(a1, b1)?;
            let a2_subst = self.apply_subst(&s1, a2);
            let b2_subst = self.apply_subst(&s1, b2);
            let (s2, tree2) = self.unify(&a2_subst, &b2_subst)?;
            let final_subst = self.compose_subst(&s2, &s1);
            let output = self.pretty_subst(&final_subst);
            let tree = InferenceTree::new("Unify-Arrow", &input, &output, vec![tree1, tree2]);
            Ok((final_subst, tree))
        }
        (Type::Tuple(ts1), Type::Tuple(ts2)) => {
            if ts1.len() != ts2.len() {
                return Err(InferenceError::TupleLengthMismatch {
                    left_len: ts1.len(),
                    right_len: ts2.len(),
                });
            }

            let mut subst = HashMap::new();
            let mut trees = Vec::new();

            for (t1, t2) in ts1.iter().zip(ts2.iter()) {
                let t1_subst = self.apply_subst(&subst, t1);
                let t2_subst = self.apply_subst(&subst, t2);
                let (s, tree) = self.unify(&t1_subst, &t2_subst)?;
                subst = self.compose_subst(&s, &subst);
                trees.push(tree);
            }

            let output = self.pretty_subst(&subst);
            let tree = InferenceTree::new("Unify-Tuple", &input, &output, trees);
            Ok((subst, tree))
        }
        _ => Err(InferenceError::UnificationFailure {
            expected: t1.clone(),
            actual: t2.clone(),
        }),
    }
}
}

The unification algorithm handles several distinct cases, each representing a different constraint-solving scenario. When unifying two identical base types like Int with Int, no substitution is needed. When unifying a type variable with any other type, we create a substitution that maps the variable to that type, provided the occurs check passes.

The occurs check prevents infinite types by ensuring that a type variable doesn’t appear within the type it’s being unified with.

#![allow(unused)]
fn main() {
fn occurs_check(&self, var: &TyVar, ty: &Type) -> bool {
    match ty {
        Type::Var(name) => name == var,
        Type::Arrow(t1, t2) => self.occurs_check(var, t1) || self.occurs_check(var, t2),
        Type::Tuple(types) => types.iter().any(|t| self.occurs_check(var, t)),
        Type::Int | Type::Bool => false,
    }
}
}

This check is essential for soundness. Without it, unifying t0 with t0 -> Int would record the circular binding t0 = t0 -> Int, an infinite type for which substitution application would never terminate.

For compound types like arrows, unification becomes recursive. We must unify corresponding subcomponents and then compose the resulting substitutions. This process ensures that complex types maintain their structural relationships while allowing for flexible instantiation of type variables.

The Main Inference Algorithm

The central infer method implements Algorithm W proper, analyzing expressions to determine their types while accumulating the necessary substitutions. Our implementation uses a modular approach where each syntactic construct has its own helper method implementing the corresponding typing rule.

#![allow(unused)]
fn main() {
pub fn infer(&mut self, env: &Env, expr: &Expr) -> Result<(Subst, Type, InferenceTree)> {
    match expr {
        Expr::Lit(Lit::Int(_)) => self.infer_lit_int(env, expr),
        Expr::Lit(Lit::Bool(_)) => self.infer_lit_bool(env, expr),
        Expr::Var(name) => self.infer_var(env, expr, name),
        Expr::Abs(param, body) => self.infer_abs(env, expr, param, body),
        Expr::App(func, arg) => self.infer_app(env, expr, func, arg),
        Expr::Let(var, value, body) => self.infer_let(env, expr, var, value, body),
        Expr::Tuple(exprs) => self.infer_tuple(env, expr, exprs),
    }
}
}

Each helper method corresponds directly to a formal typing rule, making the relationship between theory and implementation explicit.

Variable Lookup

\[ \frac{x : σ \in Γ \quad τ = \text{inst}(σ)}{Γ ⊢ x : τ} \text{(T-Var)} \]

#![allow(unused)]
fn main() {
/// T-Var: x : σ ∈ Γ    τ = inst(σ)
///        ─────────────────────────
///               Γ ⊢ x : τ
fn infer_var(
    &mut self,
    env: &Env,
    expr: &Expr,
    name: &str,
) -> Result<(Subst, Type, InferenceTree)> {
    let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

    match env.get(name) {
        Some(scheme) => {
            let instantiated = self.instantiate(scheme);
            let output = format!("{}", instantiated);
            let tree = InferenceTree::new("T-Var", &input, &output, vec![]);
            Ok((HashMap::new(), instantiated, tree))
        }
        None => Err(InferenceError::UnboundVariable {
            name: name.to_string(),
        }),
    }
}
}

Variable lookup requires instantiation of polymorphic types. When we find a variable in the environment, it might have a polymorphic type scheme like ∀α. α → α. We create a fresh monomorphic instance by replacing quantified variables with fresh type variables.

Lambda Abstraction

\[ \frac{Γ, x : α ⊢ e : τ \quad α \text{ fresh}}{Γ ⊢ λx. e : α → τ} \text{(T-Lam)} \]

#![allow(unused)]
fn main() {
/// T-Lam: Γ, x : α ⊢ e : τ    α fresh
///        ─────────────────────────────
///           Γ ⊢ λx. e : α → τ
fn infer_abs(
    &mut self,
    env: &Env,
    expr: &Expr,
    param: &str,
    body: &Expr,
) -> Result<(Subst, Type, InferenceTree)> {
    let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

    let param_type = Type::Var(self.fresh_tyvar());
    let mut new_env = env.clone();
    // Insert a monomorphic scheme for the parameter
    let param_scheme = Scheme {
        vars: vec![],
        ty: param_type.clone(),
    };
    new_env.insert(param.to_string(), param_scheme);

    let (s1, body_type, tree1) = self.infer(&new_env, body)?;
    let param_type_subst = self.apply_subst(&s1, &param_type);
    let result_type = Type::Arrow(Box::new(param_type_subst), Box::new(body_type));

    let output = format!("{}", result_type);
    let tree = InferenceTree::new("T-Abs", &input, &output, vec![tree1]);
    Ok((s1, result_type, tree))
}
}

Lambda abstractions introduce new variable bindings. We assign a fresh type variable to the parameter, extend the environment, and infer the body’s type. Any constraints discovered during body inference get propagated back through substitution.

Function Application

\[ \frac{Γ ⊢ e₁ : τ₁ \quad Γ ⊢ e₂ : τ₂ \quad α \text{ fresh} \quad S = \text{unify}(τ₁, τ₂ → α)}{Γ ⊢ e₁ \, e₂ : S(α)} \text{(T-App)} \]

#![allow(unused)]
fn main() {
/// T-App: Γ ⊢ e₁ : τ₁    Γ ⊢ e₂ : τ₂    α fresh    S = unify(τ₁, τ₂ → α)
///        ──────────────────────────────────────────────────────────────
///                            Γ ⊢ e₁ e₂ : S(α)
fn infer_app(
    &mut self,
    env: &Env,
    expr: &Expr,
    func: &Expr,
    arg: &Expr,
) -> Result<(Subst, Type, InferenceTree)> {
    let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

    let result_type = Type::Var(self.fresh_tyvar());

    let (s1, func_type, tree1) = self.infer(env, func)?;
    let env_subst = self.apply_subst_env(&s1, env);
    let (s2, arg_type, tree2) = self.infer(&env_subst, arg)?;

    let func_type_subst = self.apply_subst(&s2, &func_type);
    let expected_func_type = Type::Arrow(Box::new(arg_type), Box::new(result_type.clone()));

    let (s3, tree3) = self.unify(&func_type_subst, &expected_func_type)?;

    let final_subst = self.compose_subst(&s3, &self.compose_subst(&s2, &s1));
    let final_type = self.apply_subst(&s3, &result_type);

    let output = format!("{}", final_type);
    let tree = InferenceTree::new("T-App", &input, &output, vec![tree1, tree2, tree3]);
    Ok((final_subst, final_type, tree))
}
}

Application drives constraint generation. We infer types for both function and argument, then unify the function type with an arrow type constructed from the argument type and a fresh result type variable.

Let-Polymorphism

\[ \frac{Γ ⊢ e₁ : τ₁ \quad σ = \text{gen}(Γ, τ₁) \quad Γ, x : σ ⊢ e₂ : τ₂}{Γ ⊢ \text{let } x = e₁ \text{ in } e₂ : τ₂} \text{(T-Let)} \]

#![allow(unused)]
fn main() {
/// T-Let: Γ ⊢ e₁ : τ₁    σ = gen(Γ, τ₁)    Γ, x : σ ⊢ e₂ : τ₂
///        ──────────────────────────────────────────────────────
///                     Γ ⊢ let x = e₁ in e₂ : τ₂
fn infer_let(
    &mut self,
    env: &Env,
    expr: &Expr,
    var: &str,
    value: &Expr,
    body: &Expr,
) -> Result<(Subst, Type, InferenceTree)> {
    let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

    let (s1, value_type, tree1) = self.infer(env, value)?;
    let env_subst = self.apply_subst_env(&s1, env);
    let generalized_type = self.generalize(&env_subst, &value_type);

    let mut new_env = env_subst;
    new_env.insert(var.to_string(), generalized_type);

    let (s2, body_type, tree2) = self.infer(&new_env, body)?;

    let final_subst = self.compose_subst(&s2, &s1);
    let output = format!("{}", body_type);
    let tree = InferenceTree::new("T-Let", &input, &output, vec![tree1, tree2]);
    Ok((final_subst, body_type, tree))
}
}

Let expressions enable polymorphism through generalization. After inferring the bound expression’s type, we generalize it by quantifying over type variables not constrained by the environment. This allows polymorphic usage in the let body.

Literal Types

\[ \frac{}{Γ ⊢ n : \text{Int}} \text{(T-LitInt)} \]

\[ \frac{}{Γ ⊢ b : \text{Bool}} \text{(T-LitBool)} \]

#![allow(unused)]
fn main() {
/// T-LitInt: ─────────────────
///           Γ ⊢ n : Int
fn infer_lit_int(&mut self, env: &Env, expr: &Expr) -> Result<(Subst, Type, InferenceTree)> {
    let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);
    let tree = InferenceTree::new("T-Int", &input, "Int", vec![]);
    Ok((HashMap::new(), Type::Int, tree))
}
}
#![allow(unused)]
fn main() {
/// T-LitBool: ─────────────────
///            Γ ⊢ b : Bool
fn infer_lit_bool(&mut self, env: &Env, expr: &Expr) -> Result<(Subst, Type, InferenceTree)> {
    let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);
    let tree = InferenceTree::new("T-Bool", &input, "Bool", vec![]);
    Ok((HashMap::new(), Type::Bool, tree))
}
}

Literals have known types and require no constraint generation.

Generalization and Instantiation

The generalization and instantiation mechanisms handle let-polymorphism, allowing variables bound in let expressions to be used with multiple different types.

Understanding Generalization with Examples

Generalization turns concrete types into polymorphic type schemes. Consider this simple example:

let id = \x -> x in (id 42, id true)

When we infer the type of \x -> x, we get something like t0 → t0 where t0 is a type variable. Since t0 doesn’t appear anywhere else in the environment, we can generalize it to ∀t0. t0 → t0, making id polymorphic.

This is what allows id to be used with both 42 (type Int) and true (type Bool) in the same expression. Without generalization, the first use would fix t0 to Int, making the second use fail.

#![allow(unused)]
fn main() {
fn generalize(&self, env: &Env, ty: &Type) -> Scheme {
    let type_vars = self.free_type_vars(ty);
    let env_vars = self.free_type_vars_env(env);
    let mut free_vars: Vec<_> = type_vars.difference(&env_vars).cloned().collect();
    free_vars.sort(); // Sort for deterministic behavior

    Scheme {
        vars: free_vars,
        ty: ty.clone(),
    }
}
}

Generalization identifies type variables that could be made polymorphic by checking which ones don’t appear free in the current environment. If a type variable isn’t constrained by anything else in scope, it’s safe to quantify over it.

Understanding Instantiation with Examples

Instantiation creates fresh monomorphic versions from polymorphic types. When we use a polymorphic function like our identity function, we need to create a fresh copy of its type for each use.

Consider this expression:

let id = \x -> x in id id

Here we’re applying the polymorphic identity function to itself. The second id is instantiated at type α → α, and unification then refines the first id’s instance to (α → α) → (α → α). These independent instantiations are what allow the self-application to type-check.

#![allow(unused)]
fn main() {
fn instantiate(&mut self, scheme: &Scheme) -> Type {
    // Create fresh type variables for each quantified variable
    let mut subst = HashMap::new();
    for var in &scheme.vars {
        let fresh = self.fresh_tyvar();
        subst.insert(var.clone(), Type::Var(fresh));
    }

    self.apply_subst(&subst, &scheme.ty)
}
}

Instantiation replaces quantified type variables with fresh type variables. This ensures that each use of a polymorphic function gets its own independent type constraints, preventing interference between different call sites.

Free Type Variables

Generalization depends on computing the free type variables in both individual types and entire environments. These operations identify which type variables could potentially be generalized versus those that are already constrained.

#![allow(unused)]
fn main() {
fn free_type_vars(&self, ty: &Type) -> HashSet<TyVar> {
    match ty {
        Type::Var(name) => {
            let mut set = HashSet::new();
            set.insert(name.clone());
            set
        }
        Type::Arrow(t1, t2) => {
            let mut set = self.free_type_vars(t1);
            set.extend(self.free_type_vars(t2));
            set
        }
        Type::Tuple(types) => {
            let mut set = HashSet::new();
            for t in types {
                set.extend(self.free_type_vars(t));
            }
            set
        }
        Type::Int | Type::Bool => HashSet::new(),
    }
}
}

The free type variables computation traverses type structures recursively, collecting all type variables that appear unbound. For compound types like arrows and tuples, it must traverse all subcomponents to ensure no variables are missed.

#![allow(unused)]
fn main() {
fn free_type_vars_env(&self, env: &Env) -> HashSet<TyVar> {
    let mut set = HashSet::new();
    for scheme in env.values() {
        set.extend(self.free_type_vars_scheme(scheme));
    }
    set
}
}

Computing free variables across entire environments requires examining every type in the environment and taking the union of their free variables. This gives us the complete set of type variables that are constrained by the current context.

Our implementation supports polymorphic instantiation by generating fresh type variables for each quantified variable in a scheme at every use site. This is what lets the identity function work on integers in one context and booleans in another: the expression let id = \x -> x in (id, id) infers the type (t1 -> t1, t2 -> t2), each component instantiated independently.

Error Handling and Inference Trees

Our implementation provides detailed error reporting and generates inference trees that show the step-by-step reasoning process.

#![allow(unused)]
fn main() {
use std::collections::{BTreeMap, HashMap, HashSet};
use std::fmt;
use crate::ast::{Expr, Lit, Scheme, Type};
use crate::errors::{InferenceError, Result};
pub type TyVar = String;
pub type TmVar = String;
pub type Env = BTreeMap<TmVar, Scheme>;
pub type Subst = HashMap<TyVar, Type>;
pub struct InferenceTree {
    pub rule: String,
    pub input: String,
    pub output: String,
    pub children: Vec<InferenceTree>,
}
impl InferenceTree {
    fn new(rule: &str, input: &str, output: &str, children: Vec<InferenceTree>) -> Self {
        Self {
            rule: rule.to_string(),
            input: input.to_string(),
            output: output.to_string(),
            children,
        }
    }
}
impl fmt::Display for InferenceTree {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        self.display_with_indent(f, 0)
    }
}
impl InferenceTree {
    fn display_with_indent(&self, f: &mut fmt::Formatter, indent: usize) -> fmt::Result {
        let prefix = "  ".repeat(indent);
        writeln!(
            f,
            "{}{}: {} => {}",
            prefix, self.rule, self.input, self.output
        )?;
        for child in &self.children {
            child.display_with_indent(f, indent + 1)?;
        }
        Ok(())
    }
}
pub struct TypeInference {
    counter: usize,
}
impl Default for TypeInference {
    fn default() -> Self {
        Self::new()
    }
}
#[allow(clippy::only_used_in_recursion)]
impl TypeInference {
    pub fn new() -> Self {
        Self { counter: 0 }
    }

    fn fresh_tyvar(&mut self) -> TyVar {
        let var = format!("t{}", self.counter);
        self.counter += 1;
        var
    }

    fn pretty_env(&self, env: &Env) -> String {
        if env.is_empty() {
            "{}".to_string()
        } else {
            let entries: Vec<String> = env.iter().map(|(k, v)| format!("{}: {}", k, v)).collect();
            format!("{{{}}}", entries.join(", "))
        }
    }

    fn pretty_subst(&self, subst: &Subst) -> String {
        if subst.is_empty() {
            "{}".to_string()
        } else {
            let entries: Vec<String> = subst.iter().map(|(k, v)| format!("{}/{}", v, k)).collect();
            format!("{{{}}}", entries.join(", "))
        }
    }

    fn apply_subst(&self, subst: &Subst, ty: &Type) -> Type {
        match ty {
            Type::Var(name) => subst.get(name).cloned().unwrap_or_else(|| ty.clone()),
            Type::Arrow(t1, t2) => Type::Arrow(
                Box::new(self.apply_subst(subst, t1)),
                Box::new(self.apply_subst(subst, t2)),
            ),
            Type::Tuple(types) => {
                Type::Tuple(types.iter().map(|t| self.apply_subst(subst, t)).collect())
            }
            Type::Int | Type::Bool => ty.clone(),
        }
    }

    fn apply_subst_scheme(&self, subst: &Subst, scheme: &Scheme) -> Scheme {
        // Remove bindings for quantified variables to avoid capture
        let mut filtered_subst = subst.clone();
        for var in &scheme.vars {
            filtered_subst.remove(var);
        }
        Scheme {
            vars: scheme.vars.clone(),
            ty: self.apply_subst(&filtered_subst, &scheme.ty),
        }
    }

    fn apply_subst_env(&self, subst: &Subst, env: &Env) -> Env {
        env.iter()
            .map(|(k, v)| (k.clone(), self.apply_subst_scheme(subst, v)))
            .collect()
    }

    fn compose_subst(&self, s1: &Subst, s2: &Subst) -> Subst {
        let mut result = s1.clone();
        for (k, v) in s2 {
            result.insert(k.clone(), self.apply_subst(s1, v));
        }
        result
    }

    fn free_type_vars(&self, ty: &Type) -> HashSet<TyVar> {
        match ty {
            Type::Var(name) => {
                let mut set = HashSet::new();
                set.insert(name.clone());
                set
            }
            Type::Arrow(t1, t2) => {
                let mut set = self.free_type_vars(t1);
                set.extend(self.free_type_vars(t2));
                set
            }
            Type::Tuple(types) => {
                let mut set = HashSet::new();
                for t in types {
                    set.extend(self.free_type_vars(t));
                }
                set
            }
            Type::Int | Type::Bool => HashSet::new(),
        }
    }

    fn free_type_vars_scheme(&self, scheme: &Scheme) -> HashSet<TyVar> {
        let mut set = self.free_type_vars(&scheme.ty);
        // Remove quantified variables
        for var in &scheme.vars {
            set.remove(var);
        }
        set
    }

    fn free_type_vars_env(&self, env: &Env) -> HashSet<TyVar> {
        let mut set = HashSet::new();
        for scheme in env.values() {
            set.extend(self.free_type_vars_scheme(scheme));
        }
        set
    }

    fn generalize(&self, env: &Env, ty: &Type) -> Scheme {
        let type_vars = self.free_type_vars(ty);
        let env_vars = self.free_type_vars_env(env);
        let mut free_vars: Vec<_> = type_vars.difference(&env_vars).cloned().collect();
        free_vars.sort(); // Sort for deterministic behavior

        Scheme {
            vars: free_vars,
            ty: ty.clone(),
        }
    }

    fn instantiate(&mut self, scheme: &Scheme) -> Type {
        // Create fresh type variables for each quantified variable
        let mut subst = HashMap::new();
        for var in &scheme.vars {
            let fresh = self.fresh_tyvar();
            subst.insert(var.clone(), Type::Var(fresh));
        }

        self.apply_subst(&subst, &scheme.ty)
    }

    fn occurs_check(&self, var: &TyVar, ty: &Type) -> bool {
        match ty {
            Type::Var(name) => name == var,
            Type::Arrow(t1, t2) => self.occurs_check(var, t1) || self.occurs_check(var, t2),
            Type::Tuple(types) => types.iter().any(|t| self.occurs_check(var, t)),
            Type::Int | Type::Bool => false,
        }
    }

    fn unify(&self, t1: &Type, t2: &Type) -> Result<(Subst, InferenceTree)> {
        let input = format!("{} ~ {}", t1, t2);

        match (t1, t2) {
            (Type::Int, Type::Int) | (Type::Bool, Type::Bool) => {
                let tree = InferenceTree::new("Unify-Base", &input, "{}", vec![]);
                Ok((HashMap::new(), tree))
            }
            (Type::Var(v), ty) | (ty, Type::Var(v)) => {
                if ty == &Type::Var(v.clone()) {
                    let tree = InferenceTree::new("Unify-Var-Same", &input, "{}", vec![]);
                    Ok((HashMap::new(), tree))
                } else if self.occurs_check(v, ty) {
                    Err(InferenceError::OccursCheck {
                        var: v.clone(),
                        ty: ty.clone(),
                    })
                } else {
                    let mut subst = HashMap::new();
                    subst.insert(v.clone(), ty.clone());
                    let output = format!("{{{}/{}}}", ty, v);
                    let tree = InferenceTree::new("Unify-Var", &input, &output, vec![]);
                    Ok((subst, tree))
                }
            }
            (Type::Arrow(a1, a2), Type::Arrow(b1, b2)) => {
                let (s1, tree1) = self.unify(a1, b1)?;
                let a2_subst = self.apply_subst(&s1, a2);
                let b2_subst = self.apply_subst(&s1, b2);
                let (s2, tree2) = self.unify(&a2_subst, &b2_subst)?;
                let final_subst = self.compose_subst(&s2, &s1);
                let output = self.pretty_subst(&final_subst);
                let tree = InferenceTree::new("Unify-Arrow", &input, &output, vec![tree1, tree2]);
                Ok((final_subst, tree))
            }
            (Type::Tuple(ts1), Type::Tuple(ts2)) => {
                if ts1.len() != ts2.len() {
                    return Err(InferenceError::TupleLengthMismatch {
                        left_len: ts1.len(),
                        right_len: ts2.len(),
                    });
                }

                let mut subst = HashMap::new();
                let mut trees = Vec::new();

                for (t1, t2) in ts1.iter().zip(ts2.iter()) {
                    let t1_subst = self.apply_subst(&subst, t1);
                    let t2_subst = self.apply_subst(&subst, t2);
                    let (s, tree) = self.unify(&t1_subst, &t2_subst)?;
                    subst = self.compose_subst(&s, &subst);
                    trees.push(tree);
                }

                let output = self.pretty_subst(&subst);
                let tree = InferenceTree::new("Unify-Tuple", &input, &output, trees);
                Ok((subst, tree))
            }
            _ => Err(InferenceError::UnificationFailure {
                expected: t1.clone(),
                actual: t2.clone(),
            }),
        }
    }

    pub fn infer(&mut self, env: &Env, expr: &Expr) -> Result<(Subst, Type, InferenceTree)> {
        match expr {
            Expr::Lit(Lit::Int(_)) => self.infer_lit_int(env, expr),
            Expr::Lit(Lit::Bool(_)) => self.infer_lit_bool(env, expr),
            Expr::Var(name) => self.infer_var(env, expr, name),
            Expr::Abs(param, body) => self.infer_abs(env, expr, param, body),
            Expr::App(func, arg) => self.infer_app(env, expr, func, arg),
            Expr::Let(var, value, body) => self.infer_let(env, expr, var, value, body),
            Expr::Tuple(exprs) => self.infer_tuple(env, expr, exprs),
        }
    }

    /// T-LitInt: ─────────────────
    ///           Γ ⊢ n : Int
    fn infer_lit_int(&mut self, env: &Env, expr: &Expr) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);
        let tree = InferenceTree::new("T-Int", &input, "Int", vec![]);
        Ok((HashMap::new(), Type::Int, tree))
    }

    /// T-LitBool: ─────────────────
    ///            Γ ⊢ b : Bool
    fn infer_lit_bool(&mut self, env: &Env, expr: &Expr) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);
        let tree = InferenceTree::new("T-Bool", &input, "Bool", vec![]);
        Ok((HashMap::new(), Type::Bool, tree))
    }

    /// T-Var: x : σ ∈ Γ    τ = inst(σ)
    ///        ─────────────────────────
    ///               Γ ⊢ x : τ
    fn infer_var(
        &mut self,
        env: &Env,
        expr: &Expr,
        name: &str,
    ) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

        match env.get(name) {
            Some(scheme) => {
                let instantiated = self.instantiate(scheme);
                let output = format!("{}", instantiated);
                let tree = InferenceTree::new("T-Var", &input, &output, vec![]);
                Ok((HashMap::new(), instantiated, tree))
            }
            None => Err(InferenceError::UnboundVariable {
                name: name.to_string(),
            }),
        }
    }

    /// T-Lam: Γ, x : α ⊢ e : τ    α fresh
    ///        ─────────────────────────────
    ///           Γ ⊢ λx. e : α → τ
    fn infer_abs(
        &mut self,
        env: &Env,
        expr: &Expr,
        param: &str,
        body: &Expr,
    ) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

        let param_type = Type::Var(self.fresh_tyvar());
        let mut new_env = env.clone();
        // Insert a monomorphic scheme for the parameter
        let param_scheme = Scheme {
            vars: vec![],
            ty: param_type.clone(),
        };
        new_env.insert(param.to_string(), param_scheme);

        let (s1, body_type, tree1) = self.infer(&new_env, body)?;
        let param_type_subst = self.apply_subst(&s1, &param_type);
        let result_type = Type::Arrow(Box::new(param_type_subst), Box::new(body_type));

        let output = format!("{}", result_type);
        let tree = InferenceTree::new("T-Abs", &input, &output, vec![tree1]);
        Ok((s1, result_type, tree))
    }

    /// T-App: Γ ⊢ e₁ : τ₁    Γ ⊢ e₂ : τ₂    α fresh    S = unify(τ₁, τ₂ → α)
    ///        ──────────────────────────────────────────────────────────────
    ///                            Γ ⊢ e₁ e₂ : S(α)
    fn infer_app(
        &mut self,
        env: &Env,
        expr: &Expr,
        func: &Expr,
        arg: &Expr,
    ) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

        let result_type = Type::Var(self.fresh_tyvar());

        let (s1, func_type, tree1) = self.infer(env, func)?;
        let env_subst = self.apply_subst_env(&s1, env);
        let (s2, arg_type, tree2) = self.infer(&env_subst, arg)?;

        let func_type_subst = self.apply_subst(&s2, &func_type);
        let expected_func_type = Type::Arrow(Box::new(arg_type), Box::new(result_type.clone()));

        let (s3, tree3) = self.unify(&func_type_subst, &expected_func_type)?;

        let final_subst = self.compose_subst(&s3, &self.compose_subst(&s2, &s1));
        let final_type = self.apply_subst(&s3, &result_type);

        let output = format!("{}", final_type);
        let tree = InferenceTree::new("T-App", &input, &output, vec![tree1, tree2, tree3]);
        Ok((final_subst, final_type, tree))
    }

    /// T-Let: Γ ⊢ e₁ : τ₁    σ = gen(Γ, τ₁)    Γ, x : σ ⊢ e₂ : τ₂
    ///        ──────────────────────────────────────────────────────
    ///                     Γ ⊢ let x = e₁ in e₂ : τ₂
    fn infer_let(
        &mut self,
        env: &Env,
        expr: &Expr,
        var: &str,
        value: &Expr,
        body: &Expr,
    ) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

        let (s1, value_type, tree1) = self.infer(env, value)?;
        let env_subst = self.apply_subst_env(&s1, env);
        let generalized_type = self.generalize(&env_subst, &value_type);

        let mut new_env = env_subst;
        new_env.insert(var.to_string(), generalized_type);

        let (s2, body_type, tree2) = self.infer(&new_env, body)?;

        let final_subst = self.compose_subst(&s2, &s1);
        let output = format!("{}", body_type);
        let tree = InferenceTree::new("T-Let", &input, &output, vec![tree1, tree2]);
        Ok((final_subst, body_type, tree))
    }

    /// T-Tuple: Γ ⊢ e₁ : τ₁    ...    Γ ⊢ eₙ : τₙ
    ///          ─────────────────────────────────────
    ///              Γ ⊢ (e₁, ..., eₙ) : (τ₁, ..., τₙ)
    fn infer_tuple(
        &mut self,
        env: &Env,
        expr: &Expr,
        exprs: &[Expr],
    ) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

        let mut subst = HashMap::new();
        let mut types = Vec::new();
        let mut trees = Vec::new();
        let mut current_env = env.clone();

        for expr in exprs {
            let (s, ty, tree) = self.infer(&current_env, expr)?;
            subst = self.compose_subst(&s, &subst);
            current_env = self.apply_subst_env(&s, &current_env);
            types.push(ty);
            trees.push(tree);
        }

        let result_type = Type::Tuple(types);
        let output = format!("{}", result_type);
        let tree = InferenceTree::new("T-Tuple", &input, &output, trees);
        Ok((subst, result_type, tree))
    }
}
pub fn run_inference(expr: &Expr) -> Result<InferenceTree> {
    let mut inference = TypeInference::new();
    let env = BTreeMap::new();
    let (_, _, tree) = inference.infer(&env, expr)?;
    Ok(tree)
}
pub fn infer_type_only(expr: &Expr) -> Result<Type> {
    let mut inference = TypeInference::new();
    let env = BTreeMap::new();
    let (_, ty, _) = inference.infer(&env, expr)?;
    Ok(ty)
}
#[derive(Debug)]
pub struct InferenceTree {
    pub rule: String,
    pub input: String,
    pub output: String,
    pub children: Vec<InferenceTree>,
}
}

These inference trees serve both debugging and educational purposes. They make explicit the implicit reasoning that Algorithm W performs, showing how type information flows through the program and how constraints get generated and solved.

The public interface provides both tree-generating and type-only versions of inference, supporting different use cases from interactive development to automated tooling.

For reference, the complete module is reproduced below as a single listing:

#![allow(unused)]
fn main() {
use std::collections::{BTreeMap, HashMap, HashSet};
use std::fmt;
use crate::ast::{Expr, Lit, Scheme, Type};
use crate::errors::{InferenceError, Result};
pub type TyVar = String;
pub type TmVar = String;
pub type Env = BTreeMap<TmVar, Scheme>;
pub type Subst = HashMap<TyVar, Type>;
#[derive(Debug)]
pub struct InferenceTree {
    pub rule: String,
    pub input: String,
    pub output: String,
    pub children: Vec<InferenceTree>,
}
impl InferenceTree {
    fn new(rule: &str, input: &str, output: &str, children: Vec<InferenceTree>) -> Self {
        Self {
            rule: rule.to_string(),
            input: input.to_string(),
            output: output.to_string(),
            children,
        }
    }
}
impl fmt::Display for InferenceTree {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        self.display_with_indent(f, 0)
    }
}
impl InferenceTree {
    fn display_with_indent(&self, f: &mut fmt::Formatter, indent: usize) -> fmt::Result {
        let prefix = "  ".repeat(indent);
        writeln!(
            f,
            "{}{}: {} => {}",
            prefix, self.rule, self.input, self.output
        )?;
        for child in &self.children {
            child.display_with_indent(f, indent + 1)?;
        }
        Ok(())
    }
}
pub struct TypeInference {
    counter: usize,
}
impl Default for TypeInference {
    fn default() -> Self {
        Self::new()
    }
}
#[allow(clippy::only_used_in_recursion)]
impl TypeInference {
    pub fn new() -> Self {
        Self { counter: 0 }
    }

    fn fresh_tyvar(&mut self) -> TyVar {
        let var = format!("t{}", self.counter);
        self.counter += 1;
        var
    }

    fn pretty_env(&self, env: &Env) -> String {
        if env.is_empty() {
            "{}".to_string()
        } else {
            let entries: Vec<String> = env.iter().map(|(k, v)| format!("{}: {}", k, v)).collect();
            format!("{{{}}}", entries.join(", "))
        }
    }

    fn pretty_subst(&self, subst: &Subst) -> String {
        if subst.is_empty() {
            "{}".to_string()
        } else {
            let entries: Vec<String> = subst.iter().map(|(k, v)| format!("{}/{}", v, k)).collect();
            format!("{{{}}}", entries.join(", "))
        }
    }

    fn apply_subst(&self, subst: &Subst, ty: &Type) -> Type {
        match ty {
            Type::Var(name) => subst.get(name).cloned().unwrap_or_else(|| ty.clone()),
            Type::Arrow(t1, t2) => Type::Arrow(
                Box::new(self.apply_subst(subst, t1)),
                Box::new(self.apply_subst(subst, t2)),
            ),
            Type::Tuple(types) => {
                Type::Tuple(types.iter().map(|t| self.apply_subst(subst, t)).collect())
            }
            Type::Int | Type::Bool => ty.clone(),
        }
    }

    fn apply_subst_scheme(&self, subst: &Subst, scheme: &Scheme) -> Scheme {
        // Remove bindings for quantified variables to avoid capture
        let mut filtered_subst = subst.clone();
        for var in &scheme.vars {
            filtered_subst.remove(var);
        }
        Scheme {
            vars: scheme.vars.clone(),
            ty: self.apply_subst(&filtered_subst, &scheme.ty),
        }
    }

    fn apply_subst_env(&self, subst: &Subst, env: &Env) -> Env {
        env.iter()
            .map(|(k, v)| (k.clone(), self.apply_subst_scheme(subst, v)))
            .collect()
    }

    fn compose_subst(&self, s1: &Subst, s2: &Subst) -> Subst {
        let mut result = s1.clone();
        for (k, v) in s2 {
            result.insert(k.clone(), self.apply_subst(s1, v));
        }
        result
    }

    fn free_type_vars(&self, ty: &Type) -> HashSet<TyVar> {
        match ty {
            Type::Var(name) => {
                let mut set = HashSet::new();
                set.insert(name.clone());
                set
            }
            Type::Arrow(t1, t2) => {
                let mut set = self.free_type_vars(t1);
                set.extend(self.free_type_vars(t2));
                set
            }
            Type::Tuple(types) => {
                let mut set = HashSet::new();
                for t in types {
                    set.extend(self.free_type_vars(t));
                }
                set
            }
            Type::Int | Type::Bool => HashSet::new(),
        }
    }

    fn free_type_vars_scheme(&self, scheme: &Scheme) -> HashSet<TyVar> {
        let mut set = self.free_type_vars(&scheme.ty);
        // Remove quantified variables
        for var in &scheme.vars {
            set.remove(var);
        }
        set
    }

    fn free_type_vars_env(&self, env: &Env) -> HashSet<TyVar> {
        let mut set = HashSet::new();
        for scheme in env.values() {
            set.extend(self.free_type_vars_scheme(scheme));
        }
        set
    }

    fn generalize(&self, env: &Env, ty: &Type) -> Scheme {
        let type_vars = self.free_type_vars(ty);
        let env_vars = self.free_type_vars_env(env);
        let mut free_vars: Vec<_> = type_vars.difference(&env_vars).cloned().collect();
        free_vars.sort(); // Sort for deterministic behavior

        Scheme {
            vars: free_vars,
            ty: ty.clone(),
        }
    }

    fn instantiate(&mut self, scheme: &Scheme) -> Type {
        // Create fresh type variables for each quantified variable
        let mut subst = HashMap::new();
        for var in &scheme.vars {
            let fresh = self.fresh_tyvar();
            subst.insert(var.clone(), Type::Var(fresh));
        }

        self.apply_subst(&subst, &scheme.ty)
    }

    fn occurs_check(&self, var: &TyVar, ty: &Type) -> bool {
        match ty {
            Type::Var(name) => name == var,
            Type::Arrow(t1, t2) => self.occurs_check(var, t1) || self.occurs_check(var, t2),
            Type::Tuple(types) => types.iter().any(|t| self.occurs_check(var, t)),
            Type::Int | Type::Bool => false,
        }
    }

    fn unify(&self, t1: &Type, t2: &Type) -> Result<(Subst, InferenceTree)> {
        let input = format!("{} ~ {}", t1, t2);

        match (t1, t2) {
            (Type::Int, Type::Int) | (Type::Bool, Type::Bool) => {
                let tree = InferenceTree::new("Unify-Base", &input, "{}", vec![]);
                Ok((HashMap::new(), tree))
            }
            (Type::Var(v), ty) | (ty, Type::Var(v)) => {
                if ty == &Type::Var(v.clone()) {
                    let tree = InferenceTree::new("Unify-Var-Same", &input, "{}", vec![]);
                    Ok((HashMap::new(), tree))
                } else if self.occurs_check(v, ty) {
                    Err(InferenceError::OccursCheck {
                        var: v.clone(),
                        ty: ty.clone(),
                    })
                } else {
                    let mut subst = HashMap::new();
                    subst.insert(v.clone(), ty.clone());
                    let output = format!("{{{}/{}}}", ty, v);
                    let tree = InferenceTree::new("Unify-Var", &input, &output, vec![]);
                    Ok((subst, tree))
                }
            }
            (Type::Arrow(a1, a2), Type::Arrow(b1, b2)) => {
                let (s1, tree1) = self.unify(a1, b1)?;
                let a2_subst = self.apply_subst(&s1, a2);
                let b2_subst = self.apply_subst(&s1, b2);
                let (s2, tree2) = self.unify(&a2_subst, &b2_subst)?;
                let final_subst = self.compose_subst(&s2, &s1);
                let output = self.pretty_subst(&final_subst);
                let tree = InferenceTree::new("Unify-Arrow", &input, &output, vec![tree1, tree2]);
                Ok((final_subst, tree))
            }
            (Type::Tuple(ts1), Type::Tuple(ts2)) => {
                if ts1.len() != ts2.len() {
                    return Err(InferenceError::TupleLengthMismatch {
                        left_len: ts1.len(),
                        right_len: ts2.len(),
                    });
                }

                let mut subst = HashMap::new();
                let mut trees = Vec::new();

                for (t1, t2) in ts1.iter().zip(ts2.iter()) {
                    let t1_subst = self.apply_subst(&subst, t1);
                    let t2_subst = self.apply_subst(&subst, t2);
                    let (s, tree) = self.unify(&t1_subst, &t2_subst)?;
                    subst = self.compose_subst(&s, &subst);
                    trees.push(tree);
                }

                let output = self.pretty_subst(&subst);
                let tree = InferenceTree::new("Unify-Tuple", &input, &output, trees);
                Ok((subst, tree))
            }
            _ => Err(InferenceError::UnificationFailure {
                expected: t1.clone(),
                actual: t2.clone(),
            }),
        }
    }

    pub fn infer(&mut self, env: &Env, expr: &Expr) -> Result<(Subst, Type, InferenceTree)> {
        match expr {
            Expr::Lit(Lit::Int(_)) => self.infer_lit_int(env, expr),
            Expr::Lit(Lit::Bool(_)) => self.infer_lit_bool(env, expr),
            Expr::Var(name) => self.infer_var(env, expr, name),
            Expr::Abs(param, body) => self.infer_abs(env, expr, param, body),
            Expr::App(func, arg) => self.infer_app(env, expr, func, arg),
            Expr::Let(var, value, body) => self.infer_let(env, expr, var, value, body),
            Expr::Tuple(exprs) => self.infer_tuple(env, expr, exprs),
        }
    }

    /// T-LitInt: ─────────────────
    ///           Γ ⊢ n : Int
    fn infer_lit_int(&mut self, env: &Env, expr: &Expr) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);
        let tree = InferenceTree::new("T-Int", &input, "Int", vec![]);
        Ok((HashMap::new(), Type::Int, tree))
    }

    /// T-LitBool: ─────────────────
    ///            Γ ⊢ b : Bool
    fn infer_lit_bool(&mut self, env: &Env, expr: &Expr) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);
        let tree = InferenceTree::new("T-Bool", &input, "Bool", vec![]);
        Ok((HashMap::new(), Type::Bool, tree))
    }

    /// T-Var: x : σ ∈ Γ    τ = inst(σ)
    ///        ─────────────────────────
    ///               Γ ⊢ x : τ
    fn infer_var(
        &mut self,
        env: &Env,
        expr: &Expr,
        name: &str,
    ) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

        match env.get(name) {
            Some(scheme) => {
                let instantiated = self.instantiate(scheme);
                let output = format!("{}", instantiated);
                let tree = InferenceTree::new("T-Var", &input, &output, vec![]);
                Ok((HashMap::new(), instantiated, tree))
            }
            None => Err(InferenceError::UnboundVariable {
                name: name.to_string(),
            }),
        }
    }

    /// T-Lam: Γ, x : α ⊢ e : τ    α fresh
    ///        ─────────────────────────────
    ///           Γ ⊢ λx. e : α → τ
    fn infer_abs(
        &mut self,
        env: &Env,
        expr: &Expr,
        param: &str,
        body: &Expr,
    ) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

        let param_type = Type::Var(self.fresh_tyvar());
        let mut new_env = env.clone();
        // Insert a monomorphic scheme for the parameter
        let param_scheme = Scheme {
            vars: vec![],
            ty: param_type.clone(),
        };
        new_env.insert(param.to_string(), param_scheme);

        let (s1, body_type, tree1) = self.infer(&new_env, body)?;
        let param_type_subst = self.apply_subst(&s1, &param_type);
        let result_type = Type::Arrow(Box::new(param_type_subst), Box::new(body_type));

        let output = format!("{}", result_type);
        let tree = InferenceTree::new("T-Abs", &input, &output, vec![tree1]);
        Ok((s1, result_type, tree))
    }

    /// T-App: Γ ⊢ e₁ : τ₁    Γ ⊢ e₂ : τ₂    α fresh    S = unify(τ₁, τ₂ → α)
    ///        ──────────────────────────────────────────────────────────────
    ///                            Γ ⊢ e₁ e₂ : S(α)
    fn infer_app(
        &mut self,
        env: &Env,
        expr: &Expr,
        func: &Expr,
        arg: &Expr,
    ) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

        let result_type = Type::Var(self.fresh_tyvar());

        let (s1, func_type, tree1) = self.infer(env, func)?;
        let env_subst = self.apply_subst_env(&s1, env);
        let (s2, arg_type, tree2) = self.infer(&env_subst, arg)?;

        let func_type_subst = self.apply_subst(&s2, &func_type);
        let expected_func_type = Type::Arrow(Box::new(arg_type), Box::new(result_type.clone()));

        let (s3, tree3) = self.unify(&func_type_subst, &expected_func_type)?;

        let final_subst = self.compose_subst(&s3, &self.compose_subst(&s2, &s1));
        let final_type = self.apply_subst(&s3, &result_type);

        let output = format!("{}", final_type);
        let tree = InferenceTree::new("T-App", &input, &output, vec![tree1, tree2, tree3]);
        Ok((final_subst, final_type, tree))
    }

    /// T-Let: Γ ⊢ e₁ : τ₁    σ = gen(Γ, τ₁)    Γ, x : σ ⊢ e₂ : τ₂
    ///        ──────────────────────────────────────────────────────
    ///                     Γ ⊢ let x = e₁ in e₂ : τ₂
    fn infer_let(
        &mut self,
        env: &Env,
        expr: &Expr,
        var: &str,
        value: &Expr,
        body: &Expr,
    ) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

        let (s1, value_type, tree1) = self.infer(env, value)?;
        let env_subst = self.apply_subst_env(&s1, env);
        let generalized_type = self.generalize(&env_subst, &value_type);

        let mut new_env = env_subst;
        new_env.insert(var.to_string(), generalized_type);

        let (s2, body_type, tree2) = self.infer(&new_env, body)?;

        let final_subst = self.compose_subst(&s2, &s1);
        let output = format!("{}", body_type);
        let tree = InferenceTree::new("T-Let", &input, &output, vec![tree1, tree2]);
        Ok((final_subst, body_type, tree))
    }

    /// T-Tuple: Γ ⊢ e₁ : τ₁    ...    Γ ⊢ eₙ : τₙ
    ///          ─────────────────────────────────────
    ///              Γ ⊢ (e₁, ..., eₙ) : (τ₁, ..., τₙ)
    fn infer_tuple(
        &mut self,
        env: &Env,
        expr: &Expr,
        exprs: &[Expr],
    ) -> Result<(Subst, Type, InferenceTree)> {
        let input = format!("{} ⊢ {} ⇒", self.pretty_env(env), expr);

        let mut subst = HashMap::new();
        let mut types = Vec::new();
        let mut trees = Vec::new();
        let mut current_env = env.clone();

        for expr in exprs {
            let (s, ty, tree) = self.infer(&current_env, expr)?;
            subst = self.compose_subst(&s, &subst);
            current_env = self.apply_subst_env(&s, &current_env);
            types.push(ty);
            trees.push(tree);
        }

        let result_type = Type::Tuple(types);
        let output = format!("{}", result_type);
        let tree = InferenceTree::new("T-Tuple", &input, &output, trees);
        Ok((subst, result_type, tree))
    }
}
pub fn run_inference(expr: &Expr) -> Result<InferenceTree> {
    let mut inference = TypeInference::new();
    let env = BTreeMap::new();
    let (_, _, tree) = inference.infer(&env, expr)?;
    Ok(tree)
}
pub fn infer_type_only(expr: &Expr) -> Result<Type> {
    let mut inference = TypeInference::new();
    let env = BTreeMap::new();
    let (_, ty, _) = inference.infer(&env, expr)?;
    Ok(ty)
}
}

Example Usage

To see Algorithm W in action, let’s type-check a polymorphic function that demonstrates let-polymorphism and generalization:

$ cargo run -- "let const = \\x -> \\y -> x in const 42 true"

This produces the following clean output showing the complete inference process:

Parsed expression: let const = λx.λy.x in const 42 true

Type inference successful!
Final type: Int

Inference trace:
T-Let: {} ⊢ let const = λx.λy.x in const 42 true ⇒ => Int
  T-Abs: {} ⊢ λx.λy.x ⇒ => t0 → t1 → t0
    T-Abs: {x: t0} ⊢ λy.x ⇒ => t1 → t0
      T-Var: {x: t0, y: t1} ⊢ x ⇒ => t0
  T-App: {const: forall t0 t1. t0 → t1 → t0} ⊢ const 42 true ⇒ => Int
    T-App: {const: forall t0 t1. t0 → t1 → t0} ⊢ const 42 ⇒ => t5 → Int
      T-Var: {const: forall t0 t1. t0 → t1 → t0} ⊢ const ⇒ => t4 → t5 → t4
      T-Int: {const: forall t0 t1. t0 → t1 → t0} ⊢ 42 ⇒ => Int
      Unify-Arrow: t4 → t5 → t4 ~ Int → t3 => {t5 → Int/t3, Int/t4}
        Unify-Var: t4 ~ Int => {Int/t4}
        Unify-Var: t5 → Int ~ t3 => {t5 → Int/t3}
    T-Bool: {const: forall t0 t1. t0 → t1 → t0} ⊢ true ⇒ => Bool
    Unify-Arrow: t5 → Int ~ Bool → t2 => {Int/t2, Bool/t5}
      Unify-Var: t5 ~ Bool => {Bool/t5}
      Unify-Var: Int ~ t2 => {Int/t2}

This trace shows several key aspects of Algorithm W:

  1. Generalization: The lambda \x -> \y -> x initially gets type t0 → t1 → t0, but when bound to const in the let-expression, it’s generalized to ∀t0 t1. t0 → t1 → t0.

  2. Instantiation: When const is used in the application, it gets instantiated with fresh type variables t4 and t5, allowing it to be used polymorphically.

  3. Unification: The constraints from applying const to 42 and true get solved through unification, determining that t4 = Int, t5 = Bool, and the final result type is Int.

The final result is Int, showing that const 42 true correctly returns the first argument (42) regardless of the second argument’s type (true).

These interface functions demonstrate how Algorithm W can be embedded into larger systems. The tree-generating version supports educational tools and debuggers, while the type-only version provides the minimal interface needed for type checking during compilation.

Examples

The simplest cases involve literals and variables where type inference is straightforward. Integer literals have type Int, boolean literals have type Bool, and variables receive the types assigned to them in the typing environment.

Let’s examine how our implementation handles these basic cases using the test suite that validates our Algorithm W implementation.

-- Basic literals
42
true
false

-- Variables and identity
\x -> x
\f -> f
\x -> \y -> x
\x -> \y -> y

-- Application
(\x -> x) 42
(\x -> x) true
(\f -> \x -> f x) (\y -> y) 42

-- Let expressions
let x = 42 in x
let f = \x -> x in f
let id = \x -> x in id 42
let id = \x -> x in id true
let f = \x -> x in let g = \y -> y in f (g 42)

-- Tuples
(42, true)
(true, false, 42)
(\x -> (x, x)) 42
let pair = \x -> \y -> (x, y) in pair 42 true

-- Higher-order functions
\f -> \x -> f x
\f -> \x -> f (f x)
\f -> \g -> \x -> f (g x)
let twice = \f -> \x -> f (f x) in twice
let compose = \f -> \g -> \x -> f (g x) in compose

-- Complex examples
let K = \x -> \y -> x in K
let S = \f -> \g -> \x -> f x (g x) in S
let Y = \f -> (\x -> f (x x)) (\x -> f (x x)) in Y

-- Polymorphic examples (simplified for our basic implementation)
let id = \x -> x in (id, id)
let const = \x -> \y -> x in const
let flip = \f -> \x -> \y -> f y x in flip

-- Error cases (should fail)
\x -> x x
(\x -> x x) (\x -> x x)

These examples demonstrate the foundation of type inference. The literal 42 immediately receives type Int without any complex reasoning required. Boolean values like true and false similarly receive type Bool. The type system’s handling of these base cases forms the building blocks for more complex inference scenarios.

Function Types and Application

Function types represent the core of lambda calculus and functional programming. When we encounter a lambda abstraction, Algorithm W assigns a fresh type variable to the parameter and infers the type of the body. The resulting function type connects the parameter type with the return type through an arrow.

Function application drives the constraint generation that makes Algorithm W powerful. When applying a function to an argument, the algorithm generates constraints that the function’s type must be an arrow from the argument’s type to some result type. Unification then solves these constraints.

The identity function \x -> x provides the simplest example of polymorphic type inference. Algorithm W assigns a fresh type variable to the parameter x, then discovers that the body is just x itself. The resulting type t0 -> t0 captures the essence of the identity function: it accepts any type and returns the same type.

More complex function applications demonstrate how constraints propagate through the system. When we apply the identity function to the integer 42, the unification process discovers that the type variable t0 must be Int, yielding the final type Int for the entire expression.
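That unification step can be sketched in isolation. The following is a minimal standalone unifier over a stripped-down type language (variables, Int, and arrows only); it mirrors the shape of the chapter's `unify` but is a toy, and it omits the occurs check and the inference trees:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Var(String),
    Int,
    Arrow(Box<Ty>, Box<Ty>),
}

// Apply a substitution to a type.
fn apply(s: &HashMap<String, Ty>, t: &Ty) -> Ty {
    match t {
        Ty::Var(v) => s.get(v).cloned().unwrap_or_else(|| t.clone()),
        Ty::Arrow(a, b) => Ty::Arrow(Box::new(apply(s, a)), Box::new(apply(s, b))),
        Ty::Int => Ty::Int,
    }
}

// First-order unification; returns None on a type mismatch.
// The real implementation also runs the occurs check in the Var case.
fn unify(t1: &Ty, t2: &Ty) -> Option<HashMap<String, Ty>> {
    match (t1, t2) {
        (Ty::Int, Ty::Int) => Some(HashMap::new()),
        (Ty::Var(v), t) | (t, Ty::Var(v)) => {
            let mut s = HashMap::new();
            if t != &Ty::Var(v.clone()) {
                s.insert(v.clone(), t.clone());
            }
            Some(s)
        }
        (Ty::Arrow(a1, a2), Ty::Arrow(b1, b2)) => {
            let s1 = unify(a1, b1)?;
            let s2 = unify(&apply(&s1, a2), &apply(&s1, b2))?;
            // Compose s2 after s1: rewrite s1's range with s2, then add s2.
            let mut out: HashMap<String, Ty> =
                s1.iter().map(|(k, v)| (k.clone(), apply(&s2, v))).collect();
            out.extend(s2);
            Some(out)
        }
        _ => None,
    }
}

fn main() {
    // Applying the identity function to 42: unify t0 -> t0 with Int -> r.
    let id = Ty::Arrow(Box::new(Ty::Var("t0".into())), Box::new(Ty::Var("t0".into())));
    let app = Ty::Arrow(Box::new(Ty::Int), Box::new(Ty::Var("r".into())));
    let s = unify(&id, &app).expect("should unify");
    // The result variable r is forced to Int, so (\x -> x) 42 : Int.
    assert_eq!(apply(&s, &Ty::Var("r".into())), Ty::Int);
}
```

Note the composition order in the arrow case matches the chapter's `compose_subst(&s2, &s1)`: the later substitution is applied over the range of the earlier one.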

Let Polymorphism

Let expressions introduce one of the most important features of the Hindley-Milner type system: let-polymorphism. This mechanism allows variables bound in let expressions to be used with different types in different contexts, enabling flexible code reuse without sacrificing type safety.

The classic example involves binding the identity function in a let expression and then using it with different types. Algorithm W generalizes the type of the bound expression by abstracting over type variables that don’t appear free in the current environment. This generalization allows the same binding to be instantiated with fresh type variables at each use site.

Consider the expression let f = \x -> x in (f 42, f true). First, Algorithm W infers that the identity function has type t0 -> t0. During generalization, since t0 doesn’t appear in the environment, the system treats this as a polymorphic type that can be instantiated differently at each use.

When the algorithm encounters the first application f 42, it creates a fresh instance of the polymorphic type, say t1 -> t1, and unifies this with Int -> t2 (where t2 is the expected result type). This unification succeeds with t1 = Int and t2 = Int.

For the second application f true, Algorithm W creates another fresh instance t3 -> t3 and unifies this with Bool -> t4. This succeeds with t3 = Bool and t4 = Bool. The final result is a tuple type (Int, Bool), demonstrating how the same function can be used polymorphically.
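The fresh-instance step can be demonstrated on its own. The standalone toy below mirrors the chapter's `Scheme` and `instantiate` names (it is a sketch, not the real module) and shows two uses of the same scheme receiving distinct type variables:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Var(String),
    Arrow(Box<Ty>, Box<Ty>),
}

// A type scheme: quantified variables plus a body, as in the chapter.
struct Scheme {
    vars: Vec<String>,
    ty: Ty,
}

fn substitute(s: &HashMap<String, Ty>, t: &Ty) -> Ty {
    match t {
        Ty::Var(v) => s.get(v).cloned().unwrap_or_else(|| t.clone()),
        Ty::Arrow(a, b) => Ty::Arrow(Box::new(substitute(s, a)), Box::new(substitute(s, b))),
    }
}

// Replace each quantified variable with a fresh one drawn from the counter.
fn instantiate(scheme: &Scheme, counter: &mut usize) -> Ty {
    let mut s = HashMap::new();
    for v in &scheme.vars {
        s.insert(v.clone(), Ty::Var(format!("t{}", *counter)));
        *counter += 1;
    }
    substitute(&s, &scheme.ty)
}

fn main() {
    // let f = \x -> x in (f 42, f true): f is generalized to forall t0. t0 -> t0.
    let f = Scheme {
        vars: vec!["t0".into()],
        ty: Ty::Arrow(Box::new(Ty::Var("t0".into())), Box::new(Ty::Var("t0".into()))),
    };
    let mut counter = 1;
    let first_use = instantiate(&f, &mut counter); // t1 -> t1, later unified with Int -> _
    let second_use = instantiate(&f, &mut counter); // t2 -> t2, later unified with Bool -> _
    // Each use site gets its own copy, so Int and Bool never clash.
    assert_ne!(first_use, second_use);
}
```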

Tuple Types and Complex Data Structures

Tuples provide a way to combine multiple values with potentially different types. Our Algorithm W implementation handles tuples by inferring the type of each component and combining them into a tuple type.

The expression (42, true) demonstrates basic tuple construction. Algorithm W infers that the first component has type Int and the second has type Bool, yielding the tuple type (Int, Bool). This extends naturally to nested tuples like ((1, 2), (true, false)) which receives type ((Int, Int), (Bool, Bool)).

Tuples interact interestingly with polymorphism. The expression let f = \x -> (x, x) in f 42 shows how a polymorphic function can construct tuples. The function f has type t0 -> (t0, t0), creating a tuple where both components have the same type as the input. When applied to 42, this yields type (Int, Int).

Type Error Detection

Algorithm W’s constraint-based approach makes it excellent at detecting type errors and providing meaningful error messages. When unification fails, the algorithm can identify exactly where and why types are incompatible.

Attempting to apply a non-function value, as in 42 true, generates a type error: the integer 42 has type Int, but function application requires an arrow type. Unifying Int with Bool -> t0 fails, producing a clear error message.

More subtle errors arise from inconsistent uses of polymorphic functions. While Algorithm W handles let-polymorphism elegantly, it correctly rejects attempts to use functions in incompatible ways within the same scope.

Complex Inference Scenarios

Real-world programs often involve complex combinations of functions, applications, and data structures that test the full power of Algorithm W. These scenarios demonstrate how the algorithm handles intricate constraint propagation and substitution.

Higher-order functions provide particularly interesting examples. The expression \f -> \x -> f (f x) creates a function that applies another function twice. Algorithm W assigns fresh type variables and builds constraints that capture the relationships between all the types involved.

Let’s trace through this inference process. The outer lambda receives a fresh parameter type t0 for f. The inner lambda receives type t1 for x. The application f x requires that f has type t1 -> t2 for some fresh type t2. The outer application f (f x) then requires that f also has type t2 -> t3 for the final result type t3.

Unification resolves these constraints by discovering that t2 = t1 and the final type is (t1 -> t1) -> t1 -> t1. This captures the essence of repeated application: given a function from some type to itself, produce a function that applies it twice.
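The inferred type (t1 -> t1) -> t1 -> t1 has a direct analogue as a generic Rust function. This is only a loose analogue, since Rust makes us declare the type parameter explicitly rather than inferring it:

```rust
// (t1 -> t1) -> t1 -> t1: take a function from a type to itself,
// return a function that applies it twice.
fn twice<A>(f: impl Fn(A) -> A) -> impl Fn(A) -> A {
    move |x| f(f(x))
}

fn main() {
    let add2 = twice(|n: i32| n + 1);
    assert_eq!(add2(40), 42);
}
```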

The Y-Combinator

The Y-combinator represents a fundamental limitation of the Hindley-Milner type system. This famous fixed-point combinator cannot be typed in our system, illustrating an important boundary between what can be expressed with simple types and what requires more advanced type system features.

The Y-combinator is defined as \f -> (\x -> f (x x)) (\x -> f (x x)). Attempting to infer its type fails:

$ cargo run -- "\\f -> (\\x -> f (x x)) (\\x -> f (x x))"
Type inference error: Occurs check failed: variable 't2' occurs in type t2 → t4

The failure occurs when attempting to type the self-application x x. When the type checker encounters this expression, it must assign the variable x a type that is simultaneously a function type (to be applied) and an argument type (to be passed to itself). This creates the constraint that some type variable t must equal t → τ for some type τ. The occurs check prevents this infinite type, ensuring the type system remains decidable and sound.
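The occurs check itself is a small recursive traversal. Here is a minimal sketch over a toy type language (the names are illustrative, not the book's code):

```rust
#[derive(Debug, Clone, PartialEq)]
enum Ty {
    Var(String),
    Int,
    Arrow(Box<Ty>, Box<Ty>),
}

// Does variable `v` occur anywhere inside `t`? If it does, solving
// v := t would build the infinite type v = v -> ..., so the solver
// must reject the constraint.
fn occurs(v: &str, t: &Ty) -> bool {
    match t {
        Ty::Var(name) => name == v,
        Ty::Arrow(a, b) => occurs(v, a) || occurs(v, b),
        Ty::Int => false,
    }
}

fn main() {
    // t2 occurs in t2 -> t4: exactly the failure the Y-combinator triggers.
    let t = Ty::Arrow(Box::new(Ty::Var("t2".into())), Box::new(Ty::Var("t4".into())));
    assert!(occurs("t2", &t));
    assert!(!occurs("t0", &t));
}
```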

This limitation is not a bug but a fundamental characteristic of the Hindley-Milner type system. The Y-combinator requires more advanced type system features such as recursive types or explicit fixed-point operators. Real functional programming languages typically provide built-in recursion constructs like let rec that are handled specially by the type checker, avoiding the need for explicit fixed-point combinators at the term level.

The inability to type the Y-combinator illustrates an important trade-off in programming language design between expressiveness and decidability. While more powerful type systems exist that can handle self-application, they come at the cost of increased complexity and potentially undecidable type checking. Algorithm W chooses decidability and simplicity, making it practical for real programming languages while accepting certain expressiveness limitations.

Limitations: No Mutual Recursion

Our Algorithm W implementation has a significant limitation: it does not support mutual recursion. Mutually recursive definitions occur when two or more functions call each other, creating a circular dependency that requires careful handling during type inference.

Consider this example that would fail in our implementation:

let even = \n -> if n == 0 then true else odd (n - 1) in
let odd = \n -> if n == 0 then false else even (n - 1) in
odd 5

The problem is that when we encounter the definition of even, the function odd is not yet in scope, so the reference to odd in the body of even fails. Standard Algorithm W processes let-bindings sequentially, which makes mutual recursion impossible without additional machinery.

Supporting mutual recursion requires additional techniques:

  1. Dependency Analysis: The type checker must analyze the dependency graph of definitions to identify strongly connected components (groups of mutually recursive functions).

  2. Simultaneous Inference: All functions in a mutually recursive group must be typed together, typically by assigning fresh type variables to each function initially, then solving all constraints simultaneously.

  3. Generalization Delays: The generalization step (which introduces polymorphism) must be delayed until after all mutual dependencies are resolved.

These extensions significantly complicate the implementation and are beyond the scope of our simple Algorithm W demonstration.
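The dependency-analysis step can be made concrete with a compact Tarjan strongly-connected-components pass over a toy dependency graph. The graph encoding here is illustrative and not part of the chapter's implementation; the point is that even and odd land in one component, so they would have to be typed together:

```rust
// Tarjan's SCC algorithm over a tiny adjacency-list graph: nodes are
// definition indices, edges point from a definition to the definitions
// it mentions.
fn sccs(adj: &[Vec<usize>]) -> Vec<Vec<usize>> {
    fn go(
        v: usize,
        adj: &[Vec<usize>],
        idx: &mut Vec<Option<usize>>,
        low: &mut Vec<usize>,
        stack: &mut Vec<usize>,
        on: &mut Vec<bool>,
        next: &mut usize,
        out: &mut Vec<Vec<usize>>,
    ) {
        idx[v] = Some(*next);
        low[v] = *next;
        *next += 1;
        stack.push(v);
        on[v] = true;
        for &w in &adj[v] {
            if idx[w].is_none() {
                go(w, adj, idx, low, stack, on, next, out);
                low[v] = low[v].min(low[w]);
            } else if on[w] {
                low[v] = low[v].min(idx[w].unwrap());
            }
        }
        // v is the root of a component: pop everything down to v.
        if Some(low[v]) == idx[v] {
            let mut comp = Vec::new();
            loop {
                let w = stack.pop().unwrap();
                on[w] = false;
                comp.push(w);
                if w == v {
                    break;
                }
            }
            out.push(comp);
        }
    }
    let n = adj.len();
    let (mut idx, mut low) = (vec![None; n], vec![0; n]);
    let (mut stack, mut on) = (Vec::new(), vec![false; n]);
    let (mut next, mut out) = (0, Vec::new());
    for v in 0..n {
        if idx[v].is_none() {
            go(v, adj, &mut idx, &mut low, &mut stack, &mut on, &mut next, &mut out);
        }
    }
    out
}

fn main() {
    // 0 = even, 1 = odd: each calls the other, so they form a single
    // strongly connected component and must be inferred simultaneously.
    let groups = sccs(&[vec![1], vec![0]]);
    assert_eq!(groups.len(), 1);
    assert_eq!(groups[0].len(), 2);
}
```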

Performance (Or Lack Thereof)

While Algorithm W is theoretically elegant, practical implementations must consider performance characteristics. In the worst case, Hindley-Milner inference is DEXPTIME-complete, since nested let-bindings can make inferred types grow exponentially, but most real programs involve relatively simple type constraints that Algorithm W handles efficiently.

Our implementation demonstrates the core algorithms without the optimizations found in production compilers. Industrial-strength implementations often employ techniques like type-directed compilation, constraint caching, and incremental type checking to handle large codebases efficiently.

System F Type Checker

While the simple lambda calculus with HM-style type schemes introduces safety and limited polymorphism, it is also rigid and not very expressive. So now we move on to System F, also known as the polymorphic lambda calculus, which smashes this limitation by introducing parametric polymorphism: the ability to write a single piece of code that is generic over types. This is the theoretical foundation for generics in languages like Rust, Java, and C++, and it represents a monumental leap in expressive power.

To achieve this, System F extends both the term and type languages. The most crucial addition to the type language is the universal quantifier, \(\forall\) (forall), which allows us to express the type of a polymorphic function.

\[ \begin{align*} \text{types} \quad \tau &::= \alpha \mid \tau_1 \to \tau_2 \mid \forall \alpha . \tau \mid \text{Int} \mid \text{Bool} \\ \text{expressions} \quad e &::= x \mid \lambda x : \tau . e \mid e_1 \ e_2 \mid \Lambda \alpha . e \mid e[\tau] \mid \dots \end{align*} \]

The type \(\forall \alpha . \tau\) is represented by Type::Forall(String, Box<Type>). The term language now includes two new constructs specifically for handling polymorphism:

  1. Type Abstraction (\(\Lambda \alpha . e\)): This creates a polymorphic function. The capital lambda (\(\Lambda\)) signifies that we are abstracting over a type variable \(\alpha\), not a term variable. This expression is represented by Expr::TAbs(String, Box<Expr>).
  2. Type Application (\(e[\tau]\)): This specializes a polymorphic function by applying it to a concrete type \(\tau\). It’s how we use a generic function. This is represented by Expr::TApp(Box<Expr>, Box<Type>).

A key difference from the untyped lambda calculus is that our term-level abstractions (\(\lambda\)) are now explicitly annotated: \(\lambda x : \tau . e\). The programmer must declare the type of the function’s parameter. This is a hallmark of System F; its power comes at the cost of full type inference, necessitating these annotations.

The quintessential example of System F’s power is the polymorphic identity function, id. We can now write a single identity function that works for any type \(\alpha\). We create it with a type abstraction and give its parameter x the generic type \(\alpha\):

\[ \text{id} = \Lambda \alpha . \lambda x : \alpha . x \]

The type of this function is \(\forall \alpha . \alpha \to \alpha\). This type says: “For all types \(\alpha\), I am a function that takes an argument of type \(\alpha\) and returns a value of type \(\alpha\).”

To use this polymorphic function, we apply it to a type using type application. For example, to get an identity function specifically for integers, we apply id to the type \(\text{Int}\):

\[ \text{id}[\text{Int}] \]

This is a computation that happens at the type-checking level. The result of this type application is a new, specialized expression, \(\lambda x : \text{Int} . x\), which has the corresponding specialized type \(\text{Int} \to \text{Int}\). We can then apply this specialized function to a value, like \((\lambda x : \text{Int} . x) \ 5\), which finally reduces to \(5\). This separation of type-level application (\(e[\tau]\)) and term-level application (\(e_1 \ e_2\)) is fundamental to System F. It provides a powerful and safe way to write generic code, but as we have seen, it forces the programmer to be more explicit with annotations, a tradeoff that directly motivates the bidirectional algorithms used in more modern language implementations.
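Type application is just substitution at the type level. Here is a minimal sketch of \([\tau/\alpha]\) over a toy type language (the enum mirrors, but is not, the chapter's Type), showing that specializing id's body at Int yields Int -> Int:

```rust
#[derive(Debug, Clone, PartialEq)]
enum Ty {
    Var(String),
    Int,
    Arrow(Box<Ty>, Box<Ty>),
    Forall(String, Box<Ty>),
}

// [rep/var]ty: replace free occurrences of `var` with `rep`,
// stopping at a Forall binder that shadows it.
fn subst(var: &str, rep: &Ty, ty: &Ty) -> Ty {
    match ty {
        Ty::Var(n) if n == var => rep.clone(),
        Ty::Var(_) | Ty::Int => ty.clone(),
        Ty::Arrow(a, b) => Ty::Arrow(
            Box::new(subst(var, rep, a)),
            Box::new(subst(var, rep, b)),
        ),
        Ty::Forall(b, _) if b == var => ty.clone(), // shadowed
        Ty::Forall(b, body) => Ty::Forall(b.clone(), Box::new(subst(var, rep, body))),
    }
}

fn main() {
    // id : ∀α. α -> α; id[Int] substitutes Int for α in the body.
    let body = Ty::Arrow(Box::new(Ty::Var("α".into())), Box::new(Ty::Var("α".into())));
    let specialized = subst("α", &Ty::Int, &body);
    assert_eq!(specialized, Ty::Arrow(Box::new(Ty::Int), Box::new(Ty::Int)));
}
```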

Abstract Syntax Trees

Let’s look at the full abstract syntax tree for both the type language (enum Type) and the term language (enum Expr):

The Type Level

#![allow(unused)]
fn main() {
#[derive(Debug, Clone, PartialEq)]
pub enum Type {
    Var(String),                 // α (ordinary type variable)
    ETVar(String),               // ^α (existential type variable)
    Arrow(Box<Type>, Box<Type>), // A -> B
    Forall(String, Box<Type>),   // ∀α. A
    Int,                         // Int
    Bool,                        // Bool
}
impl std::fmt::Display for Type {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Type::Var(name) => write!(f, "{}", name),
            Type::ETVar(name) => write!(f, "^{}", name),
            Type::Int => write!(f, "Int"),
            Type::Bool => write!(f, "Bool"),
            Type::Arrow(t1, t2) => {
                // Add parentheses around left side if it's an arrow to avoid ambiguity
                match t1.as_ref() {
                    Type::Arrow(_, _) => write!(f, "({}) -> {}", t1, t2),
                    _ => write!(f, "{} -> {}", t1, t2),
                }
            }
            Type::Forall(var, ty) => write!(f, "∀{}. {}", var, ty),
        }
    }
}
}

The Type enum defines the grammar for all possible types in our language. It’s the vocabulary we use to describe our expressions.

  • Type::Var(String): This represents a simple type variable, like \(\alpha\) or \(\beta\). These are placeholders for types that will be specified later, often used in polymorphic functions.

  • Type::ETVar(String): This represents an “existential type variable,” often written as \(^\alpha\). These are a special kind of variable used internally by the type checker, particularly in bidirectional algorithms. They act as placeholders for an unknown type that the algorithm needs to infer.

  • Type::Arrow(Box<Type>, Box<Type>): This is the function type, \(\tau_1 \to \tau_2\). It represents a function that takes an argument of the first type and returns a result of the second type.

  • Type::Forall(String, Box<Type>): This is the universal quantifier, \(\forall \alpha . \tau\). It is the cornerstone of System F and represents a polymorphic type. It reads: “for all types \(\alpha\), the following type \(\tau\) holds.” The String is the name of the type variable \(\alpha\) being bound.

  • Type::Int and Type::Bool: These are primitive, or base, types. They are concrete types that are built directly into the language, representing 64-bit integers and booleans, respectively.

The Value Level

The Expr enum defines the grammar for all runnable expressions or terms. This is the code that performs computations.

#![allow(unused)]
fn main() {
#[derive(Debug, Clone, PartialEq)]
pub enum Expr {
    Var(String),
    App(Box<Expr>, Box<Expr>),
    Abs(String, Box<Type>, Box<Expr>),
    TApp(Box<Expr>, Box<Type>),
    TAbs(String, Box<Expr>),
    Ann(Box<Expr>, Box<Type>), // Type annotation: e : T
    LitInt(i64),
    LitBool(bool),
    // New constructs
    Let(String, Box<Expr>, Box<Expr>),           // let x = e1 in e2
    IfThenElse(Box<Expr>, Box<Expr>, Box<Expr>), // if e1 then e2 else e3
    BinOp(BinOp, Box<Expr>, Box<Expr>),          // Binary operations
}
#[derive(Debug, Clone, PartialEq)]
pub enum BinOp {
    // Arithmetic
    Add, // +
    Sub, // -
    Mul, // *
    Div, // /
    // Boolean
    And, // &&
    Or,  // ||
    // Comparison
    Eq, // ==
    Ne, // !=
    Lt, // <
    Le, // <=
    Gt, // >
    Ge, // >=
}
}
  • Var(String): A term-level variable, like x or f. It refers to a value that is in scope, such as a function parameter or a let-bound variable.

  • App(Box<Expr>, Box<Expr>): This is function application, \(e_1 \ e_2\). It represents calling the function \(e_1\) with the argument \(e_2\).

  • Abs(String, Box<Type>, Box<Expr>): This is a typed lambda abstraction, \(\lambda x : \tau . e\). It defines an anonymous function. Unlike in the pure lambda calculus, the parameter (String) must have an explicit type annotation (Box<Type>). The final Box<Expr> is the function’s body.

  • TApp(Box<Expr>, Box<Type>): This is type application, \(e[\tau]\). This is the mechanism for specializing a polymorphic function. The expression \(e\) must have a Forall type, and this construct applies it to a specific type \(\tau\), effectively “filling in” the generic type parameter.

  • TAbs(String, Box<Expr>): This is type abstraction, \(\Lambda \alpha . e\). This is how we create a polymorphic function. It introduces a new type variable (String) that can be used within the expression body (Box<Expr>).

  • Ann(Box<Expr>, Box<Type>): This represents a type annotation, \(e : T\). It’s an explicit instruction to the type checker, asserting that the expression \(e\) should have the type \(T\). This is invaluable in a bidirectional system for guiding the inference process and resolving ambiguity.

  • LitInt(i64) and LitBool(bool): These are literal values. They represent concrete, primitive values that are “wired into” the language, corresponding to the base types Int and Bool. They are the simplest form of expression, representing a constant value.
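Putting Type and Expr together, the polymorphic identity \(\Lambda \alpha . \lambda x : \alpha . x\) is just a nested value of these enums. A self-contained sketch using trimmed-down copies of the definitions above:

```rust
// Trimmed copies of the chapter's Type and Expr, just enough for id.
#[derive(Debug, Clone, PartialEq)]
enum Type {
    Var(String),
    Arrow(Box<Type>, Box<Type>),
    Forall(String, Box<Type>),
}

#[derive(Debug, Clone, PartialEq)]
enum Expr {
    Var(String),
    Abs(String, Box<Type>, Box<Expr>),
    TAbs(String, Box<Expr>),
}

fn main() {
    // id = Λα. λx:α. x
    let id = Expr::TAbs(
        "α".to_string(),
        Box::new(Expr::Abs(
            "x".to_string(),
            Box::new(Type::Var("α".to_string())),
            Box::new(Expr::Var("x".to_string())),
        )),
    );
    // Its type: ∀α. α -> α
    let id_ty = Type::Forall(
        "α".to_string(),
        Box::new(Type::Arrow(
            Box::new(Type::Var("α".to_string())),
            Box::new(Type::Var("α".to_string())),
        )),
    );
    assert!(matches!(id, Expr::TAbs(_, _)));
    assert!(matches!(id_ty, Type::Forall(_, _)));
}
```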

Implementation

OK, now we move beyond the trivial type systems from the 1970s and into the fun stuff from the 1980s! System F’s bidirectional type checking represents an approach to handling polymorphic types without requiring full type annotations everywhere. The bidirectional approach splits type checking into two complementary modes: inference (synthesizing types from expressions) and checking (verifying that expressions conform to expected types). This division allows the system to gracefully handle situations where types are partially known or completely unknown, making the language more ergonomic while preserving type safety.

Typing Rules

Before diving into the implementation details, let’s establish the formal typing rules that govern System F. Buckle up, because we’re about to venture into the fun, magical land of type-level wizardry! We’ll be introducing a few new symbols that might look intimidating at first, but they’re really not that scary once you get used to them.

  • \( \Gamma \) (Gamma) - The typing context, which is like a dictionary that maps variables to their types and keeps track of what we know so far.

  • \( \vdash \) (Turnstile) - The “proves” or “entails” symbol. When we write \( \Gamma \vdash e \Rightarrow A \), we’re saying “given context \( \Gamma \), expression \( e \) synthesizes type \( A \).”

  • \( \Rightarrow \) (Double Right Arrow) - Inference mode, where we’re asking “what type does this expression have?” The type checker figures it out for us.

  • \( \Leftarrow \) (Double Left Arrow) - Checking mode, where we’re saying “please verify this expression has the expected type.” We already know what type we want.

  • \( \forall \) (Forall) - Universal quantification, meaning “for any type.” When we see \( \forall \alpha. A \), it means “for any type \( \alpha \), we have type \( A \).”

  • \( \hat{\alpha} \) (Hat Alpha) - Existential type variables, which are like type-level unknowns that the system solves during inference. Think of them as placeholders that get filled in later.

  • \( \bullet \) (Bullet) - The application judgment symbol used in our inference rules. When we write \( A \bullet e \Rightarrow B \), we’re saying “applying type \( A \) to expression \( e \) yields type \( B \).”

  • \( <: \) (Subtype) - The subtyping relation, expressing that one type is “more polymorphic” than another. For example, \( \forall \alpha. \alpha <: \text{Int} \) holds because the polymorphic type can be instantiated to Int.

  • \( [B/\alpha]A \) - Type substitution, replacing all occurrences of type variable \( \alpha \) with type \( B \) in type \( A \). This is how we instantiate polymorphic types.

Now that we’ve equipped ourselves with this symbolic toolkit, let’s see how these pieces combine to create the elegant machinery of System F type checking.

Basic Rules

The variable rule looks up types from the context:

\[ \frac{x : A \in \Gamma}{\Gamma \vdash x \Rightarrow A} \text{(T-Var)} \]

Application checks that the function type matches the argument:

\[ \frac{\Gamma \vdash e_1 \Rightarrow A \to B \quad \Gamma \vdash e_2 \Leftarrow A}{\Gamma \vdash e_1 \ e_2 \Rightarrow B} \text{(T-App)} \]

Lambda abstraction introduces a new variable binding:

\[ \frac{\Gamma, x : A \vdash e \Leftarrow B}{\Gamma \vdash \lambda x. e \Leftarrow A \to B} \text{(T-Abs)} \]

Polymorphic Rules

Universal introduction allows us to generalize over type variables:

\[ \frac{\Gamma, \alpha \vdash e \Leftarrow A}{\Gamma \vdash e \Leftarrow \forall \alpha. A} \text{(T-ForallI)} \]

Universal elimination instantiates polymorphic types:

\[ \frac{\Gamma \vdash e \Rightarrow \forall \alpha. A}{\Gamma \vdash e \Rightarrow [B/\alpha]A} \text{(T-ForallE)} \]

Type annotation allows switching from checking to inference mode:

\[ \frac{\Gamma \vdash e \Leftarrow A}{\Gamma \vdash (e : A) \Rightarrow A} \text{(T-Anno)} \]

Primitive Type Rules

Integer literals have type Int:

\[ \frac{}{\Gamma \vdash n \Rightarrow \text{Int}} \text{(T-LitInt)} \]

Boolean literals have type Bool:

\[ \frac{}{\Gamma \vdash \text{true} \Rightarrow \text{Bool}} \text{(T-LitBool)} \]

Control Flow Rules

Let bindings introduce local variables:

\[ \frac{\Gamma \vdash e_1 \Rightarrow A \quad \Gamma, x : A \vdash e_2 \Rightarrow B}{\Gamma \vdash \text{let } x = e_1 \text{ in } e_2 \Rightarrow B} \text{(T-Let)} \]

Conditional expressions require Bool conditions and matching branch types:

\[ \frac{\Gamma \vdash e_1 \Leftarrow \text{Bool} \quad \Gamma \vdash e_2 \Rightarrow A \quad \Gamma \vdash e_3 \Leftarrow A}{\Gamma \vdash \text{if } e_1 \text{ then } e_2 \text{ else } e_3 \Rightarrow A} \text{(T-If)} \]

Binary Operation Rules

Arithmetic operations take two integers and return an integer:

\[ \frac{\Gamma \vdash e_1 \Leftarrow \text{Int} \quad \Gamma \vdash e_2 \Leftarrow \text{Int}}{\Gamma \vdash e_1 \oplus e_2 \Rightarrow \text{Int}} \text{(T-Arith)} \]

Boolean operations take two booleans and return a boolean:

\[ \frac{\Gamma \vdash e_1 \Leftarrow \text{Bool} \quad \Gamma \vdash e_2 \Leftarrow \text{Bool}}{\Gamma \vdash e_1 \land e_2 \Rightarrow \text{Bool}} \text{(T-Bool)} \]

Comparison operations take two integers and return a boolean:

\[ \frac{\Gamma \vdash e_1 \Leftarrow \text{Int} \quad \Gamma \vdash e_2 \Leftarrow \text{Int}}{\Gamma \vdash e_1 < e_2 \Rightarrow \text{Bool}} \text{(T-Cmp)} \]

Equality operations are polymorphic and work on any type:

\[ \frac{\Gamma \vdash e_1 \Rightarrow A \quad \Gamma \vdash e_2 \Leftarrow A}{\Gamma \vdash e_1 = e_2 \Rightarrow \text{Bool}} \text{(T-Eq)} \]

Bidirectional Rules

The mode switch allows inference results to be checked:

\[ \frac{\Gamma \vdash e \Rightarrow A}{\Gamma \vdash e \Leftarrow A} \text{(T-Sub)} \]

Existential variables are introduced for unknown types:

\[ \frac{\Gamma, \hat{\alpha} \vdash e \Rightarrow A}{\Gamma \vdash e \Rightarrow [\hat{\alpha}/\alpha]A} \text{(T-InstL)} \]

Application Inference Rules

Application inference handles complex cases where the function type is not immediately known:

Application with arrow types: \[ \frac{\Gamma \vdash e_2 \Leftarrow A}{\Gamma \vdash A \to B \bullet e_2 \Rightarrow B} \text{(T-AppArrow)} \]

Application with existential variables: \[ \frac{\Gamma[\hat{\alpha} := \hat{\alpha_1} \to \hat{\alpha_2}], \hat{\alpha_1}, \hat{\alpha_2} \vdash e_2 \Leftarrow \hat{\alpha_1}}{\Gamma \vdash \hat{\alpha} \bullet e_2 \Rightarrow \hat{\alpha_2}} \text{(T-AppEVar)} \]

In these rules, \( \Rightarrow \) indicates inference mode (synthesizing a type), while \( \Leftarrow \) indicates checking mode (verifying against an expected type). The hat notation \( \hat{\alpha} \) denotes existential type variables that the system solves during inference.
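The two application rules translate into a dispatch on the (context-applied) function type: an arrow type checks the argument against its domain, while an unsolved existential is split into a fresh arrow \( \hat{\alpha_1} \to \hat{\alpha_2} \). Here is a hedged sketch of that dispatch over a toy type language; the names are illustrative, and the chapter's real infer_app additionally handles \(\forall\) by instantiation and records the split in the context:

```rust
#[derive(Debug, Clone, PartialEq)]
enum Ty {
    ETVar(String),
    Arrow(Box<Ty>, Box<Ty>),
    Int,
}

// Given a function type, decide what the argument must check against
// and what the application returns; unsolved existentials are split
// into two fresh existentials (T-AppEVar).
fn infer_app(func_ty: &Ty, fresh: &mut usize) -> (Ty, Ty) {
    match func_ty {
        // T-AppArrow: argument checks against A, result is B.
        Ty::Arrow(a, b) => ((**a).clone(), (**b).clone()),
        // T-AppEVar: solve ^α := ^α1 -> ^α2 with fresh existentials.
        Ty::ETVar(_) => {
            let a1 = Ty::ETVar(format!("α{}", { *fresh += 1; *fresh }));
            let a2 = Ty::ETVar(format!("α{}", { *fresh += 1; *fresh }));
            (a1, a2)
        }
        _ => panic!("not a function type"),
    }
}

fn main() {
    let mut fresh = 0;
    let (arg_ty, res_ty) =
        infer_app(&Ty::Arrow(Box::new(Ty::Int), Box::new(Ty::Int)), &mut fresh);
    assert_eq!(arg_ty, Ty::Int);
    assert_eq!(res_ty, Ty::Int);
    let (a1, a2) = infer_app(&Ty::ETVar("α0".into()), &mut fresh);
    assert_eq!(a1, Ty::ETVar("α1".into()));
    assert_eq!(a2, Ty::ETVar("α2".into()));
}
```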

Core Data Structures

Context and Environment Management

The bidirectional algorithm maintains a context that tracks multiple kinds of bindings and constraints. Our context system needs to handle not just term variables and their types, but also type variables, existential variables, and the relationships between them.

#![allow(unused)]
fn main() {
use std::collections::HashSet;
use std::fmt;
use crate::ast::{BinOp, Expr, Type};
use crate::errors::{TypeError, TypeResult};
pub type TyVar = String;
pub type TmVar = String;
#[derive(Debug, Clone)]
pub enum Entry {
    VarBnd(TmVar, Type),    // x : A  (term variable binding)
    TVarBnd(TyVar),         // α      (universal type variable)
    ETVarBnd(TyVar),        // ^α     (unsolved existential variable)
    SETVarBnd(TyVar, Type), // ^α = τ (solved existential variable)
    Mark(TyVar),            // $α     (scope marker)
}
impl fmt::Display for Entry {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Entry::VarBnd(x, ty) => write!(f, "{}: {}", x, ty),
            Entry::TVarBnd(a) => write!(f, "{}", a),
            Entry::ETVarBnd(a) => write!(f, "^{}", a),
            Entry::SETVarBnd(a, ty) => write!(f, "^{} = {}", a, ty),
            Entry::Mark(a) => write!(f, "${}", a),
        }
    }
}
#[derive(Debug, Clone)]
pub struct Context(Vec<Entry>);
impl Default for Context {
    fn default() -> Self {
        Self::new()
    }
}
impl Context {
    pub fn new() -> Self {
        Context(Vec::new())
    }

    pub fn push(&mut self, entry: Entry) {
        self.0.push(entry);
    }

    pub fn find<F>(&self, predicate: F) -> Option<&Entry>
    where
        F: Fn(&Entry) -> bool, {
        self.0.iter().find(|entry| predicate(entry))
    }

    pub fn break3<F>(&self, predicate: F) -> (Vec<Entry>, Option<Entry>, Vec<Entry>)
    where
        F: Fn(&Entry) -> bool, {
        if let Some(pos) = self.0.iter().position(predicate) {
            let left = self.0[..pos].to_vec();
            let middle = self.0[pos].clone();
            let right = self.0[pos + 1..].to_vec();
            (left, Some(middle), right)
        } else {
            (self.0.clone(), None, Vec::new())
        }
    }

    pub fn from_parts(left: Vec<Entry>, middle: Entry, right: Vec<Entry>) -> Self {
        let mut ctx = left;
        ctx.push(middle);
        ctx.extend(right);
        Context(ctx)
    }
}
impl fmt::Display for Context {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let entries: Vec<String> = self.0.iter().rev().map(|e| e.to_string()).collect();
        write!(f, "{}", entries.join(", "))
    }
}
#[derive(Debug)]
pub struct InferenceTree {
    pub rule: String,
    pub input: String,
    pub output: String,
    pub children: Vec<InferenceTree>,
}
impl InferenceTree {
    fn new(rule: &str, input: &str, output: &str, children: Vec<InferenceTree>) -> Self {
        Self {
            rule: rule.to_string(),
            input: input.to_string(),
            output: output.to_string(),
            children,
        }
    }
}
impl fmt::Display for InferenceTree {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        self.display_with_indent(f, 0)
    }
}
impl InferenceTree {
    fn display_with_indent(&self, f: &mut fmt::Formatter, indent: usize) -> fmt::Result {
        let prefix = "  ".repeat(indent);
        writeln!(
            f,
            "{}{}: {} => {}",
            prefix, self.rule, self.input, self.output
        )?;
        for child in &self.children {
            child.display_with_indent(f, indent + 1)?;
        }
        Ok(())
    }
}
pub struct BiDirectional {
    counter: usize,
}
impl Default for BiDirectional {
    fn default() -> Self {
        Self::new()
    }
}
#[allow(clippy::only_used_in_recursion)]
impl BiDirectional {
    pub fn new() -> Self {
        Self { counter: 0 }
    }

    fn fresh_tyvar(&mut self) -> TyVar {
        let var = format!("α{}", self.counter);
        self.counter += 1;
        var
    }

    fn is_mono(&self, ty: &Type) -> bool {
        match ty {
            Type::Int | Type::Bool | Type::Var(_) | Type::ETVar(_) => true,
            Type::Arrow(t1, t2) => self.is_mono(t1) && self.is_mono(t2),
            Type::Forall(_, _) => false,
        }
    }

    fn free_vars(&self, ty: &Type) -> HashSet<TyVar> {
        match ty {
            Type::Var(name) | Type::ETVar(name) => {
                let mut set = HashSet::new();
                set.insert(name.clone());
                set
            }
            Type::Arrow(t1, t2) => {
                let mut set = self.free_vars(t1);
                set.extend(self.free_vars(t2));
                set
            }
            Type::Forall(var, ty) => {
                let mut set = self.free_vars(ty);
                set.remove(var);
                set
            }
            Type::Int | Type::Bool => HashSet::new(),
        }
    }

    fn subst_type(&self, var: &TyVar, replacement: &Type, ty: &Type) -> Type {
        match ty {
            Type::Var(name) if name == var => replacement.clone(),
            Type::ETVar(name) if name == var => replacement.clone(),
            Type::Var(_) | Type::ETVar(_) | Type::Int | Type::Bool => ty.clone(),
            Type::Arrow(t1, t2) => Type::Arrow(
                Box::new(self.subst_type(var, replacement, t1)),
                Box::new(self.subst_type(var, replacement, t2)),
            ),
            Type::Forall(bound_var, body) => {
                if bound_var == var {
                    ty.clone() // Variable is shadowed
                } else {
                    Type::Forall(
                        bound_var.clone(),
                        Box::new(self.subst_type(var, replacement, body)),
                    )
                }
            }
        }
    }

    pub fn apply_ctx_type(&self, ctx: &Context, ty: &Type) -> Type {
        let mut current = ty.clone();
        let mut changed = true;

        // Keep applying substitutions until no more changes occur
        while changed {
            changed = false;
            let new_type = self.apply_ctx_type_once(ctx, &current);
            if new_type != current {
                changed = true;
                current = new_type;
            }
        }

        current
    }

    fn apply_ctx_type_once(&self, ctx: &Context, ty: &Type) -> Type {
        match ty {
            Type::ETVar(a) => {
                if let Some(Entry::SETVarBnd(_, replacement)) =
                    ctx.find(|entry| matches!(entry, Entry::SETVarBnd(name, _) if name == a))
                {
                    // Recursively apply substitutions to the replacement type
                    self.apply_ctx_type_once(ctx, replacement)
                } else {
                    ty.clone()
                }
            }
            Type::Arrow(t1, t2) => Type::Arrow(
                Box::new(self.apply_ctx_type_once(ctx, t1)),
                Box::new(self.apply_ctx_type_once(ctx, t2)),
            ),
            Type::Forall(var, body) => {
                Type::Forall(var.clone(), Box::new(self.apply_ctx_type_once(ctx, body)))
            }
            _ => ty.clone(),
        }
    }

    fn before(&self, ctx: &Context, a: &TyVar, b: &TyVar) -> bool {
        let pos_a = ctx
            .0
            .iter()
            .position(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
        let pos_b = ctx
            .0
            .iter()
            .position(|entry| matches!(entry, Entry::ETVarBnd(name) if name == b));

        match (pos_a, pos_b) {
            (Some(pa), Some(pb)) => pa > pb, // Later in the context means earlier in ordering
            _ => false,
        }
    }

    /// Check if a type variable occurs in a type (occurs check for unification)
    /// This prevents creating infinite types when solving ^α := τ by ensuring α
    /// ∉ τ
    fn occurs_check(&self, var: &TyVar, ty: &Type) -> bool {
        match ty {
            Type::Var(name) | Type::ETVar(name) => name == var,
            Type::Arrow(t1, t2) => self.occurs_check(var, t1) || self.occurs_check(var, t2),
            Type::Forall(bound_var, body) => {
                // If the variable is shadowed by the forall binding, it doesn't occur
                if bound_var == var {
                    false
                } else {
                    self.occurs_check(var, body)
                }
            }
            Type::Int | Type::Bool => false,
        }
    }

    // ========== TYPING RULE METHODS ==========
    // Each method implements a specific typing rule from System F

    /// T-Var: Variable lookup rule
    /// Γ, x:A ⊢ x ⇒ A
    fn infer_var(
        &self,
        ctx: &Context,
        x: &str,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        if let Some(Entry::VarBnd(_, ty)) =
            ctx.find(|entry| matches!(entry, Entry::VarBnd(name, _) if name == x))
        {
            let output = format!("{} ⇒ {} ⊣ {}", input, ty, ctx);
            Ok((
                ty.clone(),
                ctx.clone(),
                InferenceTree::new("InfVar", input, &output, vec![]),
            ))
        } else {
            Err(TypeError::UnboundVariable {
                name: x.to_string(),
                expr: None,
            })
        }
    }

    /// T-LitInt: Integer literal rule
    /// ⊢ n ⇒ Int
    fn infer_lit_int(
        &self,
        ctx: &Context,
        _n: i64,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let output = format!("{} ⇒ Int ⊣ {}", input, ctx);
        Ok((
            Type::Int,
            ctx.clone(),
            InferenceTree::new("InfLitInt", input, &output, vec![]),
        ))
    }

    /// T-LitBool: Boolean literal rule
    /// ⊢ b ⇒ Bool
    fn infer_lit_bool(
        &self,
        ctx: &Context,
        _b: bool,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let output = format!("{} ⇒ Bool ⊣ {}", input, ctx);
        Ok((
            Type::Bool,
            ctx.clone(),
            InferenceTree::new("InfLitBool", input, &output, vec![]),
        ))
    }

    /// T-Abs: Lambda abstraction rule
    /// Γ,x:A ⊢ e ⇐ B
    /// ─────────────────────
    /// Γ ⊢ λx:A.e ⇒ A → B
    fn infer_abs(
        &mut self,
        ctx: &Context,
        x: &str,
        param_ty: &Type,
        body: &Expr,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let b = self.fresh_tyvar();
        let mut new_ctx = ctx.clone();
        new_ctx.push(Entry::VarBnd(x.to_string(), param_ty.clone()));
        new_ctx.push(Entry::ETVarBnd(b.clone()));

        let (ctx1, tree) = self.check(&new_ctx, body, &Type::ETVar(b.clone()))?;
        let (left, _, right) =
            ctx1.break3(|entry| matches!(entry, Entry::VarBnd(name, _) if name == x));
        // Preserve solved existential variable bindings from the left context
        let mut final_ctx_entries = left
            .into_iter()
            .filter(|entry| matches!(entry, Entry::SETVarBnd(_, _)))
            .collect::<Vec<_>>();
        final_ctx_entries.extend(right);
        let final_ctx = Context(final_ctx_entries);
        let result_ty = Type::Arrow(Box::new(param_ty.clone()), Box::new(Type::ETVar(b)));
        let output = format!("{} ⇒ {} ⊣ {}", input, result_ty, final_ctx);
        Ok((
            result_ty,
            final_ctx,
            InferenceTree::new("InfLam", input, &output, vec![tree]),
        ))
    }

    /// T-App: Function application rule
    /// Γ ⊢ e1 ⇒ A   Γ ⊢ e2 • A ⇒⇒ C
    /// ────────────────────────────────
    /// Γ ⊢ e1 e2 ⇒ C
    fn infer_application(
        &mut self,
        ctx: &Context,
        func: &Expr,
        arg: &Expr,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let (func_ty, ctx1, tree1) = self.infer(ctx, func)?;
        let func_ty_applied = self.apply_ctx_type(&ctx1, &func_ty);
        let (result_ty, ctx2, tree2) = self.infer_app(&ctx1, &func_ty_applied, arg)?;
        let output = format!("{} ⇒ {} ⊣ {}", input, result_ty, ctx2);
        Ok((
            result_ty,
            ctx2,
            InferenceTree::new("InfApp", input, &output, vec![tree1, tree2]),
        ))
    }

    /// T-Let: Let binding rule
    /// Γ ⊢ e1 ⇒ A   Γ,x:A ⊢ e2 ⇒ B
    /// ─────────────────────────────
    /// Γ ⊢ let x = e1 in e2 ⇒ B
    fn infer_let(
        &mut self,
        ctx: &Context,
        x: &str,
        e1: &Expr,
        e2: &Expr,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let (ty1, ctx1, tree1) = self.infer(ctx, e1)?;
        let mut new_ctx = ctx1.clone();
        new_ctx.push(Entry::VarBnd(x.to_string(), ty1));
        let (ty2, ctx2, tree2) = self.infer(&new_ctx, e2)?;
        let (left, _, right) =
            ctx2.break3(|entry| matches!(entry, Entry::VarBnd(name, _) if name == x));
        // Preserve solved existential variable bindings from the left context
        let mut final_ctx_entries = left
            .into_iter()
            .filter(|entry| matches!(entry, Entry::SETVarBnd(_, _)))
            .collect::<Vec<_>>();
        final_ctx_entries.extend(right);
        let final_ctx = Context(final_ctx_entries);
        let output = format!("{} ⇒ {} ⊣ {}", input, ty2, final_ctx);
        Ok((
            ty2,
            final_ctx,
            InferenceTree::new("InfLet", input, &output, vec![tree1, tree2]),
        ))
    }

    /// T-If: Conditional rule
    /// Γ ⊢ e1 ⇐ Bool   Γ ⊢ e2 ⇒ A   Γ ⊢ e3 ⇒ A
    /// ──────────────────────────────────────────
    /// Γ ⊢ if e1 then e2 else e3 ⇒ A
    fn infer_if(
        &mut self,
        ctx: &Context,
        e1: &Expr,
        e2: &Expr,
        e3: &Expr,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let (ctx1, tree1) = self.check(ctx, e1, &Type::Bool)?;
        let (ty2, ctx2, tree2) = self.infer(&ctx1, e2)?;
        let (ty3, ctx3, tree3) = self.infer(&ctx2, e3)?;
        // Reconcile the branch types via subtyping (then-type <: else-type)
        let (unified_ctx, tree_unify) = self.subtype(&ctx3, &ty2, &ty3)?;
        let output = format!("{} ⇒ {} ⊣ {}", input, ty2, unified_ctx);
        Ok((
            ty2,
            unified_ctx,
            InferenceTree::new(
                "InfIf",
                input,
                &output,
                vec![tree1, tree2, tree3, tree_unify],
            ),
        ))
    }

    /// T-BinOp: Binary operation rules
    fn infer_binop(
        &mut self,
        ctx: &Context,
        op: &BinOp,
        e1: &Expr,
        e2: &Expr,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        match op {
            // T-Arith: Int → Int → Int
            BinOp::Add | BinOp::Sub | BinOp::Mul | BinOp::Div => {
                let (ctx1, tree1) = self.check(ctx, e1, &Type::Int)?;
                let (ctx2, tree2) = self.check(&ctx1, e2, &Type::Int)?;
                let output = format!("{} ⇒ Int ⊣ {}", input, ctx2);
                Ok((
                    Type::Int,
                    ctx2,
                    InferenceTree::new("InfArith", input, &output, vec![tree1, tree2]),
                ))
            }
            // T-Bool: Bool → Bool → Bool
            BinOp::And | BinOp::Or => {
                let (ctx1, tree1) = self.check(ctx, e1, &Type::Bool)?;
                let (ctx2, tree2) = self.check(&ctx1, e2, &Type::Bool)?;
                let output = format!("{} ⇒ Bool ⊣ {}", input, ctx2);
                Ok((
                    Type::Bool,
                    ctx2,
                    InferenceTree::new("InfBool", input, &output, vec![tree1, tree2]),
                ))
            }
            // T-Cmp: Int → Int → Bool
            BinOp::Lt | BinOp::Le | BinOp::Gt | BinOp::Ge => {
                let (ctx1, tree1) = self.check(ctx, e1, &Type::Int)?;
                let (ctx2, tree2) = self.check(&ctx1, e2, &Type::Int)?;
                let output = format!("{} ⇒ Bool ⊣ {}", input, ctx2);
                Ok((
                    Type::Bool,
                    ctx2,
                    InferenceTree::new("InfCmp", input, &output, vec![tree1, tree2]),
                ))
            }
            // T-Eq: ∀α. α → α → Bool
            BinOp::Eq | BinOp::Ne => {
                let (ty1, ctx1, tree1) = self.infer(ctx, e1)?;
                let (ctx2, tree2) = self.check(&ctx1, e2, &ty1)?;
                let output = format!("{} ⇒ Bool ⊣ {}", input, ctx2);
                Ok((
                    Type::Bool,
                    ctx2,
                    InferenceTree::new("InfEq", input, &output, vec![tree1, tree2]),
                ))
            }
        }
    }

    /// InstL: Instantiate ^α to a subtype of A
    /// Γ[^α] ⊢ ^α :=< A ⊣ Δ
    fn inst_l(
        &mut self,
        ctx: &Context,
        a: &TyVar,
        ty: &Type,
    ) -> TypeResult<(Context, InferenceTree)> {
        let input = format!("{} ⊢ ^{} :=< {}", ctx, a, ty);

        match ty {
            Type::ETVar(b) if self.before(ctx, a, b) => {
                let (left, _, right) =
                    ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == b));
                let new_ctx = Context::from_parts(
                    left,
                    Entry::SETVarBnd(b.clone(), Type::ETVar(a.clone())),
                    right,
                );
                let output = format!("{}", new_ctx);
                Ok((
                    new_ctx,
                    InferenceTree::new("InstLReach", &input, &output, vec![]),
                ))
            }
            Type::Arrow(t1, t2) => {
                let a1 = self.fresh_tyvar();
                let a2 = self.fresh_tyvar();
                let (left, _, right) =
                    ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
                let arrow_type = Type::Arrow(
                    Box::new(Type::ETVar(a1.clone())),
                    Box::new(Type::ETVar(a2.clone())),
                );
                let mut new_ctx = left;
                new_ctx.push(Entry::SETVarBnd(a.clone(), arrow_type));
                new_ctx.push(Entry::ETVarBnd(a1.clone()));
                new_ctx.push(Entry::ETVarBnd(a2.clone()));
                new_ctx.extend(right);
                let ctx1 = Context(new_ctx);

                let (ctx2, tree1) = self.inst_r(&ctx1, t1, &a1)?;
                let t2_applied = self.apply_ctx_type(&ctx2, t2);
                let (ctx3, tree2) = self.inst_l(&ctx2, &a2, &t2_applied)?;

                let output = format!("{}", ctx3);
                Ok((
                    ctx3,
                    InferenceTree::new("InstLArr", &input, &output, vec![tree1, tree2]),
                ))
            }
            Type::Forall(b, t) => {
                let mut new_ctx = ctx.clone();
                new_ctx.push(Entry::TVarBnd(b.clone()));
                let (ctx1, tree) = self.inst_l(&new_ctx, a, t)?;
                let (_, _, right) =
                    ctx1.break3(|entry| matches!(entry, Entry::TVarBnd(name) if name == b));
                let final_ctx = Context(right);
                let output = format!("{}", final_ctx);
                Ok((
                    final_ctx,
                    InferenceTree::new("InstLAllR", &input, &output, vec![tree]),
                ))
            }
            _ if self.is_mono(ty) => {
                // Occurs check: ensure ^α doesn't occur in τ to prevent infinite types
                if self.occurs_check(a, ty) {
                    return Err(TypeError::OccursCheck {
                        var: a.clone(),
                        ty: ty.clone(),
                        expr: None,
                    });
                }
                let (left, _, right) =
                    ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
                let new_ctx =
                    Context::from_parts(left, Entry::SETVarBnd(a.clone(), ty.clone()), right);
                let output = format!("{}", new_ctx);
                Ok((
                    new_ctx,
                    InferenceTree::new("InstLSolve", &input, &output, vec![]),
                ))
            }
            _ => Err(TypeError::InstantiationError {
                var: a.clone(),
                ty: ty.clone(),
                expr: None,
            }),
        }
    }

    /// InstR: Instantiate ^α to a supertype of A
    /// Γ[^α] ⊢ A :=< ^α ⊣ Δ
    fn inst_r(
        &mut self,
        ctx: &Context,
        ty: &Type,
        a: &TyVar,
    ) -> TypeResult<(Context, InferenceTree)> {
        let input = format!("{} ⊢ {} :=< ^{}", ctx, ty, a);

        match ty {
            Type::ETVar(b) if self.before(ctx, a, b) => {
                let (left, _, right) =
                    ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == b));
                let new_ctx = Context::from_parts(
                    left,
                    Entry::SETVarBnd(b.clone(), Type::ETVar(a.clone())),
                    right,
                );
                let output = format!("{}", new_ctx);
                Ok((
                    new_ctx,
                    InferenceTree::new("InstRReach", &input, &output, vec![]),
                ))
            }
            Type::Arrow(t1, t2) => {
                let a1 = self.fresh_tyvar();
                let a2 = self.fresh_tyvar();
                let (left, _, right) =
                    ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
                let arrow_type = Type::Arrow(
                    Box::new(Type::ETVar(a1.clone())),
                    Box::new(Type::ETVar(a2.clone())),
                );
                let mut new_ctx = left;
                new_ctx.push(Entry::SETVarBnd(a.clone(), arrow_type));
                new_ctx.push(Entry::ETVarBnd(a1.clone()));
                new_ctx.push(Entry::ETVarBnd(a2.clone()));
                new_ctx.extend(right);
                let ctx1 = Context(new_ctx);

                let (ctx2, tree1) = self.inst_l(&ctx1, &a1, t1)?;
                let t2_applied = self.apply_ctx_type(&ctx2, t2);
                let (ctx3, tree2) = self.inst_r(&ctx2, &t2_applied, &a2)?;

                let output = format!("{}", ctx3);
                Ok((
                    ctx3,
                    InferenceTree::new("InstRArr", &input, &output, vec![tree1, tree2]),
                ))
            }
            Type::Forall(b, t) => {
                let subst_t = self.subst_type(b, &Type::ETVar(b.clone()), t);
                let mut new_ctx = ctx.clone();
                new_ctx.push(Entry::ETVarBnd(b.clone()));
                new_ctx.push(Entry::Mark(b.clone()));
                let (ctx1, tree) = self.inst_r(&new_ctx, &subst_t, a)?;
                let (_, _, right) =
                    ctx1.break3(|entry| matches!(entry, Entry::Mark(name) if name == b));
                let final_ctx = Context(right);
                let output = format!("{}", final_ctx);
                Ok((
                    final_ctx,
                    InferenceTree::new("InstRAllL", &input, &output, vec![tree]),
                ))
            }
            _ if self.is_mono(ty) => {
                // Occurs check: ensure ^α doesn't occur in τ to prevent infinite types
                if self.occurs_check(a, ty) {
                    return Err(TypeError::OccursCheck {
                        var: a.clone(),
                        ty: ty.clone(),
                        expr: None,
                    });
                }
                let (left, _, right) =
                    ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
                let new_ctx =
                    Context::from_parts(left, Entry::SETVarBnd(a.clone(), ty.clone()), right);
                let output = format!("{}", new_ctx);
                Ok((
                    new_ctx,
                    InferenceTree::new("InstRSolve", &input, &output, vec![]),
                ))
            }
            _ => Err(TypeError::InstantiationError {
                var: a.clone(),
                ty: ty.clone(),
                expr: None,
            }),
        }
    }

    /// Subtyping: Γ ⊢ A <: B ⊣ Δ
    fn subtype(
        &mut self,
        ctx: &Context,
        t1: &Type,
        t2: &Type,
    ) -> TypeResult<(Context, InferenceTree)> {
        let input = format!("{} ⊢ {} <: {}", ctx, t1, t2);

        match (t1, t2) {
            (Type::Int, Type::Int) | (Type::Bool, Type::Bool) => Ok((
                ctx.clone(),
                InferenceTree::new("SubRefl", &input, &format!("{}", ctx), vec![]),
            )),
            (Type::Var(a), Type::Var(b)) if a == b => Ok((
                ctx.clone(),
                InferenceTree::new("SubReflTVar", &input, &format!("{}", ctx), vec![]),
            )),
            (Type::ETVar(a), Type::ETVar(b)) if a == b => Ok((
                ctx.clone(),
                InferenceTree::new("SubReflETVar", &input, &format!("{}", ctx), vec![]),
            )),
            (Type::Arrow(a1, a2), Type::Arrow(b1, b2)) => {
                let (ctx1, tree1) = self.subtype(ctx, b1, a1)?; // Contravariant in argument
                let (ctx2, tree2) = self.subtype(&ctx1, a2, b2)?; // Covariant in result
                let output = format!("{}", ctx2);
                Ok((
                    ctx2,
                    InferenceTree::new("SubArr", &input, &output, vec![tree1, tree2]),
                ))
            }
            (_, Type::Forall(b, t2_body)) => {
                let mut new_ctx = ctx.clone();
                new_ctx.push(Entry::TVarBnd(b.clone()));
                let (ctx1, tree) = self.subtype(&new_ctx, t1, t2_body)?;
                let (_, _, right) =
                    ctx1.break3(|entry| matches!(entry, Entry::TVarBnd(name) if name == b));
                let final_ctx = Context(right);
                let output = format!("{}", final_ctx);
                Ok((
                    final_ctx,
                    InferenceTree::new("SubAllR", &input, &output, vec![tree]),
                ))
            }
            (Type::Forall(a, t1_body), _) => {
                let subst_t1 = self.subst_type(a, &Type::ETVar(a.clone()), t1_body);
                let mut new_ctx = ctx.clone();
                new_ctx.push(Entry::ETVarBnd(a.clone()));
                new_ctx.push(Entry::Mark(a.clone()));
                let (ctx1, tree) = self.subtype(&new_ctx, &subst_t1, t2)?;
                let (_, _, right) =
                    ctx1.break3(|entry| matches!(entry, Entry::Mark(name) if name == a));
                let final_ctx = Context(right);
                let output = format!("{}", final_ctx);
                Ok((
                    final_ctx,
                    InferenceTree::new("SubAllL", &input, &output, vec![tree]),
                ))
            }
            (Type::ETVar(a), _) if !self.free_vars(t2).contains(a) => {
                let (ctx1, tree) = self.inst_l(ctx, a, t2)?;
                let output = format!("{}", ctx1);
                Ok((
                    ctx1,
                    InferenceTree::new("SubInstL", &input, &output, vec![tree]),
                ))
            }
            (_, Type::ETVar(a)) if !self.free_vars(t1).contains(a) => {
                let (ctx1, tree) = self.inst_r(ctx, t1, a)?;
                let output = format!("{}", ctx1);
                Ok((
                    ctx1,
                    InferenceTree::new("SubInstR", &input, &output, vec![tree]),
                ))
            }
            _ => Err(TypeError::SubtypingError {
                left: t1.clone(),
                right: t2.clone(),
                expr: None,
            }),
        }
    }

    /// Checking mode: Γ ⊢ e ⇐ A ⊣ Δ
    fn check(
        &mut self,
        ctx: &Context,
        expr: &Expr,
        ty: &Type,
    ) -> TypeResult<(Context, InferenceTree)> {
        let input = format!("{} ⊢ {:?} ⇐ {}", ctx, expr, ty);

        match (expr, ty) {
            (Expr::LitInt(_), Type::Int) => Ok((
                ctx.clone(),
                InferenceTree::new("ChkLitInt", &input, &format!("{}", ctx), vec![]),
            )),
            (Expr::LitBool(_), Type::Bool) => Ok((
                ctx.clone(),
                InferenceTree::new("ChkLitBool", &input, &format!("{}", ctx), vec![]),
            )),
            (Expr::Abs(x, _param_ty, body), Type::Arrow(expected_param, result_ty)) => {
                // For simplicity, ignore the written annotation and bind x at
                // the expected parameter type
                let mut new_ctx = ctx.clone();
                new_ctx.push(Entry::VarBnd(x.clone(), *expected_param.clone()));
                let (ctx1, tree) = self.check(&new_ctx, body, result_ty)?;
                let (_, _, right) =
                    ctx1.break3(|entry| matches!(entry, Entry::VarBnd(name, _) if name == x));
                let final_ctx = Context(right);
                let output = format!("{}", final_ctx);
                Ok((
                    final_ctx,
                    InferenceTree::new("ChkLam", &input, &output, vec![tree]),
                ))
            }
            (_, Type::Forall(a, ty_body)) => {
                let mut new_ctx = ctx.clone();
                new_ctx.push(Entry::TVarBnd(a.clone()));
                let (ctx1, tree) = self.check(&new_ctx, expr, ty_body)?;
                let (_, _, right) =
                    ctx1.break3(|entry| matches!(entry, Entry::TVarBnd(name) if name == a));
                let final_ctx = Context(right);
                let output = format!("{}", final_ctx);
                Ok((
                    final_ctx,
                    InferenceTree::new("ChkAll", &input, &output, vec![tree]),
                ))
            }
            _ => {
                // Fallback to inference + subtyping
                let (inferred_ty, ctx1, tree1) = self.infer(ctx, expr)?;
                let inferred_applied = self.apply_ctx_type(&ctx1, &inferred_ty);
                let ty_applied = self.apply_ctx_type(&ctx1, ty);
                let (ctx2, tree2) = self.subtype(&ctx1, &inferred_applied, &ty_applied)?;
                let output = format!("{}", ctx2);
                Ok((
                    ctx2,
                    InferenceTree::new("ChkSub", &input, &output, vec![tree1, tree2]),
                ))
            }
        }
    }

    /// Synthesis mode: Γ ⊢ e ⇒ A ⊣ Δ
    pub fn infer(
        &mut self,
        ctx: &Context,
        expr: &Expr,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let input = format!("{} ⊢ {:?}", ctx, expr);

        match expr {
            Expr::Var(x) => self.infer_var(ctx, x, &input),
            Expr::Ann(expr, ty) => self.infer_ann(ctx, expr, ty, &input),
            Expr::LitInt(n) => self.infer_lit_int(ctx, *n, &input),
            Expr::LitBool(b) => self.infer_lit_bool(ctx, *b, &input),
            Expr::Abs(x, param_ty, body) => self.infer_abs(ctx, x, param_ty, body, &input),
            Expr::App(func, arg) => self.infer_application(ctx, func, arg, &input),
            Expr::TAbs(a, body) => self.infer_tabs(ctx, a, body, &input),
            Expr::TApp(func, ty_arg) => self.infer_tapp(ctx, func, ty_arg, &input),
            Expr::Let(x, e1, e2) => self.infer_let(ctx, x, e1, e2, &input),
            Expr::IfThenElse(e1, e2, e3) => self.infer_if(ctx, e1, e2, e3, &input),
            Expr::BinOp(op, e1, e2) => self.infer_binop(ctx, op, e1, e2, &input),
        }
    }

    /// T-Ann: Type annotation rule
    /// Γ ⊢ e ⇐ A
    /// ──────────────────
    /// Γ ⊢ (e : A) ⇒ A
    fn infer_ann(
        &mut self,
        ctx: &Context,
        expr: &Expr,
        ty: &Type,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let (ctx1, tree) = self.check(ctx, expr, ty)?;
        let output = format!("{} ⇒ {} ⊣ {}", input, ty, ctx1);
        Ok((
            ty.clone(),
            ctx1,
            InferenceTree::new("InfAnn", input, &output, vec![tree]),
        ))
    }

    /// T-ForallI: Type abstraction rule
    /// Γ, α ⊢ e ⇒ A
    /// ──────────────────────
    /// Γ ⊢ Λα. e ⇒ ∀α. A
    fn infer_tabs(
        &mut self,
        ctx: &Context,
        a: &str,
        body: &Expr,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let mut new_ctx = ctx.clone();
        new_ctx.push(Entry::TVarBnd(a.to_string()));
        let (body_ty, ctx1, tree) = self.infer(&new_ctx, body)?;

        // Apply context substitutions to resolve existential variables before removing
        // type binding
        let resolved_body_ty = self.apply_ctx_type(&ctx1, &body_ty);

        let (left, _, right) =
            ctx1.break3(|entry| matches!(entry, Entry::TVarBnd(name) if name == a));
        // Preserve solved existential variable bindings from the left context
        let mut final_ctx_entries = left
            .into_iter()
            .filter(|entry| matches!(entry, Entry::SETVarBnd(_, _)))
            .collect::<Vec<_>>();
        final_ctx_entries.extend(right);
        let final_ctx = Context(final_ctx_entries);
        let result_ty = Type::Forall(a.to_string(), Box::new(resolved_body_ty));
        let output = format!("{} ⇒ {} ⊣ {}", input, result_ty, final_ctx);
        Ok((
            result_ty,
            final_ctx,
            InferenceTree::new("InfTAbs", input, &output, vec![tree]),
        ))
    }

    /// T-ForallE: Type application rule
    /// Γ ⊢ e ⇒ ∀α. A
    /// ──────────────────────
    /// Γ ⊢ e[B] ⇒ [B/α]A
    fn infer_tapp(
        &mut self,
        ctx: &Context,
        func: &Expr,
        ty_arg: &Type,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let (func_ty, ctx1, tree1) = self.infer(ctx, func)?;
        match func_ty {
            Type::Forall(a, body_ty) => {
                let result_ty = self.subst_type(&a, ty_arg, &body_ty);
                let output = format!("{} ⇒ {} ⊣ {}", input, result_ty, ctx1);
                Ok((
                    result_ty,
                    ctx1,
                    InferenceTree::new("InfTApp", input, &output, vec![tree1]),
                ))
            }
            _ => Err(TypeError::TypeApplicationError {
                actual: func_ty.clone(),
                expr: None,
            }),
        }
    }

    /// Application judgment: apply a function of type A to argument e
    /// Γ ⊢ e • A ⇒⇒ C ⊣ Δ
    fn infer_app(
        &mut self,
        ctx: &Context,
        func_ty: &Type,
        arg: &Expr,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let input = format!("{} ⊢ {:?} • {}", ctx, arg, func_ty);

        match func_ty {
            Type::Arrow(param_ty, result_ty) => {
                let (ctx1, tree) = self.check(ctx, arg, param_ty)?;
                let output = format!("{} ⇒⇒ {} ⊣ {}", input, result_ty, ctx1);
                Ok((
                    result_ty.as_ref().clone(),
                    ctx1,
                    InferenceTree::new("InfAppArr", &input, &output, vec![tree]),
                ))
            }
            Type::ETVar(a) => {
                let a1 = self.fresh_tyvar();
                let a2 = self.fresh_tyvar();
                let (left, _, right) =
                    ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
                let arrow_type = Type::Arrow(
                    Box::new(Type::ETVar(a1.clone())),
                    Box::new(Type::ETVar(a2.clone())),
                );
                let mut new_ctx = left;
                new_ctx.push(Entry::SETVarBnd(a.clone(), arrow_type));
                new_ctx.push(Entry::ETVarBnd(a1.clone()));
                new_ctx.push(Entry::ETVarBnd(a2.clone()));
                new_ctx.extend(right);
                let ctx1 = Context(new_ctx);

                let (ctx2, tree) = self.check(&ctx1, arg, &Type::ETVar(a1))?;
                let output = format!("{} ⇒⇒ ^{} ⊣ {}", input, a2, ctx2);
                Ok((
                    Type::ETVar(a2),
                    ctx2,
                    InferenceTree::new("InfAppETVar", &input, &output, vec![tree]),
                ))
            }
            _ => Err(TypeError::ApplicationTypeError {
                actual: func_ty.clone(),
                expr: None,
            }),
        }
    }
}
pub fn run_bidirectional(expr: &Expr) -> TypeResult<(Type, Context, InferenceTree)> {
    let mut bi = BiDirectional::new();
    let ctx = Context::new();
    let (ty, final_ctx, tree) = bi.infer(&ctx, expr)?;
    // Apply the final context to resolve existential variables
    let resolved_ty = bi.apply_ctx_type(&final_ctx, &ty);
    Ok((resolved_ty, final_ctx, tree))
}
#[derive(Debug, Clone)]
pub enum Entry {
    VarBnd(TmVar, Type),    // x: A
    TVarBnd(TyVar),         // α
    ETVarBnd(TyVar),        // ^α
    SETVarBnd(TyVar, Type), // ^α = τ
    Mark(TyVar),            // $α
}

The context entries represent different kinds of information the type checker needs to track throughout the bidirectional inference process. Variable bindings, represented as VarBnd(TmVar, Type), associate term variables with their types, such as recording that x has type Int in the current scope. Type variable bindings, denoted as TVarBnd(TyVar), introduce type variables into scope, such as the \( \alpha \) that appears in universal quantification \( \forall \alpha. \ldots \).

Existential type variable bindings, written as ETVarBnd(TyVar), introduce existential type variables that represent unknown types to be determined through constraint solving. Solved existential type variable bindings, represented as SETVarBnd(TyVar, Type), record the concrete solutions discovered for existential variables during the inference process. Finally, marker entries, denoted as Mark(TyVar), mark the beginning of a scope to enable proper garbage collection of type variables when their scope is exited.

The context itself maintains these entries in an ordered, stack-like structure, and that ordering is crucial for scoping and variable resolution:

#![allow(unused)]
fn main() {
use std::collections::HashSet;
use std::fmt;
use crate::ast::{BinOp, Expr, Type};
use crate::errors::{TypeError, TypeResult};
pub type TyVar = String;
pub type TmVar = String;
#[derive(Debug, Clone)]
pub enum Entry {
    VarBnd(TmVar, Type),    // x: A
    TVarBnd(TyVar),         // α
    ETVarBnd(TyVar),        // ^α
    SETVarBnd(TyVar, Type), // ^α = τ
    Mark(TyVar),            // $α
}
impl fmt::Display for Entry {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Entry::VarBnd(x, ty) => write!(f, "{}: {}", x, ty),
            Entry::TVarBnd(a) => write!(f, "{}", a),
            Entry::ETVarBnd(a) => write!(f, "^{}", a),
            Entry::SETVarBnd(a, ty) => write!(f, "^{} = {}", a, ty),
            Entry::Mark(a) => write!(f, "${}", a),
        }
    }
}
impl Default for Context {
    fn default() -> Self {
        Self::new()
    }
}
impl Context {
    pub fn new() -> Self {
        Context(Vec::new())
    }

    pub fn push(&mut self, entry: Entry) {
        self.0.push(entry);
    }

    pub fn find<F>(&self, predicate: F) -> Option<&Entry>
    where
        F: Fn(&Entry) -> bool, {
        self.0.iter().find(|entry| predicate(entry))
    }

    pub fn break3<F>(&self, predicate: F) -> (Vec<Entry>, Option<Entry>, Vec<Entry>)
    where
        F: Fn(&Entry) -> bool, {
        if let Some(pos) = self.0.iter().position(predicate) {
            let left = self.0[..pos].to_vec();
            let middle = self.0[pos].clone();
            let right = self.0[pos + 1..].to_vec();
            (left, Some(middle), right)
        } else {
            (self.0.clone(), None, Vec::new())
        }
    }

    pub fn from_parts(left: Vec<Entry>, middle: Entry, right: Vec<Entry>) -> Self {
        let mut ctx = left;
        ctx.push(middle);
        ctx.extend(right);
        Context(ctx)
    }
}
impl fmt::Display for Context {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let entries: Vec<String> = self.0.iter().rev().map(|e| e.to_string()).collect();
        write!(f, "{}", entries.join(", "))
    }
}
#[derive(Debug)]
pub struct InferenceTree {
    pub rule: String,
    pub input: String,
    pub output: String,
    pub children: Vec<InferenceTree>,
}
impl InferenceTree {
    fn new(rule: &str, input: &str, output: &str, children: Vec<InferenceTree>) -> Self {
        Self {
            rule: rule.to_string(),
            input: input.to_string(),
            output: output.to_string(),
            children,
        }
    }
}
impl fmt::Display for InferenceTree {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        self.display_with_indent(f, 0)
    }
}
impl InferenceTree {
    fn display_with_indent(&self, f: &mut fmt::Formatter, indent: usize) -> fmt::Result {
        let prefix = "  ".repeat(indent);
        writeln!(
            f,
            "{}{}: {} => {}",
            prefix, self.rule, self.input, self.output
        )?;
        for child in &self.children {
            child.display_with_indent(f, indent + 1)?;
        }
        Ok(())
    }
}
pub struct BiDirectional {
    counter: usize,
}
impl Default for BiDirectional {
    fn default() -> Self {
        Self::new()
    }
}
#[allow(clippy::only_used_in_recursion)]
impl BiDirectional {
    pub fn new() -> Self {
        Self { counter: 0 }
    }

    fn fresh_tyvar(&mut self) -> TyVar {
        let var = format!("α{}", self.counter);
        self.counter += 1;
        var
    }

    fn is_mono(&self, ty: &Type) -> bool {
        match ty {
            Type::Int | Type::Bool | Type::Var(_) | Type::ETVar(_) => true,
            Type::Arrow(t1, t2) => self.is_mono(t1) && self.is_mono(t2),
            Type::Forall(_, _) => false,
        }
    }

    fn free_vars(&self, ty: &Type) -> HashSet<TyVar> {
        match ty {
            Type::Var(name) | Type::ETVar(name) => {
                let mut set = HashSet::new();
                set.insert(name.clone());
                set
            }
            Type::Arrow(t1, t2) => {
                let mut set = self.free_vars(t1);
                set.extend(self.free_vars(t2));
                set
            }
            Type::Forall(var, ty) => {
                let mut set = self.free_vars(ty);
                set.remove(var);
                set
            }
            Type::Int | Type::Bool => HashSet::new(),
        }
    }

    fn subst_type(&self, var: &TyVar, replacement: &Type, ty: &Type) -> Type {
        match ty {
            Type::Var(name) if name == var => replacement.clone(),
            Type::ETVar(name) if name == var => replacement.clone(),
            Type::Var(_) | Type::ETVar(_) | Type::Int | Type::Bool => ty.clone(),
            Type::Arrow(t1, t2) => Type::Arrow(
                Box::new(self.subst_type(var, replacement, t1)),
                Box::new(self.subst_type(var, replacement, t2)),
            ),
            Type::Forall(bound_var, body) => {
                if bound_var == var {
                    ty.clone() // Variable is shadowed
                } else {
                    Type::Forall(
                        bound_var.clone(),
                        Box::new(self.subst_type(var, replacement, body)),
                    )
                }
            }
        }
    }

    pub fn apply_ctx_type(&self, ctx: &Context, ty: &Type) -> Type {
        let mut current = ty.clone();
        let mut changed = true;

        // Keep applying substitutions until no more changes occur
        while changed {
            changed = false;
            let new_type = self.apply_ctx_type_once(ctx, &current);
            if new_type != current {
                changed = true;
                current = new_type;
            }
        }

        current
    }

    fn apply_ctx_type_once(&self, ctx: &Context, ty: &Type) -> Type {
        match ty {
            Type::ETVar(a) => {
                if let Some(Entry::SETVarBnd(_, replacement)) =
                    ctx.find(|entry| matches!(entry, Entry::SETVarBnd(name, _) if name == a))
                {
                    // Recursively apply substitutions to the replacement type
                    self.apply_ctx_type_once(ctx, replacement)
                } else {
                    ty.clone()
                }
            }
            Type::Arrow(t1, t2) => Type::Arrow(
                Box::new(self.apply_ctx_type_once(ctx, t1)),
                Box::new(self.apply_ctx_type_once(ctx, t2)),
            ),
            Type::Forall(var, body) => {
                Type::Forall(var.clone(), Box::new(self.apply_ctx_type_once(ctx, body)))
            }
            _ => ty.clone(),
        }
    }

    fn before(&self, ctx: &Context, a: &TyVar, b: &TyVar) -> bool {
        let pos_a = ctx
            .0
            .iter()
            .position(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
        let pos_b = ctx
            .0
            .iter()
            .position(|entry| matches!(entry, Entry::ETVarBnd(name) if name == b));

        match (pos_a, pos_b) {
            (Some(pa), Some(pb)) => pa > pb, // Later in the context means earlier in ordering
            _ => false,
        }
    }

    /// Check if a type variable occurs in a type (occurs check for unification)
    /// This prevents creating infinite types when solving ^α := τ by ensuring α
    /// ∉ τ
    fn occurs_check(&self, var: &TyVar, ty: &Type) -> bool {
        match ty {
            Type::Var(name) | Type::ETVar(name) => name == var,
            Type::Arrow(t1, t2) => self.occurs_check(var, t1) || self.occurs_check(var, t2),
            Type::Forall(bound_var, body) => {
                // If the variable is shadowed by the forall binding, it doesn't occur
                if bound_var == var {
                    false
                } else {
                    self.occurs_check(var, body)
                }
            }
            Type::Int | Type::Bool => false,
        }
    }

    // ========== TYPING RULE METHODS ==========
    // Each method implements a specific typing rule from System F

    /// T-Var: Variable lookup rule
    /// Γ, x:A ⊢ x ⇒ A
    fn infer_var(
        &self,
        ctx: &Context,
        x: &str,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        if let Some(Entry::VarBnd(_, ty)) =
            ctx.find(|entry| matches!(entry, Entry::VarBnd(name, _) if name == x))
        {
            let output = format!("{} ⇒ {} ⊣ {}", input, ty, ctx);
            Ok((
                ty.clone(),
                ctx.clone(),
                InferenceTree::new("InfVar", input, &output, vec![]),
            ))
        } else {
            Err(TypeError::UnboundVariable {
                name: x.to_string(),
                expr: None,
            })
        }
    }

    /// T-LitInt: Integer literal rule
    /// ⊢ n ⇒ Int
    fn infer_lit_int(
        &self,
        ctx: &Context,
        _n: i64,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let output = format!("{} ⇒ Int ⊣ {}", input, ctx);
        Ok((
            Type::Int,
            ctx.clone(),
            InferenceTree::new("InfLitInt", input, &output, vec![]),
        ))
    }

    /// T-LitBool: Boolean literal rule
    /// ⊢ b ⇒ Bool
    fn infer_lit_bool(
        &self,
        ctx: &Context,
        _b: bool,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let output = format!("{} ⇒ Bool ⊣ {}", input, ctx);
        Ok((
            Type::Bool,
            ctx.clone(),
            InferenceTree::new("InfLitBool", input, &output, vec![]),
        ))
    }

    /// T-Abs: Lambda abstraction rule
    /// Γ,x:A ⊢ e ⇐ B
    /// ─────────────────────
    /// Γ ⊢ λx:A.e ⇒ A → B
    fn infer_abs(
        &mut self,
        ctx: &Context,
        x: &str,
        param_ty: &Type,
        body: &Expr,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let b = self.fresh_tyvar();
        let mut new_ctx = ctx.clone();
        new_ctx.push(Entry::VarBnd(x.to_string(), param_ty.clone()));
        new_ctx.push(Entry::ETVarBnd(b.clone()));

        let (ctx1, tree) = self.check(&new_ctx, body, &Type::ETVar(b.clone()))?;
        let (left, _, right) =
            ctx1.break3(|entry| matches!(entry, Entry::VarBnd(name, _) if name == x));
        // Preserve solved existential variable bindings from the left context
        let mut final_ctx_entries = left
            .into_iter()
            .filter(|entry| matches!(entry, Entry::SETVarBnd(_, _)))
            .collect::<Vec<_>>();
        final_ctx_entries.extend(right);
        let final_ctx = Context(final_ctx_entries);
        let result_ty = Type::Arrow(Box::new(param_ty.clone()), Box::new(Type::ETVar(b)));
        let output = format!("{} ⇒ {} ⊣ {}", input, result_ty, final_ctx);
        Ok((
            result_ty,
            final_ctx,
            InferenceTree::new("InfLam", input, &output, vec![tree]),
        ))
    }

    /// T-App: Function application rule
    /// Γ ⊢ e1 ⇒ A   Γ ⊢ e2 • A ⇒⇒ C
    /// ────────────────────────────────
    /// Γ ⊢ e1 e2 ⇒ C
    fn infer_application(
        &mut self,
        ctx: &Context,
        func: &Expr,
        arg: &Expr,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let (func_ty, ctx1, tree1) = self.infer(ctx, func)?;
        let func_ty_applied = self.apply_ctx_type(&ctx1, &func_ty);
        let (result_ty, ctx2, tree2) = self.infer_app(&ctx1, &func_ty_applied, arg)?;
        let output = format!("{} ⇒ {} ⊣ {}", input, result_ty, ctx2);
        Ok((
            result_ty,
            ctx2,
            InferenceTree::new("InfApp", input, &output, vec![tree1, tree2]),
        ))
    }

    /// T-Let: Let binding rule
    /// Γ ⊢ e1 ⇒ A   Γ,x:A ⊢ e2 ⇒ B
    /// ─────────────────────────────
    /// Γ ⊢ let x = e1 in e2 ⇒ B
    fn infer_let(
        &mut self,
        ctx: &Context,
        x: &str,
        e1: &Expr,
        e2: &Expr,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let (ty1, ctx1, tree1) = self.infer(ctx, e1)?;
        let mut new_ctx = ctx1.clone();
        new_ctx.push(Entry::VarBnd(x.to_string(), ty1));
        let (ty2, ctx2, tree2) = self.infer(&new_ctx, e2)?;
        let (left, _, right) =
            ctx2.break3(|entry| matches!(entry, Entry::VarBnd(name, _) if name == x));
        // Preserve solved existential variable bindings from the left context
        let mut final_ctx_entries = left
            .into_iter()
            .filter(|entry| matches!(entry, Entry::SETVarBnd(_, _)))
            .collect::<Vec<_>>();
        final_ctx_entries.extend(right);
        let final_ctx = Context(final_ctx_entries);
        let output = format!("{} ⇒ {} ⊣ {}", input, ty2, final_ctx);
        Ok((
            ty2,
            final_ctx,
            InferenceTree::new("InfLet", input, &output, vec![tree1, tree2]),
        ))
    }

    /// T-If: Conditional rule
    /// Γ ⊢ e1 ⇐ Bool   Γ ⊢ e2 ⇒ A   Γ ⊢ e3 ⇒ A
    /// ──────────────────────────────────────────
    /// Γ ⊢ if e1 then e2 else e3 ⇒ A
    fn infer_if(
        &mut self,
        ctx: &Context,
        e1: &Expr,
        e2: &Expr,
        e3: &Expr,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let (ctx1, tree1) = self.check(ctx, e1, &Type::Bool)?;
        let (ty2, ctx2, tree2) = self.infer(&ctx1, e2)?;
        let (ty3, ctx3, tree3) = self.infer(&ctx2, e3)?;
        // Ensure both branches have the same type
        let (unified_ctx, tree_unify) = self.subtype(&ctx3, &ty2, &ty3)?;
        let output = format!("{} ⇒ {} ⊣ {}", input, ty2, unified_ctx);
        Ok((
            ty2,
            unified_ctx,
            InferenceTree::new(
                "InfIf",
                input,
                &output,
                vec![tree1, tree2, tree3, tree_unify],
            ),
        ))
    }

    /// T-BinOp: Binary operation rules
    fn infer_binop(
        &mut self,
        ctx: &Context,
        op: &BinOp,
        e1: &Expr,
        e2: &Expr,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        match op {
            // T-Arith: Int → Int → Int
            BinOp::Add | BinOp::Sub | BinOp::Mul | BinOp::Div => {
                let (ctx1, tree1) = self.check(ctx, e1, &Type::Int)?;
                let (ctx2, tree2) = self.check(&ctx1, e2, &Type::Int)?;
                let output = format!("{} ⇒ Int ⊣ {}", input, ctx2);
                Ok((
                    Type::Int,
                    ctx2,
                    InferenceTree::new("InfArith", input, &output, vec![tree1, tree2]),
                ))
            }
            // T-Bool: Bool → Bool → Bool
            BinOp::And | BinOp::Or => {
                let (ctx1, tree1) = self.check(ctx, e1, &Type::Bool)?;
                let (ctx2, tree2) = self.check(&ctx1, e2, &Type::Bool)?;
                let output = format!("{} ⇒ Bool ⊣ {}", input, ctx2);
                Ok((
                    Type::Bool,
                    ctx2,
                    InferenceTree::new("InfBool", input, &output, vec![tree1, tree2]),
                ))
            }
            // T-Cmp: Int → Int → Bool
            BinOp::Lt | BinOp::Le | BinOp::Gt | BinOp::Ge => {
                let (ctx1, tree1) = self.check(ctx, e1, &Type::Int)?;
                let (ctx2, tree2) = self.check(&ctx1, e2, &Type::Int)?;
                let output = format!("{} ⇒ Bool ⊣ {}", input, ctx2);
                Ok((
                    Type::Bool,
                    ctx2,
                    InferenceTree::new("InfCmp", input, &output, vec![tree1, tree2]),
                ))
            }
            // T-Eq: ∀α. α → α → Bool
            BinOp::Eq | BinOp::Ne => {
                let (ty1, ctx1, tree1) = self.infer(ctx, e1)?;
                let (ctx2, tree2) = self.check(&ctx1, e2, &ty1)?;
                let output = format!("{} ⇒ Bool ⊣ {}", input, ctx2);
                Ok((
                    Type::Bool,
                    ctx2,
                    InferenceTree::new("InfEq", input, &output, vec![tree1, tree2]),
                ))
            }
        }
    }

    fn inst_l(
        &mut self,
        ctx: &Context,
        a: &TyVar,
        ty: &Type,
    ) -> TypeResult<(Context, InferenceTree)> {
        let input = format!("{} ⊢ ^{} :=< {}", ctx, a, ty);

        match ty {
            Type::ETVar(b) if self.before(ctx, a, b) => {
                let (left, _, right) =
                    ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == b));
                let new_ctx = Context::from_parts(
                    left,
                    Entry::SETVarBnd(b.clone(), Type::ETVar(a.clone())),
                    right,
                );
                let output = format!("{}", new_ctx);
                Ok((
                    new_ctx,
                    InferenceTree::new("InstLReach", &input, &output, vec![]),
                ))
            }
            Type::Arrow(t1, t2) => {
                let a1 = self.fresh_tyvar();
                let a2 = self.fresh_tyvar();
                let (left, _, right) =
                    ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
                let arrow_type = Type::Arrow(
                    Box::new(Type::ETVar(a1.clone())),
                    Box::new(Type::ETVar(a2.clone())),
                );
                let mut new_ctx = left;
                new_ctx.push(Entry::SETVarBnd(a.clone(), arrow_type));
                new_ctx.push(Entry::ETVarBnd(a1.clone()));
                new_ctx.push(Entry::ETVarBnd(a2.clone()));
                new_ctx.extend(right);
                let ctx1 = Context(new_ctx);

                let (ctx2, tree1) = self.inst_r(&ctx1, t1, &a1)?;
                let t2_applied = self.apply_ctx_type(&ctx2, t2);
                let (ctx3, tree2) = self.inst_l(&ctx2, &a2, &t2_applied)?;

                let output = format!("{}", ctx3);
                Ok((
                    ctx3,
                    InferenceTree::new("InstLArr", &input, &output, vec![tree1, tree2]),
                ))
            }
            Type::Forall(b, t) => {
                let mut new_ctx = ctx.clone();
                new_ctx.push(Entry::TVarBnd(b.clone()));
                let (ctx1, tree) = self.inst_l(&new_ctx, a, t)?;
                let (_, _, right) =
                    ctx1.break3(|entry| matches!(entry, Entry::TVarBnd(name) if name == b));
                let final_ctx = Context(right);
                let output = format!("{}", final_ctx);
                Ok((
                    final_ctx,
                    InferenceTree::new("InstLAllR", &input, &output, vec![tree]),
                ))
            }
            _ if self.is_mono(ty) => {
                // Occurs check: ensure ^α doesn't occur in τ to prevent infinite types
                if self.occurs_check(a, ty) {
                    return Err(TypeError::OccursCheck {
                        var: a.clone(),
                        ty: ty.clone(),
                        expr: None,
                    });
                }
                let (left, _, right) =
                    ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
                let new_ctx =
                    Context::from_parts(left, Entry::SETVarBnd(a.clone(), ty.clone()), right);
                let output = format!("{}", new_ctx);
                Ok((
                    new_ctx,
                    InferenceTree::new("InstLSolve", &input, &output, vec![]),
                ))
            }
            _ => Err(TypeError::InstantiationError {
                var: a.clone(),
                ty: ty.clone(),
                expr: None,
            }),
        }
    }

    fn inst_r(
        &mut self,
        ctx: &Context,
        ty: &Type,
        a: &TyVar,
    ) -> TypeResult<(Context, InferenceTree)> {
        let input = format!("{} ⊢ {} :=< ^{}", ctx, ty, a);

        match ty {
            Type::ETVar(b) if self.before(ctx, a, b) => {
                let (left, _, right) =
                    ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == b));
                let new_ctx = Context::from_parts(
                    left,
                    Entry::SETVarBnd(b.clone(), Type::ETVar(a.clone())),
                    right,
                );
                let output = format!("{}", new_ctx);
                Ok((
                    new_ctx,
                    InferenceTree::new("InstRReach", &input, &output, vec![]),
                ))
            }
            Type::Arrow(t1, t2) => {
                let a1 = self.fresh_tyvar();
                let a2 = self.fresh_tyvar();
                let (left, _, right) =
                    ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
                let arrow_type = Type::Arrow(
                    Box::new(Type::ETVar(a1.clone())),
                    Box::new(Type::ETVar(a2.clone())),
                );
                let mut new_ctx = left;
                new_ctx.push(Entry::SETVarBnd(a.clone(), arrow_type));
                new_ctx.push(Entry::ETVarBnd(a1.clone()));
                new_ctx.push(Entry::ETVarBnd(a2.clone()));
                new_ctx.extend(right);
                let ctx1 = Context(new_ctx);

                let (ctx2, tree1) = self.inst_l(&ctx1, &a1, t1)?;
                let t2_applied = self.apply_ctx_type(&ctx2, t2);
                let (ctx3, tree2) = self.inst_r(&ctx2, &t2_applied, &a2)?;

                let output = format!("{}", ctx3);
                Ok((
                    ctx3,
                    InferenceTree::new("InstRArr", &input, &output, vec![tree1, tree2]),
                ))
            }
            Type::Forall(b, t) => {
                let subst_t = self.subst_type(b, &Type::ETVar(b.clone()), t);
                let mut new_ctx = ctx.clone();
                new_ctx.push(Entry::ETVarBnd(b.clone()));
                new_ctx.push(Entry::Mark(b.clone()));
                let (ctx1, tree) = self.inst_r(&new_ctx, &subst_t, a)?;
                let (_, _, right) =
                    ctx1.break3(|entry| matches!(entry, Entry::Mark(name) if name == b));
                let final_ctx = Context(right);
                let output = format!("{}", final_ctx);
                Ok((
                    final_ctx,
                    InferenceTree::new("InstRAllL", &input, &output, vec![tree]),
                ))
            }
            _ if self.is_mono(ty) => {
                // Occurs check: ensure ^α doesn't occur in τ to prevent infinite types
                if self.occurs_check(a, ty) {
                    return Err(TypeError::OccursCheck {
                        var: a.clone(),
                        ty: ty.clone(),
                        expr: None,
                    });
                }
                let (left, _, right) =
                    ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
                let new_ctx =
                    Context::from_parts(left, Entry::SETVarBnd(a.clone(), ty.clone()), right);
                let output = format!("{}", new_ctx);
                Ok((
                    new_ctx,
                    InferenceTree::new("InstRSolve", &input, &output, vec![]),
                ))
            }
            _ => Err(TypeError::InstantiationError {
                var: a.clone(),
                ty: ty.clone(),
                expr: None,
            }),
        }
    }

    fn subtype(
        &mut self,
        ctx: &Context,
        t1: &Type,
        t2: &Type,
    ) -> TypeResult<(Context, InferenceTree)> {
        let input = format!("{} ⊢ {} <: {}", ctx, t1, t2);

        match (t1, t2) {
            (Type::Int, Type::Int) | (Type::Bool, Type::Bool) => Ok((
                ctx.clone(),
                InferenceTree::new("SubRefl", &input, &format!("{}", ctx), vec![]),
            )),
            (Type::Var(a), Type::Var(b)) if a == b => Ok((
                ctx.clone(),
                InferenceTree::new("SubReflTVar", &input, &format!("{}", ctx), vec![]),
            )),
            (Type::ETVar(a), Type::ETVar(b)) if a == b => Ok((
                ctx.clone(),
                InferenceTree::new("SubReflETVar", &input, &format!("{}", ctx), vec![]),
            )),
            (Type::Arrow(a1, a2), Type::Arrow(b1, b2)) => {
                let (ctx1, tree1) = self.subtype(ctx, b1, a1)?; // Contravariant in argument
                let (ctx2, tree2) = self.subtype(&ctx1, a2, b2)?; // Covariant in result
                let output = format!("{}", ctx2);
                Ok((
                    ctx2,
                    InferenceTree::new("SubArr", &input, &output, vec![tree1, tree2]),
                ))
            }
            (_, Type::Forall(b, t2_body)) => {
                let mut new_ctx = ctx.clone();
                new_ctx.push(Entry::TVarBnd(b.clone()));
                let (ctx1, tree) = self.subtype(&new_ctx, t1, t2_body)?;
                let (_, _, right) =
                    ctx1.break3(|entry| matches!(entry, Entry::TVarBnd(name) if name == b));
                let final_ctx = Context(right);
                let output = format!("{}", final_ctx);
                Ok((
                    final_ctx,
                    InferenceTree::new("SubAllR", &input, &output, vec![tree]),
                ))
            }
            (Type::Forall(a, t1_body), _) => {
                let subst_t1 = self.subst_type(a, &Type::ETVar(a.clone()), t1_body);
                let mut new_ctx = ctx.clone();
                new_ctx.push(Entry::ETVarBnd(a.clone()));
                new_ctx.push(Entry::Mark(a.clone()));
                let (ctx1, tree) = self.subtype(&new_ctx, &subst_t1, t2)?;
                let (_, _, right) =
                    ctx1.break3(|entry| matches!(entry, Entry::Mark(name) if name == a));
                let final_ctx = Context(right);
                let output = format!("{}", final_ctx);
                Ok((
                    final_ctx,
                    InferenceTree::new("SubAllL", &input, &output, vec![tree]),
                ))
            }
            (Type::ETVar(a), _) if !self.free_vars(t2).contains(a) => {
                let (ctx1, tree) = self.inst_l(ctx, a, t2)?;
                let output = format!("{}", ctx1);
                Ok((
                    ctx1,
                    InferenceTree::new("SubInstL", &input, &output, vec![tree]),
                ))
            }
            (_, Type::ETVar(a)) if !self.free_vars(t1).contains(a) => {
                let (ctx1, tree) = self.inst_r(ctx, t1, a)?;
                let output = format!("{}", ctx1);
                Ok((
                    ctx1,
                    InferenceTree::new("SubInstR", &input, &output, vec![tree]),
                ))
            }
            _ => Err(TypeError::SubtypingError {
                left: t1.clone(),
                right: t2.clone(),
                expr: None,
            }),
        }
    }

    fn check(
        &mut self,
        ctx: &Context,
        expr: &Expr,
        ty: &Type,
    ) -> TypeResult<(Context, InferenceTree)> {
        let input = format!("{} ⊢ {:?} ⇐ {}", ctx, expr, ty);

        match (expr, ty) {
            (Expr::LitInt(_), Type::Int) => Ok((
                ctx.clone(),
                InferenceTree::new("ChkLitInt", &input, &format!("{}", ctx), vec![]),
            )),
            (Expr::LitBool(_), Type::Bool) => Ok((
                ctx.clone(),
                InferenceTree::new("ChkLitBool", &input, &format!("{}", ctx), vec![]),
            )),
            (Expr::Abs(x, _param_ty, body), Type::Arrow(expected_param, result_ty)) => {
                // For simplicity, we assume the parameter type matches
                let mut new_ctx = ctx.clone();
                new_ctx.push(Entry::VarBnd(x.clone(), *expected_param.clone()));
                let (ctx1, tree) = self.check(&new_ctx, body, result_ty)?;
                let (_, _, right) =
                    ctx1.break3(|entry| matches!(entry, Entry::VarBnd(name, _) if name == x));
                let final_ctx = Context(right);
                let output = format!("{}", final_ctx);
                Ok((
                    final_ctx,
                    InferenceTree::new("ChkLam", &input, &output, vec![tree]),
                ))
            }
            (_, Type::Forall(a, ty_body)) => {
                let mut new_ctx = ctx.clone();
                new_ctx.push(Entry::TVarBnd(a.clone()));
                let (ctx1, tree) = self.check(&new_ctx, expr, ty_body)?;
                let (_, _, right) =
                    ctx1.break3(|entry| matches!(entry, Entry::TVarBnd(name) if name == a));
                let final_ctx = Context(right);
                let output = format!("{}", final_ctx);
                Ok((
                    final_ctx,
                    InferenceTree::new("ChkAll", &input, &output, vec![tree]),
                ))
            }
            _ => {
                // Fallback to inference + subtyping
                let (inferred_ty, ctx1, tree1) = self.infer(ctx, expr)?;
                let inferred_applied = self.apply_ctx_type(&ctx1, &inferred_ty);
                let ty_applied = self.apply_ctx_type(&ctx1, ty);
                let (ctx2, tree2) = self.subtype(&ctx1, &inferred_applied, &ty_applied)?;
                let output = format!("{}", ctx2);
                Ok((
                    ctx2,
                    InferenceTree::new("ChkSub", &input, &output, vec![tree1, tree2]),
                ))
            }
        }
    }

    pub fn infer(
        &mut self,
        ctx: &Context,
        expr: &Expr,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let input = format!("{} ⊢ {:?}", ctx, expr);

        match expr {
            Expr::Var(x) => self.infer_var(ctx, x, &input),
            Expr::Ann(expr, ty) => self.infer_ann(ctx, expr, ty, &input),
            Expr::LitInt(n) => self.infer_lit_int(ctx, *n, &input),
            Expr::LitBool(b) => self.infer_lit_bool(ctx, *b, &input),
            Expr::Abs(x, param_ty, body) => self.infer_abs(ctx, x, param_ty, body, &input),
            Expr::App(func, arg) => self.infer_application(ctx, func, arg, &input),
            Expr::TAbs(a, body) => self.infer_tabs(ctx, a, body, &input),
            Expr::TApp(func, ty_arg) => self.infer_tapp(ctx, func, ty_arg, &input),
            Expr::Let(x, e1, e2) => self.infer_let(ctx, x, e1, e2, &input),
            Expr::IfThenElse(e1, e2, e3) => self.infer_if(ctx, e1, e2, e3, &input),
            Expr::BinOp(op, e1, e2) => self.infer_binop(ctx, op, e1, e2, &input),
        }
    }

    /// T-Instr: Type annotation rule
    /// Γ ⊢ e ⇐ A
    /// ──────────────────
    /// Γ ⊢ (e : A) ⇒ A
    fn infer_ann(
        &mut self,
        ctx: &Context,
        expr: &Expr,
        ty: &Type,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let (ctx1, tree) = self.check(ctx, expr, ty)?;
        let output = format!("{} ⇒ {} ⊣ {}", input, ty, ctx1);
        Ok((
            ty.clone(),
            ctx1,
            InferenceTree::new("InfAnn", input, &output, vec![tree]),
        ))
    }

    /// T-ForallI: Type abstraction rule
    /// Γ, α ⊢ e ⇒ A
    /// ──────────────────────
    /// Γ ⊢ Λα. e ⇒ ∀α. A
    fn infer_tabs(
        &mut self,
        ctx: &Context,
        a: &str,
        body: &Expr,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let mut new_ctx = ctx.clone();
        new_ctx.push(Entry::TVarBnd(a.to_string()));
        let (body_ty, ctx1, tree) = self.infer(&new_ctx, body)?;

        // Apply context substitutions to resolve existential variables before removing
        // type binding
        let resolved_body_ty = self.apply_ctx_type(&ctx1, &body_ty);

        let (left, _, right) =
            ctx1.break3(|entry| matches!(entry, Entry::TVarBnd(name) if name == a));
        // Preserve solved existential variable bindings from the left context
        let mut final_ctx_entries = left
            .into_iter()
            .filter(|entry| matches!(entry, Entry::SETVarBnd(_, _)))
            .collect::<Vec<_>>();
        final_ctx_entries.extend(right);
        let final_ctx = Context(final_ctx_entries);
        let result_ty = Type::Forall(a.to_string(), Box::new(resolved_body_ty));
        let output = format!("{} ⇒ {} ⊣ {}", input, result_ty, final_ctx);
        Ok((
            result_ty,
            final_ctx,
            InferenceTree::new("InfTAbs", input, &output, vec![tree]),
        ))
    }

    /// T-ForallE: Type application rule
    /// Γ ⊢ e ⇒ ∀α. A
    /// ──────────────────────
    /// Γ ⊢ e[B] ⇒ [B/α]A
    fn infer_tapp(
        &mut self,
        ctx: &Context,
        func: &Expr,
        ty_arg: &Type,
        input: &str,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let (func_ty, ctx1, tree1) = self.infer(ctx, func)?;
        match func_ty {
            Type::Forall(a, body_ty) => {
                let result_ty = self.subst_type(&a, ty_arg, &body_ty);
                let output = format!("{} ⇒ {} ⊣ {}", input, result_ty, ctx1);
                Ok((
                    result_ty,
                    ctx1,
                    InferenceTree::new("InfTApp", input, &output, vec![tree1]),
                ))
            }
            _ => Err(TypeError::TypeApplicationError {
                actual: func_ty.clone(),
                expr: None,
            }),
        }
    }

    fn infer_app(
        &mut self,
        ctx: &Context,
        func_ty: &Type,
        arg: &Expr,
    ) -> TypeResult<(Type, Context, InferenceTree)> {
        let input = format!("{} ⊢ {:?} • {}", ctx, arg, func_ty);

        match func_ty {
            Type::Arrow(param_ty, result_ty) => {
                let (ctx1, tree) = self.check(ctx, arg, param_ty)?;
                let output = format!("{} ⇒⇒ {} ⊣ {}", input, result_ty, ctx1);
                Ok((
                    result_ty.as_ref().clone(),
                    ctx1,
                    InferenceTree::new("InfAppArr", &input, &output, vec![tree]),
                ))
            }
            Type::ETVar(a) => {
                let a1 = self.fresh_tyvar();
                let a2 = self.fresh_tyvar();
                let (left, _, right) =
                    ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
                let arrow_type = Type::Arrow(
                    Box::new(Type::ETVar(a1.clone())),
                    Box::new(Type::ETVar(a2.clone())),
                );
                let mut new_ctx = left;
                new_ctx.push(Entry::SETVarBnd(a.clone(), arrow_type));
                new_ctx.push(Entry::ETVarBnd(a1.clone()));
                new_ctx.push(Entry::ETVarBnd(a2.clone()));
                new_ctx.extend(right);
                let ctx1 = Context(new_ctx);

                let (ctx2, tree) = self.check(&ctx1, arg, &Type::ETVar(a1))?;
                let output = format!("{} ⇒⇒ ^{} ⊣ {}", input, a2, ctx2);
                Ok((
                    Type::ETVar(a2),
                    ctx2,
                    InferenceTree::new("InfAppETVar", &input, &output, vec![tree]),
                ))
            }
            _ => Err(TypeError::ApplicationTypeError {
                actual: func_ty.clone(),
                expr: None,
            }),
        }
    }
}
pub fn run_bidirectional(expr: &Expr) -> TypeResult<(Type, Context, InferenceTree)> {
    let mut bi = BiDirectional::new();
    let ctx = Context::new();
    let (ty, final_ctx, tree) = bi.infer(&ctx, expr)?;
    // Apply the final context to resolve existential variables
    let resolved_ty = bi.apply_ctx_type(&final_ctx, &ty);
    Ok((resolved_ty, final_ctx, tree))
}
#![allow(unused)]
fn main() {
#[derive(Debug, Clone)]
pub struct Context(Vec<Entry>);
}

Context management requires careful attention to scoping rules. When we enter a polymorphic function or introduce new type variables, we must ensure they are properly cleaned up when we exit their scope. The Context type therefore provides operations for splitting it at a given entry and reconstructing it, which is how bindings are discarded or replaced when a scope ends.
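The split-and-rebuild pattern can be sketched independently of the book's types. The following is a hypothetical miniature — `Entry`, `break3`, and `solve` here are illustrative stand-ins, not the real API — showing how solving an existential variable amounts to splitting the context at its entry and reassembling with a solved entry in its place:

```rust
// Illustrative sketch of ordered-context surgery; `Entry`, `break3`, and
// `solve` are made-up minimal stand-ins, not the book's actual API.
#[derive(Debug, Clone, PartialEq)]
enum Entry {
    TVar(String),           // universal type variable binding
    EVar(String),           // unsolved existential variable
    Solved(String, String), // existential solved to a (printed) type
}

/// Split a context at the first entry matching `pred`.
fn break3(
    ctx: &[Entry],
    pred: impl Fn(&Entry) -> bool,
) -> Option<(Vec<Entry>, Entry, Vec<Entry>)> {
    let i = ctx.iter().position(pred)?;
    Some((ctx[..i].to_vec(), ctx[i].clone(), ctx[i + 1..].to_vec()))
}

/// Solve existential `name` to `ty` in place: split, swap the entry, reassemble.
fn solve(ctx: &[Entry], name: &str, ty: &str) -> Option<Vec<Entry>> {
    let (left, _, right) = break3(ctx, |e| matches!(e, Entry::EVar(n) if n == name))?;
    let mut out = left;
    out.push(Entry::Solved(name.to_string(), ty.to_string()));
    out.extend(right);
    Some(out)
}

fn main() {
    let ctx = vec![
        Entry::TVar("a".into()),
        Entry::EVar("b".into()),
        Entry::TVar("c".into()),
    ];
    println!("{:?}", solve(&ctx, "b", "Int").unwrap());
}
```

Everything to the left and right of the solved entry survives unchanged, which is exactly what the scoping rules demand: solving a variable must not disturb the bindings around it.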

Type Substitution and Application

Type substitution forms the computational heart of our System F implementation. When we instantiate a polymorphic type or solve existential variables, we need to systematically replace type variables with concrete types throughout complex type expressions.

#![allow(unused)]
fn main() {
fn subst_type(&self, var: &TyVar, replacement: &Type, ty: &Type) -> Type {
    match ty {
        Type::Var(name) if name == var => replacement.clone(),
        Type::ETVar(name) if name == var => replacement.clone(),
        Type::Var(_) | Type::ETVar(_) | Type::Int | Type::Bool => ty.clone(),
        Type::Arrow(t1, t2) => Type::Arrow(
            Box::new(self.subst_type(var, replacement, t1)),
            Box::new(self.subst_type(var, replacement, t2)),
        ),
        Type::Forall(bound_var, body) => {
            if bound_var == var {
                ty.clone() // Variable is shadowed
            } else {
                Type::Forall(
                    bound_var.clone(),
                    Box::new(self.subst_type(var, replacement, body)),
                )
            }
        }
    }
}
}

Substitution must handle variable capture correctly. When substituting into a Forall type, we must ensure that the bound variable doesn’t conflict with the replacement type. This is analogous to alpha-conversion in lambda calculus but operates at the type level.

Context application extends substitution by applying the current state of existential variable solutions to a type:

#![allow(unused)]
fn main() {
pub fn apply_ctx_type(&self, ctx: &Context, ty: &Type) -> Type {
    let mut current = ty.clone();
    let mut changed = true;

    // Keep applying substitutions until no more changes occur
    while changed {
        changed = false;
        let new_type = self.apply_ctx_type_once(ctx, &current);
        if new_type != current {
            changed = true;
            current = new_type;
        }
    }

    current
}
}

This operation is essential because our algorithm incrementally builds up solutions to existential variables. As we learn more about unknown types, we need to propagate this information through all the types we’re working with.
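A minimal model of this propagation, with a `HashMap` standing in for the ordered context's solved entries (an assumption for illustration — the real algorithm reads solutions out of `Entry::SETVarBnd`):

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
enum Type {
    ETVar(String),
    Int,
    Arrow(Box<Type>, Box<Type>),
}

// One pass: replace each solved existential by its (possibly still unsolved) solution.
fn apply_once(solutions: &HashMap<String, Type>, ty: &Type) -> Type {
    match ty {
        Type::ETVar(a) => solutions.get(a).cloned().unwrap_or_else(|| ty.clone()),
        Type::Int => Type::Int,
        Type::Arrow(t1, t2) => Type::Arrow(
            Box::new(apply_once(solutions, t1)),
            Box::new(apply_once(solutions, t2)),
        ),
    }
}

// Iterate to a fixed point, as apply_ctx_type does with the real context.
fn apply(solutions: &HashMap<String, Type>, ty: &Type) -> Type {
    let mut current = ty.clone();
    loop {
        let next = apply_once(solutions, &current);
        if next == current {
            return current;
        }
        current = next;
    }
}

fn main() {
    // Chained solutions: ^a1 := ^a2 → Int and ^a2 := Int.
    let mut solutions = HashMap::new();
    solutions.insert(
        "a1".to_string(),
        Type::Arrow(Box::new(Type::ETVar("a2".into())), Box::new(Type::Int)),
    );
    solutions.insert("a2".to_string(), Type::Int);

    // A single pass would leave ^a2 inside; the fixed point fully resolves it.
    let resolved = apply(&solutions, &Type::ETVar("a1".into()));
    assert_eq!(resolved, Type::Arrow(Box::new(Type::Int), Box::new(Type::Int)));
    println!("{:?}", resolved);
}
```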

Bidirectional Algorithm Core

Inference Rules

The inference mode synthesizes types from expressions. Our implementation uses a modular approach where the main inference function delegates to specialized methods for each syntactic form.

#![allow(unused)]
fn main() {
pub fn infer(
    &mut self,
    ctx: &Context,
    expr: &Expr,
) -> TypeResult<(Type, Context, InferenceTree)> {
    let input = format!("{} ⊢ {:?}", ctx, expr);

    match expr {
        Expr::Var(x) => self.infer_var(ctx, x, &input),
        Expr::Ann(expr, ty) => self.infer_ann(ctx, expr, ty, &input),
        Expr::LitInt(n) => self.infer_lit_int(ctx, *n, &input),
        Expr::LitBool(b) => self.infer_lit_bool(ctx, *b, &input),
        Expr::Abs(x, param_ty, body) => self.infer_abs(ctx, x, param_ty, body, &input),
        Expr::App(func, arg) => self.infer_application(ctx, func, arg, &input),
        Expr::TAbs(a, body) => self.infer_tabs(ctx, a, body, &input),
        Expr::TApp(func, ty_arg) => self.infer_tapp(ctx, func, ty_arg, &input),
        Expr::Let(x, e1, e2) => self.infer_let(ctx, x, e1, e2, &input),
        Expr::IfThenElse(e1, e2, e3) => self.infer_if(ctx, e1, e2, e3, &input),
        Expr::BinOp(op, e1, e2) => self.infer_binop(ctx, op, e1, e2, &input),
    }
}
}

The inference function delegates to specialized methods that implement individual typing rules:

Variable Lookup follows the T-Var rule:

\[ \frac{x : A \in \Gamma}{\Gamma \vdash x \Rightarrow A} \text{(T-Var)} \]

#![allow(unused)]
fn main() {
/// T-Var: Variable lookup rule
/// Γ, x:A ⊢ x ⇒ A
fn infer_var(
    &self,
    ctx: &Context,
    x: &str,
    input: &str,
) -> TypeResult<(Type, Context, InferenceTree)> {
    if let Some(Entry::VarBnd(_, ty)) =
        ctx.find(|entry| matches!(entry, Entry::VarBnd(name, _) if name == x))
    {
        let output = format!("{} ⇒ {} ⊣ {}", input, ty, ctx);
        Ok((
            ty.clone(),
            ctx.clone(),
            InferenceTree::new("InfVar", input, &output, vec![]),
        ))
    } else {
        Err(TypeError::UnboundVariable {
            name: x.to_string(),
            expr: None,
        })
    }
}
}

Integer Literals use the T-LitInt rule:

\[ \frac{}{\Gamma \vdash n \Rightarrow \text{Int}} \text{(T-LitInt)} \]

#![allow(unused)]
fn main() {
/// T-LitInt: Integer literal rule
/// ⊢ n ⇒ Int
fn infer_lit_int(
    &self,
    ctx: &Context,
    _n: i64,
    input: &str,
) -> TypeResult<(Type, Context, InferenceTree)> {
    let output = format!("{} ⇒ Int ⊣ {}", input, ctx);
    Ok((
        Type::Int,
        ctx.clone(),
        InferenceTree::new("InfLitInt", input, &output, vec![]),
    ))
}
}

Boolean Literals use the T-LitBool rule:

\[ \frac{}{\Gamma \vdash \text{true} \Rightarrow \text{Bool}} \text{(T-LitBool)} \]

#![allow(unused)]
fn main() {
/// T-LitBool: Boolean literal rule
/// ⊢ b ⇒ Bool
fn infer_lit_bool(
    &self,
    ctx: &Context,
    _b: bool,
    input: &str,
) -> TypeResult<(Type, Context, InferenceTree)> {
    let output = format!("{} ⇒ Bool ⊣ {}", input, ctx);
    Ok((
        Type::Bool,
        ctx.clone(),
        InferenceTree::new("InfLitBool", input, &output, vec![]),
    ))
}
}

Lambda Abstraction implements the T-Abs rule:

\[ \frac{\Gamma, x : A \vdash e \Leftarrow B}{\Gamma \vdash \lambda x{:}A.\, e \Rightarrow A \to B} \text{(T-Abs)} \]

#![allow(unused)]
fn main() {
/// T-Abs: Lambda abstraction rule
/// Γ,x:A ⊢ e ⇐ B
/// ─────────────────────
/// Γ ⊢ λx:A.e ⇒ A → B
fn infer_abs(
    &mut self,
    ctx: &Context,
    x: &str,
    param_ty: &Type,
    body: &Expr,
    input: &str,
) -> TypeResult<(Type, Context, InferenceTree)> {
    let b = self.fresh_tyvar();
    let mut new_ctx = ctx.clone();
    new_ctx.push(Entry::VarBnd(x.to_string(), param_ty.clone()));
    new_ctx.push(Entry::ETVarBnd(b.clone()));

    let (ctx1, tree) = self.check(&new_ctx, body, &Type::ETVar(b.clone()))?;
    let (left, _, right) =
        ctx1.break3(|entry| matches!(entry, Entry::VarBnd(name, _) if name == x));
    // Preserve solved existential variable bindings from the left context
    let mut final_ctx_entries = left
        .into_iter()
        .filter(|entry| matches!(entry, Entry::SETVarBnd(_, _)))
        .collect::<Vec<_>>();
    final_ctx_entries.extend(right);
    let final_ctx = Context(final_ctx_entries);
    let result_ty = Type::Arrow(Box::new(param_ty.clone()), Box::new(Type::ETVar(b)));
    let output = format!("{} ⇒ {} ⊣ {}", input, result_ty, final_ctx);
    Ok((
        result_ty,
        final_ctx,
        InferenceTree::new("InfLam", input, &output, vec![tree]),
    ))
}
}

Function Application uses the T-App rule:

\[ \frac{\Gamma \vdash e_1 \Rightarrow A \to B \quad \Gamma \vdash e_2 \Leftarrow A}{\Gamma \vdash e_1 \, e_2 \Rightarrow B} \text{(T-App)} \]

#![allow(unused)]
fn main() {
/// T-App: Function application rule
/// Γ ⊢ e1 ⇒ A   Γ ⊢ e2 • A ⇒⇒ C
/// ────────────────────────────────
/// Γ ⊢ e1 e2 ⇒ C
fn infer_application(
    &mut self,
    ctx: &Context,
    func: &Expr,
    arg: &Expr,
    input: &str,
) -> TypeResult<(Type, Context, InferenceTree)> {
    let (func_ty, ctx1, tree1) = self.infer(ctx, func)?;
    let func_ty_applied = self.apply_ctx_type(&ctx1, &func_ty);
    let (result_ty, ctx2, tree2) = self.infer_app(&ctx1, &func_ty_applied, arg)?;
    let output = format!("{} ⇒ {} ⊣ {}", input, result_ty, ctx2);
    Ok((
        result_ty,
        ctx2,
        InferenceTree::new("InfApp", input, &output, vec![tree1, tree2]),
    ))
}
}

Let Bindings implement the T-Let rule:

\[ \frac{\Gamma \vdash e_1 \Rightarrow A \quad \Gamma, x : A \vdash e_2 \Rightarrow B}{\Gamma \vdash \text{let } x = e_1 \text{ in } e_2 \Rightarrow B} \text{(T-Let)} \]

#![allow(unused)]
fn main() {
/// T-Let: Let binding rule
/// Γ ⊢ e1 ⇒ A   Γ,x:A ⊢ e2 ⇒ B
/// ─────────────────────────────
/// Γ ⊢ let x = e1 in e2 ⇒ B
fn infer_let(
    &mut self,
    ctx: &Context,
    x: &str,
    e1: &Expr,
    e2: &Expr,
    input: &str,
) -> TypeResult<(Type, Context, InferenceTree)> {
    let (ty1, ctx1, tree1) = self.infer(ctx, e1)?;
    let mut new_ctx = ctx1.clone();
    new_ctx.push(Entry::VarBnd(x.to_string(), ty1));
    let (ty2, ctx2, tree2) = self.infer(&new_ctx, e2)?;
    let (left, _, right) =
        ctx2.break3(|entry| matches!(entry, Entry::VarBnd(name, _) if name == x));
    // Preserve solved existential variable bindings from the left context
    let mut final_ctx_entries = left
        .into_iter()
        .filter(|entry| matches!(entry, Entry::SETVarBnd(_, _)))
        .collect::<Vec<_>>();
    final_ctx_entries.extend(right);
    let final_ctx = Context(final_ctx_entries);
    let output = format!("{} ⇒ {} ⊣ {}", input, ty2, final_ctx);
    Ok((
        ty2,
        final_ctx,
        InferenceTree::new("InfLet", input, &output, vec![tree1, tree2]),
    ))
}
}

Conditional Expressions use the T-If rule:

\[ \frac{\Gamma \vdash e_1 \Leftarrow \text{Bool} \quad \Gamma \vdash e_2 \Rightarrow A \quad \Gamma \vdash e_3 \Leftarrow A}{\Gamma \vdash \text{if } e_1 \text{ then } e_2 \text{ else } e_3 \Rightarrow A} \text{(T-If)} \]

#![allow(unused)]
fn main() {
/// T-If: Conditional rule
/// Γ ⊢ e1 ⇐ Bool   Γ ⊢ e2 ⇒ A   Γ ⊢ e3 ⇒ A
/// ──────────────────────────────────────────
/// Γ ⊢ if e1 then e2 else e3 ⇒ A
fn infer_if(
    &mut self,
    ctx: &Context,
    e1: &Expr,
    e2: &Expr,
    e3: &Expr,
    input: &str,
) -> TypeResult<(Type, Context, InferenceTree)> {
    let (ctx1, tree1) = self.check(ctx, e1, &Type::Bool)?;
    let (ty2, ctx2, tree2) = self.infer(&ctx1, e2)?;
    let (ty3, ctx3, tree3) = self.infer(&ctx2, e3)?;
    // Ensure both branches have the same type
    let (unified_ctx, tree_unify) = self.subtype(&ctx3, &ty2, &ty3)?;
    let output = format!("{} ⇒ {} ⊣ {}", input, ty2, unified_ctx);
    Ok((
        ty2,
        unified_ctx,
        InferenceTree::new(
            "InfIf",
            input,
            &output,
            vec![tree1, tree2, tree3, tree_unify],
        ),
    ))
}
}

Binary Operations implement the T-BinOp rules (T-Arith, T-Bool, T-Cmp, T-Eq):

\[ \frac{\Gamma \vdash e_1 \Leftarrow \text{Int} \quad \Gamma \vdash e_2 \Leftarrow \text{Int}}{\Gamma \vdash e_1 \oplus e_2 \Rightarrow \text{Int}} \text{(T-Arith)} \]

\[ \frac{\Gamma \vdash e_1 \Leftarrow \text{Bool} \quad \Gamma \vdash e_2 \Leftarrow \text{Bool}}{\Gamma \vdash e_1 \land e_2 \Rightarrow \text{Bool}} \text{(T-Bool)} \]

\[ \frac{\Gamma \vdash e_1 \Leftarrow \text{Int} \quad \Gamma \vdash e_2 \Leftarrow \text{Int}}{\Gamma \vdash e_1 < e_2 \Rightarrow \text{Bool}} \text{(T-Cmp)} \]

\[ \frac{\Gamma \vdash e_1 \Rightarrow A \quad \Gamma \vdash e_2 \Leftarrow A}{\Gamma \vdash e_1 = e_2 \Rightarrow \text{Bool}} \text{(T-Eq)} \]

#![allow(unused)]
fn main() {
/// T-BinOp: Binary operation rules
fn infer_binop(
    &mut self,
    ctx: &Context,
    op: &BinOp,
    e1: &Expr,
    e2: &Expr,
    input: &str,
) -> TypeResult<(Type, Context, InferenceTree)> {
    match op {
        // T-Arith: Int → Int → Int
        BinOp::Add | BinOp::Sub | BinOp::Mul | BinOp::Div => {
            let (ctx1, tree1) = self.check(ctx, e1, &Type::Int)?;
            let (ctx2, tree2) = self.check(&ctx1, e2, &Type::Int)?;
            let output = format!("{} ⇒ Int ⊣ {}", input, ctx2);
            Ok((
                Type::Int,
                ctx2,
                InferenceTree::new("InfArith", input, &output, vec![tree1, tree2]),
            ))
        }
        // T-Bool: Bool → Bool → Bool
        BinOp::And | BinOp::Or => {
            let (ctx1, tree1) = self.check(ctx, e1, &Type::Bool)?;
            let (ctx2, tree2) = self.check(&ctx1, e2, &Type::Bool)?;
            let output = format!("{} ⇒ Bool ⊣ {}", input, ctx2);
            Ok((
                Type::Bool,
                ctx2,
                InferenceTree::new("InfBool", input, &output, vec![tree1, tree2]),
            ))
        }
        // T-Cmp: Int → Int → Bool
        BinOp::Lt | BinOp::Le | BinOp::Gt | BinOp::Ge => {
            let (ctx1, tree1) = self.check(ctx, e1, &Type::Int)?;
            let (ctx2, tree2) = self.check(&ctx1, e2, &Type::Int)?;
            let output = format!("{} ⇒ Bool ⊣ {}", input, ctx2);
            Ok((
                Type::Bool,
                ctx2,
                InferenceTree::new("InfCmp", input, &output, vec![tree1, tree2]),
            ))
        }
        // T-Eq: ∀α. α → α → Bool
        BinOp::Eq | BinOp::Ne => {
            let (ty1, ctx1, tree1) = self.infer(ctx, e1)?;
            let (ctx2, tree2) = self.check(&ctx1, e2, &ty1)?;
            let output = format!("{} ⇒ Bool ⊣ {}", input, ctx2);
            Ok((
                Type::Bool,
                ctx2,
                InferenceTree::new("InfEq", input, &output, vec![tree1, tree2]),
            ))
        }
    }
}
}

Type Annotations use the T-Instr rule:

\[ \frac{\Gamma \vdash e \Leftarrow A}{\Gamma \vdash (e : A) \Rightarrow A} \text{(T-Instr)} \]

#![allow(unused)]
fn main() {
/// T-Instr: Type annotation rule
/// Γ ⊢ e ⇐ A
/// ──────────────────
/// Γ ⊢ (e : A) ⇒ A
fn infer_ann(
    &mut self,
    ctx: &Context,
    expr: &Expr,
    ty: &Type,
    input: &str,
) -> TypeResult<(Type, Context, InferenceTree)> {
    let (ctx1, tree) = self.check(ctx, expr, ty)?;
    let output = format!("{} ⇒ {} ⊣ {}", input, ty, ctx1);
    Ok((
        ty.clone(),
        ctx1,
        InferenceTree::new("InfAnn", input, &output, vec![tree]),
    ))
}
}

Type Abstraction implements the T-TAbs rule:

\[ \frac{\Gamma, \alpha \vdash e \Rightarrow A}{\Gamma \vdash \Lambda \alpha. e \Rightarrow \forall \alpha. A} \text{(T-TAbs)} \]

#![allow(unused)]
fn main() {
/// T-ForallI: Type abstraction rule
/// Γ, α ⊢ e ⇒ A
/// ──────────────────────
/// Γ ⊢ Λα. e ⇒ ∀α. A
fn infer_tabs(
    &mut self,
    ctx: &Context,
    a: &str,
    body: &Expr,
    input: &str,
) -> TypeResult<(Type, Context, InferenceTree)> {
    let mut new_ctx = ctx.clone();
    new_ctx.push(Entry::TVarBnd(a.to_string()));
    let (body_ty, ctx1, tree) = self.infer(&new_ctx, body)?;

    // Apply context substitutions to resolve existential variables before removing
    // type binding
    let resolved_body_ty = self.apply_ctx_type(&ctx1, &body_ty);

    let (left, _, right) =
        ctx1.break3(|entry| matches!(entry, Entry::TVarBnd(name) if name == a));
    // Preserve solved existential variable bindings from the left context
    let mut final_ctx_entries = left
        .into_iter()
        .filter(|entry| matches!(entry, Entry::SETVarBnd(_, _)))
        .collect::<Vec<_>>();
    final_ctx_entries.extend(right);
    let final_ctx = Context(final_ctx_entries);
    let result_ty = Type::Forall(a.to_string(), Box::new(resolved_body_ty));
    let output = format!("{} ⇒ {} ⊣ {}", input, result_ty, final_ctx);
    Ok((
        result_ty,
        final_ctx,
        InferenceTree::new("InfTAbs", input, &output, vec![tree]),
    ))
}
}

Type Application uses the T-TApp rule:

\[ \frac{\Gamma \vdash e \Rightarrow \forall \alpha. A}{\Gamma \vdash e[B] \Rightarrow [B/\alpha]A} \text{(T-TApp)} \]

#![allow(unused)]
fn main() {
/// T-ForallE: Type application rule
/// Γ ⊢ e ⇒ ∀α. A
/// ──────────────────────
/// Γ ⊢ e[B] ⇒ [B/α]A
fn infer_tapp(
    &mut self,
    ctx: &Context,
    func: &Expr,
    ty_arg: &Type,
    input: &str,
) -> TypeResult<(Type, Context, InferenceTree)> {
    let (func_ty, ctx1, tree1) = self.infer(ctx, func)?;
    match func_ty {
        Type::Forall(a, body_ty) => {
            let result_ty = self.subst_type(&a, ty_arg, &body_ty);
            let output = format!("{} ⇒ {} ⊣ {}", input, result_ty, ctx1);
            Ok((
                result_ty,
                ctx1,
                InferenceTree::new("InfTApp", input, &output, vec![tree1]),
            ))
        }
        _ => Err(TypeError::TypeApplicationError {
            actual: func_ty.clone(),
            expr: None,
        }),
    }
}
}

Each method includes comments linking it to the formal typing rule it implements, making the correspondence between theory and implementation explicit.

Checking Rules

The checking mode verifies that expressions conform to expected types. This is where the algorithm can make progress even when types aren’t fully determined.

#![allow(unused)]
fn main() {
fn check(
    &mut self,
    ctx: &Context,
    expr: &Expr,
    ty: &Type,
) -> TypeResult<(Context, InferenceTree)> {
    let input = format!("{} ⊢ {:?} ⇐ {}", ctx, expr, ty);

    match (expr, ty) {
        (Expr::LitInt(_), Type::Int) => Ok((
            ctx.clone(),
            InferenceTree::new("ChkLitInt", &input, &format!("{}", ctx), vec![]),
        )),
        (Expr::LitBool(_), Type::Bool) => Ok((
            ctx.clone(),
            InferenceTree::new("ChkLitBool", &input, &format!("{}", ctx), vec![]),
        )),
        (Expr::Abs(x, _param_ty, body), Type::Arrow(expected_param, result_ty)) => {
            // For simplicity, we assume the parameter type matches
            let mut new_ctx = ctx.clone();
            new_ctx.push(Entry::VarBnd(x.clone(), *expected_param.clone()));
            let (ctx1, tree) = self.check(&new_ctx, body, result_ty)?;
            let (_, _, right) =
                ctx1.break3(|entry| matches!(entry, Entry::VarBnd(name, _) if name == x));
            let final_ctx = Context(right);
            let output = format!("{}", final_ctx);
            Ok((
                final_ctx,
                InferenceTree::new("ChkLam", &input, &output, vec![tree]),
            ))
        }
        (_, Type::Forall(a, ty_body)) => {
            let mut new_ctx = ctx.clone();
            new_ctx.push(Entry::TVarBnd(a.clone()));
            let (ctx1, tree) = self.check(&new_ctx, expr, ty_body)?;
            let (_, _, right) =
                ctx1.break3(|entry| matches!(entry, Entry::TVarBnd(name) if name == a));
            let final_ctx = Context(right);
            let output = format!("{}", final_ctx);
            Ok((
                final_ctx,
                InferenceTree::new("ChkAll", &input, &output, vec![tree]),
            ))
        }
        _ => {
            // Fallback to inference + subtyping
            let (inferred_ty, ctx1, tree1) = self.infer(ctx, expr)?;
            let inferred_applied = self.apply_ctx_type(&ctx1, &inferred_ty);
            let ty_applied = self.apply_ctx_type(&ctx1, ty);
            let (ctx2, tree2) = self.subtype(&ctx1, &inferred_applied, &ty_applied)?;
            let output = format!("{}", ctx2);
            Ok((
                ctx2,
                InferenceTree::new("ChkSub", &input, &output, vec![tree1, tree2]),
            ))
        }
    }
}
}

Checking mode provides specialized handling that takes advantage of known type information to guide the inference process more efficiently. For lambda expressions, when checking \( \lambda x:\tau_1. e \) against type \( \tau_1 \to \tau_2 \), the algorithm can immediately verify that the parameter types match and then recursively check the body \( e \) against the expected return type \( \tau_2 \). This direct decomposition avoids the need to synthesize types and then verify compatibility.

When checking against universal types \( \forall\alpha. \tau \), the algorithm introduces the type variable \( \alpha \) into the context and then checks the expression against the instantiated type \( \tau \). This approach ensures that the universal quantification is handled correctly while maintaining the scoping discipline required for sound type checking. When direct checking strategies are not applicable to the current expression and expected type combination, the algorithm falls back to a synthesis-plus-subtyping approach, where it first synthesizes a type for the expression and then verifies that this synthesized type is a subtype of the expected type.
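The mode interplay can be seen in miniature. Below is a toy fragment with only literals and annotations, where the subtyping fallback degenerates to plain type equality — a sketch of the shape of the algorithm, not the real checker:

```rust
#[derive(Debug, Clone, PartialEq)]
enum Type {
    Int,
    Bool,
}

#[derive(Debug)]
enum Expr {
    LitInt(i64),
    LitBool(bool),
    Ann(Box<Expr>, Type),
}

// Synthesis: the expression tells us its type.
fn infer(e: &Expr) -> Result<Type, String> {
    match e {
        Expr::LitInt(_) => Ok(Type::Int),
        Expr::LitBool(_) => Ok(Type::Bool),
        // Annotations switch to checking mode, then report the annotated type.
        Expr::Ann(inner, ty) => {
            check(inner, ty)?;
            Ok(ty.clone())
        }
    }
}

// Checking: fall back to inference plus a compatibility test
// (equality here, subtyping in the real algorithm).
fn check(e: &Expr, expected: &Type) -> Result<(), String> {
    let got = infer(e)?;
    if &got == expected {
        Ok(())
    } else {
        Err(format!("expected {:?}, got {:?}", expected, got))
    }
}

fn main() {
    assert!(check(&Expr::LitInt(3), &Type::Int).is_ok());
    assert!(check(&Expr::Ann(Box::new(Expr::LitBool(true)), Type::Bool), &Type::Bool).is_ok());
    assert!(check(&Expr::LitInt(3), &Type::Bool).is_err());
    println!("ok");
}
```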

Subtyping and Instantiation

Subtyping Relations

System F includes a subtyping system that handles the relationships between polymorphic types. The key insight is that ∀α. τ is more general than any specific instantiation of τ.

#![allow(unused)]
fn main() {
fn subtype(
    &mut self,
    ctx: &Context,
    t1: &Type,
    t2: &Type,
) -> TypeResult<(Context, InferenceTree)> {
    let input = format!("{} ⊢ {} <: {}", ctx, t1, t2);

    match (t1, t2) {
        (Type::Int, Type::Int) | (Type::Bool, Type::Bool) => Ok((
            ctx.clone(),
            InferenceTree::new("SubRefl", &input, &format!("{}", ctx), vec![]),
        )),
        (Type::Var(a), Type::Var(b)) if a == b => Ok((
            ctx.clone(),
            InferenceTree::new("SubReflTVar", &input, &format!("{}", ctx), vec![]),
        )),
        (Type::ETVar(a), Type::ETVar(b)) if a == b => Ok((
            ctx.clone(),
            InferenceTree::new("SubReflETVar", &input, &format!("{}", ctx), vec![]),
        )),
        (Type::Arrow(a1, a2), Type::Arrow(b1, b2)) => {
            let (ctx1, tree1) = self.subtype(ctx, b1, a1)?; // Contravariant in argument
            let (ctx2, tree2) = self.subtype(&ctx1, a2, b2)?; // Covariant in result
            let output = format!("{}", ctx2);
            Ok((
                ctx2,
                InferenceTree::new("SubArr", &input, &output, vec![tree1, tree2]),
            ))
        }
        (_, Type::Forall(b, t2_body)) => {
            let mut new_ctx = ctx.clone();
            new_ctx.push(Entry::TVarBnd(b.clone()));
            let (ctx1, tree) = self.subtype(&new_ctx, t1, t2_body)?;
            let (_, _, right) =
                ctx1.break3(|entry| matches!(entry, Entry::TVarBnd(name) if name == b));
            let final_ctx = Context(right);
            let output = format!("{}", final_ctx);
            Ok((
                final_ctx,
                InferenceTree::new("SubAllR", &input, &output, vec![tree]),
            ))
        }
        (Type::Forall(a, t1_body), _) => {
            let subst_t1 = self.subst_type(a, &Type::ETVar(a.clone()), t1_body);
            let mut new_ctx = ctx.clone();
            new_ctx.push(Entry::ETVarBnd(a.clone()));
            new_ctx.push(Entry::Mark(a.clone()));
            let (ctx1, tree) = self.subtype(&new_ctx, &subst_t1, t2)?;
            let (_, _, right) =
                ctx1.break3(|entry| matches!(entry, Entry::Mark(name) if name == a));
            let final_ctx = Context(right);
            let output = format!("{}", final_ctx);
            Ok((
                final_ctx,
                InferenceTree::new("SubAllL", &input, &output, vec![tree]),
            ))
        }
        (Type::ETVar(a), _) if !self.free_vars(t2).contains(a) => {
            let (ctx1, tree) = self.inst_l(ctx, a, t2)?;
            let output = format!("{}", ctx1);
            Ok((
                ctx1,
                InferenceTree::new("SubInstL", &input, &output, vec![tree]),
            ))
        }
        (_, Type::ETVar(a)) if !self.free_vars(t1).contains(a) => {
            let (ctx1, tree) = self.inst_r(ctx, t1, a)?;
            let output = format!("{}", ctx1);
            Ok((
                ctx1,
                InferenceTree::new("SubInstR", &input, &output, vec![tree]),
            ))
        }
        _ => Err(TypeError::SubtypingError {
            left: t1.clone(),
            right: t2.clone(),
            expr: None,
        }),
    }
}
}

The subtyping rules capture several essential relationships that govern type compatibility in System F. Function types exhibit the classic contravariant-covariant pattern, where a function that accepts more general arguments and returns more specific results is considered a subtype of a function with more specific argument requirements and more general return types. This means that a function of type \( A_1 \to B_1 \) is a subtype of \( A_2 \to B_2 \) when \( A_2 \leq A_1 \) (contravariant in arguments) and \( B_1 \leq B_2 \) (covariant in results).

Universal quantification follows the principle that \( \forall\alpha. \tau_1 \leq \tau_2 \) holds if \( \tau_1 \leq \tau_2 \) when \( \alpha \) is instantiated with a fresh existential variable, effectively allowing polymorphic types to be related through their instantiations. When subtyping involves existential variables, the algorithm must solve constraints about what these variables should be instantiated to in order to satisfy the subtyping relationship, often leading to the generation of additional constraints that propagate through the system.
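To make the variance flip concrete, here is a toy structural checker with an invented base rule Nat <: Int (Nat is not part of our language; it exists purely to give the base types a nontrivial ordering):

```rust
#[derive(Debug, Clone, PartialEq)]
enum Type {
    Nat,
    Int,
    Arrow(Box<Type>, Box<Type>),
}

fn subtype(t1: &Type, t2: &Type) -> bool {
    match (t1, t2) {
        (Type::Nat, Type::Int) => true, // illustrative base rule
        (a, b) if a == b => true,       // reflexivity
        (Type::Arrow(a1, a2), Type::Arrow(b1, b2)) => {
            // Contravariant in the argument, covariant in the result.
            subtype(b1, a1) && subtype(a2, b2)
        }
        _ => false,
    }
}

fn arrow(a: Type, b: Type) -> Type {
    Type::Arrow(Box::new(a), Box::new(b))
}

fn main() {
    // Int → Nat <: Nat → Int: accepts more (any Int), returns less (always a Nat).
    assert!(subtype(&arrow(Type::Int, Type::Nat), &arrow(Type::Nat, Type::Int)));
    // The reverse fails: a Nat → Int function can't stand in for an Int → Nat one.
    assert!(!subtype(&arrow(Type::Nat, Type::Int), &arrow(Type::Int, Type::Nat)));
    println!("ok");
}
```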

Variable Instantiation

The instantiation judgments handle the core complexity of polymorphic type checking. When we have constraints like ^α ≤ τ (an existential variable should be at most as general as some type), we need to find appropriate instantiations.

#![allow(unused)]
fn main() {
fn inst_l(
    &mut self,
    ctx: &Context,
    a: &TyVar,
    ty: &Type,
) -> TypeResult<(Context, InferenceTree)> {
    let input = format!("{} ⊢ ^{} :=< {}", ctx, a, ty);

    match ty {
        Type::ETVar(b) if self.before(ctx, a, b) => {
            let (left, _, right) =
                ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == b));
            let new_ctx = Context::from_parts(
                left,
                Entry::SETVarBnd(b.clone(), Type::ETVar(a.clone())),
                right,
            );
            let output = format!("{}", new_ctx);
            Ok((
                new_ctx,
                InferenceTree::new("InstLReach", &input, &output, vec![]),
            ))
        }
        Type::Arrow(t1, t2) => {
            let a1 = self.fresh_tyvar();
            let a2 = self.fresh_tyvar();
            let (left, _, right) =
                ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
            let arrow_type = Type::Arrow(
                Box::new(Type::ETVar(a1.clone())),
                Box::new(Type::ETVar(a2.clone())),
            );
            let mut new_ctx = left;
            new_ctx.push(Entry::SETVarBnd(a.clone(), arrow_type));
            new_ctx.push(Entry::ETVarBnd(a1.clone()));
            new_ctx.push(Entry::ETVarBnd(a2.clone()));
            new_ctx.extend(right);
            let ctx1 = Context(new_ctx);

            let (ctx2, tree1) = self.inst_r(&ctx1, t1, &a1)?;
            let t2_applied = self.apply_ctx_type(&ctx2, t2);
            let (ctx3, tree2) = self.inst_l(&ctx2, &a2, &t2_applied)?;

            let output = format!("{}", ctx3);
            Ok((
                ctx3,
                InferenceTree::new("InstLArr", &input, &output, vec![tree1, tree2]),
            ))
        }
        Type::Forall(b, t) => {
            let mut new_ctx = ctx.clone();
            new_ctx.push(Entry::TVarBnd(b.clone()));
            let (ctx1, tree) = self.inst_l(&new_ctx, a, t)?;
            let (_, _, right) =
                ctx1.break3(|entry| matches!(entry, Entry::TVarBnd(name) if name == b));
            let final_ctx = Context(right);
            let output = format!("{}", final_ctx);
            Ok((
                final_ctx,
                InferenceTree::new("InstLAllR", &input, &output, vec![tree]),
            ))
        }
        _ if self.is_mono(ty) => {
            // Occurs check: ensure ^α doesn't occur in τ to prevent infinite types
            if self.occurs_check(a, ty) {
                return Err(TypeError::OccursCheck {
                    var: a.clone(),
                    ty: ty.clone(),
                    expr: None,
                });
            }
            let (left, _, right) =
                ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
            let new_ctx =
                Context::from_parts(left, Entry::SETVarBnd(a.clone(), ty.clone()), right);
            let output = format!("{}", new_ctx);
            Ok((
                new_ctx,
                InferenceTree::new("InstLSolve", &input, &output, vec![]),
            ))
        }
        _ => Err(TypeError::InstantiationError {
            var: a.clone(),
            ty: ty.clone(),
            expr: None,
        }),
    }
}
}

Left instantiation (inst_l) handles cases where the existential variable is on the left side of a constraint. This typically means we’re looking for the most general type that satisfies the constraint.

#![allow(unused)]
fn main() {
fn inst_r(
    &mut self,
    ctx: &Context,
    ty: &Type,
    a: &TyVar,
) -> TypeResult<(Context, InferenceTree)> {
    let input = format!("{} ⊢ {} :=< ^{}", ctx, ty, a);

    match ty {
        Type::ETVar(b) if self.before(ctx, a, b) => {
            let (left, _, right) =
                ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == b));
            let new_ctx = Context::from_parts(
                left,
                Entry::SETVarBnd(b.clone(), Type::ETVar(a.clone())),
                right,
            );
            let output = format!("{}", new_ctx);
            Ok((
                new_ctx,
                InferenceTree::new("InstRReach", &input, &output, vec![]),
            ))
        }
        Type::Arrow(t1, t2) => {
            let a1 = self.fresh_tyvar();
            let a2 = self.fresh_tyvar();
            let (left, _, right) =
                ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
            let arrow_type = Type::Arrow(
                Box::new(Type::ETVar(a1.clone())),
                Box::new(Type::ETVar(a2.clone())),
            );
            let mut new_ctx = left;
            new_ctx.push(Entry::SETVarBnd(a.clone(), arrow_type));
            new_ctx.push(Entry::ETVarBnd(a1.clone()));
            new_ctx.push(Entry::ETVarBnd(a2.clone()));
            new_ctx.extend(right);
            let ctx1 = Context(new_ctx);

            let (ctx2, tree1) = self.inst_l(&ctx1, &a1, t1)?;
            let t2_applied = self.apply_ctx_type(&ctx2, t2);
            let (ctx3, tree2) = self.inst_r(&ctx2, &t2_applied, &a2)?;

            let output = format!("{}", ctx3);
            Ok((
                ctx3,
                InferenceTree::new("InstRArr", &input, &output, vec![tree1, tree2]),
            ))
        }
        Type::Forall(b, t) => {
            let subst_t = self.subst_type(b, &Type::ETVar(b.clone()), t);
            let mut new_ctx = ctx.clone();
            new_ctx.push(Entry::ETVarBnd(b.clone()));
            new_ctx.push(Entry::Mark(b.clone()));
            let (ctx1, tree) = self.inst_r(&new_ctx, &subst_t, a)?;
            let (_, _, right) =
                ctx1.break3(|entry| matches!(entry, Entry::Mark(name) if name == b));
            let final_ctx = Context(right);
            let output = format!("{}", final_ctx);
            Ok((
                final_ctx,
                InferenceTree::new("InstRAllL", &input, &output, vec![tree]),
            ))
        }
        _ if self.is_mono(ty) => {
            // Occurs check: ensure ^α doesn't occur in τ to prevent infinite types
            if self.occurs_check(a, ty) {
                return Err(TypeError::OccursCheck {
                    var: a.clone(),
                    ty: ty.clone(),
                    expr: None,
                });
            }
            let (left, _, right) =
                ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
            let new_ctx =
                Context::from_parts(left, Entry::SETVarBnd(a.clone(), ty.clone()), right);
            let output = format!("{}", new_ctx);
            Ok((
                new_ctx,
                InferenceTree::new("InstRSolve", &input, &output, vec![]),
            ))
        }
        _ => Err(TypeError::InstantiationError {
            var: a.clone(),
            ty: ty.clone(),
            expr: None,
        }),
    }
}
}

Right instantiation (inst_r) handles the dual case where the existential variable is on the right. This typically means we’re looking for the most specific type that satisfies the constraint.

The instantiation algorithms include careful handling of several complex scenarios that arise during constraint solving. The reach relationship occurs when two existential variables are constrained against each other, requiring the algorithm to determine which variable should be solved in terms of the other while maintaining the proper ordering constraints. Arrow type instantiation requires breaking function types apart into their component argument and return types, creating separate instantiation constraints for each component that must be solved consistently.

The interaction between instantiation and universal quantification presents particular challenges, as the algorithm must ensure that polymorphic types are instantiated correctly while preserving the scoping discipline that prevents type variables from escaping their intended scope. These cases require careful context management so that variable ordering and previously recorded solutions are preserved throughout the solving process.

Occurs Check

A critical component of sound type inference is the occurs check, which prevents infinite types from arising during unification. When solving constraints like ^α := τ, we must ensure that α does not occur within τ, as this would create a cyclic type.

#![allow(unused)]
fn main() {
/// Check if a type variable occurs in a type (occurs check for unification)
/// This prevents creating infinite types when solving ^α := τ by ensuring α
/// ∉ τ
fn occurs_check(&self, var: &TyVar, ty: &Type) -> bool {
    match ty {
        Type::Var(name) | Type::ETVar(name) => name == var,
        Type::Arrow(t1, t2) => self.occurs_check(var, t1) || self.occurs_check(var, t2),
        Type::Forall(bound_var, body) => {
            // If the variable is shadowed by the forall binding, it doesn't occur
            if bound_var == var {
                false
            } else {
                self.occurs_check(var, body)
            }
        }
        Type::Int | Type::Bool => false,
    }
}
}

The occurs check is applied during the InstLSolve and InstRSolve cases of instantiation. Without this check, the type system could accept programs that would lead to infinite types, violating the decidability of type checking.

Application Inference

Function application in System F requires careful handling because the function type might not be immediately apparent. The infer_app judgment implements the T-AppArrow and T-AppEVar rules defined earlier:

#![allow(unused)]
fn main() {
fn infer_app(
    &mut self,
    ctx: &Context,
    func_ty: &Type,
    arg: &Expr,
) -> TypeResult<(Type, Context, InferenceTree)> {
    let input = format!("{} ⊢ {:?} • {}", ctx, arg, func_ty);

    match func_ty {
        Type::Arrow(param_ty, result_ty) => {
            let (ctx1, tree) = self.check(ctx, arg, param_ty)?;
            let output = format!("{} ⇒⇒ {} ⊣ {}", input, result_ty, ctx1);
            Ok((
                result_ty.as_ref().clone(),
                ctx1,
                InferenceTree::new("InfAppArr", &input, &output, vec![tree]),
            ))
        }
        Type::ETVar(a) => {
            let a1 = self.fresh_tyvar();
            let a2 = self.fresh_tyvar();
            let (left, _, right) =
                ctx.break3(|entry| matches!(entry, Entry::ETVarBnd(name) if name == a));
            let arrow_type = Type::Arrow(
                Box::new(Type::ETVar(a1.clone())),
                Box::new(Type::ETVar(a2.clone())),
            );
            let mut new_ctx = left;
            new_ctx.push(Entry::SETVarBnd(a.clone(), arrow_type));
            new_ctx.push(Entry::ETVarBnd(a1.clone()));
            new_ctx.push(Entry::ETVarBnd(a2.clone()));
            new_ctx.extend(right);
            let ctx1 = Context(new_ctx);

            let (ctx2, tree) = self.check(&ctx1, arg, &Type::ETVar(a1))?;
            let output = format!("{} ⇒⇒ ^{} ⊣ {}", input, a2, ctx2);
            Ok((
                Type::ETVar(a2),
                ctx2,
                InferenceTree::new("InfAppETVar", &input, &output, vec![tree]),
            ))
        }
        _ => Err(TypeError::ApplicationTypeError {
            actual: func_ty.clone(),
            expr: None,
        }),
    }
}
}

The implementation handles two core application scenarios through distinct inference rules. The T-AppArrow rule applies when the function has a known arrow type \( A \to B \), allowing the algorithm to check the argument against \( A \) and return \( B \) as the result type. This straightforward case corresponds to the Type::Arrow pattern in the implementation and represents the standard function application scenario.

The T-AppEVar rule handles the more complex case where the function type is an existential variable \( \hat{\alpha} \). In this situation, the algorithm instantiates the existential variable as \( \hat{\alpha_1} \to \hat{\alpha_2} \) with fresh existential variables, then checks the argument against \( \hat{\alpha_1} \) and returns \( \hat{\alpha_2} \) as the result type. This corresponds to the Type::ETVar case and enables type inference even when the function type is initially unknown.

When the function has a polymorphic Forall type, the instantiation is handled through the subtyping mechanism using the SubAllL rule rather than directly in application inference. This design choice ensures soundness by routing polymorphic instantiation through the well-established subtyping infrastructure and follows the standard bidirectional algorithm design patterns.

Error Handling

Our implementation provides comprehensive error reporting that distinguishes between parse errors and type errors. Parse errors use source-located reporting with ariadne, while type errors provide contextual information about the expressions being typed.

#![allow(unused)]
fn main() {
use ariadne::{Color, Label, Report, ReportKind};
use thiserror::Error;
use crate::ast::Type;
/// Source span for error reporting
#[derive(Debug, Clone, PartialEq)]
pub struct Span {
    pub start: usize,
    pub end: usize,
}
impl Span {
    pub fn new(start: usize, end: usize) -> Self {
        Self { start, end }
    }

    pub fn point(pos: usize) -> Self {
        Self {
            start: pos,
            end: pos + 1,
        }
    }
}
impl TypeError {}
impl std::fmt::Display for TypeError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            TypeError::UnboundVariable { name, expr } => {
                write!(f, "Variable '{}' not found in context", name)?;
                if let Some(expr) = expr {
                    write!(f, "\n  When typing expression: {}", expr)?;
                }
                Ok(())
            }
            TypeError::ApplicationTypeError { actual, expr } => {
                write!(f, "Expected function type in application, got {}", actual)?;
                if let Some(expr) = expr {
                    write!(f, "\n  When typing expression: {}", expr)?;
                }
                Ok(())
            }
            TypeError::TypeApplicationError { actual, expr } => {
                write!(
                    f,
                    "Expected forall type in type application, got {}",
                    actual
                )?;
                if let Some(expr) = expr {
                    write!(f, "\n  When typing expression: {}", expr)?;
                }
                Ok(())
            }
            TypeError::OccursCheck { var, ty, expr } => {
                write!(
                    f,
                    "Occurs check failed: variable '{}' occurs in type {}",
                    var, ty
                )?;
                if let Some(expr) = expr {
                    write!(f, "\n  When typing expression: {}", expr)?;
                }
                Ok(())
            }
            TypeError::SubtypingError { left, right, expr } => {
                write!(f, "No matching rule for subtyping {} <: {}", left, right)?;
                if let Some(expr) = expr {
                    write!(f, "\n  When typing expression: {}", expr)?;
                }
                Ok(())
            }
            TypeError::InstantiationError { var, ty, expr } => {
                write!(f, "No matching rule for instantiation {} :=< {}", var, ty)?;
                if let Some(expr) = expr {
                    write!(f, "\n  When typing expression: {}", expr)?;
                }
                Ok(())
            }
            TypeError::CheckingError { expected, expr } => {
                write!(f, "No matching rule for checking against type {}", expected)?;
                if let Some(expr) = expr {
                    write!(f, "\n  When typing expression: {}", expr)?;
                }
                Ok(())
            }
            TypeError::InferenceError { expr } => {
                write!(f, "No matching rule for inference")?;
                if let Some(expr) = expr {
                    write!(f, "\n  When typing expression: {}", expr)?;
                }
                Ok(())
            }
            TypeError::ContextError { message, expr } => {
                write!(f, "Context error: {}", message)?;
                if let Some(expr) = expr {
                    write!(f, "\n  When typing expression: {}", expr)?;
                }
                Ok(())
            }
        }
    }
}
#[derive(Error, Debug)]
#[allow(dead_code)]
pub enum ParseError {
    #[error("Parse error: {message}")]
    LalrpopError { message: String, span: Span },
}
impl ParseError {
    pub fn to_ariadne_report<'a>(
        &self,
        filename: &'a str,
    ) -> Report<'a, (&'a str, std::ops::Range<usize>)> {
        match self {
            ParseError::LalrpopError { message, span } => {
                Report::build(ReportKind::Error, filename, span.start)
                    .with_message("Parse Error")
                    .with_label(
                        Label::new((filename, span.start..span.end))
                            .with_message(message)
                            .with_color(Color::Red),
                    )
                    .finish()
            }
        }
    }
}
pub type TypeResult<T> = Result<T, TypeError>;
#[allow(dead_code)]
pub type ParseResult<T> = Result<T, ParseError>;
#[derive(Debug)]
#[allow(dead_code, clippy::enum_variant_names)]
pub enum TypeError {
    UnboundVariable {
        name: String,
        expr: Option<String>,
    },

    ApplicationTypeError {
        actual: Type,
        expr: Option<String>,
    },

    TypeApplicationError {
        actual: Type,
        expr: Option<String>,
    },

    OccursCheck {
        var: String,
        ty: Type,
        expr: Option<String>,
    },

    SubtypingError {
        left: Type,
        right: Type,
        expr: Option<String>,
    },

    InstantiationError {
        var: String,
        ty: Type,
        expr: Option<String>,
    },

    CheckingError {
        expected: Type,
        expr: Option<String>,
    },

    InferenceError {
        expr: Option<String>,
    },

    ContextError {
        message: String,
        expr: Option<String>,
    },
}
}

The type error system includes expression context to help developers understand where failures occur during type checking. Each error variant includes an expr field that stores the expression being typed when the error occurred, providing valuable debugging information.

Parse errors receive enhanced treatment with source location information: the to_ariadne_report method shown above converts a ParseError into an ariadne report that highlights the offending span in the source file.

This dual approach ensures that syntax errors receive precise source location feedback while type errors focus on logical relationships between expressions and types.

End-to-end

To demonstrate the complete System F implementation with all the modular typing rules working together, consider this lambda expression that doubles its integer argument:

$ cargo run -- check "\x : Int -> x + x"

The system produces the following output showing the complete inference process:

Parsed expression: Abs("x", Int, BinOp(Add, Var("x"), Var("x")))

Type checking successful!
Final type: Int -> Int

InfLam:  ⊢ Abs("x", Int, BinOp(Add, Var("x"), Var("x"))) =>  ⊢ Abs("x", Int, BinOp(Add, Var("x"), Var("x"))) ⇒ Int -> ^α0 ⊣ ^α0 = Int
  ChkSub: ^α0, x: Int ⊢ BinOp(Add, Var("x"), Var("x")) ⇐ ^α0 => ^α0 = Int, x: Int
    InfArith: ^α0, x: Int ⊢ BinOp(Add, Var("x"), Var("x")) => ^α0, x: Int ⊢ BinOp(Add, Var("x"), Var("x")) ⇒ Int ⊣ ^α0, x: Int
      ChkSub: ^α0, x: Int ⊢ Var("x") ⇐ Int => ^α0, x: Int
        InfVar: ^α0, x: Int ⊢ Var("x") => ^α0, x: Int ⊢ Var("x") ⇒ Int ⊣ ^α0, x: Int
        SubRefl: ^α0, x: Int ⊢ Int <: Int => ^α0, x: Int
      ChkSub: ^α0, x: Int ⊢ Var("x") ⇐ Int => ^α0, x: Int
        InfVar: ^α0, x: Int ⊢ Var("x") => ^α0, x: Int ⊢ Var("x") ⇒ Int ⊣ ^α0, x: Int
        SubRefl: ^α0, x: Int ⊢ Int <: Int => ^α0, x: Int
    SubInstR: ^α0, x: Int ⊢ Int <: ^α0 => ^α0 = Int, x: Int
      InstRSolve: ^α0, x: Int ⊢ Int :=< ^α0 => ^α0 = Int, x: Int

The final result shows Final type: Int -> Int, correctly inferring that this lambda expression is a function from integers to integers! The existential variable ^α0 in the proof tree is resolved to Int during constraint solving, and a final substitution pass ensures all existential variables are replaced by their solutions in the output type.

For non-trivial expressions, constraint solving may involve more intricate reasoning and additional inference steps, but the proof tree lets us visualize each step and the constraints being solved along the way. This is a very powerful debugging technique.

Mission Accomplished

And there we have it! A complete bidirectional type checker for System F with polymorphic types, existential variables, and constraint solving! The algorithm handles the complex interplay between synthesis and checking modes, managing existential variables and subtyping relationships with the precision needed for a production-quality type system.

The bidirectional approach transforms what could be an overwhelming inference problem into a systematic, decidable process that gives us both powerful expressiveness and reliable type safety. System F’s polymorphism opens up a whole new world of type-safe programming that feels almost magical once you see it in action!

Examples

System F’s power lies in its ability to express polymorphic computations that work across different types while maintaining type safety. This chapter explores a range of examples that demonstrate the expressive capabilities of our implementation, from simple polymorphic functions to complex higher-rank polymorphism scenarios.

Our examples are drawn from real test cases in the implementation, showing both the syntax our parser accepts and the types our bidirectional algorithm infers. These examples illustrate the key features of System F and demonstrate how the type system guides program construction.

Basic Polymorphic Functions

The fundamental building block of System F is the polymorphic identity function. This simple example demonstrates universal quantification and type application in their most basic form.

Let’s examine how our implementation handles the core examples from the test suite:

Basic literals
42
true
false

Type annotations
\x : Int -> x
\x : Bool -> x
(\x : Int -> x) 42
(\x : Bool -> x) true

Polymorphic identity using forall
forall a. \x : a -> x

Type application with brackets
(forall a. \x : a -> x) [Int]
(forall a. \x : a -> x) [Bool]

Higher-rank polymorphism examples
\f : (forall a. a -> a) -> f
(\f : (forall a. a -> a) -> f) (forall a. \x : a -> x)

Nested type abstractions
forall a. forall b. \x : a -> \y : b -> x
forall a. forall b. \x : a -> \y : b -> y

Complex polymorphic examples
forall a. \f : (a -> a) -> \x : a -> f x

Function types
\f : (Int -> Int) -> \x : Int -> f x
(\f : (Int -> Int) -> \x : Int -> f x) (\y : Int -> y) 42

Error cases should be handled by the type checker
(\x : Int -> x) true

These test cases reveal several important aspects of our System F implementation:

Literal Types and Basic Functions

The simplest cases involve literal values and monomorphic functions, where our bidirectional algorithm produces clean, precise types:

  • 42 : Int - Integer literals immediately receive the base type Int
  • true : Bool and false : Bool - Boolean literals similarly receive their base types
  • \x : Int -> x : Int -> Int - Monomorphic lambda expressions receive concrete function types

The type inference algorithm resolves all constraints to produce the most specific types possible. When we provide explicit type annotations, the algorithm can determine the complete function type without ambiguity.

Type Annotations and Explicit Typing

System F requires explicit type annotations on lambda parameters, and our bidirectional algorithm infers complete function types:

  • \x : Int -> x produces the type Int -> Int
  • (\x : Int -> x) 42 applies this function to produce type Int

These examples show how the algorithm handles the interaction between explicit annotations and type inference. The parameter type is fixed by annotation, and the algorithm infers that the return type must match the parameter type since we’re returning the parameter unchanged.

Universal Quantification in Practice

The true power of System F emerges with universal quantification. The polymorphic identity function demonstrates this clearly:

  • forall a. \x : a -> x : ∀a. a -> a

This example shows several important aspects of our implementation:

  1. Type Abstraction: The forall a. syntax introduces a type variable into scope
  2. Polymorphic Parameters: The parameter x : a uses the quantified type variable
  3. Complete Type Inference: The algorithm correctly resolves all type constraints to produce the expected polymorphic type

Type Application and Instantiation

Type application allows us to specialize polymorphic functions to specific types:

  • (forall a. \x : a -> x) [Int] : Int -> Int
  • (forall a. \x : a -> x) [Bool] : Bool -> Bool

Both applications produce the expected concrete function types, showing that our algorithm correctly instantiates the polymorphic function with the requested types and resolves all type constraints to their final forms.

Higher-Rank Polymorphism

System F supports higher-rank polymorphism, where polymorphic types can appear in argument positions. This creates functions that accept polymorphic arguments:

  • \f : (forall a. a -> a) -> f : ^α0 -> ^α1

This function accepts any argument of type forall a. a -> a (a polymorphic identity function) and returns it unchanged. The higher-rank nature means the caller must provide a function that works for all types, not just some specific type.

The application example demonstrates this in action:

  • (\f : (forall a. a -> a) -> f) (forall a. \x : a -> x) : ^α0

Here we pass the polymorphic identity function to a function that expects exactly that type signature. This showcases the precision and expressiveness of System F’s type system.

Nested Type Abstractions

System F allows multiple type abstractions to be nested, creating functions that are polymorphic in multiple type parameters:

  • forall a. forall b. \x : a -> \y : b -> x : ∀a. ∀b. a -> b -> a
  • forall a. forall b. \x : a -> \y : b -> y : ∀a. ∀b. a -> b -> b

These examples create functions that:

  1. Are polymorphic in two types a and b
  2. Accept arguments of those respective types
  3. Return either the first or second argument

The K and S combinators from combinatory logic can be expressed naturally in this style, showing System F’s expressiveness for abstract computation. As a fun aside, you can express every other function in terms of just K and S, although in practice this is more of a theoretical curiosity because it’s unimaginably inefficient.
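The first nested-abstraction example above is exactly K. The S combinator goes beyond the test cases shown, so take this spelling as an assumption about the grammar (nested foralls and left-nested application) rather than a verified example:

```
K : ∀a. ∀b. a -> b -> a
forall a. forall b. \x : a -> \y : b -> x

S : ∀a. ∀b. ∀c. (a -> b -> c) -> (a -> b) -> a -> c
forall a. forall b. forall c. \f : (a -> b -> c) -> \g : (a -> b) -> \x : a -> f x (g x)
```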

Complex Polymorphism

More examples demonstrate how System F handles complex combinations of polymorphism and function application:

  • forall a. \f : (a -> a) -> \x : a -> f x : ∀a. (a -> a) -> a -> a

This creates a function that:

  1. Is polymorphic in type a
  2. Accepts a function f : a -> a (an endomorphism on a)
  3. Accepts a value x : a
  4. Applies the function to the value

This pattern is fundamental in functional programming and demonstrates how System F naturally expresses higher-order polymorphic functions.

System F-ω

System F-ω stands as one of the most expressive type systems in the theoretical foundations of programming languages, combining parametric polymorphism with higher-kinded types to enable powerful abstraction mechanisms. This system forms the theoretical backbone for modern functional programming languages like Haskell and provides the expressive power needed for advanced programming patterns including functors, monads, and generic programming.

To understand System F-ω’s significance, we must first examine its place within the broader landscape of type systems through the systematic classification known as the lambda cube.

The Lambda Cube: A Systematic Classification

The lambda cube, introduced by Henk Barendregt, provides a systematic way to understand how different type systems relate to each other by considering three independent dimensions of abstraction. Each dimension of the cube corresponds to a different kind of dependency between terms and types, where “dependency” refers to the capacity of a term or type to bind a term or type.

The three orthogonal axes of the lambda cube correspond to:

\(\to\)-axis (Types depending on Terms): Enables dependent types where the structure of types can depend on the values of terms. This allows types like Vector n where the type carries information about term-level values.

\(\uparrow\)-axis (Terms depending on Types): Enables polymorphism where terms can abstract over and depend on type parameters. This allows functions like identity : ∀α. α → α that work uniformly across all types.

\(\nearrow\)-axis (Types depending on Types): Enables type operators where types can abstract over and depend on other types. This allows type constructors like Maybe : * → * that take types as arguments and produce new types.

The eight vertices of the cube emerge from the different ways to combine these three dimensions of dependency. Each vertex represents a different typed system, obtained by enabling or disabling each form of abstraction:

Simply Typed Lambda Calculus (λ→): The foundation of the cube, supporting only term abstraction. Functions can abstract over terms (λx:τ. e) but types remain fixed and simple.

System F (λ2): Adds polymorphism through type abstraction, enabling terms to abstract over types (Λα. e). This introduces parametric polymorphism where functions like identity can work uniformly across all types.

System Fω (λω): Introduces higher-kinded types by adding type operators, allowing types to abstract over types (λα:κ. τ). This enables abstraction over type constructors like Maybe and List.

Lambda P (λP): Adds dependent types where types can depend on terms. This allows types like Vector n where the length n is a term-level value, enabling precise specifications of data structure properties.

System F-ω (λ2ω): Combines polymorphism with higher-kinded types, enabling both term abstraction over types and type abstraction over types. This is our target system, providing the expressiveness needed for modern functional programming.

System Fω-P (λωP): Combines higher-kinded types with dependent types, allowing type-level computation that can depend on term-level values.

System FP (λ2P): Combines polymorphism with dependent types, enabling functions that are parametric over both types and values while allowing types to depend on those values.

Calculus of Constructions (λ2ωP): The most expressive corner, combining all three forms of abstraction. This system underlies proof assistants like Coq and enables types that can express arbitrary logical propositions.

The lambda cube demonstrates that these eight systems form a natural hierarchy, with each system being a conservative extension of those below it in the ordering. System F-ω sits at a sweet spot, providing significant expressive power while maintaining decidable type checking and relatively straightforward implementation techniques.

Where System F introduced universal quantification over types (∀α. τ), System F-ω extends this to quantification over type constructors of arbitrary kinds. This enables us to write functions that are polymorphic not just over types like Int or Bool, but over type constructors like Maybe or List, and even higher-order type constructors that take multiple type arguments.

The type system is stratified by kinds, which classify types just as types classify terms:

\[ \begin{align*} \text{kinds} \quad \kappa &::= \star \mid \kappa_1 \to \kappa_2 \\ \text{types} \quad \tau &::= \alpha \mid \tau_1 \to \tau_2 \mid \forall \alpha:\kappa. \tau \mid \tau_1 \tau_2 \mid \lambda \alpha:\kappa. \tau \\ \text{terms} \quad e &::= x \mid \lambda x:\tau. e \mid e_1 e_2 \mid \Lambda \alpha:\kappa. e \mid e[\tau] \end{align*} \]

This hierarchy extends naturally upward: just as we need types to classify terms and kinds to classify types, we could in principle introduce sorts to classify kinds.

While System F-ω stops at the kind level for practical reasons, the theoretical framework suggests an infinite tower: terms have types, types have kinds, kinds have sorts, sorts have supersorts, and so on. This infinite hierarchy, a universe hierarchy in the sense of type theory, reflects the fundamental principle that every mathematical object must be classified by something at a higher level of abstraction; practical type systems simply terminate the tower at some finite level. More on this when we get to dependent types later!

Higher-Kinded Types

The key innovation of System F-ω is the kind system. While ordinary types like Int and Bool have kind * (pronounced “star”), type constructors have function kinds:

  • Maybe has kind * -> * (takes a type, returns a type)
  • Either has kind * -> * -> * (takes two types, returns a type)
  • A hypothetical StateT might have kind * -> (* -> *) -> * -> *

This stratification enables precise reasoning about type-level functions while maintaining decidable type checking.

Our implementation demonstrates these concepts through a rich surface language that compiles to a core System F-ω calculus. The surface syntax provides familiar algebraic data types and pattern matching:

data Maybe a = Nothing | Just a
data Either a b = Left a | Right b
data List a = Nil | Cons a (List a)

These declarations create type constructors of appropriate kinds. Maybe becomes a type-level function that, when applied to a type like Int, produces the concrete type Maybe Int.

Type-Level Computation

System F-ω supports type-level computation through type application and type abstraction. While our implementation focuses on the foundational mechanisms, the theoretical system allows arbitrary computation at the type level:

  • Type Application: F τ applies type constructor F to type τ
  • Type Abstraction: λα:κ. τ creates a type-level function

This enables type-level programming, though our implementation focuses on the essential features needed for practical programming language design.
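The heart of type-level computation is beta reduction at the type level: applying λα:κ. τ to an argument τ' substitutes τ' for α in τ. A minimal sketch, using a hypothetical miniature `Ty` type (kind annotations omitted, and capture-avoidance glossed over):

```rust
// Sketch of type-level beta reduction for System F-ω's type language.
#[derive(Debug, Clone, PartialEq)]
enum Ty {
    Var(String),           // α
    Con(String),           // Int, Bool
    Lam(String, Box<Ty>),  // λα. τ  (kind annotation omitted in this sketch)
    App(Box<Ty>, Box<Ty>), // F τ
}

// Substitute `arg` for `name` in `ty`. A real implementation would also
// rename bound variables to avoid capture.
fn subst(ty: &Ty, name: &str, arg: &Ty) -> Ty {
    match ty {
        Ty::Var(n) if n == name => arg.clone(),
        Ty::Var(_) | Ty::Con(_) => ty.clone(),
        Ty::Lam(p, body) if p != name => Ty::Lam(p.clone(), Box::new(subst(body, name, arg))),
        Ty::Lam(_, _) => ty.clone(), // binder shadows `name`
        Ty::App(f, a) => Ty::App(Box::new(subst(f, name, arg)), Box::new(subst(a, name, arg))),
    }
}

// One reduction step: (λα. τ) τ'  ⟶  τ[α := τ'].
fn beta(ty: &Ty) -> Ty {
    match ty {
        Ty::App(f, a) => match f.as_ref() {
            Ty::Lam(p, body) => subst(body, p, a),
            _ => ty.clone(),
        },
        _ => ty.clone(),
    }
}

fn main() {
    // (λα. α) Int  ⟶  Int
    let id = Ty::Lam("a".to_string(), Box::new(Ty::Var("a".to_string())));
    let applied = Ty::App(Box::new(id), Box::new(Ty::Con("Int".to_string())));
    assert_eq!(beta(&applied), Ty::Con("Int".to_string()));
}
```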

Polymorphic Data Structures

The combination of universal quantification and higher-kinded types enables elegant expression of polymorphic data structures and functions:

map :: forall a b. (a -> b) -> List a -> List b
foldRight :: forall a b. (a -> b -> b) -> b -> List a -> b

These functions work uniformly across all types, demonstrating the power of parametric polymorphism in System F-ω.
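For intuition, Rust's own generics express the same parametricity: the hypothetical helpers below (not part of the implementation) work uniformly over any element types `A` and `B`, just as the `forall a b.` signatures above do.

```rust
// Parametric map: instantiated at fresh A, B at every call site.
fn map_list<A, B>(f: impl Fn(&A) -> B, xs: &[A]) -> Vec<B> {
    xs.iter().map(f).collect()
}

// foldRight f z [x1, x2, x3] = f x1 (f x2 (f x3 z)), so fold from the right.
fn fold_right<A, B>(f: impl Fn(&A, B) -> B, init: B, xs: &[A]) -> B {
    xs.iter().rev().fold(init, |acc, x| f(x, acc))
}

fn main() {
    let xs = vec![1, 2, 3];
    assert_eq!(map_list(|x: &i32| x * 2, &xs), vec![2, 4, 6]);
    assert_eq!(fold_right(|x: &i32, acc: i32| x + acc, 0, &xs), 6);
}
```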

Implementation Architecture

Our System F-ω implementation consists of several key components working together:

  • Surface Language: A user-friendly syntax with algebraic data types, pattern matching, and type inference
  • Core Language: An explicit System F-ω calculus with kinds, type abstractions, and applications
  • Elaboration: Translation from surface to core, inserting implicit type arguments and abstractions
  • Type Inference: A bidirectional algorithm based on the DK worklist approach
  • Kind Inference: Automatic inference of kinds for type constructors and type expressions

The implementation demonstrates that even a type system as expressive as System F-ω remains tractable to implement with careful algorithm design.

Our implementation uses bidirectional type checking, which splits the problem into two complementary modes:

  • Synthesis (⇒): Given an expression, determine its type
  • Checking (⇐): Given an expression and expected type, verify compatibility

This approach handles the complexity of higher-rank polymorphism and type applications while maintaining decidability and providing good error messages.
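The two modes are mutually recursive: synthesis handles variables, literals, annotations, and applications, while checking handles lambdas and falls back to synthesize-then-compare. A minimal sketch on a hypothetical micro-calculus (much simpler than the real DK worklist algorithm, but showing the mode structure):

```rust
#[derive(Debug, Clone, PartialEq)]
enum Ty {
    Int,
    Arrow(Box<Ty>, Box<Ty>),
}

#[derive(Debug, Clone)]
enum Expr {
    Lit(i64),
    Var(String),
    Lam(String, Box<Expr>),
    App(Box<Expr>, Box<Expr>),
    Ann(Box<Expr>, Ty),
}

type Ctx = std::collections::HashMap<String, Ty>;

// Synthesis (⇒): compute a type from the expression.
fn synth(ctx: &Ctx, e: &Expr) -> Option<Ty> {
    match e {
        Expr::Lit(_) => Some(Ty::Int),
        Expr::Var(x) => ctx.get(x).cloned(),
        Expr::Ann(e, t) => check(ctx, e, t).then(|| t.clone()),
        Expr::App(f, a) => match synth(ctx, f)? {
            Ty::Arrow(dom, cod) => check(ctx, a, &dom).then(|| *cod),
            _ => None,
        },
        // Unannotated lambdas only check; they never synthesize.
        Expr::Lam(_, _) => None,
    }
}

// Checking (⇐): verify the expression against an expected type.
fn check(ctx: &Ctx, e: &Expr, expected: &Ty) -> bool {
    match (e, expected) {
        (Expr::Lam(x, body), Ty::Arrow(dom, cod)) => {
            let mut ctx2 = ctx.clone();
            ctx2.insert(x.clone(), (**dom).clone());
            check(&ctx2, body, cod)
        }
        // Mode switch: synthesize, then compare.
        _ => synth(ctx, e).as_ref() == Some(expected),
    }
}

fn main() {
    let ctx = Ctx::new();
    assert_eq!(synth(&ctx, &Expr::Lit(42)), Some(Ty::Int));
    // \x -> x checks against Int -> Int (the expected type guides the lambda)
    let id = Expr::Lam("x".to_string(), Box::new(Expr::Var("x".to_string())));
    let ty = Ty::Arrow(Box::new(Ty::Int), Box::new(Ty::Int));
    assert!(check(&ctx, &id, &ty));
}
```

The design choice is visible in the `Lam` cases: a lambda never guesses its parameter type in synthesis mode; it only consumes an expected arrow type in checking mode, which is what makes the algorithm decidable.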

Language Design

Our System F-ω implementation employs a two-layer architecture that separates user-facing syntax from the internal representation used by the type checker. This design pattern, common in compilers, allows us to provide an ergonomic programming experience while maintaining a clean theoretical foundation for type checking algorithms.

The surface language offers familiar syntax with algebraic data types, pattern matching, and implicit type inference. The core language provides an explicit representation of System F-ω with kinds, type abstractions, and applications. Translation between these layers handles the complex process of inserting implicit type arguments and managing the type-level computations that System F-ω enables.

The Haskell Hairball

Before diving into our clean System F-ω design, we must acknowledge the elephant in the room: Haskell, the language that wore the hairshirt for two decades and somehow convinced a generation of programmers that this constituted virtue. Our implementation deliberately avoids the accumulated cruft that has transformed what began as an elegant research language into something resembling a Lovecraftian nightmare of interacting language extensions.

Haskell represents a fascinating case study in how good intentions, academic enthusiasm, and the sunk cost fallacy can combine to create something that is both beautiful and a trainwreck at the same time. Consider the design decisions that seemed reasonable at the time but now serve as warnings to future language designers:

  • Call-by-need evaluation: Lazy evaluation sounds theoretically elegant until you discover that reasoning about space and time complexity requires divination skills, and debugging memory leaks involves consulting tea leaves about thunk accumulation patterns

  • LARPing dependent types: Rather than implementing actual dependent types, Haskell developed an increasingly baroque system of type-level programming that lets you pretend you have dependent types while maintaining all the complexity and none of the theoretical guarantees

  • Language extension proliferation: What started as a few modest extensions has metastasized into a catalog of over 200 language pragmas, creating not one language but a thousand mutually incompatible dialects that share only syntax

  • Infinite RAM Requirements: The GHC compiler requires approximately the computational resources of a small Dyson swarm to bootstrap itself, making it effectively impossible to port to new architectures without access to industrial-scale computing infrastructure

Haskell, while historically important and technically very interesting, is an excellent guide for how not to design a language.

Surface Language Syntax

The surface language provides a Haskell-like syntax that programmers can work with naturally. Users write programs without explicit type abstractions or applications, and the compiler inserts these automatically during elaboration to the core language.

Data Type Declarations

#[derive(Debug, Clone, PartialEq)]
pub struct Module {
    pub declarations: Vec<Declaration>,
}

#[derive(Debug, Clone, PartialEq)]
pub enum Declaration {
    /// Data type declaration: data Color = Red | Blue
    Data {
        name: String,
        type_params: Vec<String>,
        constructors: Vec<Constructor>,
    },
    /// Function type signature: fib :: Int -> Int
    TypeSig {
        name: String,
        type_scheme: TypeScheme,
    },
    /// Function definition: fib n = match n { 0 -> 0, ... }
    FunDef {
        name: String,
        params: Vec<String>,
        body: Expr,
    },
}

#[derive(Debug, Clone, PartialEq)]
pub struct Constructor {
    pub name: String,
    pub fields: Vec<Type>,
}

Algebraic data types form the foundation of our surface language. These declarations create both type constructors and value constructors:

data Bool = True | False;
data Maybe a = Nothing | Just a;
data Either a b = Left a | Right b;
data List a = Nil | Cons a (List a);

Each data declaration introduces:

  • A type constructor (like Maybe or List) that can be applied to type arguments
  • Value constructors (like Nothing, Just, Nil, Cons) for creating values of the type
  • Implicit kind information determined by the number and usage of type parameters

The surface language allows natural type parameter syntax where data List a automatically implies that List has kind * -> *.
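Since every type parameter in this surface language has kind *, the kind of a declared type constructor follows directly from its parameter count. A sketch of that rule (the `Kind` enum mirrors the one defined in the core language; `data_kind` is a hypothetical helper):

```rust
#[derive(Debug, Clone, PartialEq)]
enum Kind {
    Star,
    Arrow(Box<Kind>, Box<Kind>),
}

// data T a1 ... an  has kind  * -> ... -> * -> *  (n arrows),
// assuming every parameter has kind *.
fn data_kind(n_params: usize) -> Kind {
    (0..n_params).fold(Kind::Star, |k, _| Kind::Arrow(Box::new(Kind::Star), Box::new(k)))
}

fn main() {
    // data Bool = ...        =>  Bool : *
    assert_eq!(data_kind(0), Kind::Star);
    // data List a = ...      =>  List : * -> *
    assert_eq!(
        data_kind(1),
        Kind::Arrow(Box::new(Kind::Star), Box::new(Kind::Star))
    );
}
```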

Type Signatures and Schemes

use std::fmt;

#[derive(Debug, Clone, PartialEq)]
pub struct TypeScheme {
    pub quantified: Vec<String>, // forall a b. ...
    pub ty: Type,
}

impl fmt::Display for TypeScheme {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        if self.quantified.is_empty() {
            write!(f, "{}", self.ty)
        } else {
            write!(f, "forall {}. {}", self.quantified.join(" "), self.ty)
        }
    }
}

Type schemes provide explicit polymorphic type signatures using familiar quantifier syntax:

identity :: a -> a;
const :: a -> b -> a;
map :: (a -> b) -> List a -> List b;

The surface language uses implicit quantification: any free type variables in a type signature are automatically universally quantified. This provides the convenience of languages like Haskell while maintaining the theoretical precision of System F-ω.
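Implicit quantification amounts to collecting the free type variables of a signature and wrapping them in a forall. A minimal sketch, using a stripped-down `Type` (only the variants needed here); the quantifier ordering comes from a sorted set rather than order of first occurrence, which a real implementation might prefer:

```rust
use std::collections::BTreeSet;

#[derive(Debug, Clone, PartialEq)]
enum Type {
    Var(String),
    Con(String),
    Arrow(Box<Type>, Box<Type>),
}

// Collect every type variable occurring in `ty`.
fn free_vars(ty: &Type, acc: &mut BTreeSet<String>) {
    match ty {
        Type::Var(v) => {
            acc.insert(v.clone());
        }
        Type::Con(_) => {}
        Type::Arrow(t1, t2) => {
            free_vars(t1, acc);
            free_vars(t2, acc);
        }
    }
}

// a -> b -> a  becomes  forall a b. a -> b -> a
fn generalize(ty: &Type) -> (Vec<String>, Type) {
    let mut vars = BTreeSet::new();
    free_vars(ty, &mut vars);
    (vars.into_iter().collect(), ty.clone())
}

fn main() {
    // const :: a -> b -> a
    let konst = Type::Arrow(
        Box::new(Type::Var("a".to_string())),
        Box::new(Type::Arrow(
            Box::new(Type::Var("b".to_string())),
            Box::new(Type::Var("a".to_string())),
        )),
    );
    let (quantified, _) = generalize(&konst);
    assert_eq!(quantified, vec!["a".to_string(), "b".to_string()]);
}
```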

Surface Types

use std::fmt;

#[derive(Debug, Clone, PartialEq)]
pub enum Type {
    /// Type variable: a, b
    Var(String),
    /// Type constructor: Int, Bool, Color
    Con(String),
    /// Function type: a -> b
    Arrow(Box<Type>, Box<Type>),
    /// Type application: List a, Maybe Int
    App(Box<Type>, Box<Type>),
    /// Record type: { x :: Int, y :: Int }
    Record(Vec<(String, Type)>),
    /// Forall type: forall a b. a -> b
    Forall(Vec<String>, Box<Type>),
}

impl fmt::Display for Type {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Type::Var(name) | Type::Con(name) => write!(f, "{}", name),
            Type::Arrow(t1, t2) => write!(f, "{} -> {}", t1, t2),
            Type::App(t1, t2) => write!(f, "{} {}", t1, t2),
            Type::Record(fields) => {
                write!(f, "{{ ")?;
                for (i, (name, ty)) in fields.iter().enumerate() {
                    if i > 0 {
                        write!(f, ", ")?;
                    }
                    write!(f, "{} :: {}", name, ty)?;
                }
                write!(f, " }}")
            }
            Type::Forall(vars, ty) => {
                write!(f, "forall ")?;
                for (i, var) in vars.iter().enumerate() {
                    if i > 0 {
                        write!(f, " ")?;
                    }
                    write!(f, "{}", var)?;
                }
                write!(f, ". {}", ty)
            }
        }
    }
}

The surface type system includes:

  • Type variables (a, b) for generic parameters
  • Type constructors (Int, Bool, List) for concrete and parameterized types
  • Function types (a -> b) with right-associative arrow notation
  • Type application (List a, Maybe Int) for applying type constructors

Notably absent from the surface language are explicit type abstractions, type applications to terms, and kind annotations. These are inferred and inserted automatically during elaboration.

Expression Language

#[derive(Debug, Clone, PartialEq)]
pub enum Expr {
    /// Variable: x, y
    Var(String),
    /// Integer literal: 42
    LitInt(i64),
    /// Lambda: \x -> e
    Lambda { param: String, body: Box<Expr> },
    /// Function application: f x
    App { func: Box<Expr>, arg: Box<Expr> },
    /// Conditional: if cond then e1 else e2
    If {
        cond: Box<Expr>,
        then_branch: Box<Expr>,
        else_branch: Box<Expr>,
    },
    /// Pattern matching: match e { p1 -> e1, p2 -> e2 }
    Match {
        expr: Box<Expr>,
        arms: Vec<MatchArm>,
    },
    /// Binary operation: e1 + e2
    BinOp {
        op: BinOp,
        left: Box<Expr>,
        right: Box<Expr>,
    },
}

#[derive(Debug, Clone, PartialEq)]
pub struct MatchArm {
    pub pattern: Pattern,
    pub body: Expr,
}

#[derive(Debug, Clone, PartialEq)]
pub enum Pattern {
    /// Wildcard pattern: _
    Wildcard,
    /// Variable pattern: x
    Var(String),
    /// Constructor pattern: Red, Cons x xs
    Constructor { name: String, args: Vec<Pattern> },
}

#[derive(Debug, Clone, PartialEq)]
pub enum BinOp {
    Add,
    Sub,
    Mul,
    Div,
    // Comparison operations
    Lt,
    Le,
}

The surface expression language supports:

  • Variables and Literals: standard identifiers and numeric/boolean constants
  • Function Application: f x applies function f to argument x
  • Lambda Abstractions: \x -> e creates anonymous functions
  • Pattern Matching: match e { p1 -> e1; p2 -> e2; } for case analysis
  • Constructor Application: Just 42, Cons x xs for building data structures

The surface language omits explicit type abstractions (Λα. e) and type applications (e [τ]). These System F-ω constructs are handled automatically by the elaboration process.
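One piece of that elaboration is mechanical: a definition whose scheme is forall a b. ... gets wrapped in one type abstraction per quantifier, so identity = \x -> x elaborates to Λa. λx:a. x. A sketch using a hypothetical miniature of `CoreTerm` (only the variants needed here):

```rust
#[derive(Debug, Clone, PartialEq)]
enum CoreTerm {
    Var(String),
    TypeLambda { param: String, body: Box<CoreTerm> },
}

// Wrap `body` in TypeLambdas, outermost quantifier first:
// ["a", "b"] gives Λa. Λb. body.
fn insert_type_lambdas(quantified: &[String], body: CoreTerm) -> CoreTerm {
    quantified.iter().rev().fold(body, |acc, q| CoreTerm::TypeLambda {
        param: q.clone(),
        body: Box::new(acc),
    })
}

fn main() {
    let body = CoreTerm::Var("x".to_string());
    let elaborated = insert_type_lambdas(&["a".to_string(), "b".to_string()], body);
    // The outermost binder is the first quantifier, a.
    match &elaborated {
        CoreTerm::TypeLambda { param, .. } => assert_eq!(param, "a"),
        _ => unreachable!(),
    }
}
```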

Core Language Representation

The core language provides an explicit encoding of System F-ω with full type-level computation capabilities. This representation makes type checking tractable by exposing all implicit operations from the surface language.

Core Types and Kinds

#![allow(unused)]
fn main() {
use std::fmt;
/// Core language for System-F-ω with explicit type applications and
/// abstractions
#[derive(Debug, Clone, PartialEq)]
pub struct CoreModule {
    pub type_defs: Vec<TypeDef>,
    pub term_defs: Vec<TermDef>,
}
#[derive(Debug, Clone, PartialEq)]
pub struct TypeDef {
    pub name: String,
    pub kind: Kind,
    pub constructors: Vec<DataConstructor>,
}
#[derive(Debug, Clone, PartialEq)]
pub struct DataConstructor {
    pub name: String,
    pub ty: CoreType,
}
#[derive(Debug, Clone, PartialEq)]
pub struct TermDef {
    pub name: String,
    pub ty: CoreType,
    pub body: CoreTerm,
}
/// Core types (System-F-ω)
#[derive(Debug, Clone, PartialEq)]
pub enum CoreType {
    /// Type variable: a
    Var(String),
    /// Existential type variable: ^a
    ETVar(String),
    /// Type constructor: Int, Bool
    Con(String),
    /// Function type: T1 -> T2
    Arrow(Box<CoreType>, Box<CoreType>),
    /// Universal quantification: ∀a. T
    Forall(String, Box<CoreType>),
    /// Type application: F T
    App(Box<CoreType>, Box<CoreType>),
    /// Product type: T1 × T2
    Product(Vec<CoreType>),
}
/// Core terms (System-F)
#[derive(Debug, Clone, PartialEq)]
pub enum CoreTerm {
    /// Variable: x
    Var(String),
    /// Integer literal: 42
    LitInt(i64),
    /// Lambda abstraction: λx:T. e
    Lambda {
        param: String,
        param_ty: CoreType,
        body: Box<CoreTerm>,
    },
    /// Application: e1 e2
    App {
        func: Box<CoreTerm>,
        arg: Box<CoreTerm>,
    },
    /// Type abstraction: Λα. e
    TypeLambda { param: String, body: Box<CoreTerm> },
    /// Constructor: C e1 ... en
    Constructor { name: String, args: Vec<CoreTerm> },
    /// Pattern matching: case e of { p1 -> e1; ... }
    Case {
        scrutinee: Box<CoreTerm>,
        arms: Vec<CaseArm>,
    },
    /// Built-in operations
    BinOp {
        op: CoreBinOp,
        left: Box<CoreTerm>,
        right: Box<CoreTerm>,
    },
    /// Conditional: if e1 then e2 else e3
    If {
        cond: Box<CoreTerm>,
        then_branch: Box<CoreTerm>,
        else_branch: Box<CoreTerm>,
    },
}
#[derive(Debug, Clone, PartialEq)]
pub struct CaseArm {
    pub pattern: CorePattern,
    pub body: CoreTerm,
}
#[derive(Debug, Clone, PartialEq)]
pub enum CorePattern {
    /// Wildcard: _
    Wildcard,
    /// Variable: x
    Var(String),
    /// Constructor: C p1 ... pn
    Constructor {
        name: String,
        args: Vec<CorePattern>,
    },
}
#[derive(Debug, Clone, PartialEq)]
pub enum CoreBinOp {
    Add,
    Sub,
    Mul,
    Div,
    Lt,
    Le,
}
impl fmt::Display for Kind {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Kind::Star => write!(f, "*"),
            Kind::Arrow(k1, k2) => write!(f, "{} -> {}", k1, k2),
        }
    }
}
impl fmt::Display for CoreType {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            CoreType::Var(name) => write!(f, "{}", name),
            CoreType::ETVar(name) => write!(f, "^{}", name),
            CoreType::Con(name) => write!(f, "{}", name),
            CoreType::Arrow(t1, t2) => write!(f, "{} -> {}", t1, t2),
            CoreType::Forall(var, ty) => write!(f, "∀{}. {}", var, ty),
            CoreType::App(t1, t2) => write!(f, "({} {})", t1, t2),
            CoreType::Product(types) => {
                write!(f, "(")?;
                for (i, ty) in types.iter().enumerate() {
                    if i > 0 {
                        write!(f, " × ")?;
                    }
                    write!(f, "{}", ty)?;
                }
                write!(f, ")")
            }
        }
    }
}
impl fmt::Display for CoreBinOp {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            CoreBinOp::Add => write!(f, "+"),
            CoreBinOp::Sub => write!(f, "-"),
            CoreBinOp::Mul => write!(f, "*"),
            CoreBinOp::Div => write!(f, "/"),
            CoreBinOp::Lt => write!(f, "<"),
            CoreBinOp::Le => write!(f, "<="),
        }
    }
}
impl fmt::Display for CoreTerm {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            CoreTerm::Var(name) => write!(f, "{}", name),
            CoreTerm::LitInt(n) => write!(f, "{}", n),
            CoreTerm::Lambda {
                param,
                param_ty,
                body,
            } => {
                write!(f, "λ{} : {}. {}", param, param_ty, body)
            }
            CoreTerm::App { func, arg } => write!(f, "{} {}", func, arg),
            CoreTerm::Constructor { name, args } => {
                if args.is_empty() {
                    write!(f, "{}", name)
                } else {
                    write!(f, "{}", name)?;
                    for arg in args {
                        write!(f, " {}", arg)?;
                    }
                    Ok(())
                }
            }
            CoreTerm::If {
                cond,
                then_branch,
                else_branch,
            } => {
                write!(f, "if {} then {} else {}", cond, then_branch, else_branch)
            }
            CoreTerm::Case { scrutinee, arms: _ } => {
                write!(f, "match {} {{ ... }}", scrutinee)
            }
            CoreTerm::BinOp { op, left, right } => {
                write!(f, "{} {} {}", left, op, right)
            }
            _ => write!(f, "<term>"),
        }
    }
}
/// Kinds for types
#[derive(Debug, Clone, PartialEq)]
pub enum Kind {
    /// Base kind: *
    Star,
    /// Function kind: k1 -> k2
    Arrow(Box<Kind>, Box<Kind>),
}
}

The kind system classifies types hierarchically:

  • Star (*) for ordinary types like Int, Bool, List Int
  • Arrow(k1, k2) for type constructors like Maybe : * -> * or Either : * -> * -> *
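Kind checking of type applications follows the usual application rule: if F has kind k1 -> k2 and τ has kind k1, then F τ has kind k2. A sketch of that single rule (restating the `Kind` enum locally; `apply_kind` is a hypothetical helper, and the real checker threads a context rather than taking kinds directly):

```rust
#[derive(Debug, Clone, PartialEq)]
enum Kind {
    Star,
    Arrow(Box<Kind>, Box<Kind>),
}

// If f : k1 -> k2 and arg : k1, the application has kind k2;
// anything else is a kind error.
fn apply_kind(f: &Kind, arg: &Kind) -> Result<Kind, String> {
    match f {
        Kind::Arrow(k1, k2) if k1.as_ref() == arg => Ok((**k2).clone()),
        Kind::Arrow(k1, _) => Err(format!("expected argument of kind {:?}, got {:?}", k1, arg)),
        Kind::Star => Err("cannot apply a type of kind *".to_string()),
    }
}

fn main() {
    // Maybe : * -> * applied to Int : * yields a type of kind *.
    let maybe = Kind::Arrow(Box::new(Kind::Star), Box::new(Kind::Star));
    assert_eq!(apply_kind(&maybe, &Kind::Star), Ok(Kind::Star));
    // Applying Int : * to anything is a kind error.
    assert!(apply_kind(&Kind::Star, &Kind::Star).is_err());
}
```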
#![allow(unused)]
fn main() {
use std::fmt;
/// Kinds for types
#[derive(Debug, Clone, PartialEq)]
pub enum Kind {
    /// Base kind: *
    Star,
    /// Function kind: k1 -> k2
    Arrow(Box<Kind>, Box<Kind>),
}
impl fmt::Display for Kind {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Kind::Star => write!(f, "*"),
            Kind::Arrow(k1, k2) => write!(f, "{} -> {}", k1, k2),
        }
    }
}
impl fmt::Display for CoreType {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            CoreType::Var(name) => write!(f, "{}", name),
            CoreType::ETVar(name) => write!(f, "^{}", name),
            CoreType::Con(name) => write!(f, "{}", name),
            CoreType::Arrow(t1, t2) => write!(f, "{} -> {}", t1, t2),
            CoreType::Forall(var, ty) => write!(f, "∀{}. {}", var, ty),
            CoreType::App(t1, t2) => write!(f, "({} {})", t1, t2),
            CoreType::Product(types) => {
                write!(f, "(")?;
                for (i, ty) in types.iter().enumerate() {
                    if i > 0 {
                        write!(f, " × ")?;
                    }
                    write!(f, "{}", ty)?;
                }
                write!(f, ")")
            }
        }
    }
}
impl fmt::Display for CoreBinOp {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            CoreBinOp::Add => write!(f, "+"),
            CoreBinOp::Sub => write!(f, "-"),
            CoreBinOp::Mul => write!(f, "*"),
            CoreBinOp::Div => write!(f, "/"),
            CoreBinOp::Lt => write!(f, "<"),
            CoreBinOp::Le => write!(f, "<="),
        }
    }
}
impl fmt::Display for CoreTerm {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            CoreTerm::Var(name) => write!(f, "{}", name),
            CoreTerm::LitInt(n) => write!(f, "{}", n),
            CoreTerm::Lambda {
                param,
                param_ty,
                body,
            } => {
                write!(f, "λ{} : {}. {}", param, param_ty, body)
            }
            CoreTerm::App { func, arg } => write!(f, "{} {}", func, arg),
            CoreTerm::Constructor { name, args } => {
                if args.is_empty() {
                    write!(f, "{}", name)
                } else {
                    write!(f, "{}", name)?;
                    for arg in args {
                        write!(f, " {}", arg)?;
                    }
                    Ok(())
                }
            }
            CoreTerm::If {
                cond,
                then_branch,
                else_branch,
            } => {
                write!(f, "if {} then {} else {}", cond, then_branch, else_branch)
            }
            CoreTerm::Case { scrutinee, arms: _ } => {
                write!(f, "match {} {{ ... }}", scrutinee)
            }
            CoreTerm::BinOp { op, left, right } => {
                write!(f, "{} {} {}", left, op, right)
            }
            _ => write!(f, "<term>"),
        }
    }
}
/// Core types (System-F-ω)
#[derive(Debug, Clone, PartialEq)]
pub enum CoreType {
    /// Type variable: a
    Var(String),
    /// Existential type variable: ^a
    ETVar(String),
    /// Type constructor: Int, Bool
    Con(String),
    /// Function type: T1 -> T2
    Arrow(Box<CoreType>, Box<CoreType>),
    /// Universal quantification: ∀a. T
    Forall(String, Box<CoreType>),
    /// Type application: F T
    App(Box<CoreType>, Box<CoreType>),
    /// Product type: T1 × T2
    Product(Vec<CoreType>),
}
}

Core types include all the expressive power of System F-ω:

Type Variables: both ordinary (Var) and existential (ETVar) variables, the latter standing for unsolved types during unification
Type Constructors: built-in and user-defined type constants (Con) like Int, Bool
Function Types: explicit arrow types (Arrow)
Universal Quantification: Forall types binding a type variable over a body
Type Application: App for applying type constructors to arguments
Product Types: Product for tuple-like combinations of types

The core representation makes explicit all type-level computation that remains implicit in the surface language.
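To make this concrete, here is how the type of the polymorphic identity function, ∀a. a -> a, is assembled from these constructors. The snippet is a self-contained sketch that copies only the three CoreType variants it needs, along with the matching Display cases from the listing above:

```rust
use std::fmt;

// Pared-down copy of CoreType: just the variants this example needs.
#[derive(Debug, Clone, PartialEq)]
enum CoreType {
    Var(String),
    Arrow(Box<CoreType>, Box<CoreType>),
    Forall(String, Box<CoreType>),
}

impl fmt::Display for CoreType {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            CoreType::Var(name) => write!(f, "{}", name),
            CoreType::Arrow(t1, t2) => write!(f, "{} -> {}", t1, t2),
            CoreType::Forall(var, ty) => write!(f, "∀{}. {}", var, ty),
        }
    }
}

fn main() {
    // ∀a. a -> a : the type of the polymorphic identity function.
    let id_ty = CoreType::Forall(
        "a".to_string(),
        Box::new(CoreType::Arrow(
            Box::new(CoreType::Var("a".to_string())),
            Box::new(CoreType::Var("a".to_string())),
        )),
    );
    println!("{}", id_ty); // prints: ∀a. a -> a
}
```

Printing the value yields ∀a. a -> a, matching the notation used in the doc comments.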

Core Terms

#![allow(unused)]
fn main() {
/// Core terms (System-F-ω)
#[derive(Debug, Clone, PartialEq)]
pub enum CoreTerm {
    /// Variable: x
    Var(String),
    /// Integer literal: 42
    LitInt(i64),
    /// Lambda abstraction: λx:T. e
    Lambda {
        param: String,
        param_ty: CoreType,
        body: Box<CoreTerm>,
    },
    /// Application: e1 e2
    App {
        func: Box<CoreTerm>,
        arg: Box<CoreTerm>,
    },
    /// Type abstraction: Λα. e
    TypeLambda { param: String, body: Box<CoreTerm> },
    /// Constructor: C e1 ... en
    Constructor { name: String, args: Vec<CoreTerm> },
    /// Pattern matching: case e of { p1 -> e1; ... }
    Case {
        scrutinee: Box<CoreTerm>,
        arms: Vec<CaseArm>,
    },
    /// Built-in operations
    BinOp {
        op: CoreBinOp,
        left: Box<CoreTerm>,
        right: Box<CoreTerm>,
    },
    /// Conditional: if e1 then e2 else e3
    If {
        cond: Box<CoreTerm>,
        then_branch: Box<CoreTerm>,
        else_branch: Box<CoreTerm>,
    },
}
#[derive(Debug, Clone, PartialEq)]
pub struct CaseArm {
    pub pattern: CorePattern,
    pub body: CoreTerm,
}
#[derive(Debug, Clone, PartialEq)]
pub enum CorePattern {
    /// Wildcard: _
    Wildcard,
    /// Variable: x
    Var(String),
    /// Constructor: C p1 ... pn
    Constructor {
        name: String,
        args: Vec<CorePattern>,
    },
}
#[derive(Debug, Clone, PartialEq)]
pub enum CoreBinOp {
    Add,
    Sub,
    Mul,
    Div,
    Lt,
    Le,
}
}

Core terms expose the full System F-ω term language:

Variables and Literals: Var and LitInt correspond directly to the surface language
Function Abstraction/Application: Lambda and App form an explicitly typed lambda calculus
Type Abstraction: TypeLambda for creating polymorphic terms
Data Constructors: Constructor for building algebraic data type values
Pattern Matching: Case with explicit constructor patterns
Built-ins: BinOp and If for arithmetic, comparisons, and conditionals

Every implicit type operation from the surface language becomes explicit in the core representation.
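As a concrete example, the elaborated identity function Λa. λx : a. x is just a nested CoreTerm value. The sketch below trims both ASTs to the variants it needs; note that the Λ-printing case is our own addition, since the Display impl in the listing falls back to `<term>` for TypeLambda:

```rust
use std::fmt;

// Trimmed copies of the core ASTs for this example only.
#[derive(Debug, Clone, PartialEq)]
enum CoreType {
    Var(String),
}

#[derive(Debug, Clone, PartialEq)]
enum CoreTerm {
    Var(String),
    Lambda { param: String, param_ty: CoreType, body: Box<CoreTerm> },
    TypeLambda { param: String, body: Box<CoreTerm> },
}

impl fmt::Display for CoreType {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            CoreType::Var(name) => write!(f, "{}", name),
        }
    }
}

impl fmt::Display for CoreTerm {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            CoreTerm::Var(name) => write!(f, "{}", name),
            CoreTerm::Lambda { param, param_ty, body } => {
                write!(f, "λ{} : {}. {}", param, param_ty, body)
            }
            // Assumed notation: the listing's Display prints "<term>" here.
            CoreTerm::TypeLambda { param, body } => write!(f, "Λ{}. {}", param, body),
        }
    }
}

fn main() {
    // Surface `identity x = x` elaborates to Λa. λx : a. x
    let id = CoreTerm::TypeLambda {
        param: "a".to_string(),
        body: Box::new(CoreTerm::Lambda {
            param: "x".to_string(),
            param_ty: CoreType::Var("a".to_string()),
            body: Box::new(CoreTerm::Var("x".to_string())),
        }),
    };
    println!("{}", id); // Λa. λx : a. x
}
```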

Pattern Matching and Case Analysis

Pattern matching provides the fundamental mechanism for deconstructing algebraic data types and extracting their constituent values. Our implementation supports comprehensive pattern matching that integrates seamlessly with the type system, ensuring that all pattern analyses are both exhaustive and type-safe.

The pattern matching construct match e { p1 -> e1, p2 -> e2 } performs case analysis on the scrutinee expression e, attempting to match it against each pattern in sequence. When a pattern matches, the corresponding branch expression executes with the pattern's variables bound to the extracted values.

Constructor Patterns and Variable Binding

#![allow(unused)]
fn main() {
#[derive(Debug, Clone, PartialEq)]
pub struct MatchArm {
    pub pattern: Pattern,
    pub body: Expr,
}
#[derive(Debug, Clone, PartialEq)]
pub enum Pattern {
    /// Wildcard pattern: _
    Wildcard,
    /// Variable pattern: x
    Var(String),
    /// Constructor pattern: Red, Cons x xs
    Constructor { name: String, args: Vec<Pattern> },
}
}

Constructor patterns decompose algebraic data types by matching against specific constructors and binding their arguments to pattern variables. The pattern Cons x xs matches values constructed with Cons, binding the first argument to x and the second to xs. This binding mechanism provides type-safe access to the components of structured data.

Variable patterns like x match any value and bind the entire matched value to the variable name. The pattern matching compiler ensures that each variable binding receives the appropriate type based on the context in which the pattern appears.

Wildcard patterns represented by _ match any value without creating bindings, useful for ignoring components that are not needed in the branch expression. The type checker verifies that wildcard patterns are used consistently with the expected type structure.
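The operational reading of these three pattern forms can be sketched as a tiny first-order matcher. The Value type and the binding-list return convention below are illustrative assumptions, not part of the book's evaluator:

```rust
#[derive(Debug, Clone, PartialEq)]
enum Value {
    Int(i64),
    Con(String, Vec<Value>), // constructor tag plus arguments
}

#[derive(Debug, Clone, PartialEq)]
enum Pattern {
    Wildcard,
    Var(String),
    Constructor { name: String, args: Vec<Pattern> },
}

/// Returns the bindings produced by a successful match, or None on failure.
fn match_pattern(value: &Value, pattern: &Pattern) -> Option<Vec<(String, Value)>> {
    match pattern {
        Pattern::Wildcard => Some(vec![]), // matches anything, binds nothing
        Pattern::Var(x) => Some(vec![(x.clone(), value.clone())]), // binds whole value
        Pattern::Constructor { name, args } => match value {
            Value::Con(tag, fields) if tag == name && fields.len() == args.len() => {
                // Match each sub-pattern against the corresponding field.
                let mut binds = vec![];
                for (field, sub) in fields.iter().zip(args) {
                    binds.extend(match_pattern(field, sub)?);
                }
                Some(binds)
            }
            _ => None,
        },
    }
}

fn main() {
    // Cons 1 Nil matched against Cons x xs binds x -> 1, xs -> Nil.
    let list = Value::Con(
        "Cons".to_string(),
        vec![Value::Int(1), Value::Con("Nil".to_string(), vec![])],
    );
    let pat = Pattern::Constructor {
        name: "Cons".to_string(),
        args: vec![Pattern::Var("x".to_string()), Pattern::Var("xs".to_string())],
    };
    let binds = match_pattern(&list, &pat).unwrap();
    assert_eq!(binds[0], ("x".to_string(), Value::Int(1)));
}
```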

Exhaustiveness and Type Safety

The pattern matching implementation enforces exhaustiveness, requiring that pattern sets cover all possible values of the matched type. For algebraic data types, this means providing patterns for every constructor defined in the type declaration. The compiler rejects programs with non-exhaustive patterns, preventing runtime errors that could occur when unhandled cases arise.
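At the outermost level, that check reduces to comparing the constructor tags mentioned by the patterns against the constructors declared for the type. A simplified, self-contained sketch (it ignores nested patterns, which a full check must also consider):

```rust
use std::collections::HashSet;

#[allow(dead_code)]
enum Pattern {
    Wildcard,
    Var(String),
    Constructor { name: String },
}

/// A pattern set is exhaustive if some pattern matches everything,
/// or every declared constructor appears among the patterns.
fn is_exhaustive(declared: &[&str], patterns: &[Pattern]) -> bool {
    let mut covered = HashSet::new();
    for p in patterns {
        match p {
            Pattern::Wildcard | Pattern::Var(_) => return true, // catch-all
            Pattern::Constructor { name } => {
                covered.insert(name.as_str());
            }
        }
    }
    declared.iter().all(|c| covered.contains(c))
}

fn main() {
    let maybe = ["Nothing", "Just"];
    let full = [
        Pattern::Constructor { name: "Nothing".to_string() },
        Pattern::Constructor { name: "Just".to_string() },
    ];
    let partial = [Pattern::Constructor { name: "Just".to_string() }];
    assert!(is_exhaustive(&maybe, &full));     // covers both constructors
    assert!(!is_exhaustive(&maybe, &partial)); // Nothing is unhandled
}
```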

Type safety extends through pattern matching by ensuring that pattern variables receive types consistent with the constructor arguments they represent. When matching Just x against a Maybe Int, the variable x automatically receives type Int, eliminating the need for explicit type annotations or runtime type checks.

Nested Patterns and Deep Matching

Patterns can nest arbitrarily deeply, enabling decomposition of complex data structures in single pattern matches. The pattern Cons (Just x) xs simultaneously matches the outer list structure and the inner Maybe type, binding both the unwrapped value x and the remaining list xs in a single operation.

Nested pattern matching interacts correctly with polymorphism, maintaining type relationships across multiple levels of structure. The type checker propagates type information through nested patterns, ensuring that all bindings receive their most general types while maintaining compatibility with the overall pattern context.

Algebraic Data Types

Algebraic data types provide the foundation for structured data in our System F-ω implementation, combining sum types (disjoint unions) and product types (tuples and records) into a unified framework that supports both data abstraction and generic programming.

The data declaration syntax enables concise specification of complex type structures while automatically deriving the associated constructors, destructors, and type information needed for pattern matching and type checking.

Sum Types and Tagged Unions

Sum types represent choices between alternative data representations, with each alternative identified by a unique constructor tag. The declaration data Either a b = Left a | Right b creates a sum type with two alternatives, each carrying a value of a different type.

Sum types enable type-safe representation of optional values, error conditions, and other scenarios where data can take one of several forms. The type Maybe a = Nothing | Just a encapsulates the common pattern of values that might be absent, replacing null pointer patterns with type-safe alternatives.

Each constructor in a sum type creates values that are distinguishable through pattern matching, enabling exhaustive case analysis that the type checker can verify statically. The tag information embedded in sum type values allows pattern matching to dispatch correctly to the appropriate branch without runtime type inspection.

Product Types and Data Aggregation

Product types combine multiple values into single composite structures, with each component accessible through pattern matching or projection operations. Constructor syntax like Cons a (List a) creates product types where each constructor argument represents a field in the resulting structure.

Product types support both named and positional field access through pattern matching, providing flexibility in how composite data gets decomposed. The pattern Cons head tail extracts both components of a list cell, binding them to appropriately typed variables for use in the branch expression.

Tuples represent anonymous product types where component ordering determines access patterns. While our surface language does not include explicit tuple syntax, the pattern matching mechanism supports tuple-like destructuring of constructor arguments.

Recursive Types and Inductive Data Structures

Recursive type definitions enable the construction of arbitrarily large data structures through self-reference in constructor arguments. The declaration data List a = Nil | Cons a (List a) defines lists inductively, with the base case Nil and the recursive case Cons that references the type being defined.

Recursive types interact correctly with polymorphism, enabling the definition of generic container types that work uniformly across all element types. The list type List a demonstrates how recursive structure combines with parametric polymorphism to create flexible, reusable data abstractions.

Inductive types support well-founded recursion through pattern matching: recursive functions that process arbitrarily large data structures typically recurse on the structurally smaller components that their patterns extract, which is what makes termination arguments for such functions possible.

Kind Inference and Type Constructor Classification

Data type declarations automatically infer appropriate kinds for the defined type constructors based on their parameter structure and usage patterns. Simple types like Bool receive kind *, while parameterized types like Maybe receive kinds of the form * -> *.

Higher-kinded types emerge naturally from data declarations with multiple parameters or higher-order structure. The type Either receives kind * -> * -> *, indicating a type constructor that requires two type arguments to produce a complete type.

Kind inference propagates through type expressions, ensuring that type applications in data constructors receive appropriate kind annotations for use in the core language representation. This automatic kind inference eliminates the need for explicit kind annotations while maintaining the precision required for System F-ω’s type-level computation.
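The arity-to-kind mapping can be sketched as a simple fold over the parameter list: each type parameter of kind * contributes one arrow. A self-contained sketch using the Kind type from the core AST:

```rust
use std::fmt;

#[derive(Debug, Clone, PartialEq)]
enum Kind {
    Star,
    Arrow(Box<Kind>, Box<Kind>),
}

impl fmt::Display for Kind {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Kind::Star => write!(f, "*"),
            Kind::Arrow(k1, k2) => write!(f, "{} -> {}", k1, k2),
        }
    }
}

/// Kind of a data declaration with `n` type parameters, each of kind *.
fn kind_of_data(n: usize) -> Kind {
    (0..n).fold(Kind::Star, |acc, _| {
        Kind::Arrow(Box::new(Kind::Star), Box::new(acc))
    })
}

fn main() {
    assert_eq!(format!("{}", kind_of_data(0)), "*");           // data Bool
    assert_eq!(format!("{}", kind_of_data(1)), "* -> *");      // data Maybe a
    assert_eq!(format!("{}", kind_of_data(2)), "* -> * -> *"); // data Either a b
}
```

This covers only the common case where every parameter has kind *; parameters that are themselves type constructors would require full constraint-based kind inference.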

Elaboration Process

Elaboration forms the critical bridge between the user-friendly surface language and the theoretically precise core language, transforming high-level programming constructs into explicit System F-ω representations that enable sound type checking and compilation. This translation process handles the complex task of inferring and inserting all implicit type-level operations that the surface language deliberately omits for conciseness and programmer convenience.

The elaboration algorithm operates through multiple interdependent phases that work together to produce well-typed core language programs. Each phase builds upon the results of previous phases, creating a pipeline that systematically transforms surface constructs into their explicit core equivalents while maintaining type safety and semantic preservation.

Kind Inference and Type Constructor Analysis

Kind inference represents one of the most intricate aspects of elaboration, requiring analysis of type constructor usage patterns to determine their proper classification in the kind hierarchy. The process begins by analyzing type parameters in data declarations to determine the kinds of the type constructors being defined. A simple data type like Bool with no parameters receives kind \( \star \), indicating it represents a complete type that can classify terms.

Parameterized data types require more complex analysis to determine their proper kinds. The declaration data Maybe a = Nothing | Just a reveals that Maybe is a type constructor that takes one type argument, yielding kind \( \star \to \star \). Multi-parameter types like Either a b receive kinds of the form \( \star \to \star \to \star \), reflecting their need for multiple type arguments before producing complete types.

The kind inference algorithm propagates kind constraints through type applications and signatures, ensuring consistency across the entire program. When a type constructor appears in a type application like Maybe Int, the algorithm verifies that the argument Int has kind \( \star \) to match the expected parameter kind of Maybe. This constraint propagation catches kind errors early in the elaboration process, preventing malformed types from reaching the core language.

The final step makes explicit, in the core representation, the type-level abstractions that correspond to parameterized type constructors. The surface constructor Maybe behaves as the type-level function \( \lambda\alpha :: \star. \text{Maybe}\, \alpha \) in the core language, exposing the type-level computation that remains hidden in the surface syntax.

Type Inference and Polymorphic Instantiation

Surface programs deliberately omit many type-level operations to maintain readability and reduce annotation burden, requiring the elaboration process to infer and insert these operations automatically. The most complex aspect involves handling polymorphic functions, which use implicit universal quantification in the surface language but require explicit type abstractions and applications in the core.

Type abstraction insertion analyzes function definitions to identify polymorphic variables that must be abstracted at the type level. A surface function like identity x = x with inferred type a -> a becomes a core expression \( \Lambda\alpha :: \star. \lambda x : \alpha. x \), making explicit the type-level abstraction over the polymorphic variable a. This transformation ensures that the core representation captures the full generality of polymorphic functions.

Type application insertion occurs at call sites of polymorphic functions, where the surface language relies on type inference to determine appropriate instantiations. When a polymorphic function like identity is applied to a specific argument like 42, the elaboration process inserts the type application \( \text{identity}[\text{Int}]\, 42 \), making explicit the instantiation of the type parameter with Int.
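Instantiation itself is substitution of the argument type for the Forall-bound variable in the body. A minimal sketch over a trimmed CoreType; it handles shadowing only by leaving inner binders of the same name alone, and the fresh existential names generated during inference presumably make variable capture a non-issue in practice:

```rust
use std::fmt;

#[derive(Debug, Clone, PartialEq)]
enum CoreType {
    Var(String),
    Con(String),
    Arrow(Box<CoreType>, Box<CoreType>),
    Forall(String, Box<CoreType>),
}

impl fmt::Display for CoreType {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            CoreType::Var(name) | CoreType::Con(name) => write!(f, "{}", name),
            CoreType::Arrow(t1, t2) => write!(f, "{} -> {}", t1, t2),
            CoreType::Forall(var, ty) => write!(f, "∀{}. {}", var, ty),
        }
    }
}

/// Replace free occurrences of `var` with `arg`.
fn subst(ty: &CoreType, var: &str, arg: &CoreType) -> CoreType {
    match ty {
        CoreType::Var(v) if v == var => arg.clone(),
        CoreType::Var(_) | CoreType::Con(_) => ty.clone(),
        CoreType::Arrow(t1, t2) => CoreType::Arrow(
            Box::new(subst(t1, var, arg)),
            Box::new(subst(t2, var, arg)),
        ),
        CoreType::Forall(v, body) if v != var => {
            CoreType::Forall(v.clone(), Box::new(subst(body, var, arg)))
        }
        CoreType::Forall(_, _) => ty.clone(), // inner binder shadows `var`
    }
}

/// Instantiate a Forall at a concrete type: (∀a. T)[Int] = T[a := Int].
fn instantiate(ty: &CoreType, arg: &CoreType) -> Option<CoreType> {
    if let CoreType::Forall(v, body) = ty {
        Some(subst(body, v, arg))
    } else {
        None
    }
}

fn main() {
    // identity : ∀a. a -> a, instantiated at Int.
    let id_ty = CoreType::Forall(
        "a".to_string(),
        Box::new(CoreType::Arrow(
            Box::new(CoreType::Var("a".to_string())),
            Box::new(CoreType::Var("a".to_string())),
        )),
    );
    let inst = instantiate(&id_ty, &CoreType::Con("Int".to_string())).unwrap();
    assert_eq!(format!("{}", inst), "Int -> Int");
}
```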

The generation of existential variables handles unknown types that arise during inference, creating placeholders that the constraint solver can resolve later. When the elaboration process encounters expressions whose types cannot be determined immediately, it generates fresh existential variables and creates constraints that guide the solver toward appropriate instantiations.

Pattern Compilation and Constructor Analysis

Pattern matching in the surface language undergoes substantial transformation during elaboration, converting high-level pattern matching constructs into explicit case analysis that the core language can represent directly. This compilation process must handle the complex interactions between pattern structure, type information, and variable binding that make pattern matching both powerful and type-safe.

Constructor pattern analysis examines each pattern to determine the type of the scrutinee expression and the types of bound variables. A pattern like Cons x xs matching against List Int reveals that x has type Int and xs has type List Int. This type information gets embedded in the core representation to guide type checking and code generation.

The generation of core match expressions creates explicit case analysis constructs that enumerate all possible constructor alternatives. Surface pattern matches become core expressions of the form \( \text{match}\, e\, \{ \text{C}_1\, x_1 \to e_1; \ldots; \text{C}_n\, x_n \to e_n \} \), with each alternative explicitly typed and all constructor possibilities accounted for.

Pattern variable binding requires careful scope management to ensure that variables bound in patterns are available with the correct types in branch expressions. The elaboration process maintains binding contexts that track the types of pattern variables and ensures that core expressions correctly reference these bindings with appropriate type information.

Exhaustiveness checking during pattern compilation verifies that pattern sets cover all possible constructor alternatives, preventing runtime errors from unhandled cases. The elaboration process analyzes constructor declarations to determine the complete set of alternatives and reports errors when patterns are non-exhaustive, maintaining the type safety guarantees that System F-ω provides.

Design Rationale

The two-layer architecture of our System F-ω implementation represents a carefully considered approach to balancing theoretical precision with practical usability, addressing fundamental tensions that arise when building advanced type systems for real-world programming. This design philosophy emerges from the recognition that pure System F-ω, while theoretically elegant, imposes significant annotation burden that makes it impractical for everyday programming tasks.

Programmer Productivity and Cognitive Load Management

The surface language prioritizes programmer productivity by eliminating the explicit type-level operations that System F-ω requires while preserving all the expressive power of the underlying type system. Programmers can write natural, intuitive code using familiar algebraic data types and pattern matching without being forced to understand or manipulate higher-kinded types, type applications, or kind annotations directly.

This approach recognizes that cognitive load represents a scarce resource in software development. By hiding the complexity of type-level computation behind a clean surface syntax, we enable programmers to focus on problem-solving rather than wrestling with the mechanical details of type system operation. The surface language provides enough abstraction that polymorphic programming feels natural and obvious, even though the underlying elaboration process involves type inference and constraint solving.

The implicit quantification system exemplifies this philosophy. Where pure System F-ω requires explicit type abstractions like \( \Lambda\alpha :: \star. \lambda x : \alpha. x \), our surface language allows the simple definition identity x = x with automatic inference of the polymorphic type a -> a. This transformation eliminates tedious annotation while preserving full generality and type safety.

Type Safety and Theoretical Soundness

The core language ensures that all the theoretical guarantees of System F-ω remain intact by making every type operation explicit and checkable. This explicit representation enables rigorous type checking algorithms that can verify program correctness with mathematical precision, preventing the subtle errors that can arise when implicit operations are handled incorrectly.

By elaborating surface programs to core representations, we gain access to the full theoretical machinery of System F-ω, including decidable type checking, principal types, and strong normalization. The core language serves as a certificate of correctness, providing concrete evidence that surface programs satisfy all the constraints of the type system.

The explicit nature of core representations also enables advanced optimizations and program transformations that rely on precise type information. Code generators can exploit type-level information to produce efficient implementations, while program analysis tools can reason about program behavior with greater precision than would be possible with surface-level representations alone.

Modularity and Language Evolution

The separation between surface and core languages provides crucial modularity that enables independent evolution of user-facing features and theoretical foundations. Surface language features can be added, modified, or removed without affecting the core type checking algorithms, provided the elaboration process can translate them to appropriate core representations.

This modularity proves essential for language experimentation and extension. New surface constructs like syntactic sugar, alternative pattern matching styles, or domain-specific language features can be implemented entirely through elaboration without requiring changes to the type checker or core language semantics. This flexibility enables rapid prototyping of language features while maintaining implementation stability.

The core language can similarly evolve independently, with improvements to type checking algorithms, optimization passes, or code generation strategies affecting all surface programs automatically through the existing elaboration interface. This separation enables focused development where surface language designers can concentrate on usability while type theorists can focus on correctness and efficiency.

Debugging and Developer Understanding

The explicit core representations provide invaluable insight into type checker behavior, enabling developers to understand how their programs are interpreted by the type system. When type errors occur, the core representation shows exactly which type-level operations failed and why, providing much more precise diagnostic information than surface-level error reporting alone could offer.

This debugging capability extends to performance analysis, where core representations reveal the type-level computation overhead imposed by different programming patterns. Developers can examine core representations to understand which surface constructs generate expensive type-level operations and adjust their programming style accordingly.

The elaboration process itself serves as an educational tool, demonstrating how high-level programming constructs decompose into fundamental type theoretic operations. Students learning advanced type systems can examine the core representations of their programs to develop intuition about how polymorphism, higher-kinded types, and type-level computation actually work.

Balancing Expressiveness with Accessibility

Our design demonstrates that advanced type systems can be made accessible without sacrificing their theoretical foundations. The surface language captures the essential patterns that programmers want to express - polymorphic functions, generic data structures, type-safe pattern matching - while the core language ensures that these patterns have rigorous foundations in type theory.

This balance addresses a fundamental challenge in programming language design: how to provide the benefits of advanced type systems without requiring all programmers to become experts in type theory. By carefully choosing which aspects to make implicit and which to keep explicit, we create a system that scales from beginning programmers writing simple functions to expert developers implementing complex generic libraries.

The success of this approach validates the broader principle that programming language design should prioritize human factors alongside theoretical considerations. Technical elegance and mathematical precision remain essential, but they must be balanced against the practical realities of software development where programmer time, cognitive load, and learning curves represent real constraints that affect the ultimate utility of programming languages.

Through this two-layer architecture, we achieve a System F-ω implementation that provides the full power of higher-kinded polymorphism while remaining approachable for programmers who simply want to write correct, generic, type-safe code without becoming experts in the theoretical foundations that make such code possible.

Lexer and Parser

Our System F-ω implementation employs a carefully designed parsing strategy that contrasts sharply with the organically evolved complexity found in production Haskell implementations. While Haskell’s grammar has grown over decades through endless language extensions and “pragmatic” compromises, resulting in context-dependent parsing, whitespace sensitivity, intricate layout rules with virtual braces and semicolons, and a general clusterf***, our implementation opts for a much smaller surface language.

The Haskell language specification presents significant parsing challenges that have led to numerous implementation variations across different compilers. The language’s sensitivity to indentation creates a complex interaction between lexical analysis and parsing phases, where layout rules must insert implicit structure markers. Context-dependent syntax means that identical token sequences can parse differently depending on surrounding declarations, while the proliferation of language extensions has created a patchwork of parsing rules that interact in unexpected ways.

Our toy System F-ω implementation deliberately sidesteps these complexities by adopting explicit syntax with clear delimiters, enabling straightforward LALR(1) parsing that produces predictable results independent of context or whitespace variations.

Lexical Analysis Strategy

The lexical analysis phase transforms source text into a stream of tokens that the parser can process deterministically. Unlike Haskell’s context-sensitive lexer that must track indentation levels and insert layout tokens, our lexer operates as a pure finite state machine.

#![allow(unused)]
fn main() {
use std::fmt;
use logos::Logos;
pub struct Lexer<'input> {
    token_stream: logos::SpannedIter<'input, Token>,
    last_token: Option<Token>,
    pending_token: Option<(usize, Token, usize)>, // Token to emit after semicolon
    just_inserted_semicolon: bool,                // Track if we just inserted a semicolon
}
impl<'input> Lexer<'input> {
    pub fn new(input: &'input str) -> Self {
        Self {
            token_stream: Token::lexer(input).spanned(),
            last_token: None,
            pending_token: None,
            just_inserted_semicolon: false,
        }
    }
}
impl fmt::Display for Token {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Token::Data => write!(f, "data"),
            Token::Forall => write!(f, "forall"),
            Token::Match => write!(f, "match"),
            Token::If => write!(f, "if"),
            Token::Then => write!(f, "then"),
            Token::Else => write!(f, "else"),
            Token::Arrow => write!(f, "->"),
            Token::DoubleColon => write!(f, "::"),
            Token::Equal => write!(f, "="),
            Token::Plus => write!(f, "+"),
            Token::Minus => write!(f, "-"),
            Token::Star => write!(f, "*"),
            Token::Slash => write!(f, "/"),
            Token::LessEqual => write!(f, "<="),
            Token::Less => write!(f, "<"),
            Token::Pipe => write!(f, "|"),
            Token::Underscore => write!(f, "_"),
            Token::Dot => write!(f, "."),
            Token::Comma => write!(f, ","),
            Token::Backslash => write!(f, "\\"),
            Token::LeftParen => write!(f, "("),
            Token::RightParen => write!(f, ")"),
            Token::LeftBrace => write!(f, "{{"),
            Token::RightBrace => write!(f, "}}"),
            Token::Ident(s) => write!(f, "{}", s),
            Token::Integer(n) => write!(f, "{}", n),
            Token::Semicolon => write!(f, ";"),
            Token::Newline => write!(f, "\\n"),
            Token::Error => write!(f, "ERROR"),
        }
    }
}
#[derive(Debug, Clone, PartialEq)]
pub enum LexicalError {
    InvalidToken,
}
impl fmt::Display for LexicalError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            LexicalError::InvalidToken => write!(f, "Invalid token"),
        }
    }
}
pub type Spanned<Tok, Loc, Error> = Result<(Loc, Tok, Loc), Error>;
impl<'input> Iterator for Lexer<'input> {
    type Item = Spanned<Token, usize, LexicalError>;

    fn next(&mut self) -> Option<Self::Item> {
        // If we have a pending token, return it
        if let Some((start, token, end)) = self.pending_token.take() {
            self.last_token = Some(token.clone());
            self.just_inserted_semicolon = false;
            return Some(Ok((start, token, end)));
        }

        match self.token_stream.next()? {
            (Ok(Token::Newline), span) => {
                // Check if we should insert a semicolon before this newline
                // But don't insert if we just inserted one
                if !self.just_inserted_semicolon {
                    if let Some(ref last) = self.last_token {
                        if should_insert_semicolon(last) {
                            self.just_inserted_semicolon = true;
                            return Some(Ok((span.start, Token::Semicolon, span.start)));
                        }
                    }
                }
                // Skip newlines and continue
                self.just_inserted_semicolon = false;
                self.next()
            }
            (Ok(token), span) => {
                // Check for semicolon insertion at statement boundaries
                // But don't insert if we just inserted one
                if !self.just_inserted_semicolon {
                    if let Some(ref last) = self.last_token {
                        if should_insert_semicolon(last) && needs_semicolon_before(&token) {
                            // Store the current token for the next call, and return a semicolon
                            self.pending_token = Some((span.start, token, span.end));
                            self.just_inserted_semicolon = true;
                            return Some(Ok((span.start, Token::Semicolon, span.start)));
                        }
                    }
                }

                self.last_token = Some(token.clone());
                self.just_inserted_semicolon = false;
                Some(Ok((span.start, token, span.end)))
            }
            (Err(_), _span) => Some(Err(LexicalError::InvalidToken)),
        }
    }
}
fn should_insert_semicolon(token: &Token) -> bool {
    matches!(
        token,
        Token::Ident(_) | Token::Integer(_) | Token::RightParen | Token::RightBrace
    )
}
fn needs_semicolon_before(token: &Token) -> bool {
    matches!(token, Token::Data | Token::If | Token::Match)
    // Note: Removed Token::Ident(_) to prevent semicolon insertion in function
    // parameters
}
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_simple_tokens() {
        let lexer = Lexer::new("data Bool = True | False");
        let tokens: Vec<_> = lexer.collect();

        // Check that we get the expected tokens without errors
        assert!(tokens.iter().all(|t| t.is_ok()));

        let token_values: Vec<_> = tokens.into_iter().map(|t| t.unwrap().1).collect();
        assert_eq!(
            token_values,
            vec![
                Token::Data,
                Token::Ident("Bool".to_string()),
                Token::Equal,
                Token::Ident("True".to_string()),
                Token::Pipe,
                Token::Ident("False".to_string()),
            ]
        );
    }

    #[test]
    fn test_function_definition() {
        let lexer = Lexer::new("fib :: Int -> Int");
        let tokens: Vec<_> = lexer.map(|t| t.unwrap().1).collect();

        assert_eq!(
            tokens,
            vec![
                Token::Ident("fib".to_string()),
                Token::DoubleColon,
                Token::Ident("Int".to_string()),
                Token::Arrow,
                Token::Ident("Int".to_string()),
            ]
        );
    }

    #[test]
    fn test_arithmetic() {
        let lexer = Lexer::new("1 + 2 * 3 <= 42");
        let tokens: Vec<_> = lexer.map(|t| t.unwrap().1).collect();

        assert_eq!(
            tokens,
            vec![
                Token::Integer(1),
                Token::Plus,
                Token::Integer(2),
                Token::Star,
                Token::Integer(3),
                Token::LessEqual,
                Token::Integer(42),
            ]
        );
    }

    #[test]
    fn test_comments_ignored() {
        let lexer = Lexer::new("fib -- This is a comment\n:: Int");
        let tokens: Vec<_> = lexer.map(|t| t.unwrap().1).collect();

        assert_eq!(
            tokens,
            vec![
                Token::Ident("fib".to_string()),
                Token::Semicolon, // Auto-inserted at newline
                Token::DoubleColon,
                Token::Ident("Int".to_string()),
            ]
        );
    }

    #[test]
    fn test_auto_semicolon_insertion() {
        let lexer = Lexer::new("data Bool = True | False\nfib n = 42");
        let tokens: Vec<_> = lexer.map(|t| t.unwrap().1).collect();

        // Should insert semicolon after "False" before "fib"
        assert!(tokens.contains(&Token::Semicolon));

        // Check that we have the right tokens in order
        assert_eq!(tokens[0], Token::Data);
        assert_eq!(tokens[1], Token::Ident("Bool".to_string()));
        assert_eq!(tokens[2], Token::Equal);
        assert_eq!(tokens[3], Token::Ident("True".to_string()));
        assert_eq!(tokens[4], Token::Pipe);
        assert_eq!(tokens[5], Token::Ident("False".to_string()));
        assert_eq!(tokens[6], Token::Semicolon); // Auto-inserted
        assert_eq!(tokens[7], Token::Ident("fib".to_string()));
        assert_eq!(tokens[8], Token::Ident("n".to_string()));
        assert_eq!(tokens[9], Token::Equal);
        assert_eq!(tokens[10], Token::Integer(42));
    }

    #[test]
    fn test_function_parameter_tokens() {
        let lexer = Lexer::new("id x = x;");
        let tokens: Vec<_> = lexer.map(|t| t.unwrap().1).collect();

        println!("Function parameter tokens: {:?}", tokens);

        assert_eq!(tokens[0], Token::Ident("id".to_string()));
        assert_eq!(tokens[1], Token::Ident("x".to_string()));
        assert_eq!(tokens[2], Token::Equal);
        assert_eq!(tokens[3], Token::Ident("x".to_string()));
        assert_eq!(tokens[4], Token::Semicolon);
    }
}
#[derive(Logos, Debug, Clone, PartialEq)]
pub enum Token {
    // Keywords
    #[token("data")]
    Data,
    #[token("forall")]
    Forall,
    #[token("match")]
    Match,
    #[token("if")]
    If,
    #[token("then")]
    Then,
    #[token("else")]
    Else,

    // Operators and delimiters
    #[token("->")]
    Arrow,
    #[token("::")]
    DoubleColon,
    #[token("=")]
    Equal,
    #[token("+")]
    Plus,
    #[token("-")]
    Minus,
    #[token("*")]
    Star,
    #[token("/")]
    Slash,
    #[token("<=")]
    LessEqual,
    #[token("<")]
    Less,
    #[token("|")]
    Pipe,
    #[token("_", priority = 2)]
    Underscore,
    #[token(".")]
    Dot,
    #[token(",")]
    Comma,
    #[token("\\")]
    Backslash,

    // Delimiters
    #[token("(")]
    LeftParen,
    #[token(")")]
    RightParen,
    #[token("{")]
    LeftBrace,
    #[token("}")]
    RightBrace,

    // Identifiers and literals
    #[regex(r"[a-zA-Z][a-zA-Z0-9_']*", |lex| lex.slice().to_string(), priority = 1)]
    Ident(String),

    #[regex(r"[0-9]+", |lex| lex.slice().parse::<i64>().unwrap_or(0))]
    Integer(i64),

    // Automatic semicolon insertion
    #[token(";")]
    Semicolon,

    // Special tokens
    Newline,

    // Skip whitespace and comments
    #[regex(r"[ \t\f]+", logos::skip)]
    #[regex(r"--[^\r\n]*", logos::skip)]
    #[token("\r\n", |_| Token::Newline)]
    #[token("\n", |_| Token::Newline)]
    Error,
}
}

The lexer recognizes several categories of tokens that capture the essential elements of our surface language. Keywords like data, match, and forall establish the syntactic framework for declarations and expressions. Identifiers cover type variables, term variables, and constructor names alike; the lexer emits a single Ident token class, and the capitalization conventions that distinguish constructors from variables are resolved later during parsing.

Operators receive special treatment to handle the function arrow (->) and type signature marker (::), both crucial for expressing types and function signatures. Delimiters including parentheses, braces, and semicolons provide explicit structure that eliminates the ambiguity inherent in layout-based syntax.

Integer literals complete the token vocabulary, providing the primitive values that serve as the foundation for more complex expressions. Booleans like True and False arrive as ordinary constructor identifiers rather than dedicated literal tokens.

This is not ALL of Haskell, but it’s enough to do non-trivial typechecking over. And that’s all we really need. The full language is, to put it mildly, a bit too large and “organic”.
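To see the automatic semicolon insertion rule in isolation, here is a standalone approximation that splits on whitespace instead of using real tokens. The names insert_semicolons and ends_statement are illustrative, not the lexer’s actual code path; the real version works on Token values from logos, as shown above.

```rust
// Whether a token can legally end a statement. This mirrors the spirit of
// should_insert_semicolon above: identifiers, integers, ')' and '}'.
fn ends_statement(tok: &str) -> bool {
    tok.ends_with(')')
        || tok.ends_with('}')
        || tok
            .chars()
            .all(|c| c.is_ascii_alphanumeric() || c == '_' || c == '\'')
}

// Toy version: at each newline, if the last token on the line could end a
// statement, emit a ";" before continuing.
fn insert_semicolons(src: &str) -> Vec<String> {
    let mut out = Vec::new();
    for line in src.lines() {
        let toks: Vec<&str> = line.split_whitespace().collect();
        out.extend(toks.iter().map(|t| t.to_string()));
        if toks.last().map_or(false, |t| ends_statement(t)) {
            out.push(";".to_string());
        }
    }
    out
}

fn main() {
    let toks = insert_semicolons("data Bool = True | False\nfib n = 42");
    assert_eq!(toks[6], ";"); // inserted after `False`, before `fib`
    assert_eq!(toks[7], "fib");
}
```

The real lexer additionally has to suppress double insertion and avoid inserting between function parameters, which is what the just_inserted_semicolon and pending_token bookkeeping above handles.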

Bidirectional Type Checking

The type inference engine represents the most intricate component of our System F-ω implementation. Built around the DK (Dunfield-Krishnaswami) worklist algorithm, it handles the complex interactions between higher-rank polymorphism, existential type variables, and bidirectional type checking that make System F-ω both powerful and challenging to implement efficiently.

The algorithm operates on a worklist of judgments and constraints, systematically reducing complex type checking problems into simpler ones until all constraints are resolved. This approach provides decidable type inference for System F-ω while maintaining principal types and supporting polymorphic programming patterns.

Notably, this approach differs from the direct tree-walking approach we used before. Now we walk the syntax tree and emit constraints, which are solved by the worklist algorithm; the results are then back-propagated to the original syntax tree to give the final types. This design allows more complex rules to be introduced, but it also makes the entire process “non-local”: a single constraint may be generated by many different expressions, so error reporting becomes more challenging, because tracing a failed constraint back to its originating syntax requires quite a bit of bookkeeping.
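To make the two-phase idea concrete, here is a deliberately tiny sketch, separate from the actual DKInference code, of emitting equality constraints and then solving them afterwards. It omits the occurs check and everything else a real solver needs; Ty, solve, and apply are illustrative names.

```rust
#[derive(Debug, Clone, PartialEq)]
enum Ty {
    Int,
    Var(usize),                 // unification variable, written ?n
    Arrow(Box<Ty>, Box<Ty>),
}

// Apply the current substitution to a type.
fn apply(subst: &[(usize, Ty)], t: Ty) -> Ty {
    match t {
        Ty::Var(v) => match subst.iter().find(|(u, _)| *u == v) {
            Some((_, t2)) => apply(subst, t2.clone()),
            None => Ty::Var(v),
        },
        Ty::Arrow(a, b) => Ty::Arrow(Box::new(apply(subst, *a)), Box::new(apply(subst, *b))),
        Ty::Int => Ty::Int,
    }
}

// Phase two: pop constraints off a worklist and reduce them until done.
fn solve(mut constraints: Vec<(Ty, Ty)>) -> Option<Vec<(usize, Ty)>> {
    let mut subst: Vec<(usize, Ty)> = Vec::new();
    while let Some((a, b)) = constraints.pop() {
        match (apply(&subst, a), apply(&subst, b)) {
            (x, y) if x == y => {}
            (Ty::Var(v), t) | (t, Ty::Var(v)) => subst.push((v, t)),
            (Ty::Arrow(a1, a2), Ty::Arrow(b1, b2)) => {
                constraints.push((*a1, *b1));
                constraints.push((*a2, *b2));
            }
            _ => return None, // type error: constructor mismatch
        }
    }
    Some(subst)
}

fn main() {
    // Phase one (done elsewhere by walking the tree) produced:
    //   ?0 = Int -> Int   and   ?0 = Int -> ?1
    let constraints = vec![
        (Ty::Var(0), Ty::Arrow(Box::new(Ty::Int), Box::new(Ty::Int))),
        (Ty::Var(0), Ty::Arrow(Box::new(Ty::Int), Box::new(Ty::Var(1)))),
    ];
    let subst = solve(constraints).expect("constraints are solvable");
    assert_eq!(apply(&subst, Ty::Var(1)), Ty::Int);
}
```

The real worklist carries judgments (subtyping, inference, checking) rather than bare equalities, but the control flow is the same: accumulate, pop, decompose, repeat until empty.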

Typing Rules

Now we’re going to boldly go into the next generation of type checking. We have some new symbols in our type system, but they’re mostly the same as before except for the addition of new kind-level operations.

  • \( \Gamma \) (Gamma) - The typing context that stores information about variables and their types.

  • \( \vdash \) (Turnstile) - The “proves” relation, meaning “given this context, we can conclude this judgment.”

  • \( \Rightarrow \) (Double Right Arrow) - Synthesis mode, where the type checker figures out what type an expression has.

  • \( \Leftarrow \) (Double Left Arrow) - Checking mode, where we verify that an expression has the expected type.

  • \( \forall\alpha \) (Forall Alpha) - Universal quantification, meaning “for any type α.”

  • \( :: \) (Double Colon) - The “has kind” relation, telling us what category a type constructor belongs to.

  • \( \star \) (Star) - The kind of concrete types like Int, Bool, and String.

  • \( \square \) (Box) - The kind of kinds, the kind that kinds themselves have.

  • \( \to \) (Arrow) - Function types at the term level, or kind arrows at the type level.

  • \( [B/\alpha]A \) - Type substitution that replaces all occurrences of type variable α with type B in type A.

  • \( \Lambda\alpha \) (Big Lambda) - Type abstraction that creates polymorphic functions at the type level.

  • \( \kappa \) (Kappa) - Kind variables representing unknown categories in our type system.

  • \( \overline{x} \) - Overline notation indicating a sequence or list of elements, such as multiple variables or arguments.
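As a concrete reference for the \( [B/\alpha]A \) notation, here is a minimal sketch of type substitution over a simplified grammar. The real CoreType has more cases (constructors, applications, existentials), and a production implementation must also rename binders to avoid variable capture; the names here are illustrative.

```rust
#[derive(Debug, Clone, PartialEq)]
enum Ty {
    Var(String),
    Arrow(Box<Ty>, Box<Ty>),
    Forall(String, Box<Ty>),
}

// subst(alpha, b, a) computes [B/α]A: replace free occurrences of α in A with B.
fn subst(alpha: &str, b: &Ty, a: &Ty) -> Ty {
    match a {
        Ty::Var(x) if x.as_str() == alpha => b.clone(),
        Ty::Var(_) => a.clone(),
        Ty::Arrow(l, r) => Ty::Arrow(
            Box::new(subst(alpha, b, l)),
            Box::new(subst(alpha, b, r)),
        ),
        // An inner binder for the same name shadows α, so stop substituting.
        Ty::Forall(x, _) if x.as_str() == alpha => a.clone(),
        Ty::Forall(x, body) => Ty::Forall(x.clone(), Box::new(subst(alpha, b, body))),
    }
}

fn main() {
    // [Int/a](a -> b)  ==  Int -> b   (Int modeled as a free variable here)
    let a_to_b = Ty::Arrow(
        Box::new(Ty::Var("a".to_string())),
        Box::new(Ty::Var("b".to_string())),
    );
    let result = subst("a", &Ty::Var("Int".to_string()), &a_to_b);
    assert_eq!(
        result,
        Ty::Arrow(
            Box::new(Ty::Var("Int".to_string())),
            Box::new(Ty::Var("b".to_string())),
        )
    );
}
```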

Before examining the DK worklist algorithm implementation, let’s establish the formal typing rules that govern System F-ω. These rules extend System F with higher-kinded types and higher-rank polymorphism.

Basic Expression Rules

Variable lookup finds types in the context:

\[ \frac{x : A \in \Gamma}{\Gamma \vdash x \Rightarrow A} \text{(T-Var)} \]

Function application with bidirectional checking:

\[ \frac{\Gamma \vdash e_1 \Rightarrow A \to B \quad \Gamma \vdash e_2 \Leftarrow A}{\Gamma \vdash e_1 \; e_2 \Rightarrow B} \text{(T-App)} \]

Lambda abstraction in checking mode:

\[ \frac{\Gamma, x : A \vdash e \Leftarrow B}{\Gamma \vdash \lambda x. e \Leftarrow A \to B} \text{(T-Lam)} \]

Type Application and Abstraction

Type application instantiates polymorphic expressions:

\[ \frac{\Gamma \vdash e \Rightarrow \forall \alpha :: \kappa. A \quad \Gamma \vdash B :: \kappa}{\Gamma \vdash e \; [B] \Rightarrow [B/\alpha]A} \text{(T-TApp)} \]

Type abstraction introduces universal quantification:

\[ \frac{\Gamma, \alpha :: \kappa \vdash e \Leftarrow A}{\Gamma \vdash \Lambda \alpha :: \kappa. e \Leftarrow \forall \alpha :: \kappa. A} \text{(T-TAbs)} \]

Higher-Kinded Type Rules

Kind checking ensures type constructors are well-formed:

\[ \frac{}{\Gamma \vdash \star :: \square} \text{(K-Type)} \]

\[ \frac{\Gamma \vdash \kappa_1 :: \square \quad \Gamma \vdash \kappa_2 :: \square}{\Gamma \vdash \kappa_1 \to \kappa_2 :: \square} \text{(K-Arrow)} \]

Type constructor application:

\[ \frac{\Gamma \vdash F :: \kappa_1 \to \kappa_2 \quad \Gamma \vdash A :: \kappa_1}{\Gamma \vdash F \; A :: \kappa_2} \text{(K-App)} \]
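The kinding rules read directly as a recursive function: look up the kind of a constructor, and for an application check that the argument’s kind matches the domain of the constructor’s kind arrow. The following is a minimal sketch over a simplified grammar, not the actual implementation; Kind, Ty, and kind_of are illustrative names.

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
enum Kind {
    Star,                        // the kind * of concrete types
    Arrow(Box<Kind>, Box<Kind>), // kind arrows like * -> *
}

#[derive(Debug, Clone)]
enum Ty {
    Con(String),           // e.g. Int :: *, Maybe :: * -> *
    App(Box<Ty>, Box<Ty>), // type constructor application F A
}

fn kind_of(env: &HashMap<String, Kind>, t: &Ty) -> Result<Kind, String> {
    match t {
        Ty::Con(name) => env
            .get(name)
            .cloned()
            .ok_or_else(|| format!("unknown type constructor {}", name)),
        // K-App: if F :: k1 -> k2 and A :: k1, then F A :: k2.
        Ty::App(f, a) => {
            let fk = kind_of(env, f)?;
            let ak = kind_of(env, a)?;
            match fk {
                Kind::Arrow(k1, k2) if *k1 == ak => Ok(*k2),
                other => Err(format!(
                    "cannot apply type of kind {:?} to argument of kind {:?}",
                    other, ak
                )),
            }
        }
    }
}

fn main() {
    let mut env = HashMap::new();
    env.insert("Int".to_string(), Kind::Star);
    env.insert(
        "Maybe".to_string(),
        Kind::Arrow(Box::new(Kind::Star), Box::new(Kind::Star)),
    );
    // Maybe Int is a well-kinded concrete type: Maybe Int :: *.
    let t = Ty::App(Box::new(Ty::Con("Maybe".to_string())), Box::new(Ty::Con("Int".to_string())));
    assert_eq!(kind_of(&env, &t).unwrap(), Kind::Star);
    // Int Int is ill-kinded: Int has kind *, not an arrow kind.
    let bad = Ty::App(Box::new(Ty::Con("Int".to_string())), Box::new(Ty::Con("Int".to_string())));
    assert!(kind_of(&env, &bad).is_err());
}
```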

Pattern Matching Rules

Constructor patterns check against data types:

\[ \frac{\Gamma \vdash e \Rightarrow T \quad \overline{\Gamma \vdash p_i \Leftarrow T \leadsto \Delta_i \quad \Gamma, \Delta_i \vdash e_i \Leftarrow A}}{\Gamma \vdash \text{match } e \text{ with } \overline{p_i \to e_i} \Rightarrow A} \text{(T-Match)} \]

Bidirectional Control

Mode switching from inference to checking:

\[ \frac{\Gamma \vdash e \Rightarrow A}{\Gamma \vdash e \Leftarrow A} \text{(T-Sub)} \]

Existential variable instantiation:

\[ \frac{\Gamma, \hat{\alpha} :: \kappa \vdash e \Rightarrow A}{\Gamma \vdash e \Rightarrow [\hat{\alpha}/\alpha]A} \text{(T-InstL)} \]

In these rules, \( \Rightarrow \) denotes inference mode (synthesizing types), \( \Leftarrow \) denotes checking mode (verifying against expected types), \( :: \) relates types to their kinds, and \( \hat{\alpha} \) represents existential type variables solved during inference.

The DK Worklist Algorithm

The DK algorithm represents type inference as a constraint solving problem. Instead of recursively traversing expressions and immediately making type checking decisions, it accumulates constraints in a worklist and processes them systematically.

Worklist Structure

#![allow(unused)]
fn main() {
use std::collections::HashMap;
use crate::core::{CoreBinOp, CorePattern, CoreTerm, CoreType};
use crate::errors::{TypeError, TypeResult};
/// DK Worklist Algorithm for System-F-ω
/// Based on "A Mechanical Formalization of Higher-Ranked Polymorphic Type
/// Inference"
pub type TyVar = String;
pub type TmVar = String;
#[derive(Debug, Clone, PartialEq)]
pub enum TyVarKind {
    /// Universal type variable: α
    Universal,
    /// Existential type variable: ^α
    Existential,
    /// Solved existential: ^α = τ
    Solved(CoreType),
    /// Marker: ►α (for scoping)
    Marker,
}
#[derive(Debug, Clone, PartialEq)]
pub enum Judgment {
    /// Subtyping: A <: B
    Sub { left: CoreType, right: CoreType },
    /// Type inference: e ⊢ A
    Inf { term: CoreTerm, ty: CoreType },
    /// Type checking: e ⇐ A
    Chk { term: CoreTerm, ty: CoreType },
    /// Application inference helper: A • e ⊢ C
    InfApp {
        func_ty: CoreType,
        arg: CoreTerm,
        result_ty: CoreType,
    },
}
#[derive(Debug, Clone)]
pub struct Worklist {
    entries: Vec<WorklistEntry>,
    next_var: usize,
}
impl Default for Worklist {
    fn default() -> Self {
        Self::new()
    }
}
impl Worklist {
    pub fn new() -> Self {
        Worklist {
            entries: Vec::new(),
            next_var: 0,
        }
    }

    pub fn fresh_var(&mut self) -> TyVar {
        let var = format!("α{}", self.next_var);
        self.next_var += 1;
        var
    }

    pub fn fresh_evar(&mut self) -> TyVar {
        let var = format!("^α{}", self.next_var);
        self.next_var += 1;
        var
    }

    pub fn push(&mut self, entry: WorklistEntry) {
        self.entries.push(entry);
    }

    pub fn pop(&mut self) -> Option<WorklistEntry> {
        self.entries.pop()
    }

    pub fn find_var(&self, name: &str) -> Option<&CoreType> {
        for entry in self.entries.iter().rev() {
            if let WorklistEntry::Var(var_name, ty) = entry {
                if var_name == name {
                    return Some(ty);
                }
            }
        }
        None
    }

    pub fn solve_evar(&mut self, name: &str, ty: CoreType) -> TypeResult<()> {
        for entry in self.entries.iter_mut() {
            if let WorklistEntry::TVar(var_name, kind) = entry {
                if var_name == name {
                    match kind {
                        TyVarKind::Existential => {
                            *kind = TyVarKind::Solved(ty);
                            return Ok(());
                        }
                        TyVarKind::Solved(_) => {
                            // Variable already solved, that's OK
                            return Ok(());
                        }
                        _ => {
                            // Skip universal variables, markers, etc.
                            continue;
                        }
                    }
                }
            }
        }
        Err(TypeError::UnboundVariable {
            name: name.to_string(),
            span: None,
        })
    }

    pub fn before(&self, a: &str, b: &str) -> bool {
        let mut pos_a = None;
        let mut pos_b = None;

        for (i, entry) in self.entries.iter().enumerate() {
            if let WorklistEntry::TVar(name, _) = entry {
                if name == a {
                    pos_a = Some(i);
                }
                if name == b {
                    pos_b = Some(i);
                }
            }
        }

        match (pos_a, pos_b) {
            (Some(pa), Some(pb)) => pa < pb,
            _ => false,
        }
    }
}
pub struct DKInference {
    worklist: Worklist,
    trace: Vec<String>,
    data_constructors: HashMap<String, CoreType>,
    /// Variable typing context for pattern-bound variables
    var_context: HashMap<String, CoreType>,
}
impl DKInference {
    pub fn with_context(
        data_constructors: HashMap<String, CoreType>,
        var_context: HashMap<String, CoreType>,
    ) -> Self {
        DKInference {
            worklist: Worklist::new(),
            trace: Vec::new(),
            data_constructors,
            var_context,
        }
    }

    pub fn check_type(&mut self, term: &CoreTerm, expected_ty: &CoreType) -> TypeResult<()> {
        self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
            term: term.clone(),
            ty: expected_ty.clone(),
        }));

        self.solve()
    }

    fn solve(&mut self) -> TypeResult<()> {
        while let Some(entry) = self.worklist.pop() {
            match entry {
                WorklistEntry::TVar(_, _) => {
                    // Skip variable bindings during processing
                    continue;
                }
                WorklistEntry::Var(_, _) => {
                    // Skip term variable bindings during processing
                    continue;
                }
                WorklistEntry::Judgment(judgment) => {
                    self.solve_judgment(judgment)?;
                }
            }
        }
        Ok(())
    }

    fn solve_judgment(&mut self, judgment: Judgment) -> TypeResult<()> {
        match judgment {
            Judgment::Sub { left, right } => self.solve_subtype(left, right),
            Judgment::Inf { term, ty } => self.solve_inference(term, ty),
            Judgment::Chk { term, ty } => self.solve_checking(term, ty),
            Judgment::InfApp {
                func_ty,
                arg,
                result_ty,
            } => self.solve_inf_app(func_ty, arg, result_ty),
        }
    }

    fn solve_subtype(&mut self, left: CoreType, right: CoreType) -> TypeResult<()> {
        self.trace.push(format!("Sub {} <: {}", left, right));

        if left == right {
            return Ok(());
        }

        match (&left, &right) {
            // Reflexivity
            (CoreType::Con(a), CoreType::Con(b)) if a == b => Ok(()),
            (CoreType::Var(a), CoreType::Var(b)) if a == b => Ok(()),
            (CoreType::ETVar(a), CoreType::ETVar(b)) if a == b => Ok(()),

            // Function subtyping (contravariant in argument, covariant in result)
            (CoreType::Arrow(a1, a2), CoreType::Arrow(b1, b2)) => {
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *b1.clone(),
                    right: *a1.clone(),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *a2.clone(),
                    right: *b2.clone(),
                }));
                Ok(())
            }

            // Application subtyping (covariant in both components)
            (CoreType::App(a1, a2), CoreType::App(b1, b2)) => {
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *a1.clone(),
                    right: *b1.clone(),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *a2.clone(),
                    right: *b2.clone(),
                }));
                Ok(())
            }

            // Forall right
            (_, CoreType::Forall(var, ty)) => {
                let fresh_var = self.worklist.fresh_var();
                self.worklist
                    .push(WorklistEntry::TVar(fresh_var.clone(), TyVarKind::Universal));
                let substituted_ty = self.substitute_type(var, &CoreType::Var(fresh_var), ty);
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left,
                    right: substituted_ty,
                }));
                Ok(())
            }

            // Forall left
            (CoreType::Forall(var, ty), _) => {
                let fresh_evar = self.worklist.fresh_evar();
                self.worklist
                    .push(WorklistEntry::TVar(fresh_evar.clone(), TyVarKind::Marker));
                self.worklist.push(WorklistEntry::TVar(
                    fresh_evar.clone(),
                    TyVarKind::Existential,
                ));
                let substituted_ty = self.substitute_type(var, &CoreType::ETVar(fresh_evar), ty);
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: substituted_ty,
                    right,
                }));
                Ok(())
            }

            // Existential variable instantiation
            (CoreType::ETVar(a), _) if !self.occurs_check(a, &right) => {
                self.instantiate_left(a, &right)
            }
            (_, CoreType::ETVar(a)) if !self.occurs_check(a, &left) => {
                self.instantiate_right(&left, a)
            }

            _ => Err(TypeError::SubtypingError {
                left,
                right,
                span: None,
            }),
        }
    }

    fn solve_inference(&mut self, term: CoreTerm, ty: CoreType) -> TypeResult<()> {
        self.trace
            .push(format!("Inf {} ⊢ {}", self.term_to_string(&term), ty));

        match term {
            CoreTerm::Var(name) => {
                // Check pattern variable context first
                if let Some(var_ty) = self.var_context.get(&name).cloned() {
                    self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                        left: var_ty,
                        right: ty,
                    }));
                    Ok(())
                } else if let Some(var_ty) = self.worklist.find_var(&name).cloned() {
                    self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                        left: var_ty,
                        right: ty,
                    }));
                    Ok(())
                } else if let Some(var_ty) = self.data_constructors.get(&name).cloned() {
                    // Check data constructors for constructor variables
                    self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                        left: var_ty,
                        right: ty,
                    }));
                    Ok(())
                } else {
                    Err(TypeError::UnboundVariable {
                        name: name.to_string(),
                        span: None,
                    })
                }
            }

            CoreTerm::LitInt(_) => {
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: CoreType::Con("Int".to_string()),
                    right: ty,
                }));
                Ok(())
            }

            CoreTerm::Lambda {
                param,
                param_ty,
                body,
            } => {
                let result_ty = CoreType::ETVar(self.worklist.fresh_evar());
                if let CoreType::ETVar(var_name) = &result_ty {
                    self.worklist.push(WorklistEntry::TVar(
                        var_name.clone(),
                        TyVarKind::Existential,
                    ));
                }

                let arrow_ty =
                    CoreType::Arrow(Box::new(param_ty.clone()), Box::new(result_ty.clone()));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: arrow_ty,
                    right: ty,
                }));

                self.worklist.push(WorklistEntry::Var(param, param_ty));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                    term: *body,
                    ty: result_ty,
                }));

                Ok(())
            }

            CoreTerm::App { func, arg } => {
                let func_ty = CoreType::ETVar(self.worklist.fresh_evar());
                if let CoreType::ETVar(var_name) = &func_ty {
                    self.worklist.push(WorklistEntry::TVar(
                        var_name.clone(),
                        TyVarKind::Existential,
                    ));
                }

                self.worklist
                    .push(WorklistEntry::Judgment(Judgment::InfApp {
                        func_ty: func_ty.clone(),
                        arg: *arg,
                        result_ty: ty,
                    }));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                    term: *func,
                    ty: func_ty,
                }));

                Ok(())
            }

            CoreTerm::TypeLambda { param, body } => {
                let body_ty = CoreType::ETVar(self.worklist.fresh_evar());
                if let CoreType::ETVar(var_name) = &body_ty {
                    self.worklist.push(WorklistEntry::TVar(
                        var_name.clone(),
                        TyVarKind::Existential,
                    ));
                }

                let forall_ty = CoreType::Forall(param.clone(), Box::new(body_ty.clone()));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: forall_ty,
                    right: ty,
                }));

                self.worklist
                    .push(WorklistEntry::TVar(param, TyVarKind::Universal));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                    term: *body,
                    ty: body_ty,
                }));

                Ok(())
            }

            CoreTerm::Constructor { name, args: _ } => {
                if let Some(constructor_ty) = self.data_constructors.get(&name) {
                    // Instantiate the constructor type with fresh existential variables
                    let instantiated_ty = self.instantiate_constructor_type(constructor_ty.clone());

                    self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                        left: instantiated_ty,
                        right: ty,
                    }));
                    Ok(())
                } else {
                    Err(TypeError::UnboundDataConstructor {
                        name: name.clone(),
                        span: None,
                    })
                }
            }

            CoreTerm::BinOp { op, left, right } => {
                let (left_ty, right_ty, result_ty) = self.infer_binop_types(&op);

                // Add any existential variables to the worklist
                self.add_etvars_to_worklist(&left_ty);
                self.add_etvars_to_worklist(&right_ty);
                self.add_etvars_to_worklist(&result_ty);

                // Check that the operands have the expected types
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *left,
                    ty: left_ty,
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *right,
                    ty: right_ty,
                }));

                // Check that the result type matches the expected type
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: result_ty,
                    right: ty,
                }));

                Ok(())
            }

            CoreTerm::If {
                cond,
                then_branch,
                else_branch,
            } => {
                // The condition must be Bool
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *cond,
                    ty: CoreType::Con("Bool".to_string()),
                }));

                // Both branches must have the same type as the expected result
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *then_branch,
                    ty: ty.clone(),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *else_branch,
                    ty,
                }));

                Ok(())
            }

            CoreTerm::Case { scrutinee, arms } => {
                // Create a fresh type variable for the scrutinee
                let scrutinee_ty = CoreType::ETVar(self.worklist.fresh_evar());
                self.add_etvars_to_worklist(&scrutinee_ty);

                // Check that the scrutinee has the inferred type
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *scrutinee,
                    ty: scrutinee_ty.clone(),
                }));

                // Process each pattern arm
                for arm in arms {
                    // Check pattern constraints and get pattern variable bindings
                    let pattern_bindings = self.check_pattern(&arm.pattern, &scrutinee_ty)?;

                    // Add pattern variable bindings to worklist as regular variables
                    for (var_name, var_type) in pattern_bindings {
                        self.worklist.push(WorklistEntry::Var(var_name, var_type));
                    }

                    // Check the body with pattern variables in the worklist
                    self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                        term: arm.body.clone(),
                        ty: ty.clone(),
                    }));
                }

                Ok(())
            }
        }
    }

    /// Checking mode (Chk e ⇐ A): check `term` against a known type. Lambdas
    /// against arrows and checking under a forall are handled directly; every
    /// other case falls back to inference plus a subtyping check (subsumption).
    fn solve_checking(&mut self, term: CoreTerm, ty: CoreType) -> TypeResult<()> {
        self.trace
            .push(format!("Chk {} ⇐ {}", self.term_to_string(&term), ty));

        match (&term, &ty) {
            (
                CoreTerm::Lambda {
                    param,
                    param_ty,
                    body,
                },
                CoreType::Arrow(expected_param, result_ty),
            ) => {
                // Check that parameter types match
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: param_ty.clone(),
                    right: *expected_param.clone(),
                }));

                self.worklist
                    .push(WorklistEntry::Var(param.clone(), param_ty.clone()));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *body.clone(),
                    ty: *result_ty.clone(),
                }));

                Ok(())
            }

            (_, CoreType::Forall(var, body_ty)) => {
                let fresh_var = self.worklist.fresh_var();
                self.worklist
                    .push(WorklistEntry::TVar(fresh_var.clone(), TyVarKind::Universal));
                let substituted_ty = self.substitute_type(var, &CoreType::Var(fresh_var), body_ty);
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term,
                    ty: substituted_ty,
                }));
                Ok(())
            }

            _ => {
                // Fallback: infer type and check subtyping
                let inferred_ty = CoreType::ETVar(self.worklist.fresh_evar());
                if let CoreType::ETVar(var_name) = &inferred_ty {
                    self.worklist.push(WorklistEntry::TVar(
                        var_name.clone(),
                        TyVarKind::Existential,
                    ));
                }

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: inferred_ty.clone(),
                    right: ty,
                }));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                    term,
                    ty: inferred_ty,
                }));

                Ok(())
            }
        }
    }

    /// Application inference (InfApp, A • e ⊢ C): given the function's type,
    /// emit the judgments needed to check the argument and constrain the
    /// result. Foralls are instantiated with fresh existentials; an unsolved
    /// existential is refined into an arrow between two fresh existentials.
    fn solve_inf_app(
        &mut self,
        func_ty: CoreType,
        arg: CoreTerm,
        result_ty: CoreType,
    ) -> TypeResult<()> {
        match func_ty {
            CoreType::Arrow(param_ty, ret_ty) => {
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *ret_ty,
                    right: result_ty,
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: arg,
                    ty: *param_ty,
                }));
                Ok(())
            }

            CoreType::Forall(var, body) => {
                let fresh_evar = self.worklist.fresh_evar();
                self.worklist.push(WorklistEntry::TVar(
                    fresh_evar.clone(),
                    TyVarKind::Existential,
                ));
                let substituted = self.substitute_type(&var, &CoreType::ETVar(fresh_evar), &body);
                self.worklist
                    .push(WorklistEntry::Judgment(Judgment::InfApp {
                        func_ty: substituted,
                        arg,
                        result_ty,
                    }));
                Ok(())
            }

            CoreType::ETVar(a) => {
                let param_ty_name = self.worklist.fresh_evar();
                let ret_ty_name = self.worklist.fresh_evar();
                let param_ty = CoreType::ETVar(param_ty_name.clone());
                let ret_ty = CoreType::ETVar(ret_ty_name.clone());

                // Add the fresh existential variables to the worklist
                self.worklist
                    .push(WorklistEntry::TVar(param_ty_name, TyVarKind::Existential));
                self.worklist
                    .push(WorklistEntry::TVar(ret_ty_name, TyVarKind::Existential));

                let arrow_ty =
                    CoreType::Arrow(Box::new(param_ty.clone()), Box::new(ret_ty.clone()));
                self.worklist.solve_evar(&a, arrow_ty)?;

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: ret_ty,
                    right: result_ty,
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: arg,
                    ty: param_ty,
                }));

                Ok(())
            }

            _ => Err(TypeError::NotAFunction {
                ty: func_ty,
                span: None,
            }),
        }
    }

    /// Solve the existential `var` with `var` on the left of the subtyping
    /// (var <: ty), following the InstL rules of Dunfield and Krishnaswami:
    /// reach, decompose arrows and applications, or solve directly when `ty`
    /// is a monotype.
    fn instantiate_left(&mut self, var: &str, ty: &CoreType) -> TypeResult<()> {
        match ty {
            CoreType::ETVar(b) if self.worklist.before(var, b) => {
                self.worklist
                    .solve_evar(b, CoreType::ETVar(var.to_string()))?;
                Ok(())
            }
            CoreType::Arrow(t1, t2) => {
                let a1 = self.worklist.fresh_evar();
                let a2 = self.worklist.fresh_evar();
                let arrow_ty = CoreType::Arrow(
                    Box::new(CoreType::ETVar(a1.clone())),
                    Box::new(CoreType::ETVar(a2.clone())),
                );
                self.worklist.solve_evar(var, arrow_ty)?;

                self.worklist
                    .push(WorklistEntry::TVar(a1.clone(), TyVarKind::Existential));
                self.worklist
                    .push(WorklistEntry::TVar(a2.clone(), TyVarKind::Existential));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *t1.clone(),
                    right: CoreType::ETVar(a1),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: CoreType::ETVar(a2),
                    right: *t2.clone(),
                }));

                Ok(())
            }
            CoreType::App(t1, t2) => {
                let a1 = self.worklist.fresh_evar();
                let a2 = self.worklist.fresh_evar();
                let app_ty = CoreType::App(
                    Box::new(CoreType::ETVar(a1.clone())),
                    Box::new(CoreType::ETVar(a2.clone())),
                );
                self.worklist.solve_evar(var, app_ty)?;

                self.worklist
                    .push(WorklistEntry::TVar(a1.clone(), TyVarKind::Existential));
                self.worklist
                    .push(WorklistEntry::TVar(a2.clone(), TyVarKind::Existential));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: CoreType::ETVar(a1),
                    right: *t1.clone(),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: CoreType::ETVar(a2),
                    right: *t2.clone(),
                }));

                Ok(())
            }
            CoreType::Forall(b, t) => {
                let fresh_var = self.worklist.fresh_var();
                self.worklist
                    .push(WorklistEntry::TVar(fresh_var.clone(), TyVarKind::Universal));
                let substituted = self.substitute_type(b, &CoreType::Var(fresh_var), t);
                self.instantiate_left(var, &substituted)
            }
            _ if self.is_monotype(ty) => {
                self.worklist.solve_evar(var, ty.clone())?;
                Ok(())
            }
            _ => Err(TypeError::InstantiationError {
                var: var.to_string(),
                ty: ty.clone(),
                span: None,
            }),
        }
    }

    /// Solve the existential `var` with `var` on the right of the subtyping
    /// (ty <: var), following the InstR rules: reach, decompose arrows and
    /// applications, or solve directly when `ty` is a monotype.
    fn instantiate_right(&mut self, ty: &CoreType, var: &str) -> TypeResult<()> {
        match ty {
            CoreType::ETVar(a) if self.worklist.before(var, a) => {
                self.worklist
                    .solve_evar(a, CoreType::ETVar(var.to_string()))?;
                Ok(())
            }
            CoreType::Arrow(t1, t2) => {
                let a1 = self.worklist.fresh_evar();
                let a2 = self.worklist.fresh_evar();
                let arrow_ty = CoreType::Arrow(
                    Box::new(CoreType::ETVar(a1.clone())),
                    Box::new(CoreType::ETVar(a2.clone())),
                );
                self.worklist.solve_evar(var, arrow_ty)?;

                self.worklist
                    .push(WorklistEntry::TVar(a1.clone(), TyVarKind::Existential));
                self.worklist
                    .push(WorklistEntry::TVar(a2.clone(), TyVarKind::Existential));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: CoreType::ETVar(a1),
                    right: *t1.clone(),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *t2.clone(),
                    right: CoreType::ETVar(a2),
                }));

                Ok(())
            }
            CoreType::App(t1, t2) => {
                let a1 = self.worklist.fresh_evar();
                let a2 = self.worklist.fresh_evar();
                let app_ty = CoreType::App(
                    Box::new(CoreType::ETVar(a1.clone())),
                    Box::new(CoreType::ETVar(a2.clone())),
                );
                self.worklist.solve_evar(var, app_ty)?;

                self.worklist
                    .push(WorklistEntry::TVar(a1.clone(), TyVarKind::Existential));
                self.worklist
                    .push(WorklistEntry::TVar(a2.clone(), TyVarKind::Existential));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *t1.clone(),
                    right: CoreType::ETVar(a1),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *t2.clone(),
                    right: CoreType::ETVar(a2),
                }));

                Ok(())
            }
            CoreType::Forall(a, t) => {
                let fresh_evar = self.worklist.fresh_evar();
                self.worklist
                    .push(WorklistEntry::TVar(fresh_evar.clone(), TyVarKind::Marker));
                self.worklist.push(WorklistEntry::TVar(
                    fresh_evar.clone(),
                    TyVarKind::Existential,
                ));
                let substituted = self.substitute_type(a, &CoreType::ETVar(fresh_evar), t);
                self.instantiate_right(&substituted, var)
            }
            _ if self.is_monotype(ty) => {
                self.worklist.solve_evar(var, ty.clone())?;
                Ok(())
            }
            _ => Err(TypeError::InstantiationError {
                var: var.to_string(),
                ty: ty.clone(),
                span: None,
            }),
        }
    }

    /// Occurs check: does `var` appear in `ty`? Rejects circular solutions
    /// such as ^α := ^α → Int before they reach the worklist.
    fn occurs_check(&self, var: &str, ty: &CoreType) -> bool {
        match ty {
            CoreType::ETVar(name) | CoreType::Var(name) => name == var,
            CoreType::Arrow(t1, t2) | CoreType::App(t1, t2) => {
                self.occurs_check(var, t1) || self.occurs_check(var, t2)
            }
            CoreType::Forall(_, t) => self.occurs_check(var, t),
            CoreType::Product(types) => types.iter().any(|t| self.occurs_check(var, t)),
            _ => false,
        }
    }

    /// A monotype contains no quantifiers; only monotypes may be used to
    /// solve an existential variable.
    fn is_monotype(&self, ty: &CoreType) -> bool {
        match ty {
            CoreType::Con(_) | CoreType::Var(_) | CoreType::ETVar(_) => true,
            CoreType::Arrow(t1, t2) | CoreType::App(t1, t2) => {
                self.is_monotype(t1) && self.is_monotype(t2)
            }
            CoreType::Product(types) => types.iter().all(|t| self.is_monotype(t)),
            CoreType::Forall(_, _) => false,
        }
    }

    /// Substitute `replacement` for free occurrences of `var` in `ty`.
    /// Shadowing is respected (a forall binding `var` stops the traversal),
    /// but capture avoidance relies on callers using fresh names.
    fn substitute_type(&self, var: &str, replacement: &CoreType, ty: &CoreType) -> CoreType {
        match ty {
            CoreType::Var(name) if name == var => replacement.clone(),
            CoreType::Arrow(t1, t2) => CoreType::Arrow(
                Box::new(self.substitute_type(var, replacement, t1)),
                Box::new(self.substitute_type(var, replacement, t2)),
            ),
            CoreType::Forall(bound_var, body) if bound_var != var => CoreType::Forall(
                bound_var.clone(),
                Box::new(self.substitute_type(var, replacement, body)),
            ),
            CoreType::App(t1, t2) => CoreType::App(
                Box::new(self.substitute_type(var, replacement, t1)),
                Box::new(self.substitute_type(var, replacement, t2)),
            ),
            CoreType::Product(types) => CoreType::Product(
                types
                    .iter()
                    .map(|t| self.substitute_type(var, replacement, t))
                    .collect(),
            ),
            _ => ty.clone(),
        }
    }

    fn term_to_string(&self, term: &CoreTerm) -> String {
        match term {
            CoreTerm::Var(name) => name.clone(),
            CoreTerm::LitInt(n) => n.to_string(),
            _ => format!("{:?}", term), // Simplified for now
        }
    }

    pub fn get_trace(&self) -> &[String] {
        &self.trace
    }

    fn infer_binop_types(&mut self, op: &CoreBinOp) -> (CoreType, CoreType, CoreType) {
        match op {
            // Arithmetic operations: Int -> Int -> Int
            CoreBinOp::Add | CoreBinOp::Sub | CoreBinOp::Mul | CoreBinOp::Div => (
                CoreType::Con("Int".to_string()),
                CoreType::Con("Int".to_string()),
                CoreType::Con("Int".to_string()),
            ),
            // Comparison operations: Int -> Int -> Bool
            CoreBinOp::Lt | CoreBinOp::Le => (
                CoreType::Con("Int".to_string()),
                CoreType::Con("Int".to_string()),
                CoreType::Con("Bool".to_string()),
            ),
        }
    }

    fn add_etvars_to_worklist(&mut self, ty: &CoreType) {
        match ty {
            CoreType::ETVar(var_name) => {
                self.worklist.push(WorklistEntry::TVar(
                    var_name.clone(),
                    TyVarKind::Existential,
                ));
            }
            CoreType::Arrow(t1, t2) | CoreType::App(t1, t2) => {
                self.add_etvars_to_worklist(t1);
                self.add_etvars_to_worklist(t2);
            }
            CoreType::Forall(_, t) => {
                self.add_etvars_to_worklist(t);
            }
            CoreType::Product(types) => {
                for t in types {
                    self.add_etvars_to_worklist(t);
                }
            }
            CoreType::Var(_) | CoreType::Con(_) => {
                // No existential variables to add
            }
        }
    }

    /// Check a pattern against a type and return variable bindings
    /// This implements pattern type checking with unification
    fn check_pattern(
        &mut self,
        pattern: &CorePattern,
        expected_ty: &CoreType,
    ) -> TypeResult<HashMap<String, CoreType>> {
        let mut bindings = HashMap::new();

        match pattern {
            CorePattern::Wildcard => {
                // Wildcard matches anything, no bindings
                Ok(bindings)
            }

            CorePattern::Var(var_name) => {
                // Variable patterns bind the scrutinee type
                bindings.insert(var_name.clone(), expected_ty.clone());
                Ok(bindings)
            }

            CorePattern::Constructor { name, args } => {
                // Look up constructor type from data constructor environment
                if let Some(constructor_ty) = self.data_constructors.get(name).cloned() {
                    // Constructor type should be of the form: T1 -> T2 -> ... -> DataType
                    // We need to unify the result type with expected_ty and
                    // check argument patterns against parameter types

                    let (param_types, result_type) =
                        self.extract_constructor_signature(&constructor_ty)?;

                    // The result type should unify with the expected type
                    self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                        left: result_type,
                        right: expected_ty.clone(),
                    }));

                    // Check that we have the right number of pattern arguments
                    if args.len() != param_types.len() {
                        return Err(TypeError::ArityMismatch {
                            expected: param_types.len(),
                            actual: args.len(),
                            span: None,
                        });
                    }

                    // Recursively check argument patterns
                    for (arg_pattern, param_ty) in args.iter().zip(param_types.iter()) {
                        let arg_bindings = self.check_pattern(arg_pattern, param_ty)?;
                        // Merge bindings (should check for conflicts in a real implementation)
                        for (var, ty) in arg_bindings {
                            bindings.insert(var, ty);
                        }
                    }

                    Ok(bindings)
                } else {
                    Err(TypeError::UnboundDataConstructor {
                        name: name.clone(),
                        span: None,
                    })
                }
            }
        }
    }

    /// Extract parameter types and result type from a constructor type
    /// signature e.g., Int -> String -> Bool -> MyData => ([Int, String,
    /// Bool], MyData)
    fn extract_constructor_signature(
        &mut self,
        constructor_ty: &CoreType,
    ) -> TypeResult<(Vec<CoreType>, CoreType)> {
        let mut param_types = Vec::new();
        let mut current_ty = constructor_ty.clone();
        let mut substitutions = HashMap::new();

        // Instantiate forall quantifiers with fresh existential variables
        while let CoreType::Forall(var, body) = current_ty {
            let fresh_var_name = self.worklist.fresh_evar();
            let fresh_evar = CoreType::ETVar(fresh_var_name.clone());

            // Add the existential variable to the worklist
            self.worklist.push(WorklistEntry::TVar(
                fresh_var_name.clone(),
                TyVarKind::Existential,
            ));

            substitutions.insert(var.clone(), fresh_evar);
            current_ty = *body;
        }

        // Apply substitutions to the type
        current_ty = self.apply_type_substitutions(&current_ty, &substitutions);

        // Extract parameter types from arrows
        while let CoreType::Arrow(param_ty, result_ty) = current_ty {
            param_types.push(*param_ty);
            current_ty = *result_ty;
        }

        // The final type is the result type
        Ok((param_types, current_ty))
    }

    /// Strip the leading foralls from a constructor's type, replacing each
    /// bound variable with a fresh existential registered on the worklist.
    fn instantiate_constructor_type(&mut self, constructor_ty: CoreType) -> CoreType {
        let mut current_ty = constructor_ty;
        let mut substitutions = HashMap::new();

        // Instantiate forall quantifiers with fresh existential variables
        while let CoreType::Forall(var, body) = current_ty {
            let fresh_var_name = self.worklist.fresh_evar();
            let fresh_evar = CoreType::ETVar(fresh_var_name.clone());

            // Add the existential variable to the worklist
            self.worklist.push(WorklistEntry::TVar(
                fresh_var_name.clone(),
                TyVarKind::Existential,
            ));

            substitutions.insert(var.clone(), fresh_evar);
            current_ty = *body;
        }

        // Apply substitutions to the type
        self.apply_type_substitutions(&current_ty, &substitutions)
    }

    fn apply_type_substitutions(
        &self,
        ty: &CoreType,
        substitutions: &HashMap<String, CoreType>,
    ) -> CoreType {
        match ty {
            CoreType::Var(name) => substitutions
                .get(name)
                .cloned()
                .unwrap_or_else(|| ty.clone()),
            CoreType::Con(name) => CoreType::Con(name.clone()),
            CoreType::ETVar(name) => CoreType::ETVar(name.clone()),
            CoreType::Arrow(left, right) => CoreType::Arrow(
                Box::new(self.apply_type_substitutions(left, substitutions)),
                Box::new(self.apply_type_substitutions(right, substitutions)),
            ),
            CoreType::App(left, right) => CoreType::App(
                Box::new(self.apply_type_substitutions(left, substitutions)),
                Box::new(self.apply_type_substitutions(right, substitutions)),
            ),
            CoreType::Forall(var, body) => {
                // Don't substitute under forall bindings - this is a simplification
                CoreType::Forall(
                    var.clone(),
                    Box::new(self.apply_type_substitutions(body, substitutions)),
                )
            }
            CoreType::Product(types) => CoreType::Product(
                types
                    .iter()
                    .map(|t| self.apply_type_substitutions(t, substitutions))
                    .collect(),
            ),
        }
    }
}
#[derive(Debug, Clone, PartialEq)]
pub enum WorklistEntry {
    /// Type variable binding: α
    TVar(TyVar, TyVarKind),
    /// Term variable binding: x : T
    Var(TmVar, CoreType),
    /// Judgment: Sub A B | Inf e ⊢ A | Chk e ⇐ A
    Judgment(Judgment),
}
}

The worklist contains three kinds of entries that together drive the type checking process. Type variable bindings track type variables and their current status: universal (introduced by explicit quantifiers), existential (unknown types awaiting inference), or solved (existentials determined through constraint solving); the implementation also pushes marker entries to delimit scopes during instantiation. Term variable bindings associate program variables with their declared or inferred types, maintaining the mapping from source identifiers to type information. Judgments are the type checking tasks still to be done: subtyping checks, type synthesis, and checking a term against a known type.

This tripartite structure allows the algorithm to defer complex decisions while systematically accumulating the information needed to make them correctly. By separating variable management from the actual type checking work, the algorithm can handle intricate dependencies and ensure that all constraints are resolved in the proper order.
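To make the stack discipline concrete, here is a minimal, self-contained sketch (toy `Ty` and `Entry` types standing in for the chapter's `CoreType` and `WorklistEntry`, not the real ones): bindings are pushed before the judgments that need them, and because the solver pops in LIFO order, a judgment is always processed while its bindings are still on the list.

```rust
/// Toy sketch of the worklist discipline; `Ty` and `Entry` are stand-ins
/// for the chapter's `CoreType` and `WorklistEntry`.
#[derive(Debug, Clone, PartialEq)]
enum Ty {
    Int,
}

enum Entry {
    /// Term variable binding: x : T
    Var(String, Ty),
    /// Pending judgment (here just "check this term against Ty")
    Check(String, Ty),
}

/// Pop entries LIFO, logging what the solver would see at each step.
fn drain(mut worklist: Vec<Entry>) -> Vec<String> {
    let mut log = Vec::new();
    while let Some(entry) = worklist.pop() {
        match entry {
            // Judgments sit above the bindings they depend on, so they
            // are solved first, with those bindings still in scope below.
            Entry::Check(term, ty) => log.push(format!("Chk {term} <= {ty:?}")),
            Entry::Var(x, _) => log.push(format!("pop binding {x}")),
        }
    }
    log
}

fn main() {
    // Checking `\x. x` against Int -> Int pushes the binding for `x`
    // first, then the judgment for the body:
    let wl = vec![
        Entry::Var("x".to_string(), Ty::Int),
        Entry::Check("x".to_string(), Ty::Int),
    ];
    assert_eq!(drain(wl), vec!["Chk x <= Int", "pop binding x"]);
}
```

In the real algorithm the entries also include type variable bindings and several judgment forms, but the same invariant holds: everything a judgment depends on is pushed beneath it before the solver runs.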

Type Variable Kinds

#![allow(unused)]
fn main() {
use std::collections::HashMap;
use crate::core::{CoreBinOp, CorePattern, CoreTerm, CoreType};
use crate::errors::{TypeError, TypeResult};
/// DK Worklist Algorithm for System-F-ω
/// Based on "A Mechanical Formalization of Higher-Ranked Polymorphic Type
/// Inference"
pub type TyVar = String;
pub type TmVar = String;
#[derive(Debug, Clone, PartialEq)]
pub enum WorklistEntry {
    /// Type variable binding: α
    TVar(TyVar, TyVarKind),
    /// Term variable binding: x : T
    Var(TmVar, CoreType),
    /// Judgment: Sub A B | Inf e ⊢ A | Chk e ⇐ A
    Judgment(Judgment),
}
#[derive(Debug, Clone, PartialEq)]
pub enum Judgment {
    /// Subtyping: A <: B
    Sub { left: CoreType, right: CoreType },
    /// Type inference: e ⊢ A
    Inf { term: CoreTerm, ty: CoreType },
    /// Type checking: e ⇐ A
    Chk { term: CoreTerm, ty: CoreType },
    /// Application inference helper: A • e ⊢ C
    InfApp {
        func_ty: CoreType,
        arg: CoreTerm,
        result_ty: CoreType,
    },
}
#[derive(Debug, Clone)]
pub struct Worklist {
    entries: Vec<WorklistEntry>,
    next_var: usize,
}
impl Default for Worklist {
    fn default() -> Self {
        Self::new()
    }
}
impl Worklist {
    pub fn new() -> Self {
        Worklist {
            entries: Vec::new(),
            next_var: 0,
        }
    }

    pub fn fresh_var(&mut self) -> TyVar {
        let var = format!("α{}", self.next_var);
        self.next_var += 1;
        var
    }

    pub fn fresh_evar(&mut self) -> TyVar {
        let var = format!("^α{}", self.next_var);
        self.next_var += 1;
        var
    }

    pub fn push(&mut self, entry: WorklistEntry) {
        self.entries.push(entry);
    }

    pub fn pop(&mut self) -> Option<WorklistEntry> {
        self.entries.pop()
    }

    pub fn find_var(&self, name: &str) -> Option<&CoreType> {
        for entry in self.entries.iter().rev() {
            if let WorklistEntry::Var(var_name, ty) = entry {
                if var_name == name {
                    return Some(ty);
                }
            }
        }
        None
    }

    /// Record a solution for the existential `name`. Already-solved variables
    /// are left as they are; universal variables and markers with the same
    /// name are skipped.
    pub fn solve_evar(&mut self, name: &str, ty: CoreType) -> TypeResult<()> {
        for entry in self.entries.iter_mut() {
            if let WorklistEntry::TVar(var_name, kind) = entry {
                if var_name == name {
                    match kind {
                        TyVarKind::Existential => {
                            *kind = TyVarKind::Solved(ty);
                            return Ok(());
                        }
                        TyVarKind::Solved(_) => {
                            // Variable already solved, that's OK
                            return Ok(());
                        }
                        _ => {
                            // Skip universal variables, markers, etc.
                            continue;
                        }
                    }
                }
            }
        }
        // No existential entry with this name was found on the worklist.
        Err(TypeError::UnboundVariable {
            name: name.to_string(),
            span: None,
        })
    }

    pub fn before(&self, a: &str, b: &str) -> bool {
        let mut pos_a = None;
        let mut pos_b = None;

        for (i, entry) in self.entries.iter().enumerate() {
            if let WorklistEntry::TVar(name, _) = entry {
                if name == a {
                    pos_a = Some(i);
                }
                if name == b {
                    pos_b = Some(i);
                }
            }
        }

        match (pos_a, pos_b) {
            (Some(pa), Some(pb)) => pa < pb,
            _ => false,
        }
    }
}
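// A quick sanity check of the worklist primitives above. Illustrative sketch
// only: it assumes the `CoreType`, `TyVarKind`, and error types defined
// elsewhere in this module.
#[cfg(test)]
mod worklist_tests {
    use super::*;

    #[test]
    fn ordering_and_solving() {
        let mut wl = Worklist::new();
        let a = wl.fresh_evar();
        let b = wl.fresh_evar();
        wl.push(WorklistEntry::TVar(a.clone(), TyVarKind::Existential));
        wl.push(WorklistEntry::TVar(b.clone(), TyVarKind::Existential));

        // `a` was pushed before `b`, so `a` sits to the left of `b`.
        assert!(wl.before(&a, &b));
        assert!(!wl.before(&b, &a));

        // Solving an existential rewrites its entry in place, and solving
        // it a second time is a no-op rather than an error.
        assert!(wl.solve_evar(&a, CoreType::Con("Int".to_string())).is_ok());
        assert!(wl.solve_evar(&a, CoreType::Con("Int".to_string())).is_ok());
    }
}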
pub struct DKInference {
    worklist: Worklist,
    trace: Vec<String>,
    data_constructors: HashMap<String, CoreType>,
    /// Variable typing context for pattern-bound variables
    var_context: HashMap<String, CoreType>,
}
impl DKInference {
    pub fn with_context(
        data_constructors: HashMap<String, CoreType>,
        var_context: HashMap<String, CoreType>,
    ) -> Self {
        DKInference {
            worklist: Worklist::new(),
            trace: Vec::new(),
            data_constructors,
            var_context,
        }
    }

    pub fn check_type(&mut self, term: &CoreTerm, expected_ty: &CoreType) -> TypeResult<()> {
        self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
            term: term.clone(),
            ty: expected_ty.clone(),
        }));

        self.solve()
    }

    fn solve(&mut self) -> TypeResult<()> {
        while let Some(entry) = self.worklist.pop() {
            match entry {
                WorklistEntry::TVar(_, _) => {
                    // A type-variable binding has served its purpose as a
                    // scope delimiter; any solution was recorded in place by
                    // `solve_evar`, so popping it simply discards the scope.
                    continue;
                }
                WorklistEntry::Var(_, _) => {
                    // Popping a term-variable binding takes it out of scope;
                    // `find_var` only consults entries still on the worklist.
                    continue;
                }
                WorklistEntry::Judgment(judgment) => {
                    self.solve_judgment(judgment)?;
                }
            }
        }
        Ok(())
    }

    fn solve_judgment(&mut self, judgment: Judgment) -> TypeResult<()> {
        match judgment {
            Judgment::Sub { left, right } => self.solve_subtype(left, right),
            Judgment::Inf { term, ty } => self.solve_inference(term, ty),
            Judgment::Chk { term, ty } => self.solve_checking(term, ty),
            Judgment::InfApp {
                func_ty,
                arg,
                result_ty,
            } => self.solve_inf_app(func_ty, arg, result_ty),
        }
    }

    fn solve_subtype(&mut self, left: CoreType, right: CoreType) -> TypeResult<()> {
        self.trace.push(format!("Sub {} <: {}", left, right));

        if left == right {
            return Ok(());
        }

        match (&left, &right) {
            // Reflexivity
            (CoreType::Con(a), CoreType::Con(b)) if a == b => Ok(()),
            (CoreType::Var(a), CoreType::Var(b)) if a == b => Ok(()),
            (CoreType::ETVar(a), CoreType::ETVar(b)) if a == b => Ok(()),

            // Function subtyping (contravariant in argument, covariant in result)
            (CoreType::Arrow(a1, a2), CoreType::Arrow(b1, b2)) => {
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *b1.clone(),
                    right: *a1.clone(),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *a2.clone(),
                    right: *b2.clone(),
                }));
                Ok(())
            }

            // Application subtyping. Both components are treated covariantly,
            // a simplification: without variance information for the type
            // constructor, a stricter system would require invariance here.
            (CoreType::App(a1, a2), CoreType::App(b1, b2)) => {
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *a1.clone(),
                    right: *b1.clone(),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *a2.clone(),
                    right: *b2.clone(),
                }));
                Ok(())
            }

            // Forall right
            (_, CoreType::Forall(var, ty)) => {
                let fresh_var = self.worklist.fresh_var();
                self.worklist
                    .push(WorklistEntry::TVar(fresh_var.clone(), TyVarKind::Universal));
                let substituted_ty = self.substitute_type(var, &CoreType::Var(fresh_var), ty);
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left,
                    right: substituted_ty,
                }));
                Ok(())
            }

            // Forall left
            (CoreType::Forall(var, ty), _) => {
                let fresh_evar = self.worklist.fresh_evar();
                self.worklist
                    .push(WorklistEntry::TVar(fresh_evar.clone(), TyVarKind::Marker));
                self.worklist.push(WorklistEntry::TVar(
                    fresh_evar.clone(),
                    TyVarKind::Existential,
                ));
                let substituted_ty = self.substitute_type(var, &CoreType::ETVar(fresh_evar), ty);
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: substituted_ty,
                    right,
                }));
                Ok(())
            }

            // Existential variable instantiation
            (CoreType::ETVar(a), _) if !self.occurs_check(a, &right) => {
                self.instantiate_left(a, &right)
            }
            (_, CoreType::ETVar(a)) if !self.occurs_check(a, &left) => {
                self.instantiate_right(&left, a)
            }

            _ => Err(TypeError::SubtypingError {
                left,
                right,
                span: None,
            }),
        }
    }

    fn solve_inference(&mut self, term: CoreTerm, ty: CoreType) -> TypeResult<()> {
        self.trace
            .push(format!("Inf {} ⊢ {}", self.term_to_string(&term), ty));

        match term {
            CoreTerm::Var(name) => {
                // Check pattern variable context first
                if let Some(var_ty) = self.var_context.get(&name).cloned() {
                    self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                        left: var_ty,
                        right: ty,
                    }));
                    Ok(())
                } else if let Some(var_ty) = self.worklist.find_var(&name).cloned() {
                    self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                        left: var_ty,
                        right: ty,
                    }));
                    Ok(())
                } else if let Some(var_ty) = self.data_constructors.get(&name).cloned() {
                    // Check data constructors for constructor variables
                    self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                        left: var_ty,
                        right: ty,
                    }));
                    Ok(())
                } else {
                    Err(TypeError::UnboundVariable {
                        name: name.to_string(),
                        span: None,
                    })
                }
            }

            CoreTerm::LitInt(_) => {
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: CoreType::Con("Int".to_string()),
                    right: ty,
                }));
                Ok(())
            }

            CoreTerm::Lambda {
                param,
                param_ty,
                body,
            } => {
                let result_ty = CoreType::ETVar(self.worklist.fresh_evar());
                if let CoreType::ETVar(var_name) = &result_ty {
                    self.worklist.push(WorklistEntry::TVar(
                        var_name.clone(),
                        TyVarKind::Existential,
                    ));
                }

                let arrow_ty =
                    CoreType::Arrow(Box::new(param_ty.clone()), Box::new(result_ty.clone()));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: arrow_ty,
                    right: ty,
                }));

                self.worklist.push(WorklistEntry::Var(param, param_ty));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                    term: *body,
                    ty: result_ty,
                }));

                Ok(())
            }

            CoreTerm::App { func, arg } => {
                let func_ty = CoreType::ETVar(self.worklist.fresh_evar());
                if let CoreType::ETVar(var_name) = &func_ty {
                    self.worklist.push(WorklistEntry::TVar(
                        var_name.clone(),
                        TyVarKind::Existential,
                    ));
                }

                self.worklist
                    .push(WorklistEntry::Judgment(Judgment::InfApp {
                        func_ty: func_ty.clone(),
                        arg: *arg,
                        result_ty: ty,
                    }));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                    term: *func,
                    ty: func_ty,
                }));

                Ok(())
            }

            CoreTerm::TypeLambda { param, body } => {
                let body_ty = CoreType::ETVar(self.worklist.fresh_evar());
                if let CoreType::ETVar(var_name) = &body_ty {
                    self.worklist.push(WorklistEntry::TVar(
                        var_name.clone(),
                        TyVarKind::Existential,
                    ));
                }

                let forall_ty = CoreType::Forall(param.clone(), Box::new(body_ty.clone()));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: forall_ty,
                    right: ty,
                }));

                self.worklist
                    .push(WorklistEntry::TVar(param, TyVarKind::Universal));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                    term: *body,
                    ty: body_ty,
                }));

                Ok(())
            }

            CoreTerm::Constructor { name, args: _ } => {
                if let Some(constructor_ty) = self.data_constructors.get(&name) {
                    // Instantiate the constructor type with fresh existential variables
                    let instantiated_ty = self.instantiate_constructor_type(constructor_ty.clone());

                    self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                        left: instantiated_ty,
                        right: ty,
                    }));
                    Ok(())
                } else {
                    Err(TypeError::UnboundDataConstructor {
                        name: name.clone(),
                        span: None,
                    })
                }
            }

            CoreTerm::BinOp { op, left, right } => {
                let (left_ty, right_ty, result_ty) = self.infer_binop_types(&op);

                // Add any existential variables to the worklist
                self.add_etvars_to_worklist(&left_ty);
                self.add_etvars_to_worklist(&right_ty);
                self.add_etvars_to_worklist(&result_ty);

                // Check that the operands have the expected types
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *left,
                    ty: left_ty,
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *right,
                    ty: right_ty,
                }));

                // Check that the result type matches the expected type
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: result_ty,
                    right: ty,
                }));

                Ok(())
            }

            CoreTerm::If {
                cond,
                then_branch,
                else_branch,
            } => {
                // The condition must be Bool
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *cond,
                    ty: CoreType::Con("Bool".to_string()),
                }));

                // Both branches must have the same type as the expected result
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *then_branch,
                    ty: ty.clone(),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *else_branch,
                    ty,
                }));

                Ok(())
            }

            CoreTerm::Case { scrutinee, arms } => {
                // Create a fresh existential variable for the scrutinee's type
                let scrutinee_ty = CoreType::ETVar(self.worklist.fresh_evar());
                self.add_etvars_to_worklist(&scrutinee_ty);

                // Check that the scrutinee has the inferred type
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *scrutinee,
                    ty: scrutinee_ty.clone(),
                }));

                // Process each pattern arm
                for arm in arms {
                    // Check pattern constraints and get pattern variable bindings
                    let pattern_bindings = self.check_pattern(&arm.pattern, &scrutinee_ty)?;

                    // Add pattern variable bindings to worklist as regular variables
                    for (var_name, var_type) in pattern_bindings {
                        self.worklist.push(WorklistEntry::Var(var_name, var_type));
                    }

                    // Check the body with pattern variables in the worklist
                    self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                        term: arm.body.clone(),
                        ty: ty.clone(),
                    }));
                }

                Ok(())
            }
        }
    }

    fn solve_checking(&mut self, term: CoreTerm, ty: CoreType) -> TypeResult<()> {
        self.trace
            .push(format!("Chk {} ⇐ {}", self.term_to_string(&term), ty));

        match (&term, &ty) {
            (
                CoreTerm::Lambda {
                    param,
                    param_ty,
                    body,
                },
                CoreType::Arrow(expected_param, result_ty),
            ) => {
                // Contravariance: the expected parameter type must be a
                // subtype of the annotated one, so any argument a caller may
                // pass is acceptable to the lambda body.
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *expected_param.clone(),
                    right: param_ty.clone(),
                }));

                self.worklist
                    .push(WorklistEntry::Var(param.clone(), param_ty.clone()));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *body.clone(),
                    ty: *result_ty.clone(),
                }));

                Ok(())
            }

            (_, CoreType::Forall(var, body_ty)) => {
                let fresh_var = self.worklist.fresh_var();
                self.worklist
                    .push(WorklistEntry::TVar(fresh_var.clone(), TyVarKind::Universal));
                let substituted_ty = self.substitute_type(var, &CoreType::Var(fresh_var), body_ty);
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term,
                    ty: substituted_ty,
                }));
                Ok(())
            }

            _ => {
                // Fallback: infer type and check subtyping
                let inferred_ty = CoreType::ETVar(self.worklist.fresh_evar());
                if let CoreType::ETVar(var_name) = &inferred_ty {
                    self.worklist.push(WorklistEntry::TVar(
                        var_name.clone(),
                        TyVarKind::Existential,
                    ));
                }

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: inferred_ty.clone(),
                    right: ty,
                }));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                    term,
                    ty: inferred_ty,
                }));

                Ok(())
            }
        }
    }

    fn solve_inf_app(
        &mut self,
        func_ty: CoreType,
        arg: CoreTerm,
        result_ty: CoreType,
    ) -> TypeResult<()> {
        match func_ty {
            CoreType::Arrow(param_ty, ret_ty) => {
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *ret_ty,
                    right: result_ty,
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: arg,
                    ty: *param_ty,
                }));
                Ok(())
            }

            CoreType::Forall(var, body) => {
                let fresh_evar = self.worklist.fresh_evar();
                self.worklist.push(WorklistEntry::TVar(
                    fresh_evar.clone(),
                    TyVarKind::Existential,
                ));
                let substituted = self.substitute_type(&var, &CoreType::ETVar(fresh_evar), &body);
                self.worklist
                    .push(WorklistEntry::Judgment(Judgment::InfApp {
                        func_ty: substituted,
                        arg,
                        result_ty,
                    }));
                Ok(())
            }

            CoreType::ETVar(a) => {
                let param_ty_name = self.worklist.fresh_evar();
                let ret_ty_name = self.worklist.fresh_evar();
                let param_ty = CoreType::ETVar(param_ty_name.clone());
                let ret_ty = CoreType::ETVar(ret_ty_name.clone());

                // Add the fresh existential variables to the worklist
                self.worklist
                    .push(WorklistEntry::TVar(param_ty_name, TyVarKind::Existential));
                self.worklist
                    .push(WorklistEntry::TVar(ret_ty_name, TyVarKind::Existential));

                let arrow_ty =
                    CoreType::Arrow(Box::new(param_ty.clone()), Box::new(ret_ty.clone()));
                self.worklist.solve_evar(&a, arrow_ty)?;

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: ret_ty,
                    right: result_ty,
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: arg,
                    ty: param_ty,
                }));

                Ok(())
            }

            _ => Err(TypeError::NotAFunction {
                ty: func_ty,
                span: None,
            }),
        }
    }

    fn instantiate_left(&mut self, var: &str, ty: &CoreType) -> TypeResult<()> {
        match ty {
            CoreType::ETVar(b) if self.worklist.before(var, b) => {
                self.worklist
                    .solve_evar(b, CoreType::ETVar(var.to_string()))?;
                Ok(())
            }
            CoreType::Arrow(t1, t2) => {
                let a1 = self.worklist.fresh_evar();
                let a2 = self.worklist.fresh_evar();
                let arrow_ty = CoreType::Arrow(
                    Box::new(CoreType::ETVar(a1.clone())),
                    Box::new(CoreType::ETVar(a2.clone())),
                );
                self.worklist.solve_evar(var, arrow_ty)?;

                self.worklist
                    .push(WorklistEntry::TVar(a1.clone(), TyVarKind::Existential));
                self.worklist
                    .push(WorklistEntry::TVar(a2.clone(), TyVarKind::Existential));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *t1.clone(),
                    right: CoreType::ETVar(a1),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: CoreType::ETVar(a2),
                    right: *t2.clone(),
                }));

                Ok(())
            }
            CoreType::App(t1, t2) => {
                let a1 = self.worklist.fresh_evar();
                let a2 = self.worklist.fresh_evar();
                let app_ty = CoreType::App(
                    Box::new(CoreType::ETVar(a1.clone())),
                    Box::new(CoreType::ETVar(a2.clone())),
                );
                self.worklist.solve_evar(var, app_ty)?;

                self.worklist
                    .push(WorklistEntry::TVar(a1.clone(), TyVarKind::Existential));
                self.worklist
                    .push(WorklistEntry::TVar(a2.clone(), TyVarKind::Existential));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: CoreType::ETVar(a1),
                    right: *t1.clone(),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: CoreType::ETVar(a2),
                    right: *t2.clone(),
                }));

                Ok(())
            }
            CoreType::Forall(b, t) => {
                let fresh_var = self.worklist.fresh_var();
                self.worklist
                    .push(WorklistEntry::TVar(fresh_var.clone(), TyVarKind::Universal));
                let substituted = self.substitute_type(b, &CoreType::Var(fresh_var), t);
                self.instantiate_left(var, &substituted)
            }
            _ if self.is_monotype(ty) => {
                self.worklist.solve_evar(var, ty.clone())?;
                Ok(())
            }
            _ => Err(TypeError::InstantiationError {
                var: var.to_string(),
                ty: ty.clone(),
                span: None,
            }),
        }
    }

    fn instantiate_right(&mut self, ty: &CoreType, var: &str) -> TypeResult<()> {
        match ty {
            CoreType::ETVar(a) if self.worklist.before(var, a) => {
                self.worklist
                    .solve_evar(a, CoreType::ETVar(var.to_string()))?;
                Ok(())
            }
            CoreType::Arrow(t1, t2) => {
                let a1 = self.worklist.fresh_evar();
                let a2 = self.worklist.fresh_evar();
                let arrow_ty = CoreType::Arrow(
                    Box::new(CoreType::ETVar(a1.clone())),
                    Box::new(CoreType::ETVar(a2.clone())),
                );
                self.worklist.solve_evar(var, arrow_ty)?;

                self.worklist
                    .push(WorklistEntry::TVar(a1.clone(), TyVarKind::Existential));
                self.worklist
                    .push(WorklistEntry::TVar(a2.clone(), TyVarKind::Existential));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: CoreType::ETVar(a1),
                    right: *t1.clone(),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *t2.clone(),
                    right: CoreType::ETVar(a2),
                }));

                Ok(())
            }
            CoreType::App(t1, t2) => {
                let a1 = self.worklist.fresh_evar();
                let a2 = self.worklist.fresh_evar();
                let app_ty = CoreType::App(
                    Box::new(CoreType::ETVar(a1.clone())),
                    Box::new(CoreType::ETVar(a2.clone())),
                );
                self.worklist.solve_evar(var, app_ty)?;

                self.worklist
                    .push(WorklistEntry::TVar(a1.clone(), TyVarKind::Existential));
                self.worklist
                    .push(WorklistEntry::TVar(a2.clone(), TyVarKind::Existential));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *t1.clone(),
                    right: CoreType::ETVar(a1),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *t2.clone(),
                    right: CoreType::ETVar(a2),
                }));

                Ok(())
            }
            CoreType::Forall(a, t) => {
                let fresh_evar = self.worklist.fresh_evar();
                self.worklist
                    .push(WorklistEntry::TVar(fresh_evar.clone(), TyVarKind::Marker));
                self.worklist.push(WorklistEntry::TVar(
                    fresh_evar.clone(),
                    TyVarKind::Existential,
                ));
                let substituted = self.substitute_type(a, &CoreType::ETVar(fresh_evar), t);
                self.instantiate_right(&substituted, var)
            }
            _ if self.is_monotype(ty) => {
                self.worklist.solve_evar(var, ty.clone())?;
                Ok(())
            }
            _ => Err(TypeError::InstantiationError {
                var: var.to_string(),
                ty: ty.clone(),
                span: None,
            }),
        }
    }

    fn occurs_check(&self, var: &str, ty: &CoreType) -> bool {
        match ty {
            CoreType::ETVar(name) | CoreType::Var(name) => name == var,
            CoreType::Arrow(t1, t2) | CoreType::App(t1, t2) => {
                self.occurs_check(var, t1) || self.occurs_check(var, t2)
            }
            CoreType::Forall(_, t) => self.occurs_check(var, t),
            CoreType::Product(types) => types.iter().any(|t| self.occurs_check(var, t)),
            _ => false,
        }
    }

    fn is_monotype(&self, ty: &CoreType) -> bool {
        match ty {
            CoreType::Con(_) | CoreType::Var(_) | CoreType::ETVar(_) => true,
            CoreType::Arrow(t1, t2) | CoreType::App(t1, t2) => {
                self.is_monotype(t1) && self.is_monotype(t2)
            }
            CoreType::Product(types) => types.iter().all(|t| self.is_monotype(t)),
            CoreType::Forall(_, _) => false,
        }
    }

    /// Substitute `replacement` for `var` in `ty`. Not capture-avoiding: we
    /// rely on generated names (`α0`, `^α1`, ...) not colliding with binders
    /// written in source programs.
    fn substitute_type(&self, var: &str, replacement: &CoreType, ty: &CoreType) -> CoreType {
        match ty {
            CoreType::Var(name) if name == var => replacement.clone(),
            CoreType::Arrow(t1, t2) => CoreType::Arrow(
                Box::new(self.substitute_type(var, replacement, t1)),
                Box::new(self.substitute_type(var, replacement, t2)),
            ),
            CoreType::Forall(bound_var, body) if bound_var != var => CoreType::Forall(
                bound_var.clone(),
                Box::new(self.substitute_type(var, replacement, body)),
            ),
            CoreType::App(t1, t2) => CoreType::App(
                Box::new(self.substitute_type(var, replacement, t1)),
                Box::new(self.substitute_type(var, replacement, t2)),
            ),
            CoreType::Product(types) => CoreType::Product(
                types
                    .iter()
                    .map(|t| self.substitute_type(var, replacement, t))
                    .collect(),
            ),
            _ => ty.clone(),
        }
    }

    fn term_to_string(&self, term: &CoreTerm) -> String {
        match term {
            CoreTerm::Var(name) => name.clone(),
            CoreTerm::LitInt(n) => n.to_string(),
            _ => format!("{:?}", term), // Simplified for now
        }
    }

    pub fn get_trace(&self) -> &[String] {
        &self.trace
    }

    fn infer_binop_types(&mut self, op: &CoreBinOp) -> (CoreType, CoreType, CoreType) {
        match op {
            // Arithmetic operations: Int -> Int -> Int
            CoreBinOp::Add | CoreBinOp::Sub | CoreBinOp::Mul | CoreBinOp::Div => (
                CoreType::Con("Int".to_string()),
                CoreType::Con("Int".to_string()),
                CoreType::Con("Int".to_string()),
            ),
            // Comparison operations: Int -> Int -> Bool
            CoreBinOp::Lt | CoreBinOp::Le => (
                CoreType::Con("Int".to_string()),
                CoreType::Con("Int".to_string()),
                CoreType::Con("Bool".to_string()),
            ),
        }
    }

    fn add_etvars_to_worklist(&mut self, ty: &CoreType) {
        match ty {
            CoreType::ETVar(var_name) => {
                self.worklist.push(WorklistEntry::TVar(
                    var_name.clone(),
                    TyVarKind::Existential,
                ));
            }
            CoreType::Arrow(t1, t2) | CoreType::App(t1, t2) => {
                self.add_etvars_to_worklist(t1);
                self.add_etvars_to_worklist(t2);
            }
            CoreType::Forall(_, t) => {
                self.add_etvars_to_worklist(t);
            }
            CoreType::Product(types) => {
                for t in types {
                    self.add_etvars_to_worklist(t);
                }
            }
            CoreType::Var(_) | CoreType::Con(_) => {
                // No existential variables to add
            }
        }
    }

    /// Check a pattern against a type and return the variable bindings it
    /// introduces, pushing subtyping constraints onto the worklist as needed
    fn check_pattern(
        &mut self,
        pattern: &CorePattern,
        expected_ty: &CoreType,
    ) -> TypeResult<HashMap<String, CoreType>> {
        let mut bindings = HashMap::new();

        match pattern {
            CorePattern::Wildcard => {
                // Wildcard matches anything, no bindings
                Ok(bindings)
            }

            CorePattern::Var(var_name) => {
                // Variable patterns bind the scrutinee type
                bindings.insert(var_name.clone(), expected_ty.clone());
                Ok(bindings)
            }

            CorePattern::Constructor { name, args } => {
                // Look up constructor type from data constructor environment
                if let Some(constructor_ty) = self.data_constructors.get(name).cloned() {
                    // Constructor type should be of the form: T1 -> T2 -> ... -> DataType
                    // We need to unify the result type with expected_ty and
                    // check argument patterns against parameter types

                    let (param_types, result_type) =
                        self.extract_constructor_signature(&constructor_ty)?;

                    // The result type should unify with the expected type
                    self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                        left: result_type,
                        right: expected_ty.clone(),
                    }));

                    // Check that we have the right number of pattern arguments
                    if args.len() != param_types.len() {
                        return Err(TypeError::ArityMismatch {
                            expected: param_types.len(),
                            actual: args.len(),
                            span: None,
                        });
                    }

                    // Recursively check argument patterns
                    for (arg_pattern, param_ty) in args.iter().zip(param_types.iter()) {
                        let arg_bindings = self.check_pattern(arg_pattern, param_ty)?;
                        // Merge bindings; a full implementation would reject
                        // duplicate variable names within a single pattern
                        for (var, ty) in arg_bindings {
                            bindings.insert(var, ty);
                        }
                    }

                    Ok(bindings)
                } else {
                    Err(TypeError::UnboundDataConstructor {
                        name: name.clone(),
                        span: None,
                    })
                }
            }
        }
    }

    /// Extract parameter types and result type from a constructor type
    /// signature, e.g. `Int -> String -> Bool -> MyData` yields
    /// `([Int, String, Bool], MyData)`.
    fn extract_constructor_signature(
        &mut self,
        constructor_ty: &CoreType,
    ) -> TypeResult<(Vec<CoreType>, CoreType)> {
        let mut param_types = Vec::new();
        let mut current_ty = constructor_ty.clone();
        let mut substitutions = HashMap::new();

        // Instantiate forall quantifiers with fresh existential variables
        while let CoreType::Forall(var, body) = current_ty {
            let fresh_var_name = self.worklist.fresh_evar();
            let fresh_evar = CoreType::ETVar(fresh_var_name.clone());

            // Add the existential variable to the worklist
            self.worklist.push(WorklistEntry::TVar(
                fresh_var_name.clone(),
                TyVarKind::Existential,
            ));

            substitutions.insert(var.clone(), fresh_evar);
            current_ty = *body;
        }

        // Apply substitutions to the type
        current_ty = self.apply_type_substitutions(&current_ty, &substitutions);

        // Extract parameter types from arrows
        while let CoreType::Arrow(param_ty, result_ty) = current_ty {
            param_types.push(*param_ty.clone());
            current_ty = *result_ty;
        }

        // The final type is the result type
        Ok((param_types, current_ty))
    }
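
    // Worked example (illustrative; `Cons` and `List` are hypothetical
    // names, not part of this module): for a constructor typed
    //     forall a. a -> List a -> List a
    // the loop above replaces `a` with a fresh existential ^α0, giving
    //     ^α0 -> List ^α0 -> List ^α0
    // and the arrow-peeling loop then returns
    //     ([^α0, List ^α0], List ^α0)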

    fn instantiate_constructor_type(&mut self, constructor_ty: CoreType) -> CoreType {
        let mut current_ty = constructor_ty;
        let mut substitutions = HashMap::new();

        // Instantiate forall quantifiers with fresh existential variables
        while let CoreType::Forall(var, body) = current_ty {
            let fresh_var_name = self.worklist.fresh_evar();
            let fresh_evar = CoreType::ETVar(fresh_var_name.clone());

            // Add the existential variable to the worklist
            self.worklist.push(WorklistEntry::TVar(
                fresh_var_name.clone(),
                TyVarKind::Existential,
            ));

            substitutions.insert(var.clone(), fresh_evar);
            current_ty = *body;
        }

        // Apply substitutions to the type
        self.apply_type_substitutions(&current_ty, &substitutions)
    }

    fn apply_type_substitutions(
        &self,
        ty: &CoreType,
        substitutions: &HashMap<String, CoreType>,
    ) -> CoreType {
        match ty {
            CoreType::Var(name) => substitutions
                .get(name)
                .cloned()
                .unwrap_or_else(|| ty.clone()),
            CoreType::Con(name) => CoreType::Con(name.clone()),
            CoreType::ETVar(name) => CoreType::ETVar(name.clone()),
            CoreType::Arrow(left, right) => CoreType::Arrow(
                Box::new(self.apply_type_substitutions(left, substitutions)),
                Box::new(self.apply_type_substitutions(right, substitutions)),
            ),
            CoreType::App(left, right) => CoreType::App(
                Box::new(self.apply_type_substitutions(left, substitutions)),
                Box::new(self.apply_type_substitutions(right, substitutions)),
            ),
            CoreType::Forall(var, body) => {
                // Avoid capture: drop any substitution for the bound variable
                // before descending into the body, so a nested binder with the
                // same name shadows the outer one correctly
                let mut inner = substitutions.clone();
                inner.remove(var);
                CoreType::Forall(
                    var.clone(),
                    Box::new(self.apply_type_substitutions(body, &inner)),
                )
            }
            CoreType::Product(types) => CoreType::Product(
                types
                    .iter()
                    .map(|t| self.apply_type_substitutions(t, substitutions))
                    .collect(),
            ),
        }
    }
}
#[derive(Debug, Clone, PartialEq)]
pub enum TyVarKind {
    /// Universal type variable: α
    Universal,
    /// Existential type variable: ^α
    Existential,
    /// Solved existential: ^α = τ
    Solved(CoreType),
    /// Marker: ►α (for scoping)
    Marker,
}
}

Type variables in the worklist can have different statuses that reflect their role in the type checking process. Universal type variables represent ordinary type variables that arise from explicit quantifiers in the source code, such as the \( \alpha \) in \( \forall\alpha. \ldots \). These variables have fixed scope and represent abstract types that can be instantiated with concrete types during polymorphic function application. Existential type variables, denoted as \( \hat{\alpha} \), represent unknown types that need to be inferred through constraint solving. The algorithm generates these variables when it encounters expressions whose types are not immediately apparent and must be determined through unification and constraint propagation.

Solved existential variables represent the successful resolution of inference problems, taking the form \( \hat{\alpha} = \tau \) where the unknown type \( \hat{\alpha} \) has been determined to be the concrete type \( \tau \). Marker variables, denoted as \( \triangleright\alpha \), serve as scope markers that enable garbage collection of type variables when their scope is exited. This stratification enables precise control over variable scoping and solution propagation, ensuring that type variables are managed correctly throughout the inference process.

Judgment Types

#![allow(unused)]
fn main() {
use std::collections::HashMap;
use crate::core::{CoreBinOp, CorePattern, CoreTerm, CoreType};
use crate::errors::{TypeError, TypeResult};
/// DK Worklist Algorithm for System-F-ω
/// Based on "A Mechanical Formalization of Higher-Ranked Polymorphic Type
/// Inference"
pub type TyVar = String;
pub type TmVar = String;
#[derive(Debug, Clone, PartialEq)]
pub enum WorklistEntry {
    /// Type variable binding: α
    TVar(TyVar, TyVarKind),
    /// Term variable binding: x : T
    Var(TmVar, CoreType),
    /// Judgment: Sub A <: B | Inf e ⇒ A | Chk e ⇐ A
    Judgment(Judgment),
}
#[derive(Debug, Clone, PartialEq)]
pub enum TyVarKind {
    /// Universal type variable: α
    Universal,
    /// Existential type variable: ^α
    Existential,
    /// Solved existential: ^α = τ
    Solved(CoreType),
    /// Marker: ►α (for scoping)
    Marker,
}
#[derive(Debug, Clone)]
pub struct Worklist {
    entries: Vec<WorklistEntry>,
    next_var: usize,
}
impl Default for Worklist {
    fn default() -> Self {
        Self::new()
    }
}
impl Worklist {
    pub fn new() -> Self {
        Worklist {
            entries: Vec::new(),
            next_var: 0,
        }
    }

    pub fn fresh_var(&mut self) -> TyVar {
        let var = format!("α{}", self.next_var);
        self.next_var += 1;
        var
    }

    pub fn fresh_evar(&mut self) -> TyVar {
        let var = format!("^α{}", self.next_var);
        self.next_var += 1;
        var
    }

    pub fn push(&mut self, entry: WorklistEntry) {
        self.entries.push(entry);
    }

    pub fn pop(&mut self) -> Option<WorklistEntry> {
        self.entries.pop()
    }

    pub fn find_var(&self, name: &str) -> Option<&CoreType> {
        for entry in self.entries.iter().rev() {
            if let WorklistEntry::Var(var_name, ty) = entry {
                if var_name == name {
                    return Some(ty);
                }
            }
        }
        None
    }

    pub fn solve_evar(&mut self, name: &str, ty: CoreType) -> TypeResult<()> {
        for entry in self.entries.iter_mut() {
            if let WorklistEntry::TVar(var_name, kind) = entry {
                if var_name == name {
                    match kind {
                        TyVarKind::Existential => {
                            *kind = TyVarKind::Solved(ty);
                            return Ok(());
                        }
                        TyVarKind::Solved(_) => {
                            // Already solved; a full implementation would
                            // unify the new solution with the existing one
                            return Ok(());
                        }
                        _ => {
                            // Skip universal variables, markers, etc.
                            continue;
                        }
                    }
                }
            }
        }
        Err(TypeError::UnboundVariable {
            name: name.to_string(),
            span: None,
        })
    }

    pub fn before(&self, a: &str, b: &str) -> bool {
        let mut pos_a = None;
        let mut pos_b = None;

        for (i, entry) in self.entries.iter().enumerate() {
            if let WorklistEntry::TVar(name, _) = entry {
                if name == a {
                    pos_a = Some(i);
                }
                if name == b {
                    pos_b = Some(i);
                }
            }
        }

        match (pos_a, pos_b) {
            (Some(pa), Some(pb)) => pa < pb,
            _ => false,
        }
    }
}
pub struct DKInference {
    worklist: Worklist,
    trace: Vec<String>,
    data_constructors: HashMap<String, CoreType>,
    /// Variable typing context for pattern-bound variables
    var_context: HashMap<String, CoreType>,
}
impl DKInference {
    pub fn with_context(
        data_constructors: HashMap<String, CoreType>,
        var_context: HashMap<String, CoreType>,
    ) -> Self {
        DKInference {
            worklist: Worklist::new(),
            trace: Vec::new(),
            data_constructors,
            var_context,
        }
    }

    pub fn check_type(&mut self, term: &CoreTerm, expected_ty: &CoreType) -> TypeResult<()> {
        self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
            term: term.clone(),
            ty: expected_ty.clone(),
        }));

        self.solve()
    }

    fn solve(&mut self) -> TypeResult<()> {
        while let Some(entry) = self.worklist.pop() {
            match entry {
                WorklistEntry::TVar(_, _) => {
                    // Skip variable bindings during processing
                    continue;
                }
                WorklistEntry::Var(_, _) => {
                    // Skip term variable bindings during processing
                    continue;
                }
                WorklistEntry::Judgment(judgment) => {
                    self.solve_judgment(judgment)?;
                }
            }
        }
        Ok(())
    }

    fn solve_judgment(&mut self, judgment: Judgment) -> TypeResult<()> {
        match judgment {
            Judgment::Sub { left, right } => self.solve_subtype(left, right),
            Judgment::Inf { term, ty } => self.solve_inference(term, ty),
            Judgment::Chk { term, ty } => self.solve_checking(term, ty),
            Judgment::InfApp {
                func_ty,
                arg,
                result_ty,
            } => self.solve_inf_app(func_ty, arg, result_ty),
        }
    }

    fn solve_subtype(&mut self, left: CoreType, right: CoreType) -> TypeResult<()> {
        self.trace.push(format!("Sub {} <: {}", left, right));

        if left == right {
            return Ok(());
        }

        match (&left, &right) {
            // Reflexivity
            (CoreType::Con(a), CoreType::Con(b)) if a == b => Ok(()),
            (CoreType::Var(a), CoreType::Var(b)) if a == b => Ok(()),
            (CoreType::ETVar(a), CoreType::ETVar(b)) if a == b => Ok(()),

            // Function subtyping (contravariant in argument, covariant in result)
            (CoreType::Arrow(a1, a2), CoreType::Arrow(b1, b2)) => {
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *b1.clone(),
                    right: *a1.clone(),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *a2.clone(),
                    right: *b2.clone(),
                }));
                Ok(())
            }

            // Application subtyping (covariant in both components)
            (CoreType::App(a1, a2), CoreType::App(b1, b2)) => {
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *a1.clone(),
                    right: *b1.clone(),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *a2.clone(),
                    right: *b2.clone(),
                }));
                Ok(())
            }

            // Forall right
            (_, CoreType::Forall(var, ty)) => {
                let fresh_var = self.worklist.fresh_var();
                self.worklist
                    .push(WorklistEntry::TVar(fresh_var.clone(), TyVarKind::Universal));
                let substituted_ty = self.substitute_type(var, &CoreType::Var(fresh_var), ty);
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left,
                    right: substituted_ty,
                }));
                Ok(())
            }

            // Forall left
            (CoreType::Forall(var, ty), _) => {
                let fresh_evar = self.worklist.fresh_evar();
                self.worklist
                    .push(WorklistEntry::TVar(fresh_evar.clone(), TyVarKind::Marker));
                self.worklist.push(WorklistEntry::TVar(
                    fresh_evar.clone(),
                    TyVarKind::Existential,
                ));
                let substituted_ty = self.substitute_type(var, &CoreType::ETVar(fresh_evar), ty);
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: substituted_ty,
                    right,
                }));
                Ok(())
            }

            // Existential variable instantiation
            (CoreType::ETVar(a), _) if !self.occurs_check(a, &right) => {
                self.instantiate_left(a, &right)
            }
            (_, CoreType::ETVar(a)) if !self.occurs_check(a, &left) => {
                self.instantiate_right(&left, a)
            }

            _ => Err(TypeError::SubtypingError {
                left,
                right,
                span: None,
            }),
        }
    }
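
    // Worked example: checking `(Int -> Int) <: (Int -> Int)` pushes two
    // judgments: `Int <: Int` with the argument sides flipped
    // (contravariance), and `Int <: Int` for the results. Checking
    // `forall a. a -> a <: Int -> Int` instead fires the Forall-left rule:
    // `a` is replaced by a fresh ^α, and the arrow case then solves
    // ^α to Int via instantiation.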

    fn solve_inference(&mut self, term: CoreTerm, ty: CoreType) -> TypeResult<()> {
        self.trace
            .push(format!("Inf {} ⊢ {}", self.term_to_string(&term), ty));

        match term {
            CoreTerm::Var(name) => {
                // Check pattern variable context first
                if let Some(var_ty) = self.var_context.get(&name).cloned() {
                    self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                        left: var_ty,
                        right: ty,
                    }));
                    Ok(())
                } else if let Some(var_ty) = self.worklist.find_var(&name).cloned() {
                    self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                        left: var_ty,
                        right: ty,
                    }));
                    Ok(())
                } else if let Some(var_ty) = self.data_constructors.get(&name).cloned() {
                    // Check data constructors for constructor variables
                    self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                        left: var_ty,
                        right: ty,
                    }));
                    Ok(())
                } else {
                    Err(TypeError::UnboundVariable {
                        name: name.to_string(),
                        span: None,
                    })
                }
            }

            CoreTerm::LitInt(_) => {
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: CoreType::Con("Int".to_string()),
                    right: ty,
                }));
                Ok(())
            }

            CoreTerm::Lambda {
                param,
                param_ty,
                body,
            } => {
                let result_name = self.worklist.fresh_evar();
                let result_ty = CoreType::ETVar(result_name.clone());
                self.worklist
                    .push(WorklistEntry::TVar(result_name, TyVarKind::Existential));

                let arrow_ty =
                    CoreType::Arrow(Box::new(param_ty.clone()), Box::new(result_ty.clone()));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: arrow_ty,
                    right: ty,
                }));

                self.worklist.push(WorklistEntry::Var(param, param_ty));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                    term: *body,
                    ty: result_ty,
                }));

                Ok(())
            }

            CoreTerm::App { func, arg } => {
                let func_name = self.worklist.fresh_evar();
                let func_ty = CoreType::ETVar(func_name.clone());
                self.worklist
                    .push(WorklistEntry::TVar(func_name, TyVarKind::Existential));

                self.worklist
                    .push(WorklistEntry::Judgment(Judgment::InfApp {
                        func_ty: func_ty.clone(),
                        arg: *arg,
                        result_ty: ty,
                    }));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                    term: *func,
                    ty: func_ty,
                }));

                Ok(())
            }

            CoreTerm::TypeLambda { param, body } => {
                let body_name = self.worklist.fresh_evar();
                let body_ty = CoreType::ETVar(body_name.clone());
                self.worklist
                    .push(WorklistEntry::TVar(body_name, TyVarKind::Existential));

                let forall_ty = CoreType::Forall(param.clone(), Box::new(body_ty.clone()));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: forall_ty,
                    right: ty,
                }));

                self.worklist
                    .push(WorklistEntry::TVar(param, TyVarKind::Universal));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                    term: *body,
                    ty: body_ty,
                }));

                Ok(())
            }

            CoreTerm::Constructor { name, args: _ } => {
                if let Some(constructor_ty) = self.data_constructors.get(&name) {
                    // Instantiate the constructor type with fresh existential variables
                    let instantiated_ty = self.instantiate_constructor_type(constructor_ty.clone());

                    self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                        left: instantiated_ty,
                        right: ty,
                    }));
                    Ok(())
                } else {
                    Err(TypeError::UnboundDataConstructor {
                        name: name.clone(),
                        span: None,
                    })
                }
            }

            CoreTerm::BinOp { op, left, right } => {
                let (left_ty, right_ty, result_ty) = self.infer_binop_types(&op);

                // Add any existential variables to the worklist
                self.add_etvars_to_worklist(&left_ty);
                self.add_etvars_to_worklist(&right_ty);
                self.add_etvars_to_worklist(&result_ty);

                // Check that the operands have the expected types
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *left,
                    ty: left_ty,
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *right,
                    ty: right_ty,
                }));

                // Check that the result type matches the expected type
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: result_ty,
                    right: ty,
                }));

                Ok(())
            }

            CoreTerm::If {
                cond,
                then_branch,
                else_branch,
            } => {
                // The condition must be Bool
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *cond,
                    ty: CoreType::Con("Bool".to_string()),
                }));

                // Both branches must have the same type as the expected result
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *then_branch,
                    ty: ty.clone(),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *else_branch,
                    ty,
                }));

                Ok(())
            }

            CoreTerm::Case { scrutinee, arms } => {
                // Create a fresh type variable for the scrutinee
                let scrutinee_ty = CoreType::ETVar(self.worklist.fresh_evar());
                self.add_etvars_to_worklist(&scrutinee_ty);

                // Check that the scrutinee has the inferred type
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *scrutinee,
                    ty: scrutinee_ty.clone(),
                }));

                // Process each pattern arm
                for arm in arms {
                    // Check pattern constraints and get pattern variable bindings
                    let pattern_bindings = self.check_pattern(&arm.pattern, &scrutinee_ty)?;

                    // Add pattern variable bindings to worklist as regular variables
                    for (var_name, var_type) in pattern_bindings {
                        self.worklist.push(WorklistEntry::Var(var_name, var_type));
                    }

                    // Check the body with pattern variables in the worklist
                    self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                        term: arm.body.clone(),
                        ty: ty.clone(),
                    }));
                }

                Ok(())
            }
        }
    }

    fn solve_checking(&mut self, term: CoreTerm, ty: CoreType) -> TypeResult<()> {
        self.trace
            .push(format!("Chk {} ⇐ {}", self.term_to_string(&term), ty));

        match (&term, &ty) {
            (
                CoreTerm::Lambda {
                    param,
                    param_ty,
                    body,
                },
                CoreType::Arrow(expected_param, result_ty),
            ) => {
                // Check that parameter types match
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: param_ty.clone(),
                    right: *expected_param.clone(),
                }));

                self.worklist
                    .push(WorklistEntry::Var(param.clone(), param_ty.clone()));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: *body.clone(),
                    ty: *result_ty.clone(),
                }));

                Ok(())
            }

            (_, CoreType::Forall(var, body_ty)) => {
                let fresh_var = self.worklist.fresh_var();
                self.worklist
                    .push(WorklistEntry::TVar(fresh_var.clone(), TyVarKind::Universal));
                let substituted_ty = self.substitute_type(var, &CoreType::Var(fresh_var), body_ty);
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term,
                    ty: substituted_ty,
                }));
                Ok(())
            }

            _ => {
                // Fallback: infer type and check subtyping
                let inferred_name = self.worklist.fresh_evar();
                let inferred_ty = CoreType::ETVar(inferred_name.clone());
                self.worklist
                    .push(WorklistEntry::TVar(inferred_name, TyVarKind::Existential));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: inferred_ty.clone(),
                    right: ty,
                }));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                    term,
                    ty: inferred_ty,
                }));

                Ok(())
            }
        }
    }

    fn solve_inf_app(
        &mut self,
        func_ty: CoreType,
        arg: CoreTerm,
        result_ty: CoreType,
    ) -> TypeResult<()> {
        match func_ty {
            CoreType::Arrow(param_ty, ret_ty) => {
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *ret_ty,
                    right: result_ty,
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: arg,
                    ty: *param_ty,
                }));
                Ok(())
            }

            CoreType::Forall(var, body) => {
                let fresh_evar = self.worklist.fresh_evar();
                self.worklist.push(WorklistEntry::TVar(
                    fresh_evar.clone(),
                    TyVarKind::Existential,
                ));
                let substituted = self.substitute_type(&var, &CoreType::ETVar(fresh_evar), &body);
                self.worklist
                    .push(WorklistEntry::Judgment(Judgment::InfApp {
                        func_ty: substituted,
                        arg,
                        result_ty,
                    }));
                Ok(())
            }

            CoreType::ETVar(a) => {
                let param_ty_name = self.worklist.fresh_evar();
                let ret_ty_name = self.worklist.fresh_evar();
                let param_ty = CoreType::ETVar(param_ty_name.clone());
                let ret_ty = CoreType::ETVar(ret_ty_name.clone());

                // Add the fresh existential variables to the worklist
                self.worklist
                    .push(WorklistEntry::TVar(param_ty_name, TyVarKind::Existential));
                self.worklist
                    .push(WorklistEntry::TVar(ret_ty_name, TyVarKind::Existential));

                let arrow_ty =
                    CoreType::Arrow(Box::new(param_ty.clone()), Box::new(ret_ty.clone()));
                self.worklist.solve_evar(&a, arrow_ty)?;

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: ret_ty,
                    right: result_ty,
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: arg,
                    ty: param_ty,
                }));

                Ok(())
            }

            _ => Err(TypeError::NotAFunction {
                ty: func_ty,
                span: None,
            }),
        }
    }

    fn instantiate_left(&mut self, var: &str, ty: &CoreType) -> TypeResult<()> {
        match ty {
            CoreType::ETVar(b) if self.worklist.before(var, b) => {
                self.worklist
                    .solve_evar(b, CoreType::ETVar(var.to_string()))?;
                Ok(())
            }
            CoreType::Arrow(t1, t2) => {
                let a1 = self.worklist.fresh_evar();
                let a2 = self.worklist.fresh_evar();
                let arrow_ty = CoreType::Arrow(
                    Box::new(CoreType::ETVar(a1.clone())),
                    Box::new(CoreType::ETVar(a2.clone())),
                );
                self.worklist.solve_evar(var, arrow_ty)?;

                self.worklist
                    .push(WorklistEntry::TVar(a1.clone(), TyVarKind::Existential));
                self.worklist
                    .push(WorklistEntry::TVar(a2.clone(), TyVarKind::Existential));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *t1.clone(),
                    right: CoreType::ETVar(a1),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: CoreType::ETVar(a2),
                    right: *t2.clone(),
                }));

                Ok(())
            }
            CoreType::App(t1, t2) => {
                let a1 = self.worklist.fresh_evar();
                let a2 = self.worklist.fresh_evar();
                let app_ty = CoreType::App(
                    Box::new(CoreType::ETVar(a1.clone())),
                    Box::new(CoreType::ETVar(a2.clone())),
                );
                self.worklist.solve_evar(var, app_ty)?;

                self.worklist
                    .push(WorklistEntry::TVar(a1.clone(), TyVarKind::Existential));
                self.worklist
                    .push(WorklistEntry::TVar(a2.clone(), TyVarKind::Existential));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: CoreType::ETVar(a1),
                    right: *t1.clone(),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: CoreType::ETVar(a2),
                    right: *t2.clone(),
                }));

                Ok(())
            }
            CoreType::Forall(b, t) => {
                let fresh_var = self.worklist.fresh_var();
                self.worklist
                    .push(WorklistEntry::TVar(fresh_var.clone(), TyVarKind::Universal));
                let substituted = self.substitute_type(b, &CoreType::Var(fresh_var), t);
                self.instantiate_left(var, &substituted)
            }
            _ if self.is_monotype(ty) => {
                self.worklist.solve_evar(var, ty.clone())?;
                Ok(())
            }
            _ => Err(TypeError::InstantiationError {
                var: var.to_string(),
                ty: ty.clone(),
                span: None,
            }),
        }
    }

    fn instantiate_right(&mut self, ty: &CoreType, var: &str) -> TypeResult<()> {
        match ty {
            CoreType::ETVar(a) if self.worklist.before(var, a) => {
                self.worklist
                    .solve_evar(a, CoreType::ETVar(var.to_string()))?;
                Ok(())
            }
            CoreType::Arrow(t1, t2) => {
                let a1 = self.worklist.fresh_evar();
                let a2 = self.worklist.fresh_evar();
                let arrow_ty = CoreType::Arrow(
                    Box::new(CoreType::ETVar(a1.clone())),
                    Box::new(CoreType::ETVar(a2.clone())),
                );
                self.worklist.solve_evar(var, arrow_ty)?;

                self.worklist
                    .push(WorklistEntry::TVar(a1.clone(), TyVarKind::Existential));
                self.worklist
                    .push(WorklistEntry::TVar(a2.clone(), TyVarKind::Existential));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: CoreType::ETVar(a1),
                    right: *t1.clone(),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *t2.clone(),
                    right: CoreType::ETVar(a2),
                }));

                Ok(())
            }
            CoreType::App(t1, t2) => {
                let a1 = self.worklist.fresh_evar();
                let a2 = self.worklist.fresh_evar();
                let app_ty = CoreType::App(
                    Box::new(CoreType::ETVar(a1.clone())),
                    Box::new(CoreType::ETVar(a2.clone())),
                );
                self.worklist.solve_evar(var, app_ty)?;

                self.worklist
                    .push(WorklistEntry::TVar(a1.clone(), TyVarKind::Existential));
                self.worklist
                    .push(WorklistEntry::TVar(a2.clone(), TyVarKind::Existential));

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *t1.clone(),
                    right: CoreType::ETVar(a1),
                }));
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: *t2.clone(),
                    right: CoreType::ETVar(a2),
                }));

                Ok(())
            }
            CoreType::Forall(a, t) => {
                let fresh_evar = self.worklist.fresh_evar();
                self.worklist
                    .push(WorklistEntry::TVar(fresh_evar.clone(), TyVarKind::Marker));
                self.worklist.push(WorklistEntry::TVar(
                    fresh_evar.clone(),
                    TyVarKind::Existential,
                ));
                let substituted = self.substitute_type(a, &CoreType::ETVar(fresh_evar), t);
                self.instantiate_right(&substituted, var)
            }
            _ if self.is_monotype(ty) => {
                self.worklist.solve_evar(var, ty.clone())?;
                Ok(())
            }
            _ => Err(TypeError::InstantiationError {
                var: var.to_string(),
                ty: ty.clone(),
                span: None,
            }),
        }
    }

    fn occurs_check(&self, var: &str, ty: &CoreType) -> bool {
        match ty {
            CoreType::ETVar(name) | CoreType::Var(name) => name == var,
            CoreType::Arrow(t1, t2) | CoreType::App(t1, t2) => {
                self.occurs_check(var, t1) || self.occurs_check(var, t2)
            }
            CoreType::Forall(_, t) => self.occurs_check(var, t),
            CoreType::Product(types) => types.iter().any(|t| self.occurs_check(var, t)),
            _ => false,
        }
    }

    fn is_monotype(&self, ty: &CoreType) -> bool {
        match ty {
            CoreType::Con(_) | CoreType::Var(_) | CoreType::ETVar(_) => true,
            CoreType::Arrow(t1, t2) | CoreType::App(t1, t2) => {
                self.is_monotype(t1) && self.is_monotype(t2)
            }
            CoreType::Product(types) => types.iter().all(|t| self.is_monotype(t)),
            CoreType::Forall(_, _) => false,
        }
    }

    fn substitute_type(&self, var: &str, replacement: &CoreType, ty: &CoreType) -> CoreType {
        match ty {
            CoreType::Var(name) if name == var => replacement.clone(),
            CoreType::Arrow(t1, t2) => CoreType::Arrow(
                Box::new(self.substitute_type(var, replacement, t1)),
                Box::new(self.substitute_type(var, replacement, t2)),
            ),
            CoreType::Forall(bound_var, body) if bound_var != var => CoreType::Forall(
                bound_var.clone(),
                Box::new(self.substitute_type(var, replacement, body)),
            ),
            CoreType::App(t1, t2) => CoreType::App(
                Box::new(self.substitute_type(var, replacement, t1)),
                Box::new(self.substitute_type(var, replacement, t2)),
            ),
            CoreType::Product(types) => CoreType::Product(
                types
                    .iter()
                    .map(|t| self.substitute_type(var, replacement, t))
                    .collect(),
            ),
            _ => ty.clone(),
        }
    }

    fn term_to_string(&self, term: &CoreTerm) -> String {
        match term {
            CoreTerm::Var(name) => name.clone(),
            CoreTerm::LitInt(n) => n.to_string(),
            _ => format!("{:?}", term), // Simplified for now
        }
    }

    pub fn get_trace(&self) -> &[String] {
        &self.trace
    }

    fn infer_binop_types(&mut self, op: &CoreBinOp) -> (CoreType, CoreType, CoreType) {
        match op {
            // Arithmetic operations: Int -> Int -> Int
            CoreBinOp::Add | CoreBinOp::Sub | CoreBinOp::Mul | CoreBinOp::Div => (
                CoreType::Con("Int".to_string()),
                CoreType::Con("Int".to_string()),
                CoreType::Con("Int".to_string()),
            ),
            // Comparison operations: Int -> Int -> Bool
            CoreBinOp::Lt | CoreBinOp::Le => (
                CoreType::Con("Int".to_string()),
                CoreType::Con("Int".to_string()),
                CoreType::Con("Bool".to_string()),
            ),
        }
    }

    fn add_etvars_to_worklist(&mut self, ty: &CoreType) {
        match ty {
            CoreType::ETVar(var_name) => {
                self.worklist.push(WorklistEntry::TVar(
                    var_name.clone(),
                    TyVarKind::Existential,
                ));
            }
            CoreType::Arrow(t1, t2) | CoreType::App(t1, t2) => {
                self.add_etvars_to_worklist(t1);
                self.add_etvars_to_worklist(t2);
            }
            CoreType::Forall(_, t) => {
                self.add_etvars_to_worklist(t);
            }
            CoreType::Product(types) => {
                for t in types {
                    self.add_etvars_to_worklist(t);
                }
            }
            CoreType::Var(_) | CoreType::Con(_) => {
                // No existential variables to add
            }
        }
    }

    /// Check a pattern against a type and return variable bindings
    /// This implements pattern type checking with unification
    fn check_pattern(
        &mut self,
        pattern: &CorePattern,
        expected_ty: &CoreType,
    ) -> TypeResult<HashMap<String, CoreType>> {
        let mut bindings = HashMap::new();

        match pattern {
            CorePattern::Wildcard => {
                // Wildcard matches anything, no bindings
                Ok(bindings)
            }

            CorePattern::Var(var_name) => {
                // Variable patterns bind the scrutinee type
                bindings.insert(var_name.clone(), expected_ty.clone());
                Ok(bindings)
            }

            CorePattern::Constructor { name, args } => {
                // Look up constructor type from data constructor environment
                if let Some(constructor_ty) = self.data_constructors.get(name).cloned() {
                    // Constructor type should be of the form: T1 -> T2 -> ... -> DataType
                    // We need to unify the result type with expected_ty and
                    // check argument patterns against parameter types

                    let (param_types, result_type) =
                        self.extract_constructor_signature(&constructor_ty)?;

                    // The result type should unify with the expected type
                    self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                        left: result_type,
                        right: expected_ty.clone(),
                    }));

                    // Check that we have the right number of pattern arguments
                    if args.len() != param_types.len() {
                        return Err(TypeError::ArityMismatch {
                            expected: param_types.len(),
                            actual: args.len(),
                            span: None,
                        });
                    }

                    // Recursively check argument patterns
                    for (arg_pattern, param_ty) in args.iter().zip(param_types.iter()) {
                        let arg_bindings = self.check_pattern(arg_pattern, param_ty)?;
                        // Merge bindings (should check for conflicts in a real implementation)
                        for (var, ty) in arg_bindings {
                            bindings.insert(var, ty);
                        }
                    }

                    Ok(bindings)
                } else {
                    Err(TypeError::UnboundDataConstructor {
                        name: name.clone(),
                        span: None,
                    })
                }
            }
        }
    }

    /// Extract parameter types and result type from a constructor type
    /// signature e.g., Int -> String -> Bool -> MyData => ([Int, String,
    /// Bool], MyData)
    fn extract_constructor_signature(
        &mut self,
        constructor_ty: &CoreType,
    ) -> TypeResult<(Vec<CoreType>, CoreType)> {
        let mut param_types = Vec::new();
        let mut current_ty = constructor_ty.clone();
        let mut substitutions = HashMap::new();

        // Instantiate forall quantifiers with fresh existential variables
        while let CoreType::Forall(var, body) = current_ty {
            let fresh_var_name = self.worklist.fresh_evar();
            let fresh_evar = CoreType::ETVar(fresh_var_name.clone());

            // Add the existential variable to the worklist
            self.worklist.push(WorklistEntry::TVar(
                fresh_var_name.clone(),
                TyVarKind::Existential,
            ));

            substitutions.insert(var.clone(), fresh_evar);
            current_ty = *body;
        }

        // Apply substitutions to the type
        current_ty = self.apply_type_substitutions(&current_ty, &substitutions);

        // Extract parameter types from arrows
        while let CoreType::Arrow(param_ty, result_ty) = current_ty {
            param_types.push(*param_ty.clone());
            current_ty = *result_ty;
        }

        // The final type is the result type
        Ok((param_types, current_ty))
    }

    fn instantiate_constructor_type(&mut self, constructor_ty: CoreType) -> CoreType {
        let mut current_ty = constructor_ty;
        let mut substitutions = HashMap::new();

        // Instantiate forall quantifiers with fresh existential variables
        while let CoreType::Forall(var, body) = current_ty {
            let fresh_var_name = self.worklist.fresh_evar();
            let fresh_evar = CoreType::ETVar(fresh_var_name.clone());

            // Add the existential variable to the worklist
            self.worklist.push(WorklistEntry::TVar(
                fresh_var_name.clone(),
                TyVarKind::Existential,
            ));

            substitutions.insert(var.clone(), fresh_evar);
            current_ty = *body;
        }

        // Apply substitutions to the type
        self.apply_type_substitutions(&current_ty, &substitutions)
    }

    fn apply_type_substitutions(
        &self,
        ty: &CoreType,
        substitutions: &HashMap<String, CoreType>,
    ) -> CoreType {
        match ty {
            CoreType::Var(name) => substitutions
                .get(name)
                .cloned()
                .unwrap_or_else(|| ty.clone()),
            CoreType::Con(name) => CoreType::Con(name.clone()),
            CoreType::ETVar(name) => CoreType::ETVar(name.clone()),
            CoreType::Arrow(left, right) => CoreType::Arrow(
                Box::new(self.apply_type_substitutions(left, substitutions)),
                Box::new(self.apply_type_substitutions(right, substitutions)),
            ),
            CoreType::App(left, right) => CoreType::App(
                Box::new(self.apply_type_substitutions(left, substitutions)),
                Box::new(self.apply_type_substitutions(right, substitutions)),
            ),
            CoreType::Forall(var, body) => {
// Substitute under the binder without checking for shadowing or
// capture-avoidance - a simplification
                CoreType::Forall(
                    var.clone(),
                    Box::new(self.apply_type_substitutions(body, substitutions)),
                )
            }
            CoreType::Product(types) => CoreType::Product(
                types
                    .iter()
                    .map(|t| self.apply_type_substitutions(t, substitutions))
                    .collect(),
            ),
        }
    }
}
#[derive(Debug, Clone, PartialEq)]
pub enum Judgment {
    /// Subtyping: A <: B
    Sub { left: CoreType, right: CoreType },
    /// Type inference: e ⊢ A
    Inf { term: CoreTerm, ty: CoreType },
    /// Type checking: e ⇐ A
    Chk { term: CoreTerm, ty: CoreType },
    /// Application inference helper: A • e ⊢ C
    InfApp {
        func_ty: CoreType,
        arg: CoreTerm,
        result_ty: CoreType,
    },
}
}

Judgments represent the core type checking tasks that drive the DK algorithm forward. Subtyping judgments, written as \( A \leq B \), verify that type \( A \) is a subtype of type \( B \), which is essential for handling the contravariant and covariant relationships that arise in function types and polymorphic instantiation. Inference judgments, denoted \( e \Rightarrow A \), synthesize a type \( A \) for a given expression \( e \), allowing the algorithm to determine types from expressions when no expected type is available. Checking judgments, written as \( e \Leftarrow A \), verify that a given expression \( e \) conforms to an expected type \( A \), which is often more efficient than synthesis when the target type is known.

Instantiation, handled directly by instantiate_left and instantiate_right rather than as a separate judgment variant, manages the creation and resolution of existential variables that arise when applying polymorphic functions to arguments. Application judgments (InfApp) provide specialized handling for function application inference, dealing with the intricate constraint generation that occurs when the function type may not be immediately known. The algorithm processes these judgments by pattern matching on their structure and generating new worklist entries as needed, creating a systematic approach to constraint solving that ensures all type relationships are properly established.
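As a hand-worked sketch (following the judgment forms above, not a literal trace from the checker), inferring the application \( (\lambda x{:}\mathtt{Int}.\,x)\;1 \) against an unknown \( \hat\beta \) decomposes roughly like this, where each step pops one judgment and pushes its subproblems:

```latex
% Sketch: how Inf, InfApp, Chk and Sub judgments chain together
\begin{align*}
\mathsf{Inf}\;\big((\lambda x{:}\mathtt{Int}.\,x)\;1\big) \vdash \hat\beta
  \;&\rightsquigarrow\;
  \mathsf{Inf}\;(\lambda x{:}\mathtt{Int}.\,x) \vdash \hat\alpha,\quad
  \hat\alpha \bullet 1 \vdash \hat\beta \\
\mathsf{Inf}\;(\lambda x{:}\mathtt{Int}.\,x) \vdash \hat\alpha
  \;&\rightsquigarrow\;
  \mathtt{Int}\to\hat\gamma \;\leq\; \hat\alpha,\quad
  \mathsf{Inf}\;x \vdash \hat\gamma \\
(\mathtt{Int}\to\hat\gamma) \bullet 1 \vdash \hat\beta
  \;&\rightsquigarrow\;
  \mathsf{Chk}\;1 \Leftarrow \mathtt{Int},\quad
  \hat\gamma \;\leq\; \hat\beta
\end{align*}
```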

Bidirectional Type Checking

The DK algorithm employs bidirectional type checking, splitting inference into two complementary modes that work together to handle System F-ω’s complexity.

Synthesis Mode (⇒)

#![allow(unused)]
fn main() {
fn solve_inference(&mut self, term: CoreTerm, ty: CoreType) -> TypeResult<()> {
    self.trace
        .push(format!("Inf {} ⊢ {}", self.term_to_string(&term), ty));

    match term {
        CoreTerm::Var(name) => {
            // Check pattern variable context first
            if let Some(var_ty) = self.var_context.get(&name).cloned() {
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: var_ty,
                    right: ty,
                }));
                Ok(())
            } else if let Some(var_ty) = self.worklist.find_var(&name).cloned() {
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: var_ty,
                    right: ty,
                }));
                Ok(())
            } else if let Some(var_ty) = self.data_constructors.get(&name).cloned() {
                // Check data constructors for constructor variables
                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: var_ty,
                    right: ty,
                }));
                Ok(())
            } else {
                Err(TypeError::UnboundVariable {
                    name: name.to_string(),
                    span: None,
                })
            }
        }

        CoreTerm::LitInt(_) => {
            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left: CoreType::Con("Int".to_string()),
                right: ty,
            }));
            Ok(())
        }

        CoreTerm::Lambda {
            param,
            param_ty,
            body,
        } => {
            let result_ty = CoreType::ETVar(self.worklist.fresh_evar());
            if let CoreType::ETVar(var_name) = &result_ty {
                self.worklist.push(WorklistEntry::TVar(
                    var_name.clone(),
                    TyVarKind::Existential,
                ));
            }

            let arrow_ty =
                CoreType::Arrow(Box::new(param_ty.clone()), Box::new(result_ty.clone()));
            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left: arrow_ty,
                right: ty,
            }));

            self.worklist.push(WorklistEntry::Var(param, param_ty));
            self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                term: *body,
                ty: result_ty,
            }));

            Ok(())
        }

        CoreTerm::App { func, arg } => {
            let func_ty = CoreType::ETVar(self.worklist.fresh_evar());
            if let CoreType::ETVar(var_name) = &func_ty {
                self.worklist.push(WorklistEntry::TVar(
                    var_name.clone(),
                    TyVarKind::Existential,
                ));
            }

            self.worklist
                .push(WorklistEntry::Judgment(Judgment::InfApp {
                    func_ty: func_ty.clone(),
                    arg: *arg,
                    result_ty: ty,
                }));

            self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                term: *func,
                ty: func_ty,
            }));

            Ok(())
        }

        CoreTerm::TypeLambda { param, body } => {
            let body_ty = CoreType::ETVar(self.worklist.fresh_evar());
            if let CoreType::ETVar(var_name) = &body_ty {
                self.worklist.push(WorklistEntry::TVar(
                    var_name.clone(),
                    TyVarKind::Existential,
                ));
            }

            let forall_ty = CoreType::Forall(param.clone(), Box::new(body_ty.clone()));
            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left: forall_ty,
                right: ty,
            }));

            self.worklist
                .push(WorklistEntry::TVar(param, TyVarKind::Universal));
            self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                term: *body,
                ty: body_ty,
            }));

            Ok(())
        }

        CoreTerm::Constructor { name, args: _ } => {
            if let Some(constructor_ty) = self.data_constructors.get(&name) {
                // Instantiate the constructor type with fresh existential variables
                let instantiated_ty = self.instantiate_constructor_type(constructor_ty.clone());

                self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                    left: instantiated_ty,
                    right: ty,
                }));
                Ok(())
            } else {
                Err(TypeError::UnboundDataConstructor {
                    name: name.clone(),
                    span: None,
                })
            }
        }

        CoreTerm::BinOp { op, left, right } => {
            let (left_ty, right_ty, result_ty) = self.infer_binop_types(&op);

            // Add any existential variables to the worklist
            self.add_etvars_to_worklist(&left_ty);
            self.add_etvars_to_worklist(&right_ty);
            self.add_etvars_to_worklist(&result_ty);

            // Check that the operands have the expected types
            self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                term: *left,
                ty: left_ty,
            }));
            self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                term: *right,
                ty: right_ty,
            }));

            // Check that the result type matches the expected type
            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left: result_ty,
                right: ty,
            }));

            Ok(())
        }

        CoreTerm::If {
            cond,
            then_branch,
            else_branch,
        } => {
            // The condition must be Bool
            self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                term: *cond,
                ty: CoreType::Con("Bool".to_string()),
            }));

            // Both branches must have the same type as the expected result
            self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                term: *then_branch,
                ty: ty.clone(),
            }));
            self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                term: *else_branch,
                ty,
            }));

            Ok(())
        }

        CoreTerm::Case { scrutinee, arms } => {
            // Create a fresh type variable for the scrutinee
            let scrutinee_ty = CoreType::ETVar(self.worklist.fresh_evar());
            self.add_etvars_to_worklist(&scrutinee_ty);

            // Check that the scrutinee has the inferred type
            self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                term: *scrutinee,
                ty: scrutinee_ty.clone(),
            }));

            // Process each pattern arm
            for arm in arms {
                // Check pattern constraints and get pattern variable bindings
                let pattern_bindings = self.check_pattern(&arm.pattern, &scrutinee_ty)?;

                // Add pattern variable bindings to worklist as regular variables
                for (var_name, var_type) in pattern_bindings {
                    self.worklist.push(WorklistEntry::Var(var_name, var_type));
                }

                // Check the body with pattern variables in the worklist
                self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                    term: arm.body.clone(),
                    ty: ty.clone(),
                }));
            }

            Ok(())
        }
    }
}
}

Synthesis mode analyzes expressions to determine their types, working from the structure of expressions to produce type information. When encountering variables, the algorithm looks up their types in the current typing context, instantiating any polymorphic schemes as needed to produce concrete types for the current usage. Function applications require the algorithm to infer both the function and argument types, then unify them to ensure the application is well-typed. Type applications instantiate polymorphic types with concrete type arguments, requiring careful management of type variable substitution and scope.

Type annotations provide explicit guidance to the inference process, allowing programmers to specify expected types that the algorithm can use to constrain its search space. Synthesis mode produces not only a type but also any constraints that must hold for the typing to be valid, enabling the algorithm to handle complex interdependencies between different parts of the expression being analyzed.
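Read as rewrite rules on the worklist, the Lambda and App cases of solve_inference above amount to the following (with \( \hat\alpha \) a fresh existential; a paraphrase of the code, not additional rules):

```latex
\begin{align*}
\mathsf{Inf}\;(\lambda x{:}A.\,e) \vdash T
  \;&\rightsquigarrow\;
  (A \to \hat\alpha) \leq T,\quad x{:}A,\quad \mathsf{Inf}\;e \vdash \hat\alpha \\
\mathsf{Inf}\;(e_1\;e_2) \vdash T
  \;&\rightsquigarrow\;
  \mathsf{Inf}\;e_1 \vdash \hat\alpha,\quad \hat\alpha \bullet e_2 \vdash T
\end{align*}
```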

Checking Mode (⇐)

#![allow(unused)]
fn main() {
fn solve_checking(&mut self, term: CoreTerm, ty: CoreType) -> TypeResult<()> {
    self.trace
        .push(format!("Chk {} ⇐ {}", self.term_to_string(&term), ty));

    match (&term, &ty) {
        (
            CoreTerm::Lambda {
                param,
                param_ty,
                body,
            },
            CoreType::Arrow(expected_param, result_ty),
        ) => {
            // Check that parameter types match
            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left: param_ty.clone(),
                right: *expected_param.clone(),
            }));

            self.worklist
                .push(WorklistEntry::Var(param.clone(), param_ty.clone()));
            self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                term: *body.clone(),
                ty: *result_ty.clone(),
            }));

            Ok(())
        }

        (_, CoreType::Forall(var, body_ty)) => {
            let fresh_var = self.worklist.fresh_var();
            self.worklist
                .push(WorklistEntry::TVar(fresh_var.clone(), TyVarKind::Universal));
            let substituted_ty = self.substitute_type(var, &CoreType::Var(fresh_var), body_ty);
            self.worklist.push(WorklistEntry::Judgment(Judgment::Chk {
                term,
                ty: substituted_ty,
            }));
            Ok(())
        }

        _ => {
            // Fallback: infer type and check subtyping
            let inferred_ty = CoreType::ETVar(self.worklist.fresh_evar());
            if let CoreType::ETVar(var_name) = &inferred_ty {
                self.worklist.push(WorklistEntry::TVar(
                    var_name.clone(),
                    TyVarKind::Existential,
                ));
            }

            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left: inferred_ty.clone(),
                right: ty,
            }));

            self.worklist.push(WorklistEntry::Judgment(Judgment::Inf {
                term,
                ty: inferred_ty,
            }));

            Ok(())
        }
    }
}
}

Checking mode verifies that expressions conform to expected types, working backwards from known type information to validate expression structure. For lambda expressions, the algorithm can immediately decompose the expected function type to extract parameter and return types, then check the lambda parameter against the expected parameter type and recursively check the body against the expected return type. When checking against polymorphic types, the algorithm introduces fresh type variables for the quantified variables and checks the expression against the instantiated type.

When direct checking strategies are not applicable, the algorithm falls back to a synthesis-plus-subtyping approach, where it first synthesizes a type for the expression and then verifies that this synthesized type is a subtype of the expected type. Checking mode is often more efficient than synthesis because it can make stronger assumptions about the expected type structure, allowing the algorithm to prune the search space and make more directed inferences.
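The two non-lambda cases of solve_checking above can likewise be summarized as rewrites (a paraphrase of the code):

```latex
\begin{align*}
\mathsf{Chk}\;e \Leftarrow \forall\alpha.\,A
  \;&\rightsquigarrow\;
  \beta,\quad \mathsf{Chk}\;e \Leftarrow [\beta/\alpha]A
  &&\text{($\beta$ fresh universal)} \\
\mathsf{Chk}\;e \Leftarrow B
  \;&\rightsquigarrow\;
  \hat\alpha,\quad \hat\alpha \leq B,\quad \mathsf{Inf}\;e \vdash \hat\alpha
  &&\text{(fallback: synthesize, then subtype)}
\end{align*}
```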

Existential Type Variables

Existential variables represent the unknown types that make type inference possible. They act as placeholders that get solved through unification and constraint propagation.

Variable Generation

#![allow(unused)]
fn main() {
pub fn fresh_evar(&mut self) -> TyVar {
    let var = format!("^α{}", self.next_var);
    self.next_var += 1;
    var
}
}

Fresh existential variables are generated with unique names; the call sites then register them on the worklist as existential bindings. The algorithm ensures that each unknown type gets a distinct variable, preventing accidental unification.
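A minimal standalone sketch of the same counter scheme (a hypothetical Fresh type, not the book's Worklist), showing that each call yields a distinct name:

```rust
// Hypothetical standalone version of the fresh-name counter: each call
// bumps an integer, so no two existential variables can collide.
struct Fresh {
    next: u32,
}

impl Fresh {
    fn evar(&mut self) -> String {
        let var = format!("^α{}", self.next);
        self.next += 1;
        var
    }
}

fn main() {
    let mut fresh = Fresh { next: 0 };
    let a = fresh.evar();
    let b = fresh.evar();
    assert_eq!(a, "^α0");
    assert_eq!(b, "^α1");
    assert_ne!(a, b); // distinct placeholders for distinct unknowns
    println!("{a} {b}");
}
```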

Constraint Generation

When the algorithm encounters expressions with unknown types, it follows a systematic constraint generation process. The algorithm creates fresh existential variables to represent the unknown types, ensuring each unknown gets a unique placeholder that can be tracked throughout the solving process. These existential variables are then related to known types through constraint generation, creating equations and inequalities that capture the type relationships implied by the program structure.

The generated constraints are added to the worklist for systematic resolution, allowing the algorithm to defer complex solving decisions until sufficient information is available. As constraints get resolved and existential variables receive concrete solutions, these solutions are propagated throughout the constraint system, potentially enabling the resolution of additional constraints in a cascading effect that gradually determines all unknown types.
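For example (a hand-worked sketch, not checker output), solving one existential can immediately determine another:

```latex
\hat\alpha = \hat\beta \to \mathtt{Int}
\quad\text{then}\quad
\hat\beta = \mathtt{Bool}
\quad\Longrightarrow\quad
\hat\alpha = \mathtt{Bool} \to \mathtt{Int}
```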

Solving Process

#![allow(unused)]
fn main() {
fn solve(&mut self) -> TypeResult<()> {
    while let Some(entry) = self.worklist.pop() {
        match entry {
            WorklistEntry::TVar(_, _) => {
                // Skip variable bindings during processing
                continue;
            }
            WorklistEntry::Var(_, _) => {
                // Skip term variable bindings during processing
                continue;
            }
            WorklistEntry::Judgment(judgment) => {
                self.solve_judgment(judgment)?;
            }
        }
    }
    Ok(())
}
}

Constraint solving proceeds through a systematic pattern matching process that determines which constraint solving rule applies to each judgment in the worklist. The algorithm examines the structure of constraints and applies appropriate decomposition strategies to break complex constraints into simpler, more manageable pieces. For instance, a constraint involving function types might be decomposed into separate constraints for the argument and return types.

Substitution plays a crucial role as solved variables are applied throughout the constraint system, replacing existential variables with their determined types wherever they appear. The occurs check prevents the creation of infinite types during unification by ensuring that a type variable does not appear within its own solution, maintaining the soundness of the type system. The solving process terminates because each step either discharges a judgment or decomposes it into strictly smaller subproblems. This is where the bidirectional discipline earns its keep: type inference for unannotated System F is undecidable in general, and it is the annotations demanded by checking mode that keep the algorithm decidable, so every well-typed program eventually reaches a state where all constraints are resolved and all existential variables have concrete solutions.
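A standalone miniature of the occurs check above, on a stripped-down type grammar (hypothetical Ty and occurs names, mirroring occurs_check):

```rust
// Miniature of the occurs check, on a tiny type grammar.
enum Ty {
    Var(String),
    Con(String),
    Arrow(Box<Ty>, Box<Ty>),
}

fn occurs(var: &str, ty: &Ty) -> bool {
    match ty {
        Ty::Var(name) => name == var,
        Ty::Con(_) => false,
        Ty::Arrow(t1, t2) => occurs(var, t1) || occurs(var, t2),
    }
}

fn main() {
    // Solving ^α := ^α -> Int would build the infinite type
    // ((... -> Int) -> Int), so the occurs check must reject it.
    let candidate = Ty::Arrow(
        Box::new(Ty::Var("^α".to_string())),
        Box::new(Ty::Con("Int".to_string())),
    );
    assert!(occurs("^α", &candidate));
    // A solution that never mentions ^α is fine.
    assert!(!occurs("^α", &Ty::Con("Int".to_string())));
    println!("occurs check ok");
}
```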

Subtyping and Polymorphic Instantiation

System F-ω’s subtyping relation captures the idea that more polymorphic types are subtypes of less polymorphic ones. This enables flexible use of polymorphic functions.

Subtyping Rules

#![allow(unused)]
fn main() {
fn solve_subtype(&mut self, left: CoreType, right: CoreType) -> TypeResult<()> {
    self.trace.push(format!("Sub {} <: {}", left, right));

    if left == right {
        return Ok(());
    }

    match (&left, &right) {
        // Reflexivity
        (CoreType::Con(a), CoreType::Con(b)) if a == b => Ok(()),
        (CoreType::Var(a), CoreType::Var(b)) if a == b => Ok(()),
        (CoreType::ETVar(a), CoreType::ETVar(b)) if a == b => Ok(()),

        // Function subtyping (contravariant in argument, covariant in result)
        (CoreType::Arrow(a1, a2), CoreType::Arrow(b1, b2)) => {
            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left: *b1.clone(),
                right: *a1.clone(),
            }));
            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left: *a2.clone(),
                right: *b2.clone(),
            }));
            Ok(())
        }

        // Application subtyping (covariant in both components)
        (CoreType::App(a1, a2), CoreType::App(b1, b2)) => {
            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left: *a1.clone(),
                right: *b1.clone(),
            }));
            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left: *a2.clone(),
                right: *b2.clone(),
            }));
            Ok(())
        }

        // Forall right
        (_, CoreType::Forall(var, ty)) => {
            let fresh_var = self.worklist.fresh_var();
            self.worklist
                .push(WorklistEntry::TVar(fresh_var.clone(), TyVarKind::Universal));
            let substituted_ty = self.substitute_type(var, &CoreType::Var(fresh_var), ty);
            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left,
                right: substituted_ty,
            }));
            Ok(())
        }

        // Forall left
        (CoreType::Forall(var, ty), _) => {
            let fresh_evar = self.worklist.fresh_evar();
            self.worklist
                .push(WorklistEntry::TVar(fresh_evar.clone(), TyVarKind::Marker));
            self.worklist.push(WorklistEntry::TVar(
                fresh_evar.clone(),
                TyVarKind::Existential,
            ));
            let substituted_ty = self.substitute_type(var, &CoreType::ETVar(fresh_evar), ty);
            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left: substituted_ty,
                right,
            }));
            Ok(())
        }

        // Existential variable instantiation
        (CoreType::ETVar(a), _) if !self.occurs_check(a, &right) => {
            self.instantiate_left(a, &right)
        }
        (_, CoreType::ETVar(a)) if !self.occurs_check(a, &left) => {
            self.instantiate_right(&left, a)
        }

        _ => Err(TypeError::SubtypingError {
            left,
            right,
            span: None,
        }),
    }
}
}

The subtyping algorithm handles several fundamental relationships that govern type compatibility in System F-ω. Reflexivity ensures that every type is considered a subtype of itself, providing the base case for subtyping derivations. Transitivity allows subtyping relationships to compose through intermediate types, so if \( A \leq B \) and \( B \leq C \), then \( A \leq C \) is admissible without needing a dedicated rule.

Function types exhibit the classic contravariant-covariant pattern, where a function type \( A_1 \to B_1 \) is a subtype of \( A_2 \to B_2 \) if \( A_2 \leq A_1 \) (contravariant in arguments) and \( B_1 \leq B_2 \) (covariant in results). Universal quantification follows the principle that \( \forall\alpha. A \leq B \) holds if \( A \leq B \) when \( \alpha \) is instantiated with a fresh existential variable. Existential instantiation requires the algorithm to solve existential variables in ways that satisfy the subtyping constraints, often leading to the creation of additional constraints that must be resolved.
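The contravariant flip in argument position can be seen in isolation with a toy decomposition. This is a standalone sketch: `Ty` and `sub_goals` are hypothetical stand-ins for the checker's `CoreType` and worklist, not its actual API.

```rust
// Decompose a subtyping goal `left <: right` one step, returning the
// subgoals it generates. Arrow arguments flip orientation; results do not.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Int,
    Arrow(Box<Ty>, Box<Ty>),
}

fn sub_goals(left: &Ty, right: &Ty) -> Option<Vec<(Ty, Ty)>> {
    match (left, right) {
        (Ty::Int, Ty::Int) => Some(vec![]),
        (Ty::Arrow(a1, a2), Ty::Arrow(b1, b2)) => Some(vec![
            // Argument position flips: b1 <: a1 (contravariant).
            ((**b1).clone(), (**a1).clone()),
            // Result position keeps its orientation: a2 <: b2 (covariant).
            ((**a2).clone(), (**b2).clone()),
        ]),
        _ => None,
    }
}

fn main() {
    let f = Ty::Arrow(Box::new(Ty::Int), Box::new(Ty::Int));
    let goals = sub_goals(&f, &f).unwrap();
    // Int <: Int twice: once flipped for the argument, once for the result.
    assert_eq!(goals, vec![(Ty::Int, Ty::Int), (Ty::Int, Ty::Int)]);
}
```

The same decomposition is what the worklist entries in `solve_subtype` record, just with the recursion deferred to later worklist steps.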

Polymorphic Instantiation

#![allow(unused)]
fn main() {
fn instantiate_left(&mut self, var: &str, ty: &CoreType) -> TypeResult<()> {
    match ty {
        CoreType::ETVar(b) if self.worklist.before(var, b) => {
            self.worklist
                .solve_evar(b, CoreType::ETVar(var.to_string()))?;
            Ok(())
        }
        CoreType::Arrow(t1, t2) => {
            let a1 = self.worklist.fresh_evar();
            let a2 = self.worklist.fresh_evar();
            let arrow_ty = CoreType::Arrow(
                Box::new(CoreType::ETVar(a1.clone())),
                Box::new(CoreType::ETVar(a2.clone())),
            );
            self.worklist.solve_evar(var, arrow_ty)?;

            self.worklist
                .push(WorklistEntry::TVar(a1.clone(), TyVarKind::Existential));
            self.worklist
                .push(WorklistEntry::TVar(a2.clone(), TyVarKind::Existential));

            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left: *t1.clone(),
                right: CoreType::ETVar(a1),
            }));
            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left: CoreType::ETVar(a2),
                right: *t2.clone(),
            }));

            Ok(())
        }
        CoreType::App(t1, t2) => {
            let a1 = self.worklist.fresh_evar();
            let a2 = self.worklist.fresh_evar();
            let app_ty = CoreType::App(
                Box::new(CoreType::ETVar(a1.clone())),
                Box::new(CoreType::ETVar(a2.clone())),
            );
            self.worklist.solve_evar(var, app_ty)?;

            self.worklist
                .push(WorklistEntry::TVar(a1.clone(), TyVarKind::Existential));
            self.worklist
                .push(WorklistEntry::TVar(a2.clone(), TyVarKind::Existential));

            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left: CoreType::ETVar(a1),
                right: *t1.clone(),
            }));
            self.worklist.push(WorklistEntry::Judgment(Judgment::Sub {
                left: CoreType::ETVar(a2),
                right: *t2.clone(),
            }));

            Ok(())
        }
        CoreType::Forall(b, t) => {
            let fresh_var = self.worklist.fresh_var();
            self.worklist
                .push(WorklistEntry::TVar(fresh_var.clone(), TyVarKind::Universal));
            let substituted = self.substitute_type(b, &CoreType::Var(fresh_var), t);
            self.instantiate_left(var, &substituted)
        }
        _ if self.is_monotype(ty) => {
            self.worklist.solve_evar(var, ty.clone())?;
            Ok(())
        }
        _ => Err(TypeError::InstantiationError {
            var: var.to_string(),
            ty: ty.clone(),
            span: None,
        }),
    }
}
}

Instantiation handles the complex process of applying polymorphic functions by systematically managing the relationship between universal and existential quantification. The algorithm begins by identifying quantified variables in polymorphic types, analyzing the structure of \( \forall \)-types to determine which type variables need instantiation. For each quantified variable, the algorithm generates fresh existential variables that will serve as placeholders for the concrete types to be determined through constraint solving.

The substitution phase replaces quantified variables with their corresponding existential variables throughout the type, effectively converting a polymorphic type scheme into a concrete type with unknown components. Additional constraints are added to ensure proper instantiation, capturing any relationships that must hold between the instantiated types and the context in which the polymorphic function is used. This systematic process enables the same polymorphic function to be used with different types in different contexts while maintaining type safety and ensuring that all instantiations are consistent with the function’s type signature.
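The two phases can be traced concretely on the identity function's type. The sketch below is self-contained; `Ty` and `subst` are illustrative stand-ins for the implementation's `CoreType` and `substitute_type`, under the assumption that existentials are numbered.

```rust
// Instantiate the body of `forall a. a -> a` by substituting a fresh
// existential for the quantified variable, as described above.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Con(&'static str),
    Var(&'static str),
    EVar(u32),
    Arrow(Box<Ty>, Box<Ty>),
}

fn subst(var: &str, with: &Ty, ty: &Ty) -> Ty {
    match ty {
        Ty::Var(v) if *v == var => with.clone(),
        Ty::Arrow(a, b) => Ty::Arrow(
            Box::new(subst(var, with, a)),
            Box::new(subst(var, with, b)),
        ),
        other => other.clone(),
    }
}

fn main() {
    // The body of `forall a. a -> a`, instantiated with the existential ^0.
    let body = Ty::Arrow(Box::new(Ty::Var("a")), Box::new(Ty::Var("a")));
    let inst = subst("a", &Ty::EVar(0), &body);
    let expected = Ty::Arrow(Box::new(Ty::EVar(0)), Box::new(Ty::EVar(0)));
    assert_eq!(inst, expected);
    // Matching `^0 -> ^0` against Int -> Int then forces the single
    // solution ^0 = Int during constraint solving.
}
```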

Error Handling and Diagnostics

The DK algorithm provides precise error reporting by tracking the source of constraints and the reasoning that led to type checking failures.

#![allow(unused)]
fn main() {
use std::ops::Range;
use ariadne::{Color, Label, Report, ReportKind};
use thiserror::Error;
use crate::core::CoreType;
pub type Span = Range<usize>;
#[derive(Error, Debug, Clone)]
pub enum ParseError {
    #[error("Unexpected token '{token}' at position {span:?}")]
    UnexpectedToken { token: String, span: Span },

    #[error("Expected {expected} but found '{found}' at position {span:?}")]
    Expected {
        expected: String,
        found: String,
        span: Span,
    },

    #[error("Unexpected end of file, expected {expected}")]
    UnexpectedEof { expected: String },
}
#[derive(Error, Debug, Clone)]
pub enum CompilerError {
    #[error("Parse error")]
    Parse(#[from] ParseError),

    #[error("Type error")]
    Type(#[from] TypeError),
}
pub type ParseResult<T> = Result<T, ParseError>;
pub type TypeResult<T> = Result<T, TypeError>;
pub type CompilerResult<T> = Result<T, CompilerError>;
impl ParseError {
    pub fn report<'a>(
        &self,
        source: &str,
        filename: &'a str,
    ) -> Report<'a, (&'a str, Range<usize>)> {
        match self {
            ParseError::UnexpectedToken { token, span } => {
                Report::build(ReportKind::Error, filename, span.start)
                    .with_message(format!("Unexpected token '{}'", token))
                    .with_label(
                        Label::new((filename, span.clone()))
                            .with_message("unexpected token here")
                            .with_color(Color::Red),
                    )
                    .finish()
            }
            ParseError::Expected {
                expected,
                found,
                span,
            } => Report::build(ReportKind::Error, filename, span.start)
                .with_message(format!("Expected {}, found '{}'", expected, found))
                .with_label(
                    Label::new((filename, span.clone()))
                        .with_message(format!("expected {}", expected))
                        .with_color(Color::Red),
                )
                .finish(),
            ParseError::UnexpectedEof { expected } => {
                Report::build(ReportKind::Error, filename, source.len())
                    .with_message(format!("Unexpected end of file, expected {}", expected))
                    .finish()
            }
        }
    }
}
impl TypeError {
    pub fn report<'a>(
        &self,
        source: &str,
        filename: &'a str,
    ) -> Report<'a, (&'a str, Range<usize>)> {
        match self {
            TypeError::UnboundVariable { name, span } => {
                let span = span.clone().unwrap_or(0..source.len());
                Report::build(ReportKind::Error, filename, span.start)
                    .with_message(format!("Variable '{}' not found in scope", name))
                    .with_label(
                        Label::new((filename, span))
                            .with_message("undefined variable")
                            .with_color(Color::Red),
                    )
                    .finish()
            }
            // Add more specific error reporting as needed
            _ => Report::build(ReportKind::Error, filename, 0)
                .with_message(self.to_string())
                .finish(),
        }
    }
}
#[derive(Error, Debug, Clone)]
pub enum TypeError {
    #[error("Variable '{name}' not found in scope")]
    UnboundVariable { name: String, span: Option<Span> },

    #[error("Data constructor '{name}' not found")]
    UnboundDataConstructor { name: String, span: Option<Span> },

    #[error("Type '{ty}' is not a function type")]
    NotAFunction { ty: CoreType, span: Option<Span> },

    #[error("Arity mismatch: expected {expected} arguments, got {actual}")]
    ArityMismatch {
        expected: usize,
        actual: usize,
        span: Option<Span>,
    },

    #[error("Function '{name}' has definition but no type signature")]
    MissingTypeSignature { name: String, span: Option<Span> },

    #[error("Subtyping failure: '{left}' is not a subtype of '{right}'")]
    SubtypingError {
        left: CoreType,
        right: CoreType,
        span: Option<Span>,
    },

    #[error("Instantiation failure: cannot instantiate '{var}' with '{ty}'")]
    InstantiationError {
        var: String,
        ty: CoreType,
        span: Option<Span>,
    },
}
}

The error system encompasses several categories of failures that can occur during type checking. Unification failures arise when the algorithm attempts to make two types equal but discovers they are fundamentally incompatible, such as trying to unify \( \text{Int} \) with \( \text{Bool} \to \text{String} \). Occurs check violations prevent the creation of infinite types by detecting when a type variable would appear within its own solution, such as attempting to solve \( \hat{\alpha} = \hat{\alpha} \to \text{Int} \).
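The occurs check itself is a small recursive traversal. A minimal sketch over a toy type syntax (not the book's actual `CoreType`):

```rust
// Return true if the existential `var` appears anywhere inside `ty`.
// Solving `var := ty` when this holds would create an infinite type.
#[derive(Clone, Debug)]
enum Ty {
    Con(&'static str),
    EVar(&'static str),
    Arrow(Box<Ty>, Box<Ty>),
}

fn occurs(var: &str, ty: &Ty) -> bool {
    match ty {
        Ty::Con(_) => false,
        Ty::EVar(v) => *v == var,
        Ty::Arrow(a, b) => occurs(var, a) || occurs(var, b),
    }
}

fn main() {
    // ^a = ^a -> Int would be infinite: the occurs check fires.
    let bad = Ty::Arrow(Box::new(Ty::EVar("a")), Box::new(Ty::Con("Int")));
    assert!(occurs("a", &bad));
    // ^a = Int -> Int is a legitimate solution.
    let ok = Ty::Arrow(Box::new(Ty::Con("Int")), Box::new(Ty::Con("Int")));
    assert!(!occurs("a", &ok));
}
```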

Scope violations occur when type variables escape their intended scope, typically when an existential variable that should be local to a particular constraint solving context appears in the final type. Kind mismatches arise when type constructors are applied incorrectly, such as applying a type constructor that expects a higher-kinded argument to a concrete type. Each error includes location information and a description of the type checking rule that failed, enabling precise diagnostics that help programmers understand and fix type errors in their code.

When the algorithm succeeds, it finds the most general type for each expression. This means users get the maximum possible polymorphism without losing type safety.

Worklist Optimizations

A couple of optimizations can make this process more efficient.

#![allow(unused)]
fn main() {
fn solve_judgment(&mut self, judgment: Judgment) -> TypeResult<()> {
    match judgment {
        Judgment::Sub { left, right } => self.solve_subtype(left, right),
        Judgment::Inf { term, ty } => self.solve_inference(term, ty),
        Judgment::Chk { term, ty } => self.solve_checking(term, ty),
        Judgment::InfApp {
            func_ty,
            arg,
            result_ty,
        } => self.solve_inf_app(func_ty, arg, result_ty),
    }
}
}

Smart scheduling of worklist entries improves performance through strategic constraint ordering. The algorithm prioritizes simple constraints that can be solved immediately, such as reflexive subtyping relationships or direct unification of concrete types, allowing these trivial cases to be resolved quickly and potentially simplify other constraints. Complex constraints that require extensive reasoning or depend on the resolution of other constraints are deferred until more information becomes available through the solving of simpler constraints.

Related constraints are batched together to reduce redundant work, enabling the algorithm to solve groups of interdependent constraints in a single pass rather than revisiting them multiple times. This scheduling strategy significantly reduces the overall computational cost of constraint solving by ensuring that each constraint is processed when it is most likely to be resolvable.

Constraints undergo systematic simplification before being added to the worklist to improve both performance and clarity. The algorithm eliminates trivial constraints such as \( A \leq A \) that are always satisfied, preventing unnecessary work and reducing clutter in the constraint system. Related constraints are combined when possible to reduce the overall worklist size, such as merging multiple subtyping constraints involving the same types into more comprehensive relationships.

The simplification process also enables early detection of unsatisfiable constraints, allowing the algorithm to report type errors immediately rather than continuing with doomed constraint solving attempts. This preprocessing step ensures that the worklist contains only meaningful constraints that contribute to the type inference process, making the overall algorithm more efficient and the resulting error messages more precise.
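The simplest such pass drops reflexive goals before they reach the worklist. A hedged sketch with illustrative names (`Ty`, `simplify` are not the implementation's API):

```rust
// Filter out trivially satisfied subtyping goals `A <: A` so the worklist
// only ever contains constraints the solver actually needs to process.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Con(&'static str),
}

fn simplify(goals: Vec<(Ty, Ty)>) -> Vec<(Ty, Ty)> {
    goals.into_iter().filter(|(l, r)| l != r).collect()
}

fn main() {
    let int = Ty::Con("Int");
    let boolean = Ty::Con("Bool");
    let goals = vec![
        (int.clone(), int.clone()),   // trivial, dropped before queuing
        (int.clone(), boolean.clone()), // kept; the solver will reject it
    ];
    assert_eq!(simplify(goals), vec![(int, boolean)]);
}
```

In the second kept goal, failing fast is exactly the early-error-detection benefit described above: the unsatisfiable constraint surfaces immediately instead of after a long solving run.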

Putting It All Together

And there we have it - a complete System F-ω type checker with bidirectional inference, higher-kinded types, and constraint solving! The DK algorithm transforms what could be an intractable type inference problem into a systematic, decidable process that handles some of the most advanced features in type theory.

Let’s see our type checker in action with a simple polymorphic example. First, create a test file:

-- test_identity.hs
identity :: forall a. a -> a;
identity x = x;

test_int :: Int;
test_int = identity 42;

Now run the type checker:

$ cd system-f-omega
$ cargo run -- check test_identity.hs

The system produces output showing the successful type checking:

Parsed 4 declarations
Compiled to 0 type definitions and 2 term definitions
Checking identity : ∀a. a -> a ... ✓
Checking test_int : Int ... ✓
✓ Module 'test_identity.hs' typechecks successfully!

Perfect! Our type checker correctly verified that identity has the polymorphic type \( \forall a. a \to a \) and that test_int properly instantiates it with Int. The DK algorithm handled the universal quantification, constraint generation, and instantiation with all the sophistication of a production-quality type system.

Examples

Our System F-ω implementation demonstrates its capabilities through a comprehensive suite of working examples that showcase the full range of the type system’s features. These examples progress from basic algebraic data types through higher-order polymorphic functions, illustrating how System F-ω enables advanced programming patterns while maintaining type safety.

The examples serve both as demonstrations of the implementation’s correctness and as practical illustrations of how System F-ω’s theoretical power translates into useful programming language features. Each example successfully type checks under our implementation, proving that the algorithms can handle real-world programming scenarios.

Basic Data Types and Pattern Matching

The foundation of our System F-ω implementation lies in its support for algebraic data types with comprehensive pattern matching. These features provide the building blocks for more sophisticated programming patterns.

-- Algebraic Data Types with constructors
data Bool = True | False;
data Maybe a = Nothing | Just a;
data Either a b = Left a | Right b;
data List a = Nil | Cons a (List a);

-- Functions with pattern matching
not :: Bool -> Bool;
not b = match b {
  True -> False;
  False -> True;
};

isJust :: Maybe a -> Bool;
isJust m = match m {
  Nothing -> False;
  Just x -> True;
};

fromMaybe :: a -> Maybe a -> a;
fromMaybe def m = match m {
  Nothing -> def;
  Just x -> x;
};

-- Polymorphic functions
id :: a -> a;
id x = x;

-- Higher-order functions with explicit parameters
map :: (a -> b) -> List a -> List b;
map f lst = match lst {
  Nil -> Nil;
  Cons x xs -> Cons (f x) (map f xs);
};

-- Arithmetic and comparison operations
add :: Int -> Int -> Int;
add x y = x + y;

multiply :: Int -> Int -> Int;
multiply x y = x * y;

lessThan :: Int -> Int -> Bool;
lessThan x y = x < y;

-- WORKING: Constructor applications
testBool :: Bool;
testBool = not True;

testMaybe :: Maybe Int;
testMaybe = Just 42;

testEither :: Either Bool Int;
testEither = Right 123;

testList :: List Int;
testList = Cons 1 (Cons 2 (Cons 3 Nil));

-- Function composition and application
composed :: Int;
composed = add (multiply 6 7) 8;

-- Nested pattern matching
listLength :: List a -> Int;
listLength lst = match lst {
  Nil -> 0;
  Cons x xs -> add 1 (listLength xs);
};

Algebraic Data Type Declarations

The implementation supports rich data type definitions that demonstrate System F-ω’s kind system:

  • Simple Enumerations: data Bool = True | False creates a basic sum type
  • Parameterized Types: data Maybe a = Nothing | Just a shows kind * -> *
  • Multi-Parameter Types: data Either a b = Left a | Right b demonstrates kind * -> * -> *
  • Recursive Types: data List a = Nil | Cons a (List a) enables inductive data structures

Each declaration automatically infers appropriate kinds for the type constructors, showing how the implementation handles the kind system transparently.
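For the declarations above, the inferred kind follows directly from constructor arity: a type constructor with n parameters gets kind `* -> ... -> *` with n arrows. A hypothetical helper illustrating the pattern (not part of the implementation):

```rust
// Build the kind of a first-order type constructor from its parameter count.
fn kind_of(params: usize) -> String {
    let mut kind = String::from("*");
    for _ in 0..params {
        kind = format!("* -> {}", kind);
    }
    kind
}

fn main() {
    assert_eq!(kind_of(0), "*");           // Bool
    assert_eq!(kind_of(1), "* -> *");      // Maybe, List
    assert_eq!(kind_of(2), "* -> * -> *"); // Either
}
```

This only covers parameters of kind `*`; a constructor taking a higher-kinded argument would need full kind inference rather than arity counting.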

Pattern Matching with Type Safety

Pattern matching provides safe destructuring of algebraic data types:

not :: Bool -> Bool;
not b = match b {
  True -> False;
  False -> True;
};

isJust :: Maybe a -> Bool;
isJust m = match m {
  Nothing -> False;
  Just x -> True;
};

The type checker verifies that:

  • All patterns cover the correct constructors
  • Pattern variables receive appropriate types
  • Branch expressions have compatible return types
  • Polymorphic types are handled consistently across branches
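The branch-compatibility check in the third bullet reduces to asking that all arms produce the same result type. A toy sketch with hypothetical names (real checking compares `CoreType`s via subtyping, not strings):

```rust
// Every arm of a match must agree on its result type; adjacent-pair
// comparison over the rendered types is enough for this illustration.
fn branches_agree(arm_types: &[&str]) -> bool {
    arm_types.windows(2).all(|pair| pair[0] == pair[1])
}

fn main() {
    // `not` has two arms, both returning Bool: accepted.
    assert!(branches_agree(&["Bool", "Bool"]));
    // A match mixing Bool and Int arms would be rejected.
    assert!(!branches_agree(&["Bool", "Int"]));
}
```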

Polymorphic Functions

System F-ω’s universal quantification enables functions that work uniformly across all types, demonstrating parametric polymorphism in action.

Basic Polymorphic Functions

id :: a -> a;
id x = x;

const :: a -> b -> a;
const x y = x;

These functions showcase:

  • Type Variable Scope: Variables like a and b are implicitly quantified
  • Principal Types: The implementation infers the most general possible types
  • Polymorphic Instantiation: Each use can instantiate types differently

Higher-Order Polymorphic Functions

map :: (a -> b) -> List a -> List b;
map f lst = match lst {
  Nil -> Nil;
  Cons x xs -> Cons (f x) (map f xs);
};

fromMaybe :: a -> Maybe a -> a;
fromMaybe def m = match m {
  Nothing -> def;
  Just x -> x;
};

These examples demonstrate:

  • Function Types as Arguments: (a -> b) shows higher-order typing
  • Recursive Polymorphic Functions: map calls itself with consistent types
  • Type-Safe Default Values: fromMaybe maintains type consistency

Complex Polymorphic Programming

More examples show how System F-ω handles complex interactions between polymorphism, higher-order functions, and algebraic data types.

Arithmetic and Comparison Operations

add :: Int -> Int -> Int;
add x y = x + y;

multiply :: Int -> Int -> Int;
multiply x y = x * y;

lessThan :: Int -> Int -> Bool;
lessThan x y = x < y;

Built-in operations integrate seamlessly with the user-defined type system, showing how primitive types participate in the same type-theoretic framework as algebraic data types.

Function Composition and Application

composed :: Int;
composed = add (multiply 6 7) 8;

listLength :: List a -> Int;
listLength lst = match lst {
  Nil -> 0;
  Cons x xs -> add 1 (listLength xs);
};

These examples demonstrate:

  • Nested Function Applications: Complex expressions type check correctly
  • Polymorphic Recursion: listLength works for lists of any type
  • Type Preservation: All intermediate computations maintain type safety

Advanced Programming Patterns

Our implementation handles programming patterns that require the full power of System F-ω’s type system.

Constructor Applications and Type Inference

testBool :: Bool;
testBool = not True;

testMaybe :: Maybe Int;
testMaybe = Just 42;

testList :: List Int;
testList = Cons 1 (Cons 2 (Cons 3 Nil));

The type checker correctly infers types for constructor applications, handling:

  • Type Application: Just applied to 42 infers Maybe Int
  • Nested Constructors: Complex list structure maintains type consistency
  • Polymorphic Instantiation: Each constructor use gets appropriate type arguments

Pattern Matching with Complex Types

either :: (a -> c) -> (b -> c) -> Either a b -> c;
either f g e = match e {
  Left x -> f x;
  Right y -> g y;
};

mapMaybe :: (a -> b) -> Maybe a -> Maybe b;
mapMaybe f m = match m {
  Nothing -> Nothing;
  Just x -> Just (f x);
};

These functions showcase:

  • Higher-Order Pattern Matching: Functions as arguments in pattern contexts
  • Type-Safe Elimination: Pattern matching preserves all type relationships
  • Functor Patterns: mapMaybe demonstrates structure-preserving transformations

Working Example Programs

The implementation includes several complete programs that demonstrate all features working together:

Fibonacci with Polymorphic Utilities

One example program implements Fibonacci numbers using polymorphic helper functions, showing how System F-ω enables code reuse:

fibonacci :: Int -> Int;
fibonacci n = match lessThan n 2 {
  True -> n;
  False -> add (fibonacci (subtract n 1)) (fibonacci (subtract n 2));
};

List Processing with Higher-Order Functions

Another example demonstrates functional programming patterns with lists:

filter :: (a -> Bool) -> List a -> List a;
filter pred lst = match lst {
  Nil -> Nil;
  Cons x xs -> match pred x {
    True -> Cons x (filter pred xs);
    False -> filter pred xs;
  };
};

foldRight :: (a -> b -> b) -> b -> List a -> b;
foldRight f acc lst = match lst {
  Nil -> acc;
  Cons x xs -> f x (foldRight f acc xs);
};

Type Inference in Action

The examples demonstrate how the DK worklist algorithm handles complex type inference scenarios:

Existential Variable Resolution

When processing expressions like map (add 1) someList, the algorithm:

  1. Generates existential variables for unknown types
  2. Propagates constraints through function applications
  3. Unifies types to discover that the list must have type List Int
  4. Reports the final type as List Int
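The four steps can be mimicked in miniature: the element type starts as an existential and applying its solution yields the final list type. Purely illustrative names, not the checker's API:

```rust
// Apply a single existential-variable solution throughout a toy type,
// mirroring how substitution propagates solved variables.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Con(&'static str),
    EVar(&'static str),
    List(Box<Ty>),
}

fn apply(solution: (&str, &Ty), ty: &Ty) -> Ty {
    match ty {
        Ty::EVar(v) if *v == solution.0 => solution.1.clone(),
        Ty::List(inner) => Ty::List(Box::new(apply(solution, inner))),
        other => other.clone(),
    }
}

fn main() {
    // Step 1: the list's element type is the unknown ^a, so it is List ^a.
    let list_ty = Ty::List(Box::new(Ty::EVar("a")));
    // Steps 2-3: `add 1 : Int -> Int` constrains ^a, forcing ^a = Int.
    let solved = apply(("a", &Ty::Con("Int")), &list_ty);
    // Step 4: the final reported type is List Int.
    assert_eq!(solved, Ty::List(Box::new(Ty::Con("Int"))));
}
```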

Polymorphic Instantiation

For expressions using polymorphic functions multiple times:

example = (id True, id 42, id someList)

The algorithm correctly instantiates id with different types:

  • id :: Bool -> Bool for the first component
  • id :: Int -> Int for the second component
  • id :: List a -> List a for the third component

Higher-Rank Polymorphism

The implementation handles functions that accept polymorphic arguments:

applyToEach :: (forall a. a -> a) -> (Bool, Int) -> (Bool, Int);
applyToEach f (x, y) = (f x, f y);

This demonstrates the implementation’s support for higher-rank types where polymorphic types appear in argument positions.

Calculus of Constructions

The Calculus of Constructions represents the pinnacle of the lambda cube, occupying the most expressive corner where all three dimensions of abstraction converge. This system unifies terms, types, and kinds into a single syntactic framework, eliminating the artificial boundaries that separate computation from logic and enabling types to express arbitrary mathematical propositions with computational content.

Where previous systems in our exploration maintained strict hierarchies between terms, types, and kinds, the Calculus of Constructions dissolves these distinctions. Types become first-class citizens that can be manipulated, passed to functions, and returned as results. This unification enables unprecedented expressiveness while maintaining logical consistency through a carefully constructed universe hierarchy.

Position in the Lambda Cube

The Calculus of Constructions sits at the top vertex of the lambda cube, usually written λC or λPω, combining all three forms of abstraction that define the cube’s dimensions:

Terms depending on Types (\( \uparrow \)-axis): Polymorphic functions like \( \mathsf{id} : \forall A : \mathsf{Type}.\; A \to A \) where terms abstract over type parameters, enabling parametric polymorphism across all types in the system.

Types depending on Types (\( \nearrow \)-axis): Type constructors like \( \mathsf{List} : \mathsf{Type} \to \mathsf{Type} \) and \( \Sigma : (A : \mathsf{Type}) \to (A \to \mathsf{Type}) \to \mathsf{Type} \) where types can abstract over other types, enabling higher-kinded polymorphism and type-level computation.

Types depending on Terms (\( \rightarrow \)-axis): Dependent types like \( \mathsf{Vec} : \mathsf{Nat} \to \mathsf{Type} \to \mathsf{Type} \) where the structure of types depends on the values of terms, enabling precise specification of data structure properties and program invariants.

The convergence of these three dimensions creates a system of unprecedented expressiveness. Unlike System F-ω, which provides polymorphism but maintains a clear separation between terms and types, the Calculus of Constructions allows types to depend on arbitrary term-level computations while maintaining decidable type checking through normalization properties.

The Curry-Howard Correspondence

The Calculus of Constructions realizes the profound connection between computation and logic known as the Curry-Howard correspondence. In this correspondence, types represent logical propositions and terms represent constructive proofs of those propositions. This isomorphism enables the same syntactic framework to express both programs and mathematical theorems with their proofs.

Propositions as Types: Every logical statement corresponds to a type. The proposition “for all natural numbers n, n + 0 = n” becomes the type ∀n : Nat. Eq Nat (plus n zero) n, where Eq represents propositional equality.

Proofs as Programs: Every constructive proof corresponds to a program that computes a witness for the proposition. A proof of the above proposition becomes a function that takes a natural number and produces evidence that adding zero preserves the number.

Proof Checking as Type Checking: Verifying the correctness of a mathematical proof reduces to checking that a program has the expected type. The type checker becomes a proof checker, ensuring that purported proofs actually establish their claimed propositions.

This correspondence transforms programming into theorem proving and theorem proving into programming, creating a unified framework where mathematical rigor and computational efficiency coexist naturally.
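The correspondence is concrete in any proof assistant built on this foundation. A sketch in Lean 4, whose kernel descends from the Calculus of Constructions (the example mirrors the `n + 0 = n` proposition above; here the equation happens to hold by definitional unfolding):

```lean
-- Propositions as types: the statement is a type; the term is its proof.
theorem plus_zero (n : Nat) : n + 0 = n := rfl

-- Proof checking as type checking: if `rfl` did not inhabit the stated
-- type, the kernel would reject the definition.
```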

Dependent Types: The Foundation

The key innovation that enables the Calculus of Constructions’ expressiveness is the dependent product type, or Π-type. This construct generalizes the familiar function arrow to create types whose structure depends on the values they abstract over.

Dependent Products (Π-Types)

The dependent product type \( \Pi x : A.\; B \) represents functions where the return type \( B \) can depend on the input value \( x \). When the variable \( x \) does not appear in \( B \), this reduces to the simple function type \( A \to B \). When \( x \) does appear in \( B \), we obtain true dependency where the return type varies based on the input value.

-- Simple function type (non-dependent)
add : Nat → Nat → Nat

-- Dependent function type
vec : (n : Nat) → Type → Type
create_vec : (n : Nat) → (A : Type) → vec n A

-- The return type depends on the input value n
lookup : (n : Nat) → (A : Type) → (v : vec n A) → (i : Fin n) → A

The dependent product enables precise type-level specifications that capture program invariants directly in the type system. A vector lookup function can guarantee at compile time that the index falls within the vector bounds, eliminating runtime bounds checking while maintaining type safety.
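The bounds-safety claim can be made precise in Lean 4, a sketch under the assumption of a freshly defined `Vec` (names are illustrative, not from a library):

```lean
-- A length-indexed vector: the type's index depends on a term-level Nat.
inductive Vec (A : Type) : Nat → Type where
  | nil  : Vec A 0
  | cons (x : A) {n : Nat} (xs : Vec A n) : Vec A (n + 1)

-- `head` only accepts vectors whose length is provably n + 1, so the
-- empty case is impossible by typing and no runtime check is needed.
def head {A : Type} {n : Nat} : Vec A (n + 1) → A
  | .cons x _ => x
```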

Dependent Sums (Σ-Types)

The dependent sum type \( \Sigma x : A.\; B \) represents pairs where the type of the second component depends on the value of the first component. This enables existential quantification and the creation of heterogeneous data structures with precise type relationships.

-- Dependent pair: a number and a vector of that length
sized_vec : Type := Σ n : Nat. vec n Int

-- Create a sized vector
example_vec : sized_vec := ⟨3, [1, 2, 3]⟩

-- Pattern match to extract components
process_vec : sized_vec → Int :=
  fun ⟨n, v⟩ => sum_vector v  -- Type checker knows v has length n

Dependent sums enable the expression of existential propositions where we assert the existence of a value with specific properties without revealing the exact value, while maintaining the ability to use that value computationally.
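A minimal Σ-type in Lean 4, sketched with hypothetical names: the type of the second component mentions the value of the first.

```lean
-- A bound paired with an index strictly below it: `Fin n` only has
-- inhabitants when the first component n is positive.
def BoundedIndex : Type := Σ n : Nat, Fin n

def example1 : BoundedIndex := ⟨3, (1 : Fin 3)⟩

-- The first component is recoverable; the second is statically known to
-- lie below it.
def bound (p : BoundedIndex) : Nat := p.1
```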

Universe Hierarchy and Logical Consistency

The power of dependent types raises fundamental questions about self-reference and logical consistency. If types can contain arbitrary values and types themselves are values, what prevents the construction of paradoxical types like the set of all sets that do not contain themselves?

The Calculus of Constructions addresses this challenge through a universe hierarchy that stratifies types by their complexity level. Each universe contains types of bounded complexity, and the universe hierarchy prevents the construction of self-referential types that would lead to logical inconsistency.

Universe Levels

-- Universe hierarchy
Prop : Type          -- Propositions with no computational content
Type : Type 1        -- Small types (Nat, Bool, etc.)
Type 1 : Type 2      -- Types of type constructors
Type 2 : Type 3      -- Higher-order type constructors
-- ... infinite hierarchy

Prop: The universe of propositions represents logical statements that, when proven, carry no computational information beyond their truth. Proof irrelevance means that all proofs of the same proposition are considered equal, enabling efficient compilation where proof terms can be erased.

Type n: The universe hierarchy Type 0, Type 1, Type 2, ... represents computational types at increasing levels of abstraction. Types like Nat and Bool inhabit Type 0, while type constructors like List : Type 0 → Type 0 inhabit Type 1.

Universe Polymorphism: Definitions can be polymorphic over universe levels, enabling generic constructions that work across the entire hierarchy. The identity function can be defined once and work for types at any universe level.

The universe hierarchy maintains predicativity for computational types, meaning that a type constructor at level n can only quantify over types at levels strictly less than n. This restriction blocks size paradoxes such as Girard's paradox while maintaining logical consistency.

However, the proposition universe Prop is impredicative, allowing quantification over arbitrary types including propositions themselves. This impredicativity enables the expression of powerful logical principles while maintaining consistency through proof irrelevance.
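The contrast can be seen directly in where quantified types land (a sketch in the surface syntax):

```lean
-- Impredicative: quantifying over all Props still lands in Prop
def PTrue : Prop := (P : Prop) → P → P

-- Predicative: quantifying over Type lands one level up, in Type 1
def TId : Type 1 := (A : Type) → A → A
```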

Implementation Architecture

Our Calculus of Constructions implementation demonstrates how these theoretical concepts translate into practical type checking algorithms and programming language features.

#![allow(unused)]
fn main() {
use std::fmt;
/// Universe levels for type theory
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Universe {
    Const(u32),                         // Concrete level: 0, 1, 2, ...
    ScopedVar(String, String),          // Scoped universe variable: (scope_name, var_name)
    Meta(u32),                          // Universe metavariable: ?u₀, ?u₁, ...
    Add(Box<Universe>, u32),            // Level + n
    Max(Box<Universe>, Box<Universe>),  // max(u, v)
    IMax(Box<Universe>, Box<Universe>), // imax(u, v)
}
/// Universe level constraints for polymorphism
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum UniverseConstraint {
    /// u ≤ v
    LessEq(Universe, Universe),
    /// u = v
    Equal(Universe, Universe),
}
/// Context for universe level variables
#[derive(Debug, Clone, PartialEq)]
pub struct UniverseContext {
    /// Currently bound universe variables
    pub variables: Vec<String>,
    /// Constraints on universe levels
    pub constraints: Vec<UniverseConstraint>,
}
/// Pattern matching arms
#[derive(Debug, Clone, PartialEq)]
pub struct MatchArm {
    pub pattern: Pattern,
    pub body: Term,
}
/// Patterns for match expressions
#[derive(Debug, Clone, PartialEq)]
pub enum Pattern {
    /// Variable pattern: x
    Var(String),

    /// Constructor pattern: Cons x xs
    Constructor(String, Vec<Pattern>),

    /// Wildcard pattern: _
    Wildcard,
}
/// Top-level declarations
#[derive(Debug, Clone, PartialEq)]
pub enum Declaration {
    /// Constant definition: def name {u...} (params) : type := body
    Definition {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        params: Vec<Parameter>,
        ty: Term,
        body: Term,
    },

    /// Axiom: axiom name {u...} : type
    Axiom {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        ty: Term,
    },

    /// Inductive type: inductive Name {u...} (params) : Type := constructors
    Inductive {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        params: Vec<Parameter>,
        ty: Term,
        constructors: Vec<Constructor>,
    },

    /// Structure (single constructor inductive): structure Name {u...} :=
    /// (fields)
    Structure {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        params: Vec<Parameter>,
        ty: Term,
        fields: Vec<Field>,
    },
}
/// Function parameters
#[derive(Debug, Clone, PartialEq)]
pub struct Parameter {
    pub name: String,
    pub ty: Term,
    pub implicit: bool, // for {x : A} vs (x : A)
}
/// Constructor for inductive types
#[derive(Debug, Clone, PartialEq)]
pub struct Constructor {
    pub name: String,
    pub ty: Term,
}
/// Structure fields
#[derive(Debug, Clone, PartialEq)]
pub struct Field {
    pub name: String,
    pub ty: Term,
}
/// Module containing multiple declarations
#[derive(Debug, Clone, PartialEq)]
pub struct Module {
    pub declarations: Vec<Declaration>,
}
impl fmt::Display for Universe {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Universe::Const(n) => write!(f, "{}", n),
            Universe::ScopedVar(scope, var) => write!(f, "{}::{}", scope, var),
            Universe::Meta(id) => write!(f, "?u{}", id),
            Universe::Add(u, n) => write!(f, "{}+{}", u, n),
            Universe::Max(u, v) => write!(f, "max({}, {})", u, v),
            Universe::IMax(u, v) => write!(f, "imax({}, {})", u, v),
        }
    }
}
impl fmt::Display for Term {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Term::Var(x) => write!(f, "{}", x),
            Term::App(t1, t2) => write!(f, "({} {})", t1, t2),
            Term::Abs(x, ty, body) => write!(f, "(fun {} : {} => {})", x, ty, body),
            Term::Pi(x, ty, body, implicit) => {
                if Self::occurs_free(x, body) {
                    if *implicit {
                        write!(f, "{{{} : {}}} → {}", x, ty, body)
                    } else {
                        write!(f, "({} : {}) → {}", x, ty, body)
                    }
                } else {
                    write!(f, "{} → {}", ty, body)
                }
            }
            Term::Sort(u) => match u {
                Universe::Const(0) => write!(f, "Prop"),
                Universe::Const(n) => {
                    if *n == 1 {
                        write!(f, "Type")
                    } else {
                        write!(f, "Type {}", n - 1)
                    }
                }
                Universe::Add(ref base, n) => {
                    if let Universe::Const(1) = **base {
                        write!(f, "Type {}", n)
                    } else {
                        write!(f, "Sort {}", u)
                    }
                }
                _ => write!(f, "Sort {}", u),
            },
            Term::Let(x, ty, val, body) => write!(f, "(let {} : {} := {} in {})", x, ty, val, body),
            Term::Match(scrutinee, arms) => {
                writeln!(f, "match {} with", scrutinee)?;
                for arm in arms {
                    writeln!(f, "| {} => {}", arm.pattern, arm.body)?;
                }
                Ok(())
            }
            Term::Constructor(name, args) => {
                write!(f, "{}", name)?;
                for arg in args {
                    write!(f, " {}", arg)?;
                }
                Ok(())
            }
            Term::Const(name) => write!(f, "{}", name),
            Term::Meta(name) => write!(f, "?{}", name),
            Term::Proj(term, field) => write!(f, "{}.{}", term, field),
            Term::Sigma(x, domain, codomain) => write!(f, "Σ ({} : {}), {}", x, domain, codomain),
            Term::Pair(fst, snd) => write!(f, "⟨{}, {}⟩", fst, snd),
            Term::Fst(pair) => write!(f, "π₁({})", pair),
            Term::Snd(pair) => write!(f, "π₂({})", pair),
        }
    }
}
impl fmt::Display for Pattern {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Pattern::Var(x) => write!(f, "{}", x),
            Pattern::Constructor(name, args) => {
                write!(f, "{}", name)?;
                for arg in args {
                    write!(f, " {}", arg)?;
                }
                Ok(())
            }
            Pattern::Wildcard => write!(f, "_"),
        }
    }
}
impl Term {
    /// Check if variable occurs free in term
    pub fn occurs_free(var: &str, term: &Term) -> bool {
        match term {
            Term::Var(x) => x == var,
            Term::App(t1, t2) => Self::occurs_free(var, t1) || Self::occurs_free(var, t2),
            Term::Abs(x, ty, body) => {
                Self::occurs_free(var, ty) || (x != var && Self::occurs_free(var, body))
            }
            Term::Pi(x, ty, body, _) => {
                Self::occurs_free(var, ty) || (x != var && Self::occurs_free(var, body))
            }
            Term::Sort(_) => false,
            Term::Let(x, ty, val, body) => {
                Self::occurs_free(var, ty)
                    || Self::occurs_free(var, val)
                    || (x != var && Self::occurs_free(var, body))
            }
            Term::Match(scrutinee, arms) => {
                Self::occurs_free(var, scrutinee)
                    || arms.iter().any(|arm| Self::occurs_free(var, &arm.body))
            }
            Term::Constructor(_, args) => args.iter().any(|arg| Self::occurs_free(var, arg)),
            Term::Const(_) => false,
            Term::Meta(_) => false,
            Term::Proj(term, _) => Self::occurs_free(var, term),
            Term::Sigma(x, domain, codomain) => {
                Self::occurs_free(var, domain) || (x != var && Self::occurs_free(var, codomain))
            }
            Term::Pair(fst, snd) => Self::occurs_free(var, fst) || Self::occurs_free(var, snd),
            Term::Fst(pair) | Term::Snd(pair) => Self::occurs_free(var, pair),
        }
    }
}
impl UniverseContext {
    pub fn new() -> Self {
        UniverseContext {
            variables: Vec::new(),
            constraints: Vec::new(),
        }
    }

    /// Add a universe variable to the context
    pub fn add_var(&mut self, name: String) {
        if !self.variables.contains(&name) {
            self.variables.push(name);
        }
    }

    /// Add a constraint to the context
    pub fn add_constraint(&mut self, constraint: UniverseConstraint) {
        self.constraints.push(constraint);
    }

    /// Check if a universe variable is bound in this context
    pub fn contains(&self, name: &str) -> bool {
        self.variables.contains(&name.to_string())
    }

    /// Extend context with variables from another context
    pub fn extend(&self, other: &UniverseContext) -> Self {
        let mut new_ctx = self.clone();
        for var in &other.variables {
            new_ctx.add_var(var.clone());
        }
        for constraint in &other.constraints {
            new_ctx.add_constraint(constraint.clone());
        }
        new_ctx
    }

    /// Generate a fresh universe variable name
    pub fn fresh_var(&self, base: &str) -> String {
        let mut counter = 0;
        loop {
            let name = if counter == 0 {
                base.to_string()
            } else {
                format!("{}{}", base, counter)
            };
            if !self.contains(&name) {
                return name;
            }
            counter += 1;
        }
    }
}
impl Default for UniverseContext {
    fn default() -> Self {
        Self::new()
    }
}
impl Universe {
    /// Create a scoped universe variable
    pub fn scoped_var(scope: String, name: String) -> Self {
        Universe::ScopedVar(scope, name)
    }
}
/// Core terms in the Calculus of Constructions
#[derive(Debug, Clone, PartialEq)]
pub enum Term {
    /// Variables: x, y, z
    Var(String),

    /// Application: f a
    App(Box<Term>, Box<Term>),

    /// Lambda abstraction: λ x : A, t  (written as fun x : A => t)
    Abs(String, Box<Term>, Box<Term>),

    /// Dependent product: Π x : A, B  (written as (x : A) → B or {x : A} → B)
    Pi(String, Box<Term>, Box<Term>, bool), // bool = implicit

    /// Sort/Type: Sort u, Type, Prop
    Sort(Universe),

    /// Let binding: let x : A := t in s
    Let(String, Box<Term>, Box<Term>, Box<Term>),

    /// Match expression with patterns
    Match(Box<Term>, Vec<MatchArm>),

    /// Inductive type constructor
    Constructor(String, Vec<Term>),

    /// Local constant (for definitions and axioms)
    Const(String),

    /// Meta-variable for type inference
    Meta(String),

    /// Field projection: term.field
    Proj(Box<Term>, String),

    /// Dependent pair type (Sigma type): Σ (x : A), B
    Sigma(String, Box<Term>, Box<Term>),

    /// Pair constructor: ⟨a, b⟩
    Pair(Box<Term>, Box<Term>),

    /// First projection: π₁
    Fst(Box<Term>),

    /// Second projection: π₂
    Snd(Box<Term>),
}
}

The term language unifies all syntactic categories into a single framework where the same constructs serve multiple roles depending on context. Lambda abstractions create both functions and proof terms, applications represent both function calls and modus ponens, and products represent both function types and universal quantification.
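For example, the same Π construct reads as an ordinary function type or as universal quantification depending on how its codomain is used (a sketch; `plus` and `Eq` are hypothetical names):

```lean
-- Π as a function type: (n : Nat) → Nat
def double : (n : Nat) → Nat := fun n => plus n n

-- Π as universal quantification: for every n, Eq n n holds
axiom eq_refl : (n : Nat) → Eq n n
```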

Bidirectional Type Checking

The implementation employs bidirectional type checking that splits type checking into complementary synthesis and checking modes. This approach handles the complexity of dependent types while maintaining decidability and providing informative error messages.

Synthesis Mode: Given a term, determines its type by analyzing the term’s structure and propagating type information through the syntax tree.

Checking Mode: Given a term and an expected type, verifies that the term inhabits the expected type by checking compatibility modulo definitional equality.
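The interplay of the two modes can be sketched on a tiny simply-typed fragment (a toy illustration, not the dependent checker itself; the names `Ty`, `Tm`, `synth`, and `check` are invented here): unannotated lambdas only *check* against a given arrow type, while variables, annotations, and applications *synthesize*.

```rust
#[derive(Clone, PartialEq, Debug)]
enum Ty {
    Bool,
    Arrow(Box<Ty>, Box<Ty>),
}

enum Tm {
    Var(String),
    True,
    Abs(String, Box<Tm>),  // unannotated lambda: only checkable
    App(Box<Tm>, Box<Tm>),
    Ann(Box<Tm>, Ty),      // annotation switches check -> synth
}

type Ctx = Vec<(String, Ty)>;

/// Synthesis mode: the term's structure determines its type.
fn synth(ctx: &Ctx, t: &Tm) -> Option<Ty> {
    match t {
        Tm::Var(x) => ctx.iter().rev().find(|(y, _)| y == x).map(|(_, ty)| ty.clone()),
        Tm::True => Some(Ty::Bool),
        Tm::Ann(t, ty) => {
            if check(ctx, t, ty) { Some(ty.clone()) } else { None }
        }
        Tm::App(f, a) => match synth(ctx, f)? {
            Ty::Arrow(dom, cod) => {
                if check(ctx, a, &dom) { Some(*cod) } else { None }
            }
            _ => None,
        },
        Tm::Abs(_, _) => None, // a bare lambda cannot synthesize its domain
    }
}

/// Checking mode: push the expected type into the term.
fn check(ctx: &Ctx, t: &Tm, expected: &Ty) -> bool {
    match (t, expected) {
        (Tm::Abs(x, body), Ty::Arrow(dom, cod)) => {
            // Extend the context with the lambda's bound variable
            let mut ctx2 = ctx.clone();
            ctx2.push((x.clone(), (**dom).clone()));
            check(&ctx2, body, cod)
        }
        // Fall back to synthesis and compare the results
        _ => synth(ctx, t).as_ref() == Some(expected),
    }
}

fn main() {
    // (fun x => x) checks against Bool -> Bool, but cannot synthesize alone
    let id = Tm::Abs("x".into(), Box::new(Tm::Var("x".into())));
    let bb = Ty::Arrow(Box::new(Ty::Bool), Box::new(Ty::Bool));
    assert!(check(&vec![], &id, &bb));
    assert!(synth(&vec![], &id).is_none());
    println!("ok");
}
```

The `Ann` rule is the hinge between the two modes: an annotation turns a checkable term into a synthesizable one, which is why unannotated lambdas in the real system need either an expected type flowing in or a fresh meta-variable for their domain.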

Constraint-Based Inference

The system includes advanced constraint solving for implicit argument inference and unification of dependent types. Meta-variables represent unknown types that get resolved through unification with complex dependency tracking.

#![allow(unused)]
fn main() {
use std::collections::{HashMap, HashSet, VecDeque};
use std::fmt;
use crate::ast::{Term, Universe};
use crate::context::Context;
use crate::errors::TypeError;
/// Meta-variable identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct MetaId(pub u64);
impl fmt::Display for MetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?m{}", self.0)
    }
}
/// Universe meta-variable identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct UniverseMetaId(pub u64);
impl fmt::Display for UniverseMetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?u{}", self.0)
    }
}
/// Information about a meta-variable
#[derive(Debug, Clone)]
pub struct MetaInfo {
    pub id: MetaId,
    pub context: Vec<String>, // Variables in scope when meta was created
    pub expected_type: Option<Term>, // Expected type of the meta-variable
    pub solution: Option<Term>, // Solution if solved
    pub dependencies: HashSet<MetaId>, // Meta-variables this depends on
    pub occurrences: Vec<ConstraintId>, // Constraints this meta appears in
}
/// Constraint identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct ConstraintId(pub u64);
/// Constraint strength for prioritization
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ConstraintStrength {
    Required,  // Must be solved
    Preferred, // Should be solved if possible
    Weak,      // Can be postponed
}
/// Substitution mapping meta-variables to terms
#[derive(Debug, Clone, Default)]
pub struct Substitution {
    mapping: HashMap<MetaId, Term>,
    universe_mapping: HashMap<u32, Universe>,
}
impl Substitution {
    pub fn new() -> Self {
        Self::default()
    }

    pub fn insert(&mut self, meta: MetaId, term: Term) {
        self.mapping.insert(meta, term);
    }

    pub fn get(&self, meta: &MetaId) -> Option<&Term> {
        self.mapping.get(meta)
    }

    pub fn insert_universe(&mut self, meta_id: u32, universe: Universe) {
        self.universe_mapping.insert(meta_id, universe);
    }

    pub fn get_universe(&self, meta_id: &u32) -> Option<&Universe> {
        self.universe_mapping.get(meta_id)
    }

    /// Apply substitution to a term
    pub fn apply(&self, term: &Term) -> Term {
        match term {
            Term::Meta(name) => {
                // Try to parse meta ID and look up substitution
                if let Some(meta_id) = self.parse_meta_name(name) {
                    if let Some(subst) = self.mapping.get(&meta_id) {
                        // Recursively apply substitution
                        return self.apply(subst);
                    }
                }
                term.clone()
            }
            Term::App(f, arg) => Term::App(Box::new(self.apply(f)), Box::new(self.apply(arg))),
            Term::Abs(x, ty, body) => Term::Abs(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
            ),
            Term::Pi(x, ty, body, implicit) => Term::Pi(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
                *implicit,
            ),
            Term::Let(x, ty, val, body) => Term::Let(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(val)),
                Box::new(self.apply(body)),
            ),
            Term::Sort(u) => Term::Sort(self.apply_universe(u)),
            // Other cases remain unchanged
            _ => term.clone(),
        }
    }

    /// Apply substitution to a universe
    pub fn apply_universe(&self, universe: &Universe) -> Universe {
        let result = match universe {
            Universe::Meta(id) => {
                if let Some(subst) = self.universe_mapping.get(id) {
                    // Recursively apply substitution
                    self.apply_universe(subst)
                } else {
                    universe.clone()
                }
            }
            Universe::Add(base, n) => Universe::Add(Box::new(self.apply_universe(base)), *n),
            Universe::Max(u, v) => Universe::Max(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            Universe::IMax(u, v) => Universe::IMax(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            // Constants and scoped variables remain unchanged during substitution
            Universe::Const(_) | Universe::ScopedVar(_, _) => universe.clone(),
        };
        // Normalize the result to simplify expressions like Const(0) + 1 -> Const(1)
        self.normalize_universe_static(&result)
    }

    /// Static normalization (doesn't require &self)
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe_static(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe_static(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe_static(u1);
                let u2_norm = self.normalize_universe_static(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }
}
/// Advanced constraint solver with dependency tracking
pub struct Solver {
    /// Meta-variables
    metas: HashMap<MetaId, MetaInfo>,
    /// Active constraints
    constraints: HashMap<ConstraintId, Constraint>,
    /// Constraint queue (ordered by priority)
    queue: VecDeque<ConstraintId>,
    /// Solved substitutions
    substitution: Substitution,
    /// Next meta ID
    next_meta_id: u64,
    /// Next universe meta ID
    next_universe_meta_id: u32,
    /// Next constraint ID
    next_constraint_id: u64,
    /// Type checking context
    context: Context,
    /// Dependency graph: meta -> constraints that depend on it
    dependencies: HashMap<MetaId, HashSet<ConstraintId>>,
    /// Delayed constraints that couldn't be solved immediately
    delayed_constraints: HashSet<ConstraintId>,
    /// Recursion depth counter to prevent stack overflow
    recursion_depth: u32,
    /// Flag to enable Miller pattern solving (advanced)
    enable_miller_patterns: bool,
    /// Flag to enable HasType constraint solving (advanced)
    enable_has_type_solving: bool,
}
impl Solver {
    pub fn new(context: Context) -> Self {
        Self {
            metas: HashMap::new(),
            constraints: HashMap::new(),
            queue: VecDeque::new(),
            substitution: Substitution::new(),
            next_meta_id: 0,
            next_universe_meta_id: 0,
            next_constraint_id: 0,
            context,
            dependencies: HashMap::new(),
            delayed_constraints: HashSet::new(),
            recursion_depth: 0,
            enable_miller_patterns: false,  // Advanced feature
            enable_has_type_solving: false, // Advanced feature
        }
    }

    /// Enable Miller pattern solving (advanced)
    pub fn enable_miller_patterns(&mut self) {
        self.enable_miller_patterns = true;
    }

    /// Check if Miller pattern solving is enabled
    pub fn miller_patterns_enabled(&self) -> bool {
        self.enable_miller_patterns
    }

    /// Enable HasType constraint solving (advanced)
    pub fn enable_has_type_solving(&mut self) {
        self.enable_has_type_solving = true;
    }

    /// Check if HasType constraint solving is enabled
    pub fn has_type_solving_enabled(&self) -> bool {
        self.enable_has_type_solving
    }

    /// Create a fresh meta-variable
    pub fn fresh_meta(&mut self, ctx_vars: Vec<String>, expected_type: Option<Term>) -> MetaId {
        let id = MetaId(self.next_meta_id);
        self.next_meta_id += 1;

        let info = MetaInfo {
            id,
            context: ctx_vars,
            expected_type,
            solution: None,
            dependencies: HashSet::new(),
            occurrences: Vec::new(),
        };

        self.metas.insert(id, info);
        id
    }

    /// Create a fresh universe meta-variable
    pub fn fresh_universe_meta(&mut self) -> Universe {
        let id = self.next_universe_meta_id;
        self.next_universe_meta_id += 1;
        Universe::Meta(id)
    }

    /// Add a constraint to the solver
    pub fn add_constraint(&mut self, constraint: Constraint) -> ConstraintId {
        let id = match &constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        // Track which metas appear in this constraint
        self.track_meta_occurrences(&constraint);

        self.constraints.insert(id, constraint);
        self.queue.push_back(id);
        id
    }

    /// Create a new constraint ID
    pub fn new_constraint_id(&mut self) -> ConstraintId {
        let id = ConstraintId(self.next_constraint_id);
        self.next_constraint_id += 1;
        id
    }

    /// Track meta-variable occurrences in constraints
    fn track_meta_occurrences(&mut self, constraint: &Constraint) {
        let metas = self.collect_metas_in_constraint(constraint);
        let constraint_id = match constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        for meta_id in metas {
            if let Some(info) = self.metas.get_mut(&meta_id) {
                info.occurrences.push(constraint_id);
            }
            self.dependencies
                .entry(meta_id)
                .or_default()
                .insert(constraint_id);
        }
    }

    /// Collect all meta-variables in a constraint
    fn collect_metas_in_constraint(&self, constraint: &Constraint) -> HashSet<MetaId> {
        match constraint {
            Constraint::Unify { left, right, .. } => {
                let mut metas = self.collect_metas_in_term(left);
                metas.extend(self.collect_metas_in_term(right));
                metas
            }
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                let mut metas = self.collect_metas_in_term(term);
                metas.extend(self.collect_metas_in_term(expected_type));
                metas
            }
            Constraint::UnifyUniverse { .. } => {
                // Universe constraints don't directly contain term meta-variables
                // but they might contain universe meta-variables (which we track separately)
                HashSet::new()
            }
            Constraint::UniverseLevel { .. } => {
                // Same as above
                HashSet::new()
            }
            Constraint::Delayed {
                constraint,
                waiting_on,
                ..
            } => {
                let mut metas = waiting_on.clone();
                metas.extend(self.collect_metas_in_constraint(constraint));
                metas
            }
        }
    }

    /// Collect all meta-variables in a term
    fn collect_metas_in_term(&self, term: &Term) -> HashSet<MetaId> {
        let mut metas = HashSet::new();
        self.collect_metas_recursive(term, &mut metas);
        metas
    }

    fn collect_metas_recursive(&self, term: &Term, metas: &mut HashSet<MetaId>) {
        match term {
            Term::Meta(name) => {
                if let Some(id) = self.parse_meta_name(name) {
                    metas.insert(id);
                }
            }
            Term::App(f, arg) => {
                self.collect_metas_recursive(f, metas);
                self.collect_metas_recursive(arg, metas);
            }
            Term::Abs(_, ty, body) | Term::Pi(_, ty, body, _) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(body, metas);
            }
            Term::Let(_, ty, val, body) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(val, metas);
                self.collect_metas_recursive(body, metas);
            }
            _ => {}
        }
    }

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }

    /// Main solving loop
    pub fn solve(&mut self) -> Result<Substitution, TypeError> {
        let max_iterations = 1000;
        let mut iterations = 0;

        while !self.queue.is_empty() && iterations < max_iterations {
            iterations += 1;

            // Pick the best constraint to solve
            if let Some(constraint_id) = self.pick_constraint() {
                let constraint = self.constraints.remove(&constraint_id).unwrap();

                match self.solve_constraint(constraint.clone()) {
                    Ok(progress) => {
                        if progress {
                            // Propagate the solution
                            self.propagate_solution()?;

                            // Wake up delayed constraints that might now be solvable
                            let woken_constraints = self.wake_delayed_constraints()?;

                            // Add woken constraints back to queue with high priority
                            for woken_id in woken_constraints {
                                self.queue.push_front(woken_id);
                            }
                        }
                    }
                    Err(e) => {
                        // Try to delay the constraint if possible
                        if self.can_delay(&constraint_id) {
                            // Determine which meta-variables to wait for
                            let waiting_on = match &constraint {
                                Constraint::Unify { left, right, .. } => {
                                    let mut metas = self.collect_metas_in_term(left);
                                    metas.extend(self.collect_metas_in_term(right));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                Constraint::HasType {
                                    term,
                                    expected_type,
                                    ..
                                } => {
                                    let mut metas = self.collect_metas_in_term(term);
                                    metas.extend(self.collect_metas_in_term(expected_type));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                _ => HashSet::new(),
                            };

                            if !waiting_on.is_empty() {
                                // Put constraint back and delay it
                                self.constraints.insert(constraint_id, constraint);
                                self.delay_constraint(constraint_id, waiting_on)?;
                            } else {
                                // Can't delay, return error
                                return Err(e);
                            }
                        } else {
                            return Err(e);
                        }
                    }
                }
            }
        }

        if iterations >= max_iterations {
            return Err(TypeError::Internal {
                message: "Constraint solving did not converge".to_string(),
            });
        }

        // Check for unsolved required constraints
        for constraint in self.constraints.values() {
            if let Constraint::Unify {
                strength: ConstraintStrength::Required,
                ..
            } = constraint
            {
                return Err(TypeError::Internal {
                    message: "Unsolved required constraints remain".to_string(),
                });
            }
        }

        Ok(self.substitution.clone())
    }

    /// Pick the next constraint to solve based on heuristics
    fn pick_constraint(&mut self) -> Option<ConstraintId> {
        // Priority order:
        // 1. Constraints with no meta-variables
        // 2. Constraints with only solved meta-variables
        // 3. Simple pattern constraints (Miller patterns)
        // 4. Other constraints

        let mut best_constraint = None;
        let mut best_score = i32::MAX;

        for &id in &self.queue {
            if let Some(constraint) = self.constraints.get(&id) {
                let score = self.score_constraint(constraint);
                if score < best_score {
                    best_score = score;
                    best_constraint = Some(id);
                }
            }
        }

        if let Some(id) = best_constraint {
            self.queue.retain(|&x| x != id);
            Some(id)
        } else {
            self.queue.pop_front()
        }
    }

    /// Score a constraint for solving priority (lower is better)
    fn score_constraint(&self, constraint: &Constraint) -> i32 {
        let metas = self.collect_metas_in_constraint(constraint);

        if metas.is_empty() {
            return 0; // No metas, can solve immediately
        }

        let unsolved_count = metas
            .iter()
            .filter(|m| {
                self.metas
                    .get(m)
                    .and_then(|info| info.solution.as_ref())
                    .is_none()
            })
            .count();

        if unsolved_count == 0 {
            return 1; // All metas solved
        }

        // Check for Miller patterns
        if let Constraint::Unify { left, right, .. } = constraint {
            if self.is_miller_pattern_unification(left, right)
                || self.is_miller_pattern_unification(right, left)
            {
                return 10 + unsolved_count as i32;
            }
        }

        100 + unsolved_count as i32
    }

    /// Check if a unification is a Miller pattern (?M x1 ... xn = t)
    fn is_miller_pattern_unification(&self, left: &Term, right: &Term) -> bool {
        self.is_simple_miller_pattern(left, right)
    }

    /// Check if left is a simple Miller pattern and right is safe to solve with
    pub fn is_simple_miller_pattern(&self, left: &Term, right: &Term) -> bool {
        let (head, args) = self.decompose_application(left);

        if let Term::Meta(meta_name) = head {
            // Must have at least one argument to be interesting
            if args.is_empty() {
                return false;
            }

            // Check if all arguments are distinct variables
            let mut seen = HashSet::new();
            for arg in &args {
                if let Term::Var(x) = arg {
                    if !seen.insert(x.clone()) {
                        return false; // Not distinct
                    }
                } else {
                    return false; // Not a variable
                }
            }

            // Check occurs check - right must not contain the meta-variable
            if let Some(meta_id) = self.parse_meta_name(meta_name) {
                if self.collect_metas_in_term(right).contains(&meta_id) {
                    return false; // Occurs check failure
                }
            }

            // For now, only handle simple cases where right is a variable or constant
            match right {
                Term::Var(_) | Term::Const(_) | Term::Sort(_) => true,
                // Allow solving one meta with a different one
                Term::Meta(right_name) => meta_name != right_name,
                _ => false, // More complex cases need careful handling
            }
        } else {
            false
        }
    }
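
    // Illustrative examples for `is_simple_miller_pattern` above. The names
    // `?M`, `x`, `y`, and `z` are hypothetical; this is a sketch of the cases,
    // not an exhaustive specification:
    //
    //   ?M x y   =  z     accepted: distinct variable spine, simple right side
    //   ?M x x   =  z     rejected: spine variables are not distinct
    //   ?M (f x) =  z     rejected: an argument is not a bare variable
    //   ?M x     =  ?M    rejected: same meta on both sides (occurs check)
    //
    // Accepted patterns have a unique most general solution, obtained by
    // abstracting the spine out of the right-hand side: ?M := \x. \y. z.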

    /// Decompose an application into head and arguments
    fn decompose_application<'a>(&self, term: &'a Term) -> (&'a Term, Vec<&'a Term>) {
        let mut current = term;
        let mut args = Vec::new();

        while let Term::App(f, arg) = current {
            args.push(arg.as_ref());
            current = f;
        }

        args.reverse();
        (current, args)
    }

    /// Solve a single constraint
    fn solve_constraint(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 100;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in solve_constraint",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.solve_constraint_impl(constraint);
        self.recursion_depth -= 1;
        result
    }

    fn solve_constraint_impl(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        match constraint {
            Constraint::Unify {
                left,
                right,
                strength,
                ..
            } => self.unify(&left, &right, strength),
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                if self.enable_has_type_solving {
                    self.solve_has_type_constraint(&term, &expected_type)
                } else {
                    // Default behavior: don't solve HasType constraints
                    Ok(false)
                }
            }
            Constraint::UnifyUniverse {
                left,
                right,
                strength,
                ..
            } => self.unify_universe(&left, &right, strength),
            Constraint::UniverseLevel { left, right, .. } => {
                // For now, treat level constraints as unification constraints
                self.unify_universe(&left, &right, ConstraintStrength::Preferred)
            }
            Constraint::Delayed { constraint, .. } => {
                // Re-queue the delayed constraint
                let inner = *constraint;
                self.add_constraint(inner);
                Ok(false)
            }
        }
    }

    /// Unify two terms
    fn unify(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 50;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in unify",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.unify_impl(left, right, strength);
        self.recursion_depth -= 1;
        result
    }

    fn unify_impl(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply(left);
        let right = self.substitution.apply(right);

        // Check if already equal
        if self.alpha_equal(&left, &right) {
            return Ok(false);
        }

        // Miller pattern detection and solving (if enabled)
        if self.enable_miller_patterns {
            if self.is_simple_miller_pattern(&left, &right) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&left) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &right);
                }
            } else if self.is_simple_miller_pattern(&right, &left) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&right) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &left);
                }
            }
        }

        match (&left, &right) {
            // Meta-variable cases
            (Term::Meta(m), t) | (t, Term::Meta(m)) => {
                if let Some(meta_id) = self.parse_meta_name(m) {
                    self.solve_meta(meta_id, t)
                } else {
                    Err(TypeError::Internal {
                        message: format!("Invalid meta-variable: {}", m),
                    })
                }
            }

            // Structural cases
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *f1.clone(),
                    right: *f2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: *a1.clone(),
                    right: *a2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                // Rename bound variables to be consistent
                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            // Universe (Sort) cases
            (Term::Sort(u1), Term::Sort(u2)) => self.unify_universe(u1, u2, strength),

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::TypeMismatch {
                        expected: left,
                        actual: right,
                    })
                } else {
                    Ok(false) // Can't solve, but not an error for weak
                              // constraints
                }
            }
        }
    }

    /// Unify two universes
    fn unify_universe(
        &mut self,
        left: &Universe,
        right: &Universe,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply_universe(left);
        let right = self.substitution.apply_universe(right);

        // Normalize universes (simplify expressions like 0+1 -> 1)
        let left = self.normalize_universe(&left);
        let right = self.normalize_universe(&right);

        // Check if already equal
        if left == right {
            return Ok(false);
        }

        match (&left, &right) {
            // Same constants
            (Universe::Const(n1), Universe::Const(n2)) if n1 == n2 => Ok(false),

            // Universe variable cases
            (Universe::ScopedVar(s1, v1), Universe::ScopedVar(s2, v2)) if s1 == s2 && v1 == v2 => {
                Ok(false)
            }
            (Universe::ScopedVar(_, _), Universe::ScopedVar(_, _)) => {
                // Different universe variables cannot be unified
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!(
                            "Cannot unify distinct universe variables {} and {}",
                            left, right
                        ),
                    })
                } else {
                    Ok(false)
                }
            }

            // Meta-variable cases
            (Universe::Meta(id), u) | (u, Universe::Meta(id)) => self.solve_universe_meta(*id, u),

            // Structural cases
            (Universe::Add(base1, n1), Universe::Add(base2, n2)) if n1 == n2 => {
                let constraint_id = self.new_constraint_id();
                self.add_constraint(Constraint::UnifyUniverse {
                    id: constraint_id,
                    left: *base1.clone(),
                    right: *base2.clone(),
                    strength,
                });
                Ok(true)
            }

            // Arithmetic constraints: ?u+n = m means ?u = m-n
            (Universe::Add(base, n), Universe::Const(m))
            | (Universe::Const(m), Universe::Add(base, n)) => {
                if *m >= *n {
                    if let Universe::Meta(id) = base.as_ref() {
                        self.solve_universe_meta(*id, &Universe::Const(m - n))
                    } else {
                        // More complex case - generate constraint for base
                        let constraint_id = self.new_constraint_id();
                        self.add_constraint(Constraint::UnifyUniverse {
                            id: constraint_id,
                            left: *base.clone(),
                            right: Universe::Const(m - n),
                            strength,
                        });
                        Ok(true)
                    }
                } else {
                    Err(TypeError::Internal {
                        message: format!(
                            "Cannot unify universes {} and {} (would require a negative universe level)",
                            left, right
                        ),
                    })
                }
            }

            (Universe::Max(u1, v1), Universe::Max(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Universe::IMax(u1, v1), Universe::IMax(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!("Cannot unify universes {} and {}", left, right),
                    })
                } else {
                    Ok(false)
                }
            }
        }
    }

    /// Normalize a universe expression
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe(u1);
                let u2_norm = self.normalize_universe(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }
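
    // A few illustrative normalizations (hypothetical inputs), assuming the
    // usual reading of the constructors:
    //
    //   Add(Const(0), 1)        ~>  Const(1)
    //   Max(Const(1), Const(2)) ~>  Const(2)
    //   Max(Meta(0), Const(1))  ~>  unchanged (the meta blocks constant folding)
    //
    // Note that IMax is left alone here: it falls through to the catch-all arm
    // and can only be simplified once its arguments are known.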

    /// Solve a universe meta-variable
    fn solve_universe_meta(
        &mut self,
        meta_id: u32,
        solution: &Universe,
    ) -> Result<bool, TypeError> {
        // Occurs check for universe meta-variables
        if self.occurs_in_universe(meta_id, solution) {
            return Err(TypeError::Internal {
                message: format!("Universe occurs check failed for ?u{}", meta_id),
            });
        }

        // Record solution
        self.substitution.insert_universe(meta_id, solution.clone());
        Ok(true)
    }

    /// Check if a universe meta-variable occurs in a universe expression
    #[allow(clippy::only_used_in_recursion)]
    fn occurs_in_universe(&self, meta_id: u32, universe: &Universe) -> bool {
        match universe {
            Universe::Meta(id) => *id == meta_id,
            Universe::Add(base, _) => self.occurs_in_universe(meta_id, base),
            Universe::Max(u, v) | Universe::IMax(u, v) => {
                self.occurs_in_universe(meta_id, u) || self.occurs_in_universe(meta_id, v)
            }
            _ => false,
        }
    }

    /// Solve a meta-variable
    fn solve_meta(&mut self, meta_id: MetaId, solution: &Term) -> Result<bool, TypeError> {
        // Occurs check
        if self.collect_metas_in_term(solution).contains(&meta_id) {
            return Err(TypeError::Internal {
                message: format!("Occurs check failed for ?m{}", meta_id.0),
            });
        }

        // Check solution is well-scoped
        if let Some(meta_info) = self.metas.get(&meta_id) {
            if !self.is_well_scoped(solution, &meta_info.context) {
                return Err(TypeError::Internal {
                    message: format!("Solution for ?m{} is not well-scoped", meta_id.0),
                });
            }
        }

        // Record solution
        self.substitution.insert(meta_id, solution.clone());

        if let Some(info) = self.metas.get_mut(&meta_id) {
            info.solution = Some(solution.clone());
        }

        Ok(true)
    }
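
    // Sketch of why the occurs check above matters: solving ?m0 against a term
    // that mentions ?m0, e.g. `f ?m0` (hypothetical), would make applying the
    // substitution non-terminating:
    //
    //   ?m0 := f ?m0  ~>  f (f (f ...))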

    /// Check if a term is well-scoped in a context
    fn is_well_scoped(&self, term: &Term, context: &[String]) -> bool {
        match term {
            Term::Var(x) => {
                // Check if it's a bound variable or a global constant
                context.contains(x)
                    || self.context.lookup(x).is_some()
                    || self.context.lookup_axiom(x).is_some()
                    || self.context.lookup_constructor(x).is_some()
            }
            Term::App(f, arg) => {
                self.is_well_scoped(f, context) && self.is_well_scoped(arg, context)
            }
            Term::Abs(x, ty, body) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Pi(x, ty, body, _) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Const(name) => {
                // Constants should be in the global context
                self.context.lookup_axiom(name).is_some()
                    || self.context.lookup_constructor(name).is_some()
            }
            _ => true, // Sorts, meta-variables, etc.
        }
    }

    /// Alpha equality check
    fn alpha_equal(&self, t1: &Term, t2: &Term) -> bool {
        self.alpha_equal_aux(t1, t2, &mut vec![])
    }

    #[allow(clippy::only_used_in_recursion)]
    fn alpha_equal_aux(&self, t1: &Term, t2: &Term, renamings: &mut Vec<(String, String)>) -> bool {
        match (t1, t2) {
            (Term::Var(x), Term::Var(y)) => {
                // Walk renamings innermost-first: the nearest enclosing binder
                // shadows any outer renaming of the same name.
                for (a, b) in renamings.iter().rev() {
                    if a == x || b == y {
                        // At least one side is bound here; the variables are
                        // equal only if both were bound by this binder pair.
                        return a == x && b == y;
                    }
                }
                // Both variables are free
                x == y
            }
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                self.alpha_equal_aux(f1, f2, renamings) && self.alpha_equal_aux(a1, a2, renamings)
            }
            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Sort(u1), Term::Sort(u2)) => u1 == u2,
            (Term::Const(c1), Term::Const(c2)) => c1 == c2,
            (Term::Meta(m1), Term::Meta(m2)) => m1 == m2,
            _ => false,
        }
    }

    /// Rename a free variable in a term (capture is avoided by the call sites
    /// choosing `new` to be fresh)
    #[allow(clippy::only_used_in_recursion)]
    fn rename_var(&self, old: &str, new: &str, term: &Term) -> Term {
        match term {
            Term::Var(x) if x == old => Term::Var(new.to_string()),
            Term::App(f, arg) => Term::App(
                Box::new(self.rename_var(old, new, f)),
                Box::new(self.rename_var(old, new, arg)),
            ),
            Term::Abs(x, ty, body) => {
                // The annotation lies outside the binder's scope, so it is
                // always renamed; the body only if the binder doesn't shadow `old`.
                let ty = Box::new(self.rename_var(old, new, ty));
                let body = if x == old {
                    body.clone()
                } else {
                    Box::new(self.rename_var(old, new, body))
                };
                Term::Abs(x.clone(), ty, body)
            }
            Term::Pi(x, ty, body, implicit) => {
                let ty = Box::new(self.rename_var(old, new, ty));
                let body = if x == old {
                    body.clone()
                } else {
                    Box::new(self.rename_var(old, new, body))
                };
                Term::Pi(x.clone(), ty, body, *implicit)
            }
            _ => term.clone(),
        }
    }

    /// Propagate solutions to dependent constraints
    fn propagate_solution(&mut self) -> Result<(), TypeError> {
        // Apply substitution to all remaining constraints
        let constraints: Vec<_> = self.constraints.drain().collect();

        for (id, constraint) in constraints {
            let updated = self.apply_subst_to_constraint(constraint);
            self.constraints.insert(id, updated);
        }

        Ok(())
    }

    /// Apply substitution to a constraint
    fn apply_subst_to_constraint(&self, constraint: Constraint) -> Constraint {
        match constraint {
            Constraint::Unify {
                id,
                left,
                right,
                strength,
            } => Constraint::Unify {
                id,
                left: self.substitution.apply(&left),
                right: self.substitution.apply(&right),
                strength,
            },
            Constraint::HasType {
                id,
                term,
                expected_type,
            } => Constraint::HasType {
                id,
                term: self.substitution.apply(&term),
                expected_type: self.substitution.apply(&expected_type),
            },
            Constraint::UnifyUniverse {
                id,
                left,
                right,
                strength,
            } => Constraint::UnifyUniverse {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
                strength,
            },
            Constraint::UniverseLevel { id, left, right } => Constraint::UniverseLevel {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
            },
            Constraint::Delayed {
                id,
                constraint,
                waiting_on,
            } => Constraint::Delayed {
                id,
                constraint: Box::new(self.apply_subst_to_constraint(*constraint)),
                waiting_on,
            },
        }
    }

    /// Solve a HasType constraint by type inference and unification
    fn solve_has_type_constraint(
        &mut self,
        term: &Term,
        expected_type: &Term,
    ) -> Result<bool, TypeError> {
        match term {
            Term::Meta(meta_name) => {
                // If the term is a meta-variable, we can potentially solve it
                if let Some(meta_id) = self.parse_meta_name(meta_name) {
                    if let Some(meta_info) = self.metas.get(&meta_id) {
                        if meta_info.solution.is_none() {
                            // Try to construct a term that has the expected type
                            if let Some(solution) = self.synthesize_term_of_type(expected_type)? {
                                return self.solve_meta(meta_id, &solution);
                            }
                        }
                    }
                }
                // If we can't solve the meta, try unifying with expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }

            Term::App(f, arg) => {
                // For applications, we need to ensure the function type is compatible
                // Create a fresh meta for the function type: ?F : ?A -> ?B
                let arg_type_meta_id = self.fresh_meta(vec![], None);
                let result_type_meta_id = self.fresh_meta(vec![], None);
                let arg_type_meta = Term::Meta(format!("?m{}", arg_type_meta_id.0));
                let result_type_meta = Term::Meta(format!("?m{}", result_type_meta_id.0));
                let func_type = Term::Pi(
                    "_".to_string(),
                    Box::new(arg_type_meta.clone()),
                    Box::new(result_type_meta.clone()),
                    false,
                );

                // Add constraints:
                // 1. f : ?A -> ?B
                // 2. arg : ?A
                // 3. ?B ≡ expected_type
                let f_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: f.as_ref().clone(),
                    expected_type: func_type,
                };
                let arg_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: arg.as_ref().clone(),
                    expected_type: arg_type_meta,
                };
                let result_constraint = Constraint::Unify {
                    id: self.new_constraint_id(),
                    left: result_type_meta,
                    right: expected_type.clone(),
                    strength: ConstraintStrength::Required,
                };

                self.add_constraint(f_constraint);
                self.add_constraint(arg_constraint);
                self.add_constraint(result_constraint);

                Ok(true)
            }

            Term::Abs(param, param_type, body) => {
                // For lambda abstractions, expected type should be a Pi type
                match expected_type {
                    Term::Pi(_, expected_param_type, expected_body_type, _) => {
                        // Unify parameter types and check body type
                        let param_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: param_type.as_ref().clone(),
                            right: expected_param_type.as_ref().clone(),
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: expected_body_type.as_ref().clone(),
                        };

                        self.add_constraint(param_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    Term::Meta(_) => {
                        // Expected type is a meta-variable, create a Pi type for it
                        let body_type_meta_id = self.fresh_meta(vec![param.clone()], None);
                        let body_type_meta = Term::Meta(format!("?m{}", body_type_meta_id.0));
                        let pi_type = Term::Pi(
                            param.clone(),
                            param_type.clone(),
                            Box::new(body_type_meta.clone()),
                            false,
                        );

                        let type_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: expected_type.clone(),
                            right: pi_type,
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: body_type_meta,
                        };

                        self.add_constraint(type_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    _ => {
                        // Type mismatch - lambda can't have non-function type
                        Err(TypeError::TypeMismatch {
                            expected: expected_type.clone(),
                            actual: Term::Pi(
                                param.clone(),
                                param_type.clone(),
                                Box::new(Term::Meta("?unknown".to_string())),
                                false,
                            ),
                        })
                    }
                }
            }

            _ => {
                // For other terms, just unify with the expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }
        }
    }

    /// Attempt to synthesize a term of a given type (basic heuristics)
    fn synthesize_term_of_type(&self, expected_type: &Term) -> Result<Option<Term>, TypeError> {
        match expected_type {
            Term::Sort(_) => {
                // For Sort types, we could return a fresh meta or a simple type
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
            Term::Pi(param, param_type, _body_type, _implicit) => {
                // For Pi types, synthesize a lambda that matches the Pi structure
                // Use a fresh meta-variable for the body to avoid infinite recursion
                let body_term = Term::Meta(format!("?body{}", self.next_meta_id));

                Ok(Some(Term::Abs(
                    param.clone(),
                    param_type.clone(),
                    Box::new(body_term),
                )))
            }
            Term::Meta(_) => {
                // Can't synthesize for unknown types
                Ok(None)
            }
            _ => {
                // For concrete types, return a meta-variable
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
        }
    }

    /// Check if a term is a Miller pattern
    /// Miller patterns are terms of the form: ?M x1 x2 ... xn where xi are
    /// distinct bound variables
    fn is_miller_pattern(&self, term: &Term) -> bool {
        let (head, args) = self.decompose_application(term);

        match head {
            Term::Meta(_) => {
                // Check that all arguments are distinct variables
                let mut seen = HashSet::new();
                for arg in args {
                    if let Term::Var(x) = arg {
                        if !seen.insert(x.clone()) {
                            return false; // Not distinct
                        }
                    } else {
                        return false; // Not a variable
                    }
                }
                true
            }
            _ => false,
        }
    }

    /// Extract the spine of a Miller pattern (the meta-variable and its
    /// arguments)
    fn extract_miller_spine(&self, term: &Term) -> Option<(String, Vec<String>)> {
        // First validate it's actually a Miller pattern
        if !self.is_miller_pattern(term) {
            return None;
        }

        let (head, args) = self.decompose_application(term);

        if let Term::Meta(meta_name) = head {
            // Collect variable names (we already know they're all variables from
            // is_miller_pattern)
            let var_names: Vec<String> = args
                .iter()
                .map(|arg| {
                    if let Term::Var(name) = arg {
                        name.clone()
                    } else {
                        unreachable!()
                    }
                })
                .collect();

            Some((meta_name.clone(), var_names))
        } else {
            None
        }
    }

    /// Solve simple Miller pattern unification
    /// Only handles simple cases
    fn solve_miller_pattern(
        &mut self,
        meta_name: &str,
        spine: &[String],
        other: &Term,
    ) -> Result<bool, TypeError> {
        // Check if we can abstract over the spine variables
        if !Self::can_abstract_over_variables(other, spine) {
            // Can't abstract, fall back to regular unification
            return self.unify(
                &Term::Meta(meta_name.to_string()),
                other,
                ConstraintStrength::Required,
            );
        }

        if let Some(meta_id) = self.parse_meta_name(meta_name) {
            // Only handle simple cases to avoid complexity
            match other {
                // Case 1: ?M x ≡ y (different variable) -> ?M := λx. y
                Term::Var(y) if spine.len() == 1 && spine[0] != *y => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))), /* Type inferred later */
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                // Case 2: ?M x ≡ c (constant) -> ?M := λx. c
                Term::Const(_) | Term::Sort(_) if spine.len() == 1 => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))),
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                _ => {
                    // For more complex cases, we could build the full lambda
                    // abstraction but for now, fall back to
                    // regular unification
                }
            }
        }

        // Fall back to regular unification
        self.unify(
            &Term::Meta(meta_name.to_string()),
            other,
            ConstraintStrength::Required,
        )
    }

    /// Check if a term can be abstracted over the given variables
    fn can_abstract_over_variables(term: &Term, vars: &[String]) -> bool {
        // For now, implement a simple check
        // In a full implementation, this would check that the term only uses variables
        // that are either in the spine or are bound locally
        match term {
            Term::Var(v) => vars.contains(v), // Variable must be in spine
            Term::App(f, arg) => {
                Self::can_abstract_over_variables(f, vars)
                    && Self::can_abstract_over_variables(arg, vars)
            }
            Term::Abs(param, ty, body) => {
                // Extend the scope with the bound parameter
                let mut extended_vars = vars.to_vec();
                extended_vars.push(param.clone());
                Self::can_abstract_over_variables(ty, vars)
                    && Self::can_abstract_over_variables(body, &extended_vars)
            }
            Term::Meta(_) => true,  // Meta-variables are always abstractable
            Term::Const(_) => true, // Constants are always abstractable
            Term::Sort(_) => true,  // Sorts are always abstractable
            _ => false,             // Conservative for complex constructs
        }
    }

    /// Delay a constraint for later processing
    fn delay_constraint(
        &mut self,
        constraint_id: ConstraintId,
        waiting_on: HashSet<MetaId>,
    ) -> Result<(), TypeError> {
        if let Some(constraint) = self.constraints.remove(&constraint_id) {
            let delayed = Constraint::Delayed {
                id: constraint_id,
                constraint: Box::new(constraint),
                waiting_on,
            };
            self.constraints.insert(constraint_id, delayed);
            self.delayed_constraints.insert(constraint_id);
        }
        Ok(())
    }

    /// Wake up delayed constraints that are no longer blocked
    fn wake_delayed_constraints(&mut self) -> Result<Vec<ConstraintId>, TypeError> {
        let mut woken = Vec::new();
        let mut to_wake = Vec::new();

        // Check which delayed constraints can now be woken up
        for constraint_id in &self.delayed_constraints {
            if let Some(Constraint::Delayed {
                waiting_on,
                constraint,
                ..
            }) = self.constraints.get(constraint_id)
            {
                // Check if all blocking metas are now solved
                let all_solved = waiting_on.iter().all(|meta_id| {
                    self.metas
                        .get(meta_id)
                        .is_some_and(|info| info.solution.is_some())
                });

                if all_solved {
                    to_wake.push((*constraint_id, constraint.as_ref().clone()));
                }
            }
        }

        // Wake up the constraints
        for (constraint_id, original_constraint) in to_wake {
            self.constraints.insert(constraint_id, original_constraint);
            self.delayed_constraints.remove(&constraint_id);
            woken.push(constraint_id);
        }

        Ok(woken)
    }

    /// Check if we can delay a constraint
    fn can_delay(&self, constraint_id: &ConstraintId) -> bool {
        if let Some(constraint) = self.constraints.get(constraint_id) {
            match constraint {
                Constraint::Unify {
                    left,
                    right,
                    strength,
                    ..
                } => {
                    // We can delay weak constraints that involve unsolved metas
                    if *strength == ConstraintStrength::Weak {
                        let left_metas = self.collect_metas_in_term(left);
                        let right_metas = self.collect_metas_in_term(right);

                        // Check if any involved metas are unsolved
                        for meta_id in left_metas.iter().chain(right_metas.iter()) {
                            if self.is_unsolved_meta(meta_id) {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::HasType {
                    term,
                    expected_type,
                    ..
                } => {
                    // Can delay HasType constraints involving unsolved metas
                    let term_metas = self.collect_metas_in_term(term);
                    let type_metas = self.collect_metas_in_term(expected_type);

                    for meta_id in term_metas.iter().chain(type_metas.iter()) {
                        if let Some(meta_info) = self.metas.get(meta_id) {
                            if meta_info.solution.is_none() {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::Delayed { .. } => true, // Already delayed
                _ => false,
            }
        } else {
            false
        }
    }

    /// Check if a meta-variable is unsolved
    fn is_unsolved_meta(&self, meta_id: &MetaId) -> bool {
        self.metas
            .get(meta_id)
            .is_some_and(|info| info.solution.is_none())
    }
}
/// Constraint types
#[derive(Debug, Clone)]
pub enum Constraint {
    /// Unification: t1 ≡ t2
    Unify {
        id: ConstraintId,
        left: Term,
        right: Term,
        strength: ConstraintStrength,
    },
    /// Type constraint: term : type
    HasType {
        id: ConstraintId,
        term: Term,
        expected_type: Term,
    },
    /// Universe unification: u1 ≡ u2
    UnifyUniverse {
        id: ConstraintId,
        left: Universe,
        right: Universe,
        strength: ConstraintStrength,
    },
    /// Universe level constraint: u1 ≤ u2
    UniverseLevel {
        id: ConstraintId,
        left: Universe,
        right: Universe,
    },
    /// Delayed constraint (for complex patterns)
    Delayed {
        id: ConstraintId,
        constraint: Box<Constraint>,
        waiting_on: HashSet<MetaId>,
    },
}
}

The constraint solver handles higher-order unification patterns, universe level constraints, and delayed constraint resolution to enable practical programming with dependent types while maintaining theoretical soundness.

Theoretical Significance

The Calculus of Constructions is more than just another extension of System F; it has profound theoretical significance: it demonstrates the fundamental unity between computation and mathematics, one of the deepest ideas in theoretical computer science.

By showing that every constructive mathematical proof corresponds to a program and every program type-checks according to logical principles, CoC bridges the gap between formal verification and practical programming. And that's pretty amazing!

The system also provides a foundation for interactive theorem proving, where mathematical proofs are constructed through programming and verified through type checking. Proof assistants like Coq and Lean build upon the theoretical foundations established by the Calculus of Constructions, demonstrating the practical value of this unification.

Although Coq is interesting historically, we will mostly follow the Lean 4 model, adopting a small version of its type system and syntax in our toy implementation.

Dependent Types

Dependent types represent one of the most profound advances in type theory, fundamentally changing the relationship between computation and logic by allowing types to depend on the values they classify. This capability transforms types from static labels into dynamic specifications that can express precise mathematical properties and program invariants directly within the type system.

The journey from simple types to dependent types mirrors the evolution from basic arithmetic to advanced mathematics. Just as mathematics progresses from counting discrete objects to expressing relationships between abstract structures, type systems evolve from classifying basic values to encoding complex logical propositions and computational specifications.

The Limitation of Simple Types

Traditional type systems, even ones like System F-ω, maintain a fundamental separation between the computational world of terms and the classificatory world of types. Types serve as static labels that group values by their structural properties, enabling compile-time safety checks and optimization opportunities.

-- Simple types classify values statically
length : List a -> Int
head : List a -> a  -- Partial function - what if the list is empty?

This separation creates a gap between what programmers know about their data and what the type system can express. A programmer might know that a particular list is non-empty, but the type system cannot capture this knowledge, forcing runtime checks that could theoretically be eliminated.

Simple types excel at preventing basic errors like applying functions to arguments of incompatible types, but they cannot express relationships between values or capture domain-specific invariants. The result is a tension between type safety and expressiveness that dependent types resolve by eliminating the artificial boundary between terms and types.

The Dependent Type Revolution

Dependent types dissolve the separation between computation and classification by allowing types to depend on computational values. This dependency enables types to express arbitrarily precise specifications about the values they classify, transforming the type system into a specification language capable of expressing mathematical theorems.

Dependent Functions (Π-Types)

The dependent product type Π x : A. B generalizes the familiar function arrow A → B by allowing the result type B to depend on the input value x. This dependency enables function types that specify not just the structure of inputs and outputs, but the precise relationship between them.

-- Non-dependent function type
length : List A → Nat

-- Dependent function type
vec : (n : Nat) → Type → Type
create_vec : (n : Nat) → (A : Type) → vec n A

-- The return type depends on the input value
safe_head : (n : Nat) → (A : Type) → (v : vec (n + 1) A) → A

The safe_head function demonstrates the power of dependent types: by requiring that the vector length be n + 1 rather than just any natural number, the type system guarantees that the vector is non-empty, eliminating the possibility of runtime failures when extracting the head element.

Dependent Pairs (Σ-Types)

The dependent sum type Σ x : A. B represents pairs where the type of the second component depends on the value of the first component. This construct enables existential quantification and the creation of data structures that maintain precise relationships between their components.

-- Simple pair type
Pair A B : Type := A × B

-- Dependent pair type
DPair A B : Type := Σ x : A. B x

-- Concrete example: a vector with its length
sized_vec : Type := Σ n : Nat. vec n Int

-- Pattern matching preserves dependencies
process_sized : sized_vec → Int
process_sized ⟨n, v⟩ = sum_vec v  -- Type checker knows v has length n

Dependent pairs enable the expression of existential statements within the type system. Rather than asserting “there exists a natural number n such that property P holds,” we can construct a concrete witness that demonstrates the existence while providing computational access to both the witness and the proof of the property.
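Constructing such a witness is just dependent-pair introduction. In the same illustrative notation as the sized_vec example above:

```
-- Hiding the length behind an existential: any vector becomes a sized_vec
make_sized : (n : Nat) → vec n Int → sized_vec
make_sized n v = ⟨n, v⟩

-- The consumer recovers both the witness and the data by pattern matching
vec_length : sized_vec → Nat
vec_length ⟨n, _⟩ = n
```

The pair construction packages the witness n together with the data it describes, and pattern matching later recovers both components with their dependency intact.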

Dependent Types as Specifications

The true power of dependent types emerges when we recognize that they function as executable specifications. Unlike traditional specifications written in separate specification languages, dependent types are integrated into the programming language itself, enabling specifications that are checked automatically by the type system.

Precise Array Bounds

Traditional array operations require runtime bounds checking to ensure memory safety. Dependent types enable compile-time verification of array bounds, eliminating both the runtime overhead and the possibility of bounds violations.

-- Array type indexed by its length
Array : Nat → Type → Type

-- Bounds-safe array indexing
get : (n : Nat) → (A : Type) → Array n A → (i : Nat) → i < n → A

-- Usage requires proof that index is in bounds
example_access : Array 5 Int → Int
example_access arr = get 5 Int arr 2 (by norm_num)  -- Proof that 2 < 5

The type system now captures the relationship between array size and valid indices, transforming a runtime safety property into a compile-time guarantee. Programs that would cause array bounds violations become syntactically ill-formed, preventing an entire class of common programming errors.

Correctness Conditions

Dependent types can express correctness conditions that ensure algorithms satisfy their intended properties. A sorting function can be specified to produce a result that is both a permutation of its input and satisfies the sorted property.

-- Specification of sorted lists
Sorted : List Nat → Prop

-- Specification of permutations
Permutation : List A → List A → Prop

-- Type of correct sorting functions
sort : (l : List Nat) → {l' : List Nat // Sorted l' ∧ Permutation l l'}

This specification transforms the informal notion of “correct sorting” into a precise mathematical statement that the type system can verify. Implementations that fail to maintain the required properties will be rejected at compile time, providing strong correctness guarantees.

The Curry-Howard Correspondence in Practice

Dependent types realize the Curry-Howard correspondence in a practical programming context. The correspondence establishes a deep connection between logical propositions and types, between mathematical proofs and programs, and between proof verification and type checking.

Propositions as Types

Every mathematical proposition corresponds to a type in the dependent type system. The proposition “for all natural numbers n, n + 0 = n” becomes the type ∀n : Nat, n + 0 = n, where equality is represented as a type family that is inhabited precisely when its arguments are equal.

-- Mathematical proposition as a type
zero_right_identity : ∀n : Nat, n + 0 = n

-- Constructive proof as a program
zero_right_identity = fun n =>
  Nat.rec
    (Eq.refl 0)                        -- Base case: 0 + 0 = 0
    (fun k ih => congrArg Nat.succ ih) -- Inductive step: (k+1) + 0 = k+1
    n

The proof term demonstrates the proposition by providing a computational witness. The type checker verifies that this witness actually establishes the claimed proposition, ensuring that only valid proofs are accepted.

Programs as Proofs

Conversely, every constructive proof corresponds to a program that computes evidence for the proven proposition. Complex mathematical theorems become programs that construct witnesses through computation.

-- Theorem: Every list has a decidable equality test
list_eq_decidable : (A : Type) → [DecidableEq A] → DecidableEq (List A)
list_eq_decidable A inst =
  -- Construction of decidable equality procedure for lists
  -- This is both a proof of decidability and an algorithm for testing equality

The program serves dual roles: it provides algorithmic content that can be executed computationally, and it serves as a proof that establishes the mathematical property. This duality eliminates the gap between specification and implementation.

Challenges and Solutions in Dependent Types

The expressive power of dependent types introduces new challenges: integrating computation with specification creates complexities that simpler type systems simply avoid.

Definitional Equality

In dependent type systems, type checking requires determining when two types are equal. However, since types can contain computational content, type equality becomes computational equivalence, which is generally undecidable.

Practical dependent type systems address this challenge by defining definitional equality as equality modulo certain computational rules. Two types are considered definitionally equal if they normalize to identical forms under β-reduction, η-expansion, and other definitional reductions.

-- These types are definitionally equal
Vector (2 + 3) Int  ≡  Vector 5 Int

-- Because (2 + 3) normalizes to 5

This approach maintains decidability by restricting definitional equality to normalizing computations while providing sufficient flexibility for practical programming patterns.
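The normalize-then-compare strategy is straightforward to sketch. Here is a hedged Rust miniature (the `Ty` type and functions are hypothetical, not the implementation's actual term language) that decides definitional equality for a toy type language with type-level arithmetic:

```rust
// A sketch of definitional equality as "normalize both sides, then compare
// syntactically". The toy term language below has just enough structure to
// show the idea from the Vector example above.

#[derive(Debug, Clone, PartialEq)]
enum Ty {
    Nat(u32),              // a type-level numeral
    Add(Box<Ty>, Box<Ty>), // type-level addition, e.g. 2 + 3
    Vector(Box<Ty>),       // Vector n, indexed by a type-level numeral
}

// Reduce type-level arithmetic to a normal form.
fn normalize(t: &Ty) -> Ty {
    match t {
        Ty::Nat(n) => Ty::Nat(*n),
        Ty::Add(a, b) => match (normalize(a), normalize(b)) {
            (Ty::Nat(x), Ty::Nat(y)) => Ty::Nat(x + y),
            (a, b) => Ty::Add(Box::new(a), Box::new(b)),
        },
        Ty::Vector(n) => Ty::Vector(Box::new(normalize(n))),
    }
}

// Two types are definitionally equal when their normal forms coincide.
fn defeq(a: &Ty, b: &Ty) -> bool {
    normalize(a) == normalize(b)
}

fn main() {
    // Vector (2 + 3) ≡ Vector 5, because 2 + 3 normalizes to 5
    let lhs = Ty::Vector(Box::new(Ty::Add(
        Box::new(Ty::Nat(2)),
        Box::new(Ty::Nat(3)),
    )));
    let rhs = Ty::Vector(Box::new(Ty::Nat(5)));
    assert!(defeq(&lhs, &rhs));
}
```

Real systems normalize full terms under β-reduction rather than just arithmetic, but the shape of the decision procedure is the same.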

Universe Hierarchy

The power to construct types that depend on arbitrary values raises logical consistency concerns. If types can contain values and values can be types, what prevents the construction of paradoxical self-referential types?

Dependent type systems maintain consistency through universe stratification, where types are organized into a hierarchy of universes with strict inclusion relationships. Each universe contains types of bounded complexity, preventing the construction of impredicative types that would lead to logical paradoxes.

Mathematical Formulation

The universe hierarchy can be precisely formulated as an infinite sequence of universes with inclusion relationships:

\[\mathcal{U}_0 \subseteq \mathcal{U}_1 \subseteq \mathcal{U}_2 \subseteq \cdots \subseteq \mathcal{U}_\omega\]

Each universe \(\mathcal{U}_i\) serves as the domain for types at level \(i\), while \(\text{Type}_i\) denotes the type of types in universe \(\mathcal{U}_i\). The fundamental typing rules establish the hierarchy:

\[\text{Type}_i : \text{Type}_{i+1} \quad \text{(Universe Formation)}\]

\[\frac{A : \text{Type}_i \quad i \leq j}{A : \text{Type}_j} \quad \text{(Cumulativity)}\]

The cumulativity rule enables types to be lifted to higher universes, providing flexibility while maintaining the strict stratification needed for consistency.

Type Formation Rules

The universe hierarchy governs how types can be formed at each level. Basic types inhabit universe \(\mathcal{U}_0\):

\[\text{Nat} : \text{Type}_0 \qquad \text{Bool} : \text{Type}_0\]

Type constructors must respect universe levels. For dependent function types, the result universe is the maximum of the argument and result universes:

\[\frac{\Gamma \vdash A : \text{Type}_i \quad \Gamma, x : A \vdash B : \text{Type}_j}{\Gamma \vdash \Pi x : A. B : \text{Type}_{\max(i,j)}} \quad \text{(Pi Formation)}\]

For inductive types, the universe level is determined by the constructors’ argument types:

\[\frac{\text{each constructor } c_k \text{ has type } \Pi \overrightarrow{x} : \overrightarrow{A}. T \quad \max(\text{levels}(\overrightarrow{A})) \leq i}{\text{inductive } T : \text{Type}_i} \quad \text{(Inductive Formation)}\]

Predicativity and Consistency

The universe hierarchy ensures predicativity, meaning that type constructors at level \(i\) can only quantify over types at levels strictly less than \(i\). This restriction prevents Russell-style paradoxes:

\[\frac{\Gamma \vdash A : \text{Type}_i \quad \Gamma, x : A \vdash B : \text{Type}_j \quad j < i}{\Gamma \vdash \Pi x : A. B : \text{Type}_i} \quad \text{(Predicative Pi)}\]

Without this restriction, we could construct the type of all types that do not contain themselves, leading to contradiction. The predicativity constraint ensures that self-reference is impossible within the type system.
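As a small illustration in the Lean-style notation used above (a sketch; exact universe assignments vary between systems), predicativity shows up as a level bump: quantifying over every type at one level produces a type at the next level up, so a polymorphic type can never range over itself.

```
-- The type of the polymorphic identity quantifies over Type (= Type 0),
-- so it lives one level up, in Type 1:
polymorphic_id_type : Type 1
polymorphic_id_type := (A : Type) → A → A
```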

Universe Polymorphism

In our implementation we support universe polymorphism, allowing definitions to abstract over universe levels:

\[\text{id} : \Pi u : \text{Level}. \Pi A : \text{Type}_u. A \to A\]

Universe level variables enable generic programming across the entire universe hierarchy:

\[\frac{\Gamma \vdash e : \Pi u : \text{Level}. T \quad \ell : \text{Level}}{\Gamma \vdash e[\ell] : T[u := \ell]} \quad \text{(Universe Application)}\]

Universe Arithmetic

Universe polymorphism often requires arithmetic operations on universe levels. We support the following operations:

  • Successor: \(u + 1\) represents the universe immediately above level \(u\)
  • Maximum: \(\max(u, v)\) represents the least universe containing both \(u\) and \(v\)
  • Addition: \(u + n\) represents the universe \(n\) levels above \(u\)

These operations satisfy the following algebraic properties:

\[\max(u, v) = \max(v, u) \quad \text{(Symmetry)}\]

\[\max(u, \max(v, w)) = \max(\max(u, v), w) \quad \text{(Associativity)}\]

\[(u + m) + n = u + (m + n) \quad \text{(Addition Associativity)}\]
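These laws are easy to realize in code. Below is a hedged Rust sketch (mirroring the shape of the implementation's Universe type, but not the actual implementation) that evaluates closed level expressions and checks the laws on sample values:

```rust
// A sketch of universe-level arithmetic on closed (variable-free) levels.
#[derive(Debug, Clone, PartialEq)]
enum Universe {
    Const(u32),                        // concrete level: 0, 1, 2, ...
    Add(Box<Universe>, u32),           // level + n
    Max(Box<Universe>, Box<Universe>), // max(u, v)
}

// Evaluate a closed level expression to a concrete number.
fn eval(u: &Universe) -> u32 {
    match u {
        Universe::Const(n) => *n,
        Universe::Add(base, n) => eval(base) + n,
        Universe::Max(a, b) => eval(a).max(eval(b)),
    }
}

fn main() {
    use Universe::*;
    let u = Const(1);
    let v = Const(3);
    // Symmetry: max(u, v) = max(v, u)
    assert_eq!(
        eval(&Max(Box::new(u.clone()), Box::new(v.clone()))),
        eval(&Max(Box::new(v.clone()), Box::new(u.clone())))
    );
    // Addition associativity: (u + 2) + 3 = u + 5
    assert_eq!(
        eval(&Add(Box::new(Add(Box::new(u.clone()), 2)), 3)),
        eval(&Add(Box::new(u), 5))
    );
}
```

The real solver also has to handle open expressions containing level variables and metavariables, where evaluation is replaced by symbolic normalization under the same laws.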

Consistency and Normalization

The universe hierarchy ensures several crucial properties for dependent type systems:

Strong Normalization: Every well-typed term has a finite normal form, ensuring that type checking terminates.

Logical Consistency: The system admits no proof of false, maintaining its utility as a logical foundation.

Decidable Type Checking: The combination of strong normalization and definitional equality makes type checking decidable.

The formal treatment of universes requires careful attention to:

  1. Universe Level Inference: Automatically determining appropriate universe levels for polymorphic definitions
  2. Constraint Solving: Resolving systems of universe level constraints that arise during type checking
  3. Cumulative Subtyping: Implementing the coercion of types to higher universes efficiently
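As a toy version of the constraint-solving problem above (a sketch, not the solver from the implementation; the `Level` and `LessEq` names are hypothetical), constraints of the form u ≤ v over concrete levels and metavariables admit least solutions, computed by a simple fixpoint: start every metavariable at 0 and raise it to meet its lower bounds until nothing changes.

```rust
// Levels are either concrete or an unsolved metavariable.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Level {
    Const(u32),
    Meta(usize),
}

// A constraint: lhs ≤ rhs.
struct LessEq(Level, Level);

// Compute the least assignment to metavariables satisfying the constraints;
// return None if a `Const(n) ≤ Const(k)` constraint with n > k makes the
// system unsatisfiable.
fn solve(constraints: &[LessEq], num_metas: usize) -> Option<Vec<u32>> {
    let mut assign = vec![0u32; num_metas];
    loop {
        let mut changed = false;
        for LessEq(lhs, rhs) in constraints {
            let lo = match lhs {
                Level::Const(n) => *n,
                Level::Meta(m) => assign[*m],
            };
            match rhs {
                Level::Const(k) => {
                    if lo > *k {
                        return None; // unsatisfiable
                    }
                }
                Level::Meta(m) => {
                    if assign[*m] < lo {
                        assign[*m] = lo; // raise the meta to its lower bound
                        changed = true;
                    }
                }
            }
        }
        if !changed {
            return Some(assign); // fixpoint reached
        }
    }
}

fn main() {
    // 2 ≤ ?u0, ?u0 ≤ ?u1  ⟹  least solution ?u0 = 2, ?u1 = 2
    let cs = [
        LessEq(Level::Const(2), Level::Meta(0)),
        LessEq(Level::Meta(0), Level::Meta(1)),
    ];
    let sol = solve(&cs, 2).expect("satisfiable");
    println!("?u0 = {}, ?u1 = {}", sol[0], sol[1]);
}
```

Taking the least solution matters: assigning metavariables the smallest levels consistent with the constraints keeps definitions as universe-polymorphic as possible.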

Implementation in the Calculus of Constructions

We adopt the following naming convention for type universes:

  • Prop - The universe of propositions, containing logical statements that may or may not have computational content

  • Type - The universe of types (equivalent to Type 0), containing ordinary data types like natural numbers and lists

  • Type u - A universe at level u, where u is a universe level variable

  • Sort u - A general universe at level u: Sort 0 is Prop, and Sort (u + 1) is Type u

The key principle is that a universe at level \(u\) can only classify types that are at levels lower than \(u\). This creates a hierarchy that prevents logical paradoxes while enabling expressive type-level programming.

-- Basic types live in Type (= Type 0)
Nat : Type
Bool : Type
String : Type

-- Type constructors respect universe levels
List : Type → Type
Option : Type → Type

-- Universe hierarchy
Type : Type 1
Type 1 : Type 2
Type 2 : Type 3
-- ... infinite hierarchy

-- Universe polymorphic definitions
def id.{u} (A : Type u) (x : A) : A := x
def compose.{u,v,w} (A : Type u) (B : Type v) (C : Type w)
                    (g : B → C) (f : A → B) (x : A) : C := g (f x)

-- Propositions live in Prop
inductive Eq.{u} {A : Sort u} (a : A) : A → Prop where
  | refl : Eq a a

-- Sort unifies Prop and Type
def identity.{u} {A : Sort u} (x : A) : A := x

Type Inference and Elaboration

The expressiveness of dependent types creates challenges for type inference, as the system must often infer not just types but also proof terms and computational content. Modern dependent type systems employ elaboration processes that insert implicit arguments, resolve type class instances, and construct proof terms automatically.

-- Surface syntax with implicit arguments
map : {A B : Type} → (A → B) → List A → List B

-- Elaborated form with explicit arguments
map : (A : Type) → (B : Type) → (A → B) → List A → List B

-- Usage with automatic elaboration
result = map succ [1, 2, 3]  -- Elaborates to: map Nat Nat succ [1, 2, 3]

The elaboration process bridges the gap between the concise syntax that programmers want to write and the fully explicit form that the type checker requires for verification.

Type System

Our Calculus of Constructions implementation centers around a type system that unifies terms, types, and kinds into a single syntactic framework. The implementation demonstrates how dependent types, universe polymorphism, and definitional equality work together to create a practical dependently-typed programming language.

AST Design and Term Language

The core of our implementation lies in the unified term language that represents all syntactic categories within a single AST structure. This design reflects the CoC principle that terms, types, and kinds are all inhabitants of the same computational universe.

#![allow(unused)]
fn main() {
use std::fmt;
/// Universe levels for type theory
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Universe {
    Const(u32),                         // Concrete level: 0, 1, 2, ...
    ScopedVar(String, String),          // Scoped universe variable: (scope_name, var_name)
    Meta(u32),                          // Universe metavariable: ?u₀, ?u₁, ...
    Add(Box<Universe>, u32),            // Level + n
    Max(Box<Universe>, Box<Universe>),  // max(u, v)
    IMax(Box<Universe>, Box<Universe>), // imax(u, v)
}
/// Universe level constraints for polymorphism
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum UniverseConstraint {
    /// u ≤ v
    LessEq(Universe, Universe),
    /// u = v
    Equal(Universe, Universe),
}
/// Context for universe level variables
#[derive(Debug, Clone, PartialEq)]
pub struct UniverseContext {
    /// Currently bound universe variables
    pub variables: Vec<String>,
    /// Constraints on universe levels
    pub constraints: Vec<UniverseConstraint>,
}
/// Pattern matching arms
#[derive(Debug, Clone, PartialEq)]
pub struct MatchArm {
    pub pattern: Pattern,
    pub body: Term,
}
/// Patterns for match expressions
#[derive(Debug, Clone, PartialEq)]
pub enum Pattern {
    /// Variable pattern: x
    Var(String),

    /// Constructor pattern: Cons x xs
    Constructor(String, Vec<Pattern>),

    /// Wildcard pattern: _
    Wildcard,
}
/// Top-level declarations
#[derive(Debug, Clone, PartialEq)]
pub enum Declaration {
    /// Constant definition: def name {u...} (params) : type := body
    Definition {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        params: Vec<Parameter>,
        ty: Term,
        body: Term,
    },

    /// Axiom: axiom name {u...} : type
    Axiom {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        ty: Term,
    },

    /// Inductive type: inductive Name {u...} (params) : Type := constructors
    Inductive {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        params: Vec<Parameter>,
        ty: Term,
        constructors: Vec<Constructor>,
    },

    /// Structure (single constructor inductive): structure Name {u...} :=
    /// (fields)
    Structure {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        params: Vec<Parameter>,
        ty: Term,
        fields: Vec<Field>,
    },
}
/// Function parameters
#[derive(Debug, Clone, PartialEq)]
pub struct Parameter {
    pub name: String,
    pub ty: Term,
    pub implicit: bool, // for {x : A} vs (x : A)
}
/// Constructor for inductive types
#[derive(Debug, Clone, PartialEq)]
pub struct Constructor {
    pub name: String,
    pub ty: Term,
}
/// Structure fields
#[derive(Debug, Clone, PartialEq)]
pub struct Field {
    pub name: String,
    pub ty: Term,
}
/// Module containing multiple declarations
#[derive(Debug, Clone, PartialEq)]
pub struct Module {
    pub declarations: Vec<Declaration>,
}
impl fmt::Display for Universe {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Universe::Const(n) => write!(f, "{}", n),
            Universe::ScopedVar(scope, var) => write!(f, "{}::{}", scope, var),
            Universe::Meta(id) => write!(f, "?u{}", id),
            Universe::Add(u, n) => write!(f, "{}+{}", u, n),
            Universe::Max(u, v) => write!(f, "max({}, {})", u, v),
            Universe::IMax(u, v) => write!(f, "imax({}, {})", u, v),
        }
    }
}
impl fmt::Display for Term {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Term::Var(x) => write!(f, "{}", x),
            Term::App(t1, t2) => write!(f, "({} {})", t1, t2),
            Term::Abs(x, ty, body) => write!(f, "(fun {} : {} => {})", x, ty, body),
            Term::Pi(x, ty, body, implicit) => {
                if Self::occurs_free(x, body) {
                    if *implicit {
                        write!(f, "{{{} : {}}} → {}", x, ty, body)
                    } else {
                        write!(f, "({} : {}) → {}", x, ty, body)
                    }
                } else {
                    write!(f, "{} → {}", ty, body)
                }
            }
            Term::Sort(u) => match u {
                Universe::Const(0) => write!(f, "Prop"),
                Universe::Const(n) => {
                    if *n == 1 {
                        write!(f, "Type")
                    } else {
                        write!(f, "Type {}", n - 1)
                    }
                }
                Universe::Add(ref base, n) => {
                    if let Universe::Const(1) = **base {
                        write!(f, "Type {}", n)
                    } else {
                        write!(f, "Sort {}", u)
                    }
                }
                _ => write!(f, "Sort {}", u),
            },
            Term::Let(x, ty, val, body) => write!(f, "(let {} : {} := {} in {})", x, ty, val, body),
            Term::Match(scrutinee, arms) => {
                writeln!(f, "match {} with", scrutinee)?;
                for arm in arms {
                    writeln!(f, "| {} => {}", arm.pattern, arm.body)?;
                }
                Ok(())
            }
            Term::Constructor(name, args) => {
                write!(f, "{}", name)?;
                for arg in args {
                    write!(f, " {}", arg)?;
                }
                Ok(())
            }
            Term::Const(name) => write!(f, "{}", name),
            Term::Meta(name) => write!(f, "?{}", name),
            Term::Proj(term, field) => write!(f, "{}.{}", term, field),
            Term::Sigma(x, domain, codomain) => write!(f, "Σ ({} : {}), {}", x, domain, codomain),
            Term::Pair(fst, snd) => write!(f, "⟨{}, {}⟩", fst, snd),
            Term::Fst(pair) => write!(f, "π₁({})", pair),
            Term::Snd(pair) => write!(f, "π₂({})", pair),
        }
    }
}
impl fmt::Display for Pattern {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Pattern::Var(x) => write!(f, "{}", x),
            Pattern::Constructor(name, args) => {
                write!(f, "{}", name)?;
                for arg in args {
                    write!(f, " {}", arg)?;
                }
                Ok(())
            }
            Pattern::Wildcard => write!(f, "_"),
        }
    }
}
impl Term {
    /// Check if variable occurs free in term
    pub fn occurs_free(var: &str, term: &Term) -> bool {
        match term {
            Term::Var(x) => x == var,
            Term::App(t1, t2) => Self::occurs_free(var, t1) || Self::occurs_free(var, t2),
            Term::Abs(x, ty, body) => {
                Self::occurs_free(var, ty) || (x != var && Self::occurs_free(var, body))
            }
            Term::Pi(x, ty, body, _) => {
                Self::occurs_free(var, ty) || (x != var && Self::occurs_free(var, body))
            }
            Term::Sort(_) => false,
            Term::Let(x, ty, val, body) => {
                Self::occurs_free(var, ty)
                    || Self::occurs_free(var, val)
                    || (x != var && Self::occurs_free(var, body))
            }
            Term::Match(scrutinee, arms) => {
                Self::occurs_free(var, scrutinee)
                    || arms.iter().any(|arm| Self::occurs_free(var, &arm.body))
            }
            Term::Constructor(_, args) => args.iter().any(|arg| Self::occurs_free(var, arg)),
            Term::Const(_) => false,
            Term::Meta(_) => false,
            Term::Proj(term, _) => Self::occurs_free(var, term),
            Term::Sigma(x, domain, codomain) => {
                Self::occurs_free(var, domain) || (x != var && Self::occurs_free(var, codomain))
            }
            Term::Pair(fst, snd) => Self::occurs_free(var, fst) || Self::occurs_free(var, snd),
            Term::Fst(pair) | Term::Snd(pair) => Self::occurs_free(var, pair),
        }
    }
}
impl UniverseContext {
    pub fn new() -> Self {
        UniverseContext {
            variables: Vec::new(),
            constraints: Vec::new(),
        }
    }

    /// Add a universe variable to the context
    pub fn add_var(&mut self, name: String) {
        if !self.variables.contains(&name) {
            self.variables.push(name);
        }
    }

    /// Add a constraint to the context
    pub fn add_constraint(&mut self, constraint: UniverseConstraint) {
        self.constraints.push(constraint);
    }

    /// Check if a universe variable is bound in this context
    pub fn contains(&self, name: &str) -> bool {
        self.variables.contains(&name.to_string())
    }

    /// Extend context with variables from another context
    pub fn extend(&self, other: &UniverseContext) -> Self {
        let mut new_ctx = self.clone();
        for var in &other.variables {
            new_ctx.add_var(var.clone());
        }
        for constraint in &other.constraints {
            new_ctx.add_constraint(constraint.clone());
        }
        new_ctx
    }

    /// Generate a fresh universe variable name
    pub fn fresh_var(&self, base: &str) -> String {
        let mut counter = 0;
        loop {
            let name = if counter == 0 {
                base.to_string()
            } else {
                format!("{}{}", base, counter)
            };
            if !self.contains(&name) {
                return name;
            }
            counter += 1;
        }
    }
}
impl Default for UniverseContext {
    fn default() -> Self {
        Self::new()
    }
}
impl Universe {
    /// Create a scoped universe variable
    pub fn scoped_var(scope: String, name: String) -> Self {
        Universe::ScopedVar(scope, name)
    }
}
/// Core terms in the Calculus of Constructions
#[derive(Debug, Clone, PartialEq)]
pub enum Term {
    /// Variables: x, y, z
    Var(String),

    /// Application: f a
    App(Box<Term>, Box<Term>),

    /// Lambda abstraction: λ x : A, t  (written as fun x : A => t)
    Abs(String, Box<Term>, Box<Term>),

    /// Dependent product: Π x : A, B  (written as (x : A) → B or {x : A} → B)
    Pi(String, Box<Term>, Box<Term>, bool), // bool = implicit

    /// Sort/Type: Sort u, Type, Prop
    Sort(Universe),

    /// Let binding: let x : A := t in s
    Let(String, Box<Term>, Box<Term>, Box<Term>),

    /// Match expression with patterns
    Match(Box<Term>, Vec<MatchArm>),

    /// Inductive type constructor
    Constructor(String, Vec<Term>),

    /// Local constant (for definitions and axioms)
    Const(String),

    /// Meta-variable for type inference
    Meta(String),

    /// Field projection: term.field
    Proj(Box<Term>, String),

    /// Dependent pair type (Sigma type): Σ (x : A), B
    Sigma(String, Box<Term>, Box<Term>),

    /// Pair constructor: ⟨a, b⟩
    Pair(Box<Term>, Box<Term>),

    /// First projection: π₁
    Fst(Box<Term>),

    /// Second projection: π₂
    Snd(Box<Term>),
}
}

Our Term enum demonstrates the unification principle by using the same constructors to represent functions (Abs), function types (Pi), type constructors, and logical propositions. The same application constructor (App) represents both function application and type application, eliminating the artificial boundaries present in stratified type systems.

Variables (Var) and constants (Const) form the basic building blocks of our term language. Variables represent both term-level bindings and type-level bindings, while constants refer to defined names in the global context. The implementation treats both categories uniformly, enabling the same binding mechanisms to work across all abstraction levels.
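To see the uniform treatment in action, here is a minimal sketch (a trimmed-down mirror of the Term enum, not the full definition) showing the single App constructor serving both type application and term application, as in (id Nat) zero:

```rust
// Trimmed mirror of the Term enum above: one App node for all applications.
#[derive(Debug, Clone)]
enum Term {
    App(Box<Term>, Box<Term>),
    Const(String),
}

// Helper to build applications without the Box noise.
fn app(f: Term, a: Term) -> Term {
    Term::App(Box::new(f), Box::new(a))
}

// Render a term with explicit parentheses.
fn pretty(t: &Term) -> String {
    match t {
        Term::Const(name) => name.clone(),
        Term::App(f, a) => format!("({} {})", pretty(f), pretty(a)),
    }
}

fn main() {
    // (id Nat) zero: type application and term application use the same node.
    let id_nat = app(Term::Const("id".into()), Term::Const("Nat".into()));
    let t = app(id_nat, Term::Const("zero".into()));
    println!("{}", pretty(&t)); // prints ((id Nat) zero)
}
```

The AST makes no distinction between applying id to the type Nat and applying the result to the term zero; that distinction is recovered later by the typechecker.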

Function Types and Lambda Abstractions

The Pi constructor represents dependent product types, generalizing both simple function types and universal quantification. When the bound variable appears in the body type, we obtain dependency; when it does not appear, the Pi-type reduces to a simple function type.

#![allow(unused)]
fn main() {
// Pi(variable_name, domain_type, codomain_type, is_implicit)
// For illustration: the dependent type (x : Nat) → Vec x, assuming Nat and
// Vec are constants declared elsewhere in the global context.
let nat_type = Term::Const("Nat".to_string());
let vec_type = Term::App(Box::new(Term::Const("Vec".to_string())), Box::new(Term::Var("x".to_string())));
let pi = Term::Pi("x".to_string(), Box::new(nat_type), Box::new(vec_type), false);
}

The boolean flag indicates whether the parameter should be treated as implicit, enabling our implementation to support both explicit and implicit argument passing. Lambda abstractions (Abs) create inhabitants of Pi-types, with the same constructor serving for both computational functions and proof terms.
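The dependent/non-dependent distinction also drives pretty-printing: a Pi-type whose bound variable never occurs in the codomain is shown as a plain arrow. Here is a minimal sketch mirroring occurs_free and the Display rule from above:

```rust
// Sketch of the Display rule for Pi: print a plain arrow when the bound
// variable does not occur free in the codomain, a binder otherwise.
#[derive(Debug, Clone)]
enum Term {
    Var(String),
    Pi(String, Box<Term>, Box<Term>, bool), // bool = implicit
    Const(String),
}

fn occurs_free(var: &str, t: &Term) -> bool {
    match t {
        Term::Var(x) => x == var,
        Term::Const(_) => false,
        Term::Pi(x, ty, body, _) => occurs_free(var, ty) || (x != var && occurs_free(var, body)),
    }
}

fn pretty(t: &Term) -> String {
    match t {
        Term::Var(x) | Term::Const(x) => x.clone(),
        Term::Pi(x, ty, body, implicit) => {
            if occurs_free(x, body) {
                if *implicit {
                    format!("{{{} : {}}} → {}", x, pretty(ty), pretty(body))
                } else {
                    format!("({} : {}) → {}", x, pretty(ty), pretty(body))
                }
            } else {
                format!("{} → {}", pretty(ty), pretty(body))
            }
        }
    }
}

fn main() {
    // Non-dependent: x never occurs in the codomain, so it prints as an arrow.
    let arrow = Term::Pi("x".into(), Box::new(Term::Const("Nat".into())), Box::new(Term::Const("Bool".into())), false);
    // Dependent: the codomain mentions x, so the binder is shown.
    let dep = Term::Pi("x".into(), Box::new(Term::Const("Nat".into())), Box::new(Term::Var("x".into())), false);
    println!("{}", pretty(&arrow)); // Nat → Bool
    println!("{}", pretty(&dep));   // (x : Nat) → x
}
```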

Pattern Matching and Inductive Elimination

#![allow(unused)]
fn main() {
/// Pattern matching arms
#[derive(Debug, Clone, PartialEq)]
pub struct MatchArm {
    pub pattern: Pattern,
    pub body: Term,
}
/// Patterns for match expressions
#[derive(Debug, Clone, PartialEq)]
pub enum Pattern {
    /// Variable pattern: x
    Var(String),

    /// Constructor pattern: Cons x xs
    Constructor(String, Vec<Pattern>),

    /// Wildcard pattern: _
    Wildcard,
}
}

Our implementation includes comprehensive pattern matching through the Match constructor, which enables elimination of inductive types. Each match arm specifies a constructor pattern and the corresponding elimination term, providing the computational content needed for inductive reasoning.

The pattern matching implementation supports nested patterns and variable bindings, enabling destructuring of complex inductive data structures while maintaining type safety through exhaustiveness checking.
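As a simplified, hypothetical sketch of the first-match semantics (the real typechecker additionally performs type and exhaustiveness checking), here is matching of constructor values against patterns, mirroring the Pattern shape above:

```rust
// Hypothetical sketch of first-match pattern semantics over constructor
// values, mirroring the Pattern shape (Var, Constructor, Wildcard).
#[derive(Debug, Clone, PartialEq)]
enum Value {
    Ctor(String, Vec<Value>),
}

#[derive(Debug, Clone)]
enum Pattern {
    Var(String),
    Ctor(String, Vec<Pattern>),
    Wildcard,
}

/// Try to match `v` against `p`, collecting variable bindings on success.
fn matches(p: &Pattern, v: &Value, binds: &mut Vec<(String, Value)>) -> bool {
    match (p, v) {
        (Pattern::Wildcard, _) => true,
        (Pattern::Var(x), _) => {
            binds.push((x.clone(), v.clone()));
            true
        }
        (Pattern::Ctor(name, ps), Value::Ctor(cname, vs)) => {
            name == cname
                && ps.len() == vs.len()
                && ps.iter().zip(vs).all(|(p, v)| matches(p, v, binds))
        }
    }
}

fn main() {
    // Cons one Nil matched against the nested pattern Cons x _ binds x to one.
    let one = Value::Ctor("one".into(), vec![]);
    let list = Value::Ctor("Cons".into(), vec![one.clone(), Value::Ctor("Nil".into(), vec![])]);
    let pat = Pattern::Ctor("Cons".into(), vec![Pattern::Var("x".into()), Pattern::Wildcard]);
    let mut binds = Vec::new();
    assert!(matches(&pat, &list, &mut binds));
    assert_eq!(binds, vec![("x".to_string(), one)]);
}
```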

Universe System Implementation

The implementation includes a comprehensive universe system that prevents logical paradoxes while enabling flexible type-level computation. Our universe design supports both concrete levels and polymorphic universe variables.

#![allow(unused)]
fn main() {
/// Universe levels for type theory
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Universe {
    Const(u32),                         // Concrete level: 0, 1, 2, ...
    ScopedVar(String, String),          // Scoped universe variable: (scope_name, var_name)
    Meta(u32),                          // Universe metavariable: ?u₀, ?u₁, ...
    Add(Box<Universe>, u32),            // Level + n
    Max(Box<Universe>, Box<Universe>),  // max(u, v)
    IMax(Box<Universe>, Box<Universe>), // imax(u, v)
}
/// Universe level constraints for polymorphism
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum UniverseConstraint {
    /// u ≤ v
    LessEq(Universe, Universe),
    /// u = v
    Equal(Universe, Universe),
}
/// Context for universe level variables
#[derive(Debug, Clone, PartialEq)]
pub struct UniverseContext {
    /// Currently bound universe variables
    pub variables: Vec<String>,
    /// Constraints on universe levels
    pub constraints: Vec<UniverseConstraint>,
}
}

Universe Levels and Arithmetic

The universe system supports several forms of universe expression, covering both concrete levels and polymorphic computations. Concrete levels, represented by Const(n), name particular universes such as Type 0 and Type 1; they form the foundation of the hierarchy and give specific types a definite home. Universe variables, via ScopedVar, enable universe polymorphism: a single definition can be parameterized over its universe levels and instantiated at many levels. Level arithmetic with Add expresses computations such as u + 1, the common pattern where a type constructor must live one level above its parameter. Finally, Max and IMax select the larger of two levels, with IMax implementing the impredicative maximum: imax(u, v) collapses to 0 when v is 0, which is what lets a Pi-type whose codomain lives in Prop itself live in Prop.
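To make the arithmetic concrete, here is a sketch of evaluating closed universe expressions (no variables or metavariables), assuming Lean-style semantics where imax(u, 0) = 0 and imax(u, v) = max(u, v) otherwise:

```rust
// Sketch: evaluating closed universe expressions, assuming Lean-style
// semantics for imax (imax(u, 0) = 0; otherwise imax(u, v) = max(u, v)).
#[derive(Debug, Clone)]
enum Universe {
    Const(u32),
    Add(Box<Universe>, u32),
    Max(Box<Universe>, Box<Universe>),
    IMax(Box<Universe>, Box<Universe>),
}

fn eval(u: &Universe) -> u32 {
    match u {
        Universe::Const(n) => *n,
        Universe::Add(u, n) => eval(u) + n,
        Universe::Max(u, v) => eval(u).max(eval(v)),
        Universe::IMax(u, v) => {
            let v = eval(v);
            if v == 0 { 0 } else { eval(u).max(v) } // Prop codomain collapses the level
        }
    }
}

fn main() {
    let u3 = Universe::Const(3);
    let prop = Universe::Const(0);
    // A Pi into Prop is itself a Prop, regardless of the domain's level.
    assert_eq!(eval(&Universe::IMax(Box::new(u3.clone()), Box::new(prop))), 0);
    // Type constructors often live one level above their parameter: u + 1.
    assert_eq!(eval(&Universe::Add(Box::new(u3), 1)), 4);
}
```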

Universe Constraint Solving

#![allow(unused)]
fn main() {
use std::fmt;
/// Core terms in the Calculus of Constructions
#[derive(Debug, Clone, PartialEq)]
pub enum Term {
    /// Variables: x, y, z
    Var(String),

    /// Application: f a
    App(Box<Term>, Box<Term>),

    /// Lambda abstraction: λ x : A, t  (written as fun x : A => t)
    Abs(String, Box<Term>, Box<Term>),

    /// Dependent product: Π x : A, B  (written as (x : A) → B or {x : A} → B)
    Pi(String, Box<Term>, Box<Term>, bool), // bool = implicit

    /// Sort/Type: Sort u, Type, Prop
    Sort(Universe),

    /// Let binding: let x := t in s
    Let(String, Box<Term>, Box<Term>, Box<Term>),

    /// Match expression with patterns
    Match(Box<Term>, Vec<MatchArm>),

    /// Inductive type constructor
    Constructor(String, Vec<Term>),

    /// Local constant (for definitions and axioms)
    Const(String),

    /// Meta-variable for type inference
    Meta(String),

    /// Field projection: term.field
    Proj(Box<Term>, String),

    /// Dependent pair type (Sigma type): Σ (x : A), B
    Sigma(String, Box<Term>, Box<Term>),

    /// Pair constructor: ⟨a, b⟩
    Pair(Box<Term>, Box<Term>),

    /// First projection: π₁
    Fst(Box<Term>),

    /// Second projection: π₂
    Snd(Box<Term>),
}
/// Universe levels for type theory
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Universe {
    Const(u32),                         // Concrete level: 0, 1, 2, ...
    ScopedVar(String, String),          // Scoped universe variable: (scope_name, var_name)
    Meta(u32),                          // Universe metavariable: ?u₀, ?u₁, ...
    Add(Box<Universe>, u32),            // Level + n
    Max(Box<Universe>, Box<Universe>),  // max(u, v)
    IMax(Box<Universe>, Box<Universe>), // imax(u, v)
}
/// Context for universe level variables
#[derive(Debug, Clone, PartialEq)]
pub struct UniverseContext {
    /// Currently bound universe variables
    pub variables: Vec<String>,
    /// Constraints on universe levels
    pub constraints: Vec<UniverseConstraint>,
}
/// Pattern matching arms
#[derive(Debug, Clone, PartialEq)]
pub struct MatchArm {
    pub pattern: Pattern,
    pub body: Term,
}
/// Patterns for match expressions
#[derive(Debug, Clone, PartialEq)]
pub enum Pattern {
    /// Variable pattern: x
    Var(String),

    /// Constructor pattern: Cons x xs
    Constructor(String, Vec<Pattern>),

    /// Wildcard pattern: _
    Wildcard,
}
/// Top-level declarations
#[derive(Debug, Clone, PartialEq)]
pub enum Declaration {
    /// Constant definition: def name {u...} (params) : type := body
    Definition {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        params: Vec<Parameter>,
        ty: Term,
        body: Term,
    },

    /// Axiom: axiom name {u...} : type
    Axiom {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        ty: Term,
    },

    /// Inductive type: inductive Name {u...} (params) : Type := constructors
    Inductive {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        params: Vec<Parameter>,
        ty: Term,
        constructors: Vec<Constructor>,
    },

    /// Structure (single constructor inductive): structure Name {u...} :=
    /// (fields)
    Structure {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        params: Vec<Parameter>,
        ty: Term,
        fields: Vec<Field>,
    },
}
/// Function parameters
#[derive(Debug, Clone, PartialEq)]
pub struct Parameter {
    pub name: String,
    pub ty: Term,
    pub implicit: bool, // for {x : A} vs (x : A)
}
/// Constructor for inductive types
#[derive(Debug, Clone, PartialEq)]
pub struct Constructor {
    pub name: String,
    pub ty: Term,
}
/// Structure fields
#[derive(Debug, Clone, PartialEq)]
pub struct Field {
    pub name: String,
    pub ty: Term,
}
/// Module containing multiple declarations
#[derive(Debug, Clone, PartialEq)]
pub struct Module {
    pub declarations: Vec<Declaration>,
}
impl fmt::Display for Universe {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Universe::Const(n) => write!(f, "{}", n),
            Universe::ScopedVar(scope, var) => write!(f, "{}::{}", scope, var),
            Universe::Meta(id) => write!(f, "?u{}", id),
            Universe::Add(u, n) => write!(f, "{}+{}", u, n),
            Universe::Max(u, v) => write!(f, "max({}, {})", u, v),
            Universe::IMax(u, v) => write!(f, "imax({}, {})", u, v),
        }
    }
}
impl fmt::Display for Term {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Term::Var(x) => write!(f, "{}", x),
            Term::App(t1, t2) => write!(f, "({} {})", t1, t2),
            Term::Abs(x, ty, body) => write!(f, "(fun {} : {} => {})", x, ty, body),
            Term::Pi(x, ty, body, implicit) => {
                if Self::occurs_free(x, body) {
                    if *implicit {
                        write!(f, "{{{} : {}}} → {}", x, ty, body)
                    } else {
                        write!(f, "({} : {}) → {}", x, ty, body)
                    }
                } else {
                    write!(f, "{} → {}", ty, body)
                }
            }
            Term::Sort(u) => match u {
                Universe::Const(0) => write!(f, "Prop"),
                Universe::Const(n) => {
                    if *n == 1 {
                        write!(f, "Type")
                    } else {
                        write!(f, "Type {}", n - 1)
                    }
                }
                Universe::Add(ref base, n) => {
                    if let Universe::Const(1) = **base {
                        write!(f, "Type {}", n)
                    } else {
                        write!(f, "Sort {}", u)
                    }
                }
                _ => write!(f, "Sort {}", u),
            },
            Term::Let(x, ty, val, body) => write!(f, "(let {} : {} := {} in {})", x, ty, val, body),
            Term::Match(scrutinee, arms) => {
                writeln!(f, "match {} with", scrutinee)?;
                for arm in arms {
                    writeln!(f, "| {} => {}", arm.pattern, arm.body)?;
                }
                Ok(())
            }
            Term::Constructor(name, args) => {
                write!(f, "{}", name)?;
                for arg in args {
                    write!(f, " {}", arg)?;
                }
                Ok(())
            }
            Term::Const(name) => write!(f, "{}", name),
            Term::Meta(name) => write!(f, "?{}", name),
            Term::Proj(term, field) => write!(f, "{}.{}", term, field),
            Term::Sigma(x, domain, codomain) => write!(f, "Σ ({} : {}), {}", x, domain, codomain),
            Term::Pair(fst, snd) => write!(f, "⟨{}, {}⟩", fst, snd),
            Term::Fst(pair) => write!(f, "π₁({})", pair),
            Term::Snd(pair) => write!(f, "π₂({})", pair),
        }
    }
}
impl fmt::Display for Pattern {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Pattern::Var(x) => write!(f, "{}", x),
            Pattern::Constructor(name, args) => {
                write!(f, "{}", name)?;
                for arg in args {
                    write!(f, " {}", arg)?;
                }
                Ok(())
            }
            Pattern::Wildcard => write!(f, "_"),
        }
    }
}
impl Term {
    /// Check if variable occurs free in term
    pub fn occurs_free(var: &str, term: &Term) -> bool {
        match term {
            Term::Var(x) => x == var,
            Term::App(t1, t2) => Self::occurs_free(var, t1) || Self::occurs_free(var, t2),
            Term::Abs(x, ty, body) => {
                Self::occurs_free(var, ty) || (x != var && Self::occurs_free(var, body))
            }
            Term::Pi(x, ty, body, _) => {
                Self::occurs_free(var, ty) || (x != var && Self::occurs_free(var, body))
            }
            Term::Sort(_) => false,
            Term::Let(x, ty, val, body) => {
                Self::occurs_free(var, ty)
                    || Self::occurs_free(var, val)
                    || (x != var && Self::occurs_free(var, body))
            }
            Term::Match(scrutinee, arms) => {
                Self::occurs_free(var, scrutinee)
                    || arms.iter().any(|arm| Self::occurs_free(var, &arm.body))
            }
            Term::Constructor(_, args) => args.iter().any(|arg| Self::occurs_free(var, arg)),
            Term::Const(_) => false,
            Term::Meta(_) => false,
            Term::Proj(term, _) => Self::occurs_free(var, term),
            Term::Sigma(x, domain, codomain) => {
                Self::occurs_free(var, domain) || (x != var && Self::occurs_free(var, codomain))
            }
            Term::Pair(fst, snd) => Self::occurs_free(var, fst) || Self::occurs_free(var, snd),
            Term::Fst(pair) | Term::Snd(pair) => Self::occurs_free(var, pair),
        }
    }
}
impl UniverseContext {
    pub fn new() -> Self {
        UniverseContext {
            variables: Vec::new(),
            constraints: Vec::new(),
        }
    }

    /// Add a universe variable to the context
    pub fn add_var(&mut self, name: String) {
        if !self.variables.contains(&name) {
            self.variables.push(name);
        }
    }

    /// Add a constraint to the context
    pub fn add_constraint(&mut self, constraint: UniverseConstraint) {
        self.constraints.push(constraint);
    }

    /// Check if a universe variable is bound in this context
    pub fn contains(&self, name: &str) -> bool {
        self.variables.contains(&name.to_string())
    }

    /// Extend context with variables from another context
    pub fn extend(&self, other: &UniverseContext) -> Self {
        let mut new_ctx = self.clone();
        for var in &other.variables {
            new_ctx.add_var(var.clone());
        }
        for constraint in &other.constraints {
            new_ctx.add_constraint(constraint.clone());
        }
        new_ctx
    }

    /// Generate a fresh universe variable name
    pub fn fresh_var(&self, base: &str) -> String {
        let mut counter = 0;
        loop {
            let name = if counter == 0 {
                base.to_string()
            } else {
                format!("{}{}", base, counter)
            };
            if !self.contains(&name) {
                return name;
            }
            counter += 1;
        }
    }
}
impl Default for UniverseContext {
    fn default() -> Self {
        Self::new()
    }
}
impl Universe {
    /// Create a scoped universe variable
    pub fn scoped_var(scope: String, name: String) -> Self {
        Universe::ScopedVar(scope, name)
    }
}
/// Universe level constraints for polymorphism
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum UniverseConstraint {
    /// u ≤ v
    LessEq(Universe, Universe),
    /// u = v
    Equal(Universe, Universe),
}
}

Our implementation includes a dedicated universe constraint solver that handles the relationships between universe levels. The solver maintains a constraint graph and searches for a consistent assignment of concrete levels. Equality constraints require two universe levels to be identical; they arise from definitional equality checks in dependent contexts. Ordering constraints require one level to be at most another; they arise from the predicativity rules that maintain the strict hierarchy of type universes and prevent logical paradoxes. Arithmetic constraints over level expressions let universe-polymorphic definitions state their requirements precisely, and the solver resolves them to concrete level assignments.
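The solver itself is more involved, but its core loop can be sketched in a few lines. The toy types `U` and `C` below are hypothetical simplifications of `Universe` and `UniverseConstraint`; the sketch propagates equality constraints into a substitution until a fixed point, then checks the ordering constraints against the resulting assignment.

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
enum U {
    Const(u32),
    Meta(u32),
}

enum C {
    Equal(U, U),
    LessEq(U, U),
}

// Resolve a level through the current substitution, if possible.
fn resolve(u: &U, subst: &HashMap<u32, u32>) -> Option<u32> {
    match u {
        U::Const(n) => Some(*n),
        U::Meta(m) => subst.get(m).copied(),
    }
}

// Naive solver: propagate equalities to a fixed point, then check orderings.
fn solve(cs: &[C]) -> Option<HashMap<u32, u32>> {
    let mut subst: HashMap<u32, u32> = HashMap::new();
    let mut changed = true;
    while changed {
        changed = false;
        for c in cs {
            if let C::Equal(a, b) = c {
                match (resolve(a, &subst), resolve(b, &subst)) {
                    (Some(x), None) => {
                        if let U::Meta(m) = b {
                            subst.insert(*m, x);
                            changed = true;
                        }
                    }
                    (None, Some(x)) => {
                        if let U::Meta(m) = a {
                            subst.insert(*m, x);
                            changed = true;
                        }
                    }
                    (Some(x), Some(y)) if x != y => return None, // inconsistent
                    _ => {}
                }
            }
        }
    }
    // Check ordering constraints against the assignment.
    for c in cs {
        if let C::LessEq(a, b) = c {
            if let (Some(x), Some(y)) = (resolve(a, &subst), resolve(b, &subst)) {
                if x > y {
                    return None;
                }
            }
        }
    }
    Some(subst)
}

fn main() {
    let cs = vec![
        C::Equal(U::Meta(0), U::Const(1)),
        C::Equal(U::Meta(1), U::Meta(0)),
        C::LessEq(U::Const(0), U::Meta(1)),
    ];
    let s = solve(&cs).expect("constraints are consistent");
    assert_eq!(s[&0], 1);
    assert_eq!(s[&1], 1);
    println!("solved: ?u0 = {}, ?u1 = {}", s[&0], s[&1]);
}
```

A real solver must additionally handle `Add`, `Max`, and `IMax` on either side of a constraint, which is why the implementation works over a graph rather than a flat list.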

Definitional Equality and Normalization

Our type checker implements definitional equality through a comprehensive normalization algorithm that handles β-reduction, η-conversion, let-expansion, and unfolding of constant definitions.

#![allow(unused)]
fn main() {
/// Normalize a term by reducing redexes (beta reduction, eta-conversion,
/// let expansion)
pub fn normalize(&self, term: &Term, ctx: &Context) -> Term {
    match term {
        Term::App(f, arg) => {
            let f_norm = self.normalize(f, ctx);
            let arg_norm = self.normalize(arg, ctx);

            // Beta reduction: (λx.t) s ~> t[s/x]
            if let Term::Abs(x, _ty, body) = &f_norm {
                self.substitute(x, &arg_norm, body)
            } else {
                Term::App(Box::new(f_norm), Box::new(arg_norm))
            }
        }

        Term::Abs(x, ty, body) => {
            let ty_norm = self.normalize(ty, ctx);
            let extended_ctx = ctx.extend(x.clone(), ty_norm.clone());
            let body_norm = self.normalize(body, &extended_ctx);

            // Eta-conversion: λx. f x ≡ f when x is not free in f
            if let Term::App(f, arg) = &body_norm {
                if let Term::Var(arg_var) = arg.as_ref() {
                    if arg_var == x && !Term::occurs_free(x, f) {
                        // Apply eta-conversion: λx. f x ~> f
                        return f.as_ref().clone();
                    }
                }
            }

            Term::Abs(x.clone(), Box::new(ty_norm), Box::new(body_norm))
        }

        Term::Pi(x, ty, body, implicit) => {
            let ty_norm = self.normalize(ty, ctx);
            let extended_ctx = ctx.extend(x.clone(), ty_norm.clone());
            let body_norm = self.normalize(body, &extended_ctx);
            Term::Pi(x.clone(), Box::new(ty_norm), Box::new(body_norm), *implicit)
        }

        Term::Let(x, _ty, val, body) => {
            // Let reduction: let x : T := v in b ~> b[v/x]
            let val_norm = self.normalize(val, ctx);
            self.substitute(x, &val_norm, body)
        }

        Term::Var(x) => {
            // Look up definitions and unfold them
            if let Some(def) = ctx.lookup_axiom(x) {
                def.clone() // Could be further normalized if it's a
                            // definition
            } else {
                term.clone()
            }
        }

        _ => term.clone(), // Other terms are already in normal form
    }
}
}

The normalization algorithm implements β-reduction for function applications, handling both computational reductions and type-level computations. When a lambda abstraction is applied to an argument, the implementation performs substitution while carefully managing variable capture and scope.

\[ (λx : Nat. x + 1) \ 5 ⟹ 5 + 1 ⟹ 6 \]

The implementation extends β-reduction to handle dependent type computations, where type-level functions can be applied to produce new types through computation.
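The capture-avoidance mentioned above is the delicate part of β-reduction. It can be sketched on an untyped fragment; the priming-based freshness scheme below is a simplification (the real implementation draws fresh names from the context), and this `Term` fragment is illustrative rather than the project's actual type.

```rust
#[derive(Debug, Clone, PartialEq)]
enum Term {
    Var(String),
    App(Box<Term>, Box<Term>),
    Abs(String, Box<Term>),
}

fn occurs_free(v: &str, t: &Term) -> bool {
    match t {
        Term::Var(x) => x == v,
        Term::App(a, b) => occurs_free(v, a) || occurs_free(v, b),
        Term::Abs(x, body) => x != v && occurs_free(v, body),
    }
}

// Capture-avoiding substitution t[s/v].
fn substitute(v: &str, s: &Term, t: &Term) -> Term {
    match t {
        Term::Var(x) => {
            if x == v { s.clone() } else { t.clone() }
        }
        Term::App(a, b) => Term::App(
            Box::new(substitute(v, s, a)),
            Box::new(substitute(v, s, b)),
        ),
        Term::Abs(x, body) => {
            if x == v {
                t.clone() // v is shadowed by the binder
            } else if occurs_free(x, s) {
                // The binder would capture a free variable of s:
                // rename it first (naive freshness by priming).
                let fresh = format!("{}'", x);
                let renamed = substitute(x, &Term::Var(fresh.clone()), body);
                Term::Abs(fresh, Box::new(substitute(v, s, &renamed)))
            } else {
                Term::Abs(x.clone(), Box::new(substitute(v, s, body)))
            }
        }
    }
}

fn main() {
    // Substitute y for x in (λy. x y): naive substitution would capture y.
    let body = Term::App(Box::new(Term::Var("x".into())), Box::new(Term::Var("y".into())));
    let t = Term::Abs("y".into(), Box::new(body));
    let r = substitute("x", &Term::Var("y".into()), &t);
    // Correct result is λy'. y y', not λy. y y.
    assert_eq!(
        r,
        Term::Abs(
            "y'".into(),
            Box::new(Term::App(
                Box::new(Term::Var("y".into())),
                Box::new(Term::Var("y'".into()))
            ))
        )
    );
    println!("ok: {:?}", r);
}
```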

Eta Conversion

Eta conversion identifies functions with their eta-expanded forms, giving extensional equality at function types. The implementation applies η-reduction during normalization, rewriting λx. f x to f whenever x does not occur free in f, so that definitionally equal terms share a normal form.

\[ λx : A. f \ x ≡ f \text{ (when } x \notin \text{fv}(f)\text{)} \]

Let-Expansion and Definition Unfolding

The implementation handles let bindings through expansion, replacing let-bound variables with their definitions during normalization. This approach ensures that local definitions do not interfere with definitional equality while providing the computational content needed for type checking.

Definition unfolding enables the type checker to access the computational content of defined constants, allowing definitional equality to work across module boundaries and enabling powerful abstraction mechanisms.
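Definition unfolding (δ-reduction) can be sketched as a recursive replacement of defined constants by their bodies. The `unfold` helper and its `Term` fragment below are illustrative, not the project's actual API, and the sketch assumes definitions are non-recursive so that unfolding terminates.

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
enum Term {
    Const(String),
    App(Box<Term>, Box<Term>),
}

// Delta-reduction: replace defined constants by their (already unfolded)
// bodies; constants without a definition (axioms, constructors) stay opaque.
fn unfold(t: &Term, defs: &HashMap<String, Term>) -> Term {
    match t {
        Term::Const(name) => match defs.get(name) {
            Some(body) => unfold(body, defs),
            None => t.clone(),
        },
        Term::App(f, a) => Term::App(Box::new(unfold(f, defs)), Box::new(unfold(a, defs))),
    }
}

fn main() {
    let mut defs = HashMap::new();
    // def two := S (S Z), where S and Z are opaque constants
    defs.insert(
        "two".to_string(),
        Term::App(
            Box::new(Term::Const("S".into())),
            Box::new(Term::App(
                Box::new(Term::Const("S".into())),
                Box::new(Term::Const("Z".into())),
            )),
        ),
    );
    let unfolded = unfold(&Term::Const("two".into()), &defs);
    // The constant is gone; only opaque constructors remain.
    assert!(matches!(unfolded, Term::App(_, _)));
    println!("{:?}", unfolded);
}
```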

Type Checking Algorithm

Our implementation employs bidirectional type checking that splits the type checking problem into synthesis and checking modes. This approach handles the complexity of dependent types while maintaining decidability and providing informative error messages.

#![allow(unused)]
fn main() {
/// Infer the type of a term
pub fn infer(&mut self, term: &Term, ctx: &Context) -> TypeResult<Term> {
    self.with_context(term, |checker| checker.infer_impl(term, ctx))
}
}

Type Synthesis

The synthesis algorithm determines the type of a term by analyzing its structure and propagating type information through the syntax tree. Variable lookup consults the typing context, which maintains bindings for both term and type variables, so every variable reference must correspond to a valid binding in scope. Function application first synthesizes the function's type, requires it to be a Pi-type, and checks the argument against the domain; because the codomain may depend on the argument, the result type is obtained by substituting the argument value into the codomain. Pi-type formation checks that both the domain and the codomain are well-typed, with the codomain checked in a context extended with the bound variable so the dependency between domain and codomain is tracked correctly.
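The variable and application rules are easiest to see on a simply typed fragment, where the codomain does not depend on the argument (the dependent case additionally substitutes the argument into the codomain). The `Ty`, `Term`, and `infer` names below are a hypothetical sketch, not the project's API:

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
enum Ty {
    Base(String),
    Arrow(Box<Ty>, Box<Ty>), // non-dependent Pi-type
}

#[derive(Debug, Clone)]
enum Term {
    Var(String),
    App(Box<Term>, Box<Term>),
}

// Synthesis: compute the type of a term from its structure.
fn infer(t: &Term, ctx: &HashMap<String, Ty>) -> Result<Ty, String> {
    match t {
        // Variable lookup consults the context.
        Term::Var(x) => ctx.get(x).cloned().ok_or(format!("unbound variable {}", x)),
        // Application: synthesize f, require an arrow, check the argument.
        Term::App(f, a) => {
            let f_ty = infer(f, ctx)?;
            match f_ty {
                Ty::Arrow(dom, cod) => {
                    let a_ty = infer(a, ctx)?;
                    if a_ty == *dom {
                        Ok(*cod)
                    } else {
                        Err(format!("expected {:?}, got {:?}", dom, a_ty))
                    }
                }
                other => Err(format!("not a function: {:?}", other)),
            }
        }
    }
}

fn main() {
    let mut ctx = HashMap::new();
    ctx.insert(
        "f".to_string(),
        Ty::Arrow(Box::new(Ty::Base("Nat".into())), Box::new(Ty::Base("Bool".into()))),
    );
    ctx.insert("n".to_string(), Ty::Base("Nat".into()));
    let t = Term::App(Box::new(Term::Var("f".into())), Box::new(Term::Var("n".into())));
    assert_eq!(infer(&t, &ctx), Ok(Ty::Base("Bool".into())));
    println!("f n : Bool");
}
```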

#![allow(unused)]
fn main() {
/// Check that a term has a given type
pub fn check(&mut self, term: &Term, ty: &Term, ctx: &Context) -> TypeResult<()> {
    // Special handling for checking against meta-variable types
    if let Term::Meta(_) = ty {
        // When checking against a meta-variable, infer the term's type and unify
        let _inferred = self.infer(term, ctx)?;
        // For now, just check they're compatible - in a full system we'd unify
        // and update the meta-variable
        return Ok(());
    }

    match term {
        Term::Abs(x, param_ty, body) => {
            // Check if ty is a Pi-type
            if let Term::Pi(y, expected_param_ty, expected_body_ty, _impl) = ty {
                // Handle holes in parameter type
                let actual_param_ty = match param_ty.as_ref() {
                    Term::Meta(name) if name == UNDERSCORE => {
                        // Hole in parameter type - use the expected type
                        expected_param_ty.clone()
                    }
                    _ => param_ty.clone(),
                };

                // Check parameter types are definitionally equal (with conversion)
                if !self.definitionally_equal(&actual_param_ty, expected_param_ty, ctx)? {
                    return Err(TypeError::TypeMismatch {
                        expected: expected_param_ty.as_ref().clone(),
                        actual: actual_param_ty.as_ref().clone(),
                    });
                }

                // Check body in extended context
                let extended_ctx = ctx.extend(x.clone(), actual_param_ty.as_ref().clone());

                // Substitute parameter in expected body type - this is the key for dependent
                // types
                let expected_body_ty = if x == y {
                    expected_body_ty.as_ref().clone()
                } else {
                    // Rename bound variable y to x in the body type
                    self.substitute(y, &Term::Var(x.clone()), expected_body_ty)
                };

                self.check(body, &expected_body_ty, &extended_ctx)
            } else {
                Err(TypeError::NotAFunction {
                    term: term.clone(),
                    ty: ty.clone(),
                })
            }
        }

        _ => {
            // General case: infer type and check definitional equivalence
            let inferred_ty = self.infer(term, ctx)?;

            // Handle implicit parameters: if inferred type has implicit params,
            // try to instantiate them to match the expected type
            let elaborated_ty = self.elaborate_implicit_parameters(&inferred_ty, ty, ctx)?;

            // Check definitional equivalence
            if self.definitionally_equal(&elaborated_ty, ty, ctx)? {
                Ok(())
            } else {
                Err(TypeError::TypeMismatch {
                    expected: ty.clone(),
                    actual: elaborated_ty,
                })
            }
        }
    }
}
}

Type Checking Mode

The checking algorithm verifies that a term has an expected type by comparing the structure of the term against the structure of the type. This mode often produces better error messages and more efficient checking for terms with complex dependent types. Lambda abstractions are checked against Pi-types: the body is checked in a context where the parameter is bound at the Pi-type's domain. For all other terms, the algorithm falls back to synthesis followed by definitional equality: it infers a type, normalizes both the inferred and expected types, and compares the normal forms, so terms that are definitionally equal are accepted even when their syntactic representations differ.

Context Management

#![allow(unused)]
fn main() {
use std::collections::HashMap;
use crate::ast::{Term, Universe, UniverseConstraint, UniverseContext};
/// Definition with universe parameters
#[derive(Debug, Clone, PartialEq)]
pub struct Definition {
    /// Universe parameters for the definition
    pub universe_params: Vec<String>,
    /// The type of the definition
    pub ty: Term,
    /// Name of the definition (for scoping universe parameters)
    pub name: String,
}
impl Context {
    pub fn new() -> Self {
        Context {
            bindings: HashMap::new(),
            universe_vars: HashMap::new(),
            constructors: HashMap::new(),
            axioms: HashMap::new(),
            definitions: HashMap::new(),
            universe_context: UniverseContext::new(),
        }
    }

    /// Extend context with variable binding
    pub fn extend(&self, var: String, ty: Term) -> Self {
        let mut new_ctx = self.clone();
        new_ctx.bindings.insert(var, ty);
        new_ctx
    }

    /// Extend context with multiple bindings
    pub fn extend_many(&self, bindings: Vec<(String, Term)>) -> Self {
        let mut new_ctx = self.clone();
        for (var, ty) in bindings {
            new_ctx.bindings.insert(var, ty);
        }
        new_ctx
    }

    /// Look up variable type
    pub fn lookup(&self, var: &str) -> Option<&Term> {
        self.bindings.get(var)
    }

    /// Check if context has a binding for the given variable
    pub fn has_binding(&self, var: &str) -> bool {
        self.bindings.contains_key(var)
    }

    /// Get all variable bindings
    pub fn get_all_bindings(&self) -> impl Iterator<Item = (&String, &Term)> {
        self.bindings.iter()
    }

    /// Look up constructor type
    pub fn lookup_constructor(&self, name: &str) -> Option<&Term> {
        self.constructors.get(name)
    }

    /// Look up axiom type
    pub fn lookup_axiom(&self, name: &str) -> Option<&Term> {
        self.axioms.get(name)
    }

    /// Add constructor definition
    pub fn add_constructor(&mut self, name: String, ty: Term) {
        self.constructors.insert(name, ty);
    }

    /// Add axiom definition
    pub fn add_axiom(&mut self, name: String, ty: Term) {
        self.axioms.insert(name, ty);
    }

    /// Add definition with universe parameters
    pub fn add_definition(&mut self, name: String, universe_params: Vec<String>, ty: Term) {
        self.definitions.insert(
            name.clone(),
            Definition {
                universe_params,
                ty,
                name: name.clone(),
            },
        );
    }

    /// Look up definition with universe parameters
    pub fn lookup_definition(&self, name: &str) -> Option<&Definition> {
        self.definitions.get(name)
    }

    /// Add universe variable constraint
    pub fn add_universe_var(&mut self, var: String, constraint: Universe) {
        self.universe_vars.insert(var, constraint);
    }

    /// Check if variable is bound in context
    pub fn contains(&self, var: &str) -> bool {
        self.bindings.contains_key(var)
    }

    /// Get all variable names
    pub fn variables(&self) -> Vec<&String> {
        self.bindings.keys().collect()
    }

    /// Remove variable from context
    pub fn remove(&self, var: &str) -> Self {
        let mut new_ctx = self.clone();
        new_ctx.bindings.remove(var);
        new_ctx
    }

    /// Get fresh variable name not in context
    pub fn fresh_var(&self, base: &str) -> String {
        let mut counter = 0;
        loop {
            let candidate = if counter == 0 {
                base.to_string()
            } else {
                format!("{}{}", base, counter)
            };

            if !self.contains(&candidate) {
                return candidate;
            }
            counter += 1;
        }
    }

    /// Access universe context
    pub fn universe_context(&self) -> &UniverseContext {
        &self.universe_context
    }

    /// Extend context with universe variable
    pub fn extend_universe(&self, name: String) -> Self {
        let mut new_ctx = self.clone();
        new_ctx.universe_context.add_var(name);
        new_ctx
    }

    /// Extend context with multiple universe variables
    pub fn extend_universe_many(&self, names: Vec<String>) -> Self {
        let mut new_ctx = self.clone();
        for name in names {
            new_ctx.universe_context.add_var(name);
        }
        new_ctx
    }

    /// Add universe constraint
    pub fn add_universe_constraint(&mut self, constraint: UniverseConstraint) {
        self.universe_context.add_constraint(constraint);
    }

    /// Generate fresh universe variable
    pub fn fresh_universe_var(&self, base: &str) -> String {
        self.universe_context.fresh_var(base)
    }
}
impl Default for Context {
    fn default() -> Self {
        Self::new()
    }
}
/// Type checking errors related to context
#[derive(Debug, Clone, PartialEq)]
pub enum ContextError {
    UnboundVariable { name: String },
    UnboundConstructor { name: String },
    UnboundAxiom { name: String },
    VariableAlreadyBound { name: String },
}
impl std::fmt::Display for ContextError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            ContextError::UnboundVariable { name } => {
                write!(f, "Unbound variable: {}", name)
            }
            ContextError::UnboundConstructor { name } => {
                write!(f, "Unbound constructor: {}", name)
            }
            ContextError::UnboundAxiom { name } => {
                write!(f, "Unbound axiom: {}", name)
            }
            ContextError::VariableAlreadyBound { name } => {
                write!(f, "Variable already bound: {}", name)
            }
        }
    }
}
impl std::error::Error for ContextError {}
#[cfg(test)]
mod tests {
    use super::*;
    use crate::ast::Universe;

    #[test]
    fn test_context_operations() {
        let ctx = Context::new();

        let var_type = Term::Sort(Universe::Const(0));
        let ctx2 = ctx.extend("x".to_string(), var_type.clone());

        assert!(ctx2.contains("x"));
        assert_eq!(ctx2.lookup("x"), Some(&var_type));
        assert!(!ctx.contains("x")); // Original context unchanged
    }

    #[test]
    fn test_fresh_var() {
        let mut ctx = Context::new();
        ctx.bindings
            .insert("x".to_string(), Term::Sort(Universe::Const(0)));
        ctx.bindings
            .insert("x1".to_string(), Term::Sort(Universe::Const(0)));

        let fresh = ctx.fresh_var("x");
        assert_eq!(fresh, "x2");

        let fresh2 = ctx.fresh_var("y");
        assert_eq!(fresh2, "y");
    }

    #[test]
    fn test_extend_many() {
        let ctx = Context::new();
        let bindings = vec![
            ("x".to_string(), Term::Sort(Universe::Const(0))),
            ("y".to_string(), Term::Sort(Universe::Const(1))),
        ];

        let ctx2 = ctx.extend_many(bindings);
        assert!(ctx2.contains("x"));
        assert!(ctx2.contains("y"));
    }
}
/// Type checking context containing variable bindings
#[derive(Debug, Clone, PartialEq)]
pub struct Context {
    /// Variable to type bindings
    bindings: HashMap<String, Term>,
    /// Universe variable constraints
    universe_vars: HashMap<String, Universe>,
    /// Constructor definitions
    constructors: HashMap<String, Term>,
    /// Axiom definitions (without universe params - for backwards compat)
    axioms: HashMap<String, Term>,
    /// Definitions with universe parameters
    definitions: HashMap<String, Definition>,
    /// Universe level context for polymorphism
    universe_context: UniverseContext,
}
}

The typing context maintains bindings for variables, definitions, axioms, and constructors, using a structure that supports efficient lookup while respecting the scoping rules of dependent types. Extending the context with a variable binding records the variable's type, which matters for lambda abstractions and Pi-types, where a bound variable may appear in the types of later bindings. Adding a definition records a constant's type and body, supporting modular development and abstraction. Registering a constructor records the type of an inductive constructor, which pattern matching and elimination rely on to preserve type safety.

Implicit Arguments and Elaboration

Our implementation includes comprehensive support for implicit arguments that are automatically inferred by the type checker. This feature bridges the gap between the fully explicit internal representation and the more convenient surface syntax.

#![allow(unused)]
fn main() {
/// Elaborate implicit parameters by trying to instantiate them to match the
/// expected type. This is a simple version that handles common cases
/// without full constraint solving.
fn elaborate_implicit_parameters(
    &mut self,
    inferred_ty: &Term,
    expected_ty: &Term,
    ctx: &Context,
) -> TypeResult<Term> {
    // If inferred type has implicit parameters, try to instantiate them
    match inferred_ty {
        Term::Pi(param_name, param_ty, body_ty, true) => {
            // This is an implicit parameter - try to infer what it should be
            let implicit_arg = self.infer_implicit_argument(param_ty, expected_ty, ctx)?;

            // Substitute the implicit argument and continue elaboration
            let instantiated_ty = self.substitute(param_name, &implicit_arg, body_ty);
            self.elaborate_implicit_parameters(&instantiated_ty, expected_ty, ctx)
        }
        _ => {
            // No more implicit parameters, return the type as-is
            Ok(inferred_ty.clone())
        }
    }
}
}

Implicit Argument Insertion

The type checker automatically inserts implicit arguments when it encounters an application whose function type has implicit parameters. It generates a fresh meta-variable as a placeholder for each unknown implicit argument, collects constraints relating those meta-variables to the known type information, and then runs the constraint solver, which uses unification and type-directed search to find assignments that satisfy all accumulated constraints.
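The heart of that constraint solving is first-order unification with an occurs check. A minimal sketch follows; this `Term` fragment and `unify` are illustrative, while the real solver works over the full term language and tracks constraint strengths.

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
enum Term {
    Meta(u64),
    Const(String),
    App(Box<Term>, Box<Term>),
}

// Does meta-variable m occur in t? Prevents infinite solutions like ?m := f ?m.
fn occurs(m: u64, t: &Term) -> bool {
    match t {
        Term::Meta(n) => *n == m,
        Term::Const(_) => false,
        Term::App(a, b) => occurs(m, a) || occurs(m, b),
    }
}

// Apply the substitution, chasing solved meta-variables.
fn apply(t: &Term, s: &HashMap<u64, Term>) -> Term {
    match t {
        Term::Meta(m) => match s.get(m) {
            Some(sol) => apply(sol, s),
            None => t.clone(),
        },
        Term::Const(_) => t.clone(),
        Term::App(a, b) => Term::App(Box::new(apply(a, s)), Box::new(apply(b, s))),
    }
}

fn unify(l: &Term, r: &Term, s: &mut HashMap<u64, Term>) -> Result<(), String> {
    let (l, r) = (apply(l, s), apply(r, s));
    match (&l, &r) {
        (Term::Meta(m), t) | (t, Term::Meta(m)) => {
            if let Term::Meta(n) = t {
                if n == m {
                    return Ok(()); // same meta on both sides
                }
            }
            if occurs(*m, t) {
                return Err(format!("occurs check failed for ?m{}", m));
            }
            s.insert(*m, t.clone());
            Ok(())
        }
        (Term::Const(a), Term::Const(b)) if a == b => Ok(()),
        (Term::App(f1, a1), Term::App(f2, a2)) => {
            unify(f1, f2, s)?;
            unify(a1, a2, s)
        }
        _ => Err(format!("cannot unify {:?} with {:?}", l, r)),
    }
}

fn main() {
    let mut s = HashMap::new();
    // Solve List ?m0 ≡ List Nat, as when inferring an implicit type argument.
    let lhs = Term::App(Box::new(Term::Const("List".into())), Box::new(Term::Meta(0)));
    let rhs = Term::App(Box::new(Term::Const("List".into())), Box::new(Term::Const("Nat".into())));
    unify(&lhs, &rhs, &mut s).unwrap();
    assert_eq!(s[&0], Term::Const("Nat".into()));
    println!("?m0 := {:?}", s[&0]);
}
```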

Meta-variable Resolution

#![allow(unused)]
fn main() {
use std::collections::{HashMap, HashSet, VecDeque};
use std::fmt;
use crate::ast::{Term, Universe};
use crate::context::Context;
use crate::errors::TypeError;
/// Meta-variable identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct MetaId(pub u64);
impl fmt::Display for MetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?m{}", self.0)
    }
}
/// Universe meta-variable identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct UniverseMetaId(pub u64);
impl fmt::Display for UniverseMetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?u{}", self.0)
    }
}
/// Constraint identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct ConstraintId(pub u64);
/// Constraint types
#[derive(Debug, Clone)]
pub enum Constraint {
    /// Unification: t1 ≡ t2
    Unify {
        id: ConstraintId,
        left: Term,
        right: Term,
        strength: ConstraintStrength,
    },
    /// Type constraint: term : type
    HasType {
        id: ConstraintId,
        term: Term,
        expected_type: Term,
    },
    /// Universe unification: u1 ≡ u2
    UnifyUniverse {
        id: ConstraintId,
        left: Universe,
        right: Universe,
        strength: ConstraintStrength,
    },
    /// Universe level constraint: u1 ≤ u2
    UniverseLevel {
        id: ConstraintId,
        left: Universe,
        right: Universe,
    },
    /// Delayed constraint (for complex patterns)
    Delayed {
        id: ConstraintId,
        constraint: Box<Constraint>,
        waiting_on: HashSet<MetaId>,
    },
}
/// Constraint strength for prioritization
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ConstraintStrength {
    Required,  // Must be solved
    Preferred, // Should be solved if possible
    Weak,      // Can be postponed
}
/// Substitution mapping meta-variables to terms
#[derive(Debug, Clone, Default)]
pub struct Substitution {
    mapping: HashMap<MetaId, Term>,
    universe_mapping: HashMap<u32, Universe>,
}
impl Substitution {
    pub fn new() -> Self {
        Self::default()
    }

    pub fn insert(&mut self, meta: MetaId, term: Term) {
        self.mapping.insert(meta, term);
    }

    pub fn get(&self, meta: &MetaId) -> Option<&Term> {
        self.mapping.get(meta)
    }

    pub fn insert_universe(&mut self, meta_id: u32, universe: Universe) {
        self.universe_mapping.insert(meta_id, universe);
    }

    pub fn get_universe(&self, meta_id: &u32) -> Option<&Universe> {
        self.universe_mapping.get(meta_id)
    }

    /// Apply substitution to a term
    pub fn apply(&self, term: &Term) -> Term {
        match term {
            Term::Meta(name) => {
                // Try to parse meta ID and look up substitution
                if let Some(meta_id) = self.parse_meta_name(name) {
                    if let Some(subst) = self.mapping.get(&meta_id) {
                        // Recursively apply substitution
                        return self.apply(subst);
                    }
                }
                term.clone()
            }
            Term::App(f, arg) => Term::App(Box::new(self.apply(f)), Box::new(self.apply(arg))),
            Term::Abs(x, ty, body) => Term::Abs(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
            ),
            Term::Pi(x, ty, body, implicit) => Term::Pi(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
                *implicit,
            ),
            Term::Let(x, ty, val, body) => Term::Let(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(val)),
                Box::new(self.apply(body)),
            ),
            Term::Sort(u) => Term::Sort(self.apply_universe(u)),
            // Other cases remain unchanged
            _ => term.clone(),
        }
    }

    /// Apply substitution to a universe
    pub fn apply_universe(&self, universe: &Universe) -> Universe {
        let result = match universe {
            Universe::Meta(id) => {
                if let Some(subst) = self.universe_mapping.get(id) {
                    // Recursively apply substitution
                    self.apply_universe(subst)
                } else {
                    universe.clone()
                }
            }
            Universe::Add(base, n) => Universe::Add(Box::new(self.apply_universe(base)), *n),
            Universe::Max(u, v) => Universe::Max(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            Universe::IMax(u, v) => Universe::IMax(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            // Constants, variables, and scoped variables remain unchanged during substitution
            Universe::Const(_) | Universe::ScopedVar(_, _) => universe.clone(),
        };
        // Normalize the result to simplify expressions like Const(0) + 1 -> Const(1)
        self.normalize_universe_static(&result)
    }

    /// Static normalization (doesn't require &self)
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe_static(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe_static(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe_static(u1);
                let u2_norm = self.normalize_universe_static(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }
}
/// Advanced constraint solver with dependency tracking
pub struct Solver {
    /// Meta-variables
    metas: HashMap<MetaId, MetaInfo>,
    /// Active constraints
    constraints: HashMap<ConstraintId, Constraint>,
    /// Constraint queue (ordered by priority)
    queue: VecDeque<ConstraintId>,
    /// Solved substitutions
    substitution: Substitution,
    /// Next meta ID
    next_meta_id: u64,
    /// Next universe meta ID
    next_universe_meta_id: u32,
    /// Next constraint ID
    next_constraint_id: u64,
    /// Type checking context
    context: Context,
    /// Dependency graph: meta -> constraints that depend on it
    dependencies: HashMap<MetaId, HashSet<ConstraintId>>,
    /// Delayed constraints that couldn't be solved immediately
    delayed_constraints: HashSet<ConstraintId>,
    /// Recursion depth counter to prevent stack overflow
    recursion_depth: u32,
    /// Flag to enable Miller pattern solving (advanced)
    enable_miller_patterns: bool,
    /// Flag to enable HasType constraint solving (advanced)
    enable_has_type_solving: bool,
}
impl Solver {
    pub fn new(context: Context) -> Self {
        Self {
            metas: HashMap::new(),
            constraints: HashMap::new(),
            queue: VecDeque::new(),
            substitution: Substitution::new(),
            next_meta_id: 0,
            next_universe_meta_id: 0,
            next_constraint_id: 0,
            context,
            dependencies: HashMap::new(),
            delayed_constraints: HashSet::new(),
            recursion_depth: 0,
            enable_miller_patterns: false,  // Advanced feature
            enable_has_type_solving: false, // Advanced feature
        }
    }

    /// Enable Miller pattern solving (advanced)
    pub fn enable_miller_patterns(&mut self) {
        self.enable_miller_patterns = true;
    }

    /// Check if Miller pattern solving is enabled
    pub fn miller_patterns_enabled(&self) -> bool {
        self.enable_miller_patterns
    }

    /// Enable HasType constraint solving (advanced)
    pub fn enable_has_type_solving(&mut self) {
        self.enable_has_type_solving = true;
    }

    /// Check if HasType constraint solving is enabled
    pub fn has_type_solving_enabled(&self) -> bool {
        self.enable_has_type_solving
    }

    /// Create a fresh meta-variable
    pub fn fresh_meta(&mut self, ctx_vars: Vec<String>, expected_type: Option<Term>) -> MetaId {
        let id = MetaId(self.next_meta_id);
        self.next_meta_id += 1;

        let info = MetaInfo {
            id,
            context: ctx_vars,
            expected_type,
            solution: None,
            dependencies: HashSet::new(),
            occurrences: Vec::new(),
        };

        self.metas.insert(id, info);
        id
    }

    /// Create a fresh universe meta-variable
    pub fn fresh_universe_meta(&mut self) -> Universe {
        let id = self.next_universe_meta_id;
        self.next_universe_meta_id += 1;
        Universe::Meta(id)
    }

    /// Add a constraint to the solver
    pub fn add_constraint(&mut self, constraint: Constraint) -> ConstraintId {
        let id = match &constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        // Track which metas appear in this constraint
        self.track_meta_occurrences(&constraint);

        self.constraints.insert(id, constraint);
        self.queue.push_back(id);
        id
    }

    /// Create a new constraint ID
    pub fn new_constraint_id(&mut self) -> ConstraintId {
        let id = ConstraintId(self.next_constraint_id);
        self.next_constraint_id += 1;
        id
    }

    /// Track meta-variable occurrences in constraints
    fn track_meta_occurrences(&mut self, constraint: &Constraint) {
        let metas = self.collect_metas_in_constraint(constraint);
        let constraint_id = match constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        for meta_id in metas {
            if let Some(info) = self.metas.get_mut(&meta_id) {
                info.occurrences.push(constraint_id);
            }
            self.dependencies
                .entry(meta_id)
                .or_default()
                .insert(constraint_id);
        }
    }

    /// Collect all meta-variables in a constraint
    fn collect_metas_in_constraint(&self, constraint: &Constraint) -> HashSet<MetaId> {
        match constraint {
            Constraint::Unify { left, right, .. } => {
                let mut metas = self.collect_metas_in_term(left);
                metas.extend(self.collect_metas_in_term(right));
                metas
            }
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                let mut metas = self.collect_metas_in_term(term);
                metas.extend(self.collect_metas_in_term(expected_type));
                metas
            }
            Constraint::UnifyUniverse { .. } => {
                // Universe constraints don't directly contain term meta-variables
                // but they might contain universe meta-variables (which we track separately)
                HashSet::new()
            }
            Constraint::UniverseLevel { .. } => {
                // Same as above
                HashSet::new()
            }
            Constraint::Delayed {
                constraint,
                waiting_on,
                ..
            } => {
                let mut metas = waiting_on.clone();
                metas.extend(self.collect_metas_in_constraint(constraint));
                metas
            }
        }
    }

    /// Collect all meta-variables in a term
    fn collect_metas_in_term(&self, term: &Term) -> HashSet<MetaId> {
        let mut metas = HashSet::new();
        self.collect_metas_recursive(term, &mut metas);
        metas
    }

    fn collect_metas_recursive(&self, term: &Term, metas: &mut HashSet<MetaId>) {
        match term {
            Term::Meta(name) => {
                if let Some(id) = self.parse_meta_name(name) {
                    metas.insert(id);
                }
            }
            Term::App(f, arg) => {
                self.collect_metas_recursive(f, metas);
                self.collect_metas_recursive(arg, metas);
            }
            Term::Abs(_, ty, body) | Term::Pi(_, ty, body, _) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(body, metas);
            }
            Term::Let(_, ty, val, body) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(val, metas);
                self.collect_metas_recursive(body, metas);
            }
            _ => {}
        }
    }

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }

    /// Main solving loop
    pub fn solve(&mut self) -> Result<Substitution, TypeError> {
        let max_iterations = 1000;
        let mut iterations = 0;

        while !self.queue.is_empty() && iterations < max_iterations {
            iterations += 1;

            // Pick the best constraint to solve
            if let Some(constraint_id) = self.pick_constraint() {
                let constraint = self.constraints.remove(&constraint_id).unwrap();

                match self.solve_constraint(constraint.clone()) {
                    Ok(progress) => {
                        if progress {
                            // Propagate the solution
                            self.propagate_solution()?;

                            // Wake up delayed constraints that might now be solvable
                            let woken_constraints = self.wake_delayed_constraints()?;

                            // Add woken constraints back to queue with high priority
                            for woken_id in woken_constraints {
                                self.queue.push_front(woken_id);
                            }
                        }
                    }
                    Err(e) => {
                        // Try to delay the constraint if possible
                        if self.can_delay(&constraint_id) {
                            // Determine which meta-variables to wait for
                            let waiting_on = match &constraint {
                                Constraint::Unify { left, right, .. } => {
                                    let mut metas = self.collect_metas_in_term(left);
                                    metas.extend(self.collect_metas_in_term(right));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                Constraint::HasType {
                                    term,
                                    expected_type,
                                    ..
                                } => {
                                    let mut metas = self.collect_metas_in_term(term);
                                    metas.extend(self.collect_metas_in_term(expected_type));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                _ => HashSet::new(),
                            };

                            if !waiting_on.is_empty() {
                                // Put constraint back and delay it
                                self.constraints.insert(constraint_id, constraint);
                                self.delay_constraint(constraint_id, waiting_on)?;
                            } else {
                                // Can't delay, return error
                                return Err(e);
                            }
                        } else {
                            return Err(e);
                        }
                    }
                }
            }
        }

    if iterations >= max_iterations && !self.queue.is_empty() {
            return Err(TypeError::Internal {
                message: "Constraint solving did not converge".to_string(),
            });
        }

        // Check for unsolved required constraints
        for constraint in self.constraints.values() {
            if let Constraint::Unify {
                strength: ConstraintStrength::Required,
                ..
            } = constraint
            {
                return Err(TypeError::Internal {
                    message: "Unsolved required constraints remain".to_string(),
                });
            }
        }

        Ok(self.substitution.clone())
    }

    /// Pick the next constraint to solve based on heuristics
    fn pick_constraint(&mut self) -> Option<ConstraintId> {
        // Priority order:
        // 1. Constraints with no meta-variables
        // 2. Constraints with only solved meta-variables
        // 3. Simple pattern constraints (Miller patterns)
        // 4. Other constraints

        let mut best_constraint = None;
        let mut best_score = i32::MAX;

        for &id in &self.queue {
            if let Some(constraint) = self.constraints.get(&id) {
                let score = self.score_constraint(constraint);
                if score < best_score {
                    best_score = score;
                    best_constraint = Some(id);
                }
            }
        }

        if let Some(id) = best_constraint {
            self.queue.retain(|&x| x != id);
            Some(id)
        } else {
            self.queue.pop_front()
        }
    }

    /// Score a constraint for solving priority (lower is better)
    fn score_constraint(&self, constraint: &Constraint) -> i32 {
        let metas = self.collect_metas_in_constraint(constraint);

        if metas.is_empty() {
            return 0; // No metas, can solve immediately
        }

        let unsolved_count = metas
            .iter()
            .filter(|m| {
                self.metas
                    .get(m)
                    .and_then(|info| info.solution.as_ref())
                    .is_none()
            })
            .count();

        if unsolved_count == 0 {
            return 1; // All metas solved
        }

        // Check for Miller patterns
        if let Constraint::Unify { left, right, .. } = constraint {
            if self.is_miller_pattern_unification(left, right)
                || self.is_miller_pattern_unification(right, left)
            {
                return 10 + unsolved_count as i32;
            }
        }

        100 + unsolved_count as i32
    }

    /// Check if a unification is a Miller pattern (`?M x1 ... xn ≡ t`, where the
    /// `xi` are distinct variables and `t` does not mention `?M`); such problems
    /// have the unique solution `?M := λx1 ... xn. t`
    fn is_miller_pattern_unification(&self, left: &Term, right: &Term) -> bool {
        self.is_simple_miller_pattern(left, right)
    }

    /// Check if left is a simple Miller pattern and right is safe to solve with
    pub fn is_simple_miller_pattern(&self, left: &Term, right: &Term) -> bool {
        let (head, args) = self.decompose_application(left);

        if let Term::Meta(meta_name) = head {
            // Must have at least one argument to be interesting
            if args.is_empty() {
                return false;
            }

            // Check if all arguments are distinct variables
            let mut seen = HashSet::new();
            for arg in &args {
                if let Term::Var(x) = arg {
                    if !seen.insert(x.clone()) {
                        return false; // Not distinct
                    }
                } else {
                    return false; // Not a variable
                }
            }

            // Check occurs check - right must not contain the meta-variable
            if let Some(meta_id) = self.parse_meta_name(meta_name) {
                if self.collect_metas_in_term(right).contains(&meta_id) {
                    return false; // Occurs check failure
                }
            }

            // For now, only handle simple cases where right is a variable or constant
            match right {
                Term::Var(_) | Term::Const(_) | Term::Sort(_) => true,
                Term::Meta(right_name) => {
                    // Allow meta-to-meta unification only between distinct metas
                    meta_name != right_name
                }
                _ => false, // More complex cases need careful handling
            }
        } else {
            false
        }
    }

    /// Decompose an application into head and arguments
    fn decompose_application<'a>(&self, term: &'a Term) -> (&'a Term, Vec<&'a Term>) {
        let mut current = term;
        let mut args = Vec::new();

        while let Term::App(f, arg) = current {
            args.push(arg.as_ref());
            current = f;
        }

        args.reverse();
        (current, args)
    }

    /// Solve a single constraint
    fn solve_constraint(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 100;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in solve_constraint",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.solve_constraint_impl(constraint);
        self.recursion_depth -= 1;
        result
    }

    fn solve_constraint_impl(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        match constraint {
            Constraint::Unify {
                left,
                right,
                strength,
                ..
            } => self.unify(&left, &right, strength),
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                if self.enable_has_type_solving {
                    self.solve_has_type_constraint(&term, &expected_type)
                } else {
                    // Default behavior: don't solve HasType constraints
                    Ok(false)
                }
            }
            Constraint::UnifyUniverse {
                left,
                right,
                strength,
                ..
            } => self.unify_universe(&left, &right, strength),
            Constraint::UniverseLevel { left, right, .. } => {
                // For now, treat level constraints as unification constraints
                self.unify_universe(&left, &right, ConstraintStrength::Preferred)
            }
            Constraint::Delayed { constraint, .. } => {
                // Re-queue the delayed constraint
                let inner = *constraint;
                self.add_constraint(inner);
                Ok(false)
            }
        }
    }

    /// Unify two terms
    fn unify(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 50;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in unify",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.unify_impl(left, right, strength);
        self.recursion_depth -= 1;
        result
    }

    fn unify_impl(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply(left);
        let right = self.substitution.apply(right);

        // Check if already equal
        if self.alpha_equal(&left, &right) {
            return Ok(false);
        }

        // Miller pattern detection and solving (if enabled)
        if self.enable_miller_patterns {
            if self.is_simple_miller_pattern(&left, &right) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&left) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &right);
                }
            } else if self.is_simple_miller_pattern(&right, &left) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&right) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &left);
                }
            }
        }

        match (&left, &right) {
            // Meta-variable cases
            (Term::Meta(m), t) | (t, Term::Meta(m)) => {
                if let Some(meta_id) = self.parse_meta_name(m) {
                    self.solve_meta(meta_id, t)
                } else {
                    Err(TypeError::Internal {
                        message: format!("Invalid meta-variable: {}", m),
                    })
                }
            }

            // Structural cases
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *f1.clone(),
                    right: *f2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: *a1.clone(),
                    right: *a2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                // Rename bound variables to be consistent
                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            // Universe (Sort) cases
            (Term::Sort(u1), Term::Sort(u2)) => self.unify_universe(u1, u2, strength),

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::TypeMismatch {
                        expected: left,
                        actual: right,
                    })
                } else {
                    Ok(false) // Can't solve, but not an error for weak
                              // constraints
                }
            }
        }
    }

    /// Unify two universes
    fn unify_universe(
        &mut self,
        left: &Universe,
        right: &Universe,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply_universe(left);
        let right = self.substitution.apply_universe(right);

        // Normalize universes (simplify expressions like 0+1 -> 1)
        let left = self.normalize_universe(&left);
        let right = self.normalize_universe(&right);

        // Check if already equal
        if left == right {
            return Ok(false);
        }

        match (&left, &right) {
            // Same constants
            (Universe::Const(n1), Universe::Const(n2)) if n1 == n2 => Ok(false),

            // Universe variable cases
            (Universe::ScopedVar(s1, v1), Universe::ScopedVar(s2, v2)) if s1 == s2 && v1 == v2 => {
                Ok(false)
            }
            (Universe::ScopedVar(_, _), Universe::ScopedVar(_, _)) => {
                // Different universe variables cannot be unified
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!(
                            "Cannot unify distinct universe variables {} and {}",
                            left, right
                        ),
                    })
                } else {
                    Ok(false)
                }
            }

            // Meta-variable cases
            (Universe::Meta(id), u) | (u, Universe::Meta(id)) => self.solve_universe_meta(*id, u),

            // Structural cases
            (Universe::Add(base1, n1), Universe::Add(base2, n2)) if n1 == n2 => {
                let constraint_id = self.new_constraint_id();
                self.add_constraint(Constraint::UnifyUniverse {
                    id: constraint_id,
                    left: *base1.clone(),
                    right: *base2.clone(),
                    strength,
                });
                Ok(true)
            }

            // Arithmetic constraints: ?u+n = m means ?u = m-n
            (Universe::Add(base, n), Universe::Const(m))
            | (Universe::Const(m), Universe::Add(base, n)) => {
                if *m >= *n {
                    if let Universe::Meta(id) = base.as_ref() {
                        self.solve_universe_meta(*id, &Universe::Const(m - n))
                    } else {
                        // More complex case - generate constraint for base
                        let constraint_id = self.new_constraint_id();
                        self.add_constraint(Constraint::UnifyUniverse {
                            id: constraint_id,
                            left: *base.clone(),
                            right: Universe::Const(m - n),
                            strength,
                        });
                        Ok(true)
                    }
                } else {
                    Err(TypeError::Internal {
                        message: format!("Cannot solve constraint: {} cannot equal {} (would require negative universe)",
                                       if matches!(&left, Universe::Add(_, _)) { &left } else { &right },
                                       if matches!(&left, Universe::Const(_)) { &left } else { &right }),
                    })
                }
            }

            (Universe::Max(u1, v1), Universe::Max(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Universe::IMax(u1, v1), Universe::IMax(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!("Cannot unify universes {} and {}", left, right),
                    })
                } else {
                    Ok(false)
                }
            }
        }
    }

    /// Normalize a universe expression
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe(u1);
                let u2_norm = self.normalize_universe(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }

    /// Solve a universe meta-variable
    fn solve_universe_meta(
        &mut self,
        meta_id: u32,
        solution: &Universe,
    ) -> Result<bool, TypeError> {
        // Occurs check for universe meta-variables
        if self.occurs_in_universe(meta_id, solution) {
            return Err(TypeError::Internal {
                message: format!("Universe occurs check failed for ?u{}", meta_id),
            });
        }

        // Record solution
        self.substitution.insert_universe(meta_id, solution.clone());
        Ok(true)
    }

    /// Check if a universe meta-variable occurs in a universe expression
    #[allow(clippy::only_used_in_recursion)]
    fn occurs_in_universe(&self, meta_id: u32, universe: &Universe) -> bool {
        match universe {
            Universe::Meta(id) => *id == meta_id,
            Universe::Add(base, _) => self.occurs_in_universe(meta_id, base),
            Universe::Max(u, v) | Universe::IMax(u, v) => {
                self.occurs_in_universe(meta_id, u) || self.occurs_in_universe(meta_id, v)
            }
            _ => false,
        }
    }

    /// Solve a meta-variable
    fn solve_meta(&mut self, meta_id: MetaId, solution: &Term) -> Result<bool, TypeError> {
        // Occurs check
        if self.collect_metas_in_term(solution).contains(&meta_id) {
            return Err(TypeError::Internal {
                message: format!("Occurs check failed for ?m{}", meta_id.0),
            });
        }

        // Check solution is well-scoped
        if let Some(meta_info) = self.metas.get(&meta_id) {
            if !self.is_well_scoped(solution, &meta_info.context) {
                return Err(TypeError::Internal {
                    message: format!("Solution for ?m{} is not well-scoped", meta_id.0),
                });
            }
        }

        // Record solution
        self.substitution.insert(meta_id, solution.clone());

        if let Some(info) = self.metas.get_mut(&meta_id) {
            info.solution = Some(solution.clone());
        }

        Ok(true)
    }

    /// Check if a term is well-scoped in a context
    fn is_well_scoped(&self, term: &Term, context: &[String]) -> bool {
        match term {
            Term::Var(x) => {
                // Check if it's a bound variable or a global constant
                context.contains(x)
                    || self.context.lookup(x).is_some()
                    || self.context.lookup_axiom(x).is_some()
                    || self.context.lookup_constructor(x).is_some()
            }
            Term::App(f, arg) => {
                self.is_well_scoped(f, context) && self.is_well_scoped(arg, context)
            }
            Term::Abs(x, ty, body) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Pi(x, ty, body, _) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Const(name) => {
                // Constants should be in the global context
                self.context.lookup_axiom(name).is_some()
                    || self.context.lookup_constructor(name).is_some()
            }
            _ => true, // Sorts, meta-variables, etc.
        }
    }

    /// Alpha equality check
    fn alpha_equal(&self, t1: &Term, t2: &Term) -> bool {
        self.alpha_equal_aux(t1, t2, &mut vec![])
    }

    #[allow(clippy::only_used_in_recursion)]
    fn alpha_equal_aux(&self, t1: &Term, t2: &Term, renamings: &mut Vec<(String, String)>) -> bool {
        match (t1, t2) {
            (Term::Var(x), Term::Var(y)) => {
                // Walk the renamings innermost-first: if either variable is
                // bound, both must be bound by the same binder pair, otherwise
                // shadowed bindings would compare equal by accident.
                for (a, b) in renamings.iter().rev() {
                    match (a == x, b == y) {
                        (true, true) => return true,
                        (true, false) | (false, true) => return false,
                        _ => {}
                    }
                }
                x == y
            }
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                self.alpha_equal_aux(f1, f2, renamings) && self.alpha_equal_aux(a1, a2, renamings)
            }
            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Sort(u1), Term::Sort(u2)) => u1 == u2,
            (Term::Const(c1), Term::Const(c2)) => c1 == c2,
            (Term::Meta(m1), Term::Meta(m2)) => m1 == m2,
            _ => false,
        }
    }

    /// Rename a variable in a term
    #[allow(clippy::only_used_in_recursion)]
    fn rename_var(&self, old: &str, new: &str, term: &Term) -> Term {
        match term {
            Term::Var(x) if x == old => Term::Var(new.to_string()),
            Term::Var(x) => Term::Var(x.clone()),
            Term::App(f, arg) => Term::App(
                Box::new(self.rename_var(old, new, f)),
                Box::new(self.rename_var(old, new, arg)),
            ),
            Term::Abs(x, ty, body) if x != old => Term::Abs(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                Box::new(self.rename_var(old, new, body)),
            ),
            // Shadowing binder: the annotation is still in scope of `old`,
            // but the body is not.
            Term::Abs(x, ty, body) => Term::Abs(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                body.clone(),
            ),
            Term::Pi(x, ty, body, implicit) if x != old => Term::Pi(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                Box::new(self.rename_var(old, new, body)),
                *implicit,
            ),
            Term::Pi(x, ty, body, implicit) => Term::Pi(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                body.clone(),
                *implicit,
            ),
            _ => term.clone(),
        }
    }

    /// Propagate solutions to dependent constraints
    fn propagate_solution(&mut self) -> Result<(), TypeError> {
        // Apply substitution to all remaining constraints
        let constraints: Vec<_> = self.constraints.drain().collect();

        for (id, constraint) in constraints {
            let updated = self.apply_subst_to_constraint(constraint);
            self.constraints.insert(id, updated);
        }

        Ok(())
    }

    /// Apply substitution to a constraint
    fn apply_subst_to_constraint(&self, constraint: Constraint) -> Constraint {
        match constraint {
            Constraint::Unify {
                id,
                left,
                right,
                strength,
            } => Constraint::Unify {
                id,
                left: self.substitution.apply(&left),
                right: self.substitution.apply(&right),
                strength,
            },
            Constraint::HasType {
                id,
                term,
                expected_type,
            } => Constraint::HasType {
                id,
                term: self.substitution.apply(&term),
                expected_type: self.substitution.apply(&expected_type),
            },
            Constraint::UnifyUniverse {
                id,
                left,
                right,
                strength,
            } => Constraint::UnifyUniverse {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
                strength,
            },
            Constraint::UniverseLevel { id, left, right } => Constraint::UniverseLevel {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
            },
            Constraint::Delayed {
                id,
                constraint,
                waiting_on,
            } => Constraint::Delayed {
                id,
                constraint: Box::new(self.apply_subst_to_constraint(*constraint)),
                waiting_on,
            },
        }
    }

    /// Solve a HasType constraint by type inference and unification
    fn solve_has_type_constraint(
        &mut self,
        term: &Term,
        expected_type: &Term,
    ) -> Result<bool, TypeError> {
        match term {
            Term::Meta(meta_name) => {
                // If the term is a meta-variable, we can potentially solve it
                if let Some(meta_id) = self.parse_meta_name(meta_name) {
                    if let Some(meta_info) = self.metas.get(&meta_id) {
                        if meta_info.solution.is_none() {
                            // Try to construct a term that has the expected type
                            if let Some(solution) = self.synthesize_term_of_type(expected_type)? {
                                return self.solve_meta(meta_id, &solution);
                            }
                        }
                    }
                }
                // If we can't solve the meta, try unifying with expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }

            Term::App(f, arg) => {
                // For applications, we need to ensure the function type is compatible
                // Create a fresh meta for the function type: ?F : ?A -> ?B
                let arg_type_meta_id = self.fresh_meta(vec![], None);
                let result_type_meta_id = self.fresh_meta(vec![], None);
                let arg_type_meta = Term::Meta(format!("?m{}", arg_type_meta_id.0));
                let result_type_meta = Term::Meta(format!("?m{}", result_type_meta_id.0));
                let func_type = Term::Pi(
                    "_".to_string(),
                    Box::new(arg_type_meta.clone()),
                    Box::new(result_type_meta.clone()),
                    false,
                );

                // Add constraints:
                // 1. f : ?A -> ?B
                // 2. arg : ?A
                // 3. ?B ≡ expected_type
                let f_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: f.as_ref().clone(),
                    expected_type: func_type,
                };
                let arg_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: arg.as_ref().clone(),
                    expected_type: arg_type_meta,
                };
                let result_constraint = Constraint::Unify {
                    id: self.new_constraint_id(),
                    left: result_type_meta,
                    right: expected_type.clone(),
                    strength: ConstraintStrength::Required,
                };

                self.add_constraint(f_constraint);
                self.add_constraint(arg_constraint);
                self.add_constraint(result_constraint);

                Ok(true)
            }

            Term::Abs(param, param_type, body) => {
                // For lambda abstractions, expected type should be a Pi type
                match expected_type {
                    Term::Pi(_, expected_param_type, expected_body_type, _) => {
                        // Unify parameter types and check body type
                        let param_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: param_type.as_ref().clone(),
                            right: expected_param_type.as_ref().clone(),
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: expected_body_type.as_ref().clone(),
                        };

                        self.add_constraint(param_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    Term::Meta(_) => {
                        // Expected type is a meta-variable, create a Pi type for it
                        let body_type_meta_id = self.fresh_meta(vec![param.clone()], None);
                        let body_type_meta = Term::Meta(format!("?m{}", body_type_meta_id.0));
                        let pi_type = Term::Pi(
                            param.clone(),
                            param_type.clone(),
                            Box::new(body_type_meta.clone()),
                            false,
                        );

                        let type_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: expected_type.clone(),
                            right: pi_type,
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: body_type_meta,
                        };

                        self.add_constraint(type_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    _ => {
                        // Type mismatch - lambda can't have non-function type
                        Err(TypeError::TypeMismatch {
                            expected: expected_type.clone(),
                            actual: Term::Pi(
                                param.clone(),
                                param_type.clone(),
                                Box::new(Term::Meta("?unknown".to_string())),
                                false,
                            ),
                        })
                    }
                }
            }

            _ => {
                // For other terms, just unify with the expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }
        }
    }

    /// Attempt to synthesize a term of a given type (basic heuristics)
    fn synthesize_term_of_type(&self, expected_type: &Term) -> Result<Option<Term>, TypeError> {
        match expected_type {
            Term::Sort(_) => {
                // For Sort types, we could return a fresh meta or a simple type
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
            Term::Pi(param, param_type, _body_type, _implicit) => {
                // For Pi types, synthesize a lambda that matches the Pi structure
                // Use a fresh meta-variable for the body to avoid infinite recursion
                let body_term = Term::Meta(format!("?body{}", self.next_meta_id));

                Ok(Some(Term::Abs(
                    param.clone(),
                    param_type.clone(),
                    Box::new(body_term),
                )))
            }
            Term::Meta(_) => {
                // Can't synthesize for unknown types
                Ok(None)
            }
            _ => {
                // For concrete types, return a meta-variable
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
        }
    }

    /// Check if a term is a Miller pattern
    /// Miller patterns are terms of the form: ?M x1 x2 ... xn where xi are
    /// distinct bound variables
    fn is_miller_pattern(&self, term: &Term) -> bool {
        let (head, args) = self.decompose_application(term);

        match head {
            Term::Meta(_) => {
                // Check that all arguments are distinct variables
                let mut seen = HashSet::new();
                for arg in args {
                    if let Term::Var(x) = arg {
                        if !seen.insert(x.clone()) {
                            return false; // Not distinct
                        }
                    } else {
                        return false; // Not a variable
                    }
                }
                true
            }
            _ => false,
        }
    }

    /// Extract the spine of a Miller pattern (the meta-variable and its
    /// arguments)
    fn extract_miller_spine(&self, term: &Term) -> Option<(String, Vec<String>)> {
        // First validate it's actually a Miller pattern
        if !self.is_miller_pattern(term) {
            return None;
        }

        let (head, args) = self.decompose_application(term);

        if let Term::Meta(meta_name) = head {
            // Collect variable names (we already know they're all variables from
            // is_miller_pattern)
            let var_names: Vec<String> = args
                .iter()
                .map(|arg| {
                    if let Term::Var(name) = arg {
                        name.clone()
                    } else {
                        unreachable!()
                    }
                })
                .collect();

            Some((meta_name.clone(), var_names))
        } else {
            None
        }
    }

    /// Solve simple Miller pattern unification
    /// Only handles simple cases
    fn solve_miller_pattern(
        &mut self,
        meta_name: &str,
        spine: &[String],
        other: &Term,
    ) -> Result<bool, TypeError> {
        // Check if we can abstract over the spine variables
        if !Self::can_abstract_over_variables(other, spine) {
            // Can't abstract, fall back to regular unification
            return self.unify(
                &Term::Meta(meta_name.to_string()),
                other,
                ConstraintStrength::Required,
            );
        }

        if let Some(meta_id) = self.parse_meta_name(meta_name) {
            // Only handle simple cases to avoid complexity
            match other {
                // Case 1: ?M x ≡ y (different variable) -> ?M := λx. y
                Term::Var(y) if spine.len() == 1 && spine[0] != *y => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))), /* Type inferred later */
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                // Case 2: ?M x ≡ c (constant) -> ?M := λx. c
                Term::Const(_) | Term::Sort(_) if spine.len() == 1 => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))),
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                _ => {
                    // For more complex cases, we could build the full lambda
                    // abstraction but for now, fall back to
                    // regular unification
                }
            }
        }

        // Fall back to regular unification
        self.unify(
            &Term::Meta(meta_name.to_string()),
            other,
            ConstraintStrength::Required,
        )
    }

    /// Check if a term can be abstracted over the given variables
    fn can_abstract_over_variables(term: &Term, vars: &[String]) -> bool {
        // For now, implement a simple check
        // In a full implementation, this would check that the term only uses variables
        // that are either in the spine or are bound locally
        match term {
            Term::Var(v) => vars.contains(v), // Variable must be in spine
            Term::App(f, arg) => {
                Self::can_abstract_over_variables(f, vars)
                    && Self::can_abstract_over_variables(arg, vars)
            }
            Term::Abs(param, ty, body) => {
                // Extend the scope with the bound parameter
                let mut extended_vars = vars.to_vec();
                extended_vars.push(param.clone());
                Self::can_abstract_over_variables(ty, vars)
                    && Self::can_abstract_over_variables(body, &extended_vars)
            }
            Term::Meta(_) => true,  // Meta-variables are always abstractable
            Term::Const(_) => true, // Constants are always abstractable
            Term::Sort(_) => true,  // Sorts are always abstractable
            _ => false,             // Conservative for complex constructs
        }
    }

    /// Delay a constraint for later processing
    fn delay_constraint(
        &mut self,
        constraint_id: ConstraintId,
        waiting_on: HashSet<MetaId>,
    ) -> Result<(), TypeError> {
        if let Some(constraint) = self.constraints.remove(&constraint_id) {
            let delayed = Constraint::Delayed {
                id: constraint_id,
                constraint: Box::new(constraint),
                waiting_on,
            };
            self.constraints.insert(constraint_id, delayed);
            self.delayed_constraints.insert(constraint_id);
        }
        Ok(())
    }

    /// Wake up delayed constraints that are no longer blocked
    fn wake_delayed_constraints(&mut self) -> Result<Vec<ConstraintId>, TypeError> {
        let mut woken = Vec::new();
        let mut to_wake = Vec::new();

        // Check which delayed constraints can now be woken up
        for constraint_id in &self.delayed_constraints {
            if let Some(Constraint::Delayed {
                waiting_on,
                constraint,
                ..
            }) = self.constraints.get(constraint_id)
            {
                // Check if all blocking metas are now solved
                let all_solved = waiting_on.iter().all(|meta_id| {
                    self.metas
                        .get(meta_id)
                        .is_some_and(|info| info.solution.is_some())
                });

                if all_solved {
                    to_wake.push((*constraint_id, constraint.as_ref().clone()));
                }
            }
        }

        // Wake up the constraints
        for (constraint_id, original_constraint) in to_wake {
            self.constraints.insert(constraint_id, original_constraint);
            self.delayed_constraints.remove(&constraint_id);
            woken.push(constraint_id);
        }

        Ok(woken)
    }

    /// Check if we can delay a constraint
    fn can_delay(&self, constraint_id: &ConstraintId) -> bool {
        if let Some(constraint) = self.constraints.get(constraint_id) {
            match constraint {
                Constraint::Unify {
                    left,
                    right,
                    strength,
                    ..
                } => {
                    // We can delay weak constraints that involve unsolved metas
                    if *strength == ConstraintStrength::Weak {
                        let left_metas = self.collect_metas_in_term(left);
                        let right_metas = self.collect_metas_in_term(right);

                        // Check if any involved metas are unsolved
                        for meta_id in left_metas.iter().chain(right_metas.iter()) {
                            if self.is_unsolved_meta(meta_id) {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::HasType {
                    term,
                    expected_type,
                    ..
                } => {
                    // Can delay HasType constraints involving unsolved metas
                    let term_metas = self.collect_metas_in_term(term);
                    let type_metas = self.collect_metas_in_term(expected_type);

                    for meta_id in term_metas.iter().chain(type_metas.iter()) {
                        if let Some(meta_info) = self.metas.get(meta_id) {
                            if meta_info.solution.is_none() {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::Delayed { .. } => true, // Already delayed
                _ => false,
            }
        } else {
            false
        }
    }

    /// Check if a meta-variable is unsolved
    fn is_unsolved_meta(&self, meta_id: &MetaId) -> bool {
        self.metas
            .get(meta_id)
            .is_some_and(|info| info.solution.is_none())
    }
}
/// Information about a meta-variable
#[derive(Debug, Clone)]
pub struct MetaInfo {
    pub id: MetaId,
    pub context: Vec<String>, // Variables in scope when meta was created
    pub expected_type: Option<Term>, // Expected type of the meta-variable
    pub solution: Option<Term>, // Solution if solved
    pub dependencies: HashSet<MetaId>, // Meta-variables this depends on
    pub occurrences: Vec<ConstraintId>, // Constraints this meta appears in
}
}

Meta-variables represent unknown terms that get resolved through unification and constraint solving. Our implementation tracks meta-variable dependencies and applies Miller pattern unification and constraint propagation to determine unique solutions where they exist.
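The heart of the pattern-unification step can be seen in isolation. The following is a standalone sketch with its own simplified `Term` type (not the book's): a Miller pattern `?M x1 ... xn ≡ rhs` has the unique solution `?M := λx1. ... λxn. rhs`, provided `rhs` mentions only the spine variables.

```rust
// Simplified standalone model of Miller pattern solving (illustration only).
#[derive(Debug, Clone, PartialEq)]
enum Term {
    Var(String),
    Const(String),
    App(Box<Term>, Box<Term>),
    Abs(String, Box<Term>),
}

fn solve_pattern(spine: &[String], rhs: &Term) -> Option<Term> {
    // Every free variable of rhs must come from the spine (binders extend scope).
    fn in_scope(t: &Term, scope: &[String]) -> bool {
        match t {
            Term::Var(v) => scope.contains(v),
            Term::App(f, a) => in_scope(f, scope) && in_scope(a, scope),
            Term::Abs(x, b) => {
                let mut s = scope.to_vec();
                s.push(x.clone());
                in_scope(b, &s)
            }
            Term::Const(_) => true,
        }
    }
    if !in_scope(rhs, spine) {
        return None; // rhs escapes the pattern's scope: no unique solution
    }
    // Wrap rhs in one lambda per spine variable, outermost first.
    Some(spine
        .iter()
        .rev()
        .fold(rhs.clone(), |acc, x| Term::Abs(x.clone(), Box::new(acc))))
}

fn main() {
    // ?M x ≡ c  solves uniquely to  ?M := λx. c
    let sol = solve_pattern(&["x".to_string()], &Term::Const("c".to_string()));
    assert_eq!(
        sol,
        Some(Term::Abs(
            "x".to_string(),
            Box::new(Term::Const("c".to_string()))
        ))
    );
    // ?M x ≡ y  has no solution: y is not in the spine
    assert!(solve_pattern(&["x".to_string()], &Term::Var("y".to_string())).is_none());
}
```

The full implementation above additionally falls back to ordinary unification whenever the right-hand side cannot be abstracted over the spine.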

Takeaways

The Calculus of Constructions represents a unique convergence of computational and logical capabilities that distinguishes it from other type systems. The fundamental characteristic that sets CoC apart is its unification of terms, types, and kinds within a single syntactic framework, eliminating the artificial stratification found in traditional type systems. This unification enables unprecedented expressiveness where types can depend on computational values, creating a system where mathematical specifications and their implementations coexist naturally within the same linguistic framework.

The definitional equality mechanism provides computational content to the type system through β-reduction, η-conversion, and definition unfolding, creating a rich equational theory that recognizes when different syntactic expressions represent the same computational object. This equality extends beyond simple syntactic matching to include semantic equivalence, enabling the type checker to recognize mathematical identities and computational transformations automatically. The normalization algorithm ensures that definitional equality checking remains decidable while providing the computational power needed for type-level computations.
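Stated as rules, the three conversions that make up this equational theory are:

```latex
(\lambda x : A.\; t)\, s \;\equiv\; t[x := s]
    \qquad (\beta)

\lambda x : A.\; (f\, x) \;\equiv\; f
    \quad \text{if } x \notin \mathrm{FV}(f)
    \qquad (\eta)

c \;\equiv\; t
    \quad \text{if } c \text{ is defined by } c := t
    \qquad (\delta)
```

For example, β-reduction alone makes `(fun x : Nat => x) 3` definitionally equal to `3`, so the type checker accepts either form wherever the other is expected.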

Inductive Types

Inductive types form the foundation of data structures in the Calculus of Constructions, providing a systematic way to define recursive types with constructors and elimination principles. Our implementation supports universe-polymorphic inductive types with dependent pattern matching, enabling data type abstractions while maintaining logical consistency through the universe hierarchy.

Inductive types represent a crucial extension to the pure lambda calculus, allowing the definition of concrete data structures like natural numbers, lists, and trees through constructor-based specifications. The implementation provides both syntactic support for inductive declarations and semantic handling through specialized type checking algorithms that ensure constructor consistency and exhaustive pattern coverage.

Consider these intuitive examples that demonstrate the fundamental concept of inductive types. A simple enumeration like the days of the week can be expressed as an inductive type with seven constructors:

inductive DayOfWeek : Type with
  | Monday    : DayOfWeek
  | Tuesday   : DayOfWeek  
  | Wednesday : DayOfWeek
  | Thursday  : DayOfWeek
  | Friday    : DayOfWeek
  | Saturday  : DayOfWeek
  | Sunday    : DayOfWeek

Similarly, primary colors form another natural inductive type with three distinct constructors:

inductive Color : Type with
  | Red   : Color
  | Green : Color  
  | Blue  : Color

These examples illustrate how inductive types provide a systematic way to define finite sets of distinct values through constructor declarations, with each constructor serving as both a data constructor and a proof that the constructed value belongs to the inductive type.
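For intuition, a Rust `enum` plays the same role (this Rust rendering is purely illustrative; it is not part of the book's object language). Exhaustive pattern matching over the constructors mirrors the exhaustiveness checking our type checker performs for `match` expressions:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Color {
    Red,
    Green,
    Blue,
}

// The compiler verifies every constructor is covered, just as the
// elaborator checks pattern coverage for inductive types.
fn to_rgb(c: Color) -> (u8, u8, u8) {
    match c {
        Color::Red => (255, 0, 0),
        Color::Green => (0, 255, 0),
        Color::Blue => (0, 0, 255),
    }
}

fn main() {
    println!("{:?}", to_rgb(Color::Green)); // (0, 255, 0)
}
```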

More complex inductive types can include recursive constructors that reference the type being defined, enabling data structures like binary trees:

inductive BinaryTree (A : Type) : Type with
  | Leaf : BinaryTree A
  | Node : A -> BinaryTree A -> BinaryTree A -> BinaryTree A

This binary tree definition demonstrates several important concepts: the type is parameterized by an element type A, the Leaf constructor creates empty trees, and the Node constructor takes a value of type A along with two subtrees to create larger trees. The recursive nature of the Node constructor enables the construction of arbitrarily deep tree structures while maintaining type safety through the inductive type system.
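An even simpler recursive example is the natural numbers, where the Succ constructor refers back to Nat without any type parameters:

inductive Nat : Type with
  | Zero : Nat
  | Succ : Nat -> Nat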

Inductive Type Declarations

The core data structure for inductive type declarations captures all the essential components needed for constructor-based type definitions:

#![allow(unused)]
fn main() {
/// Top-level declarations
#[derive(Debug, Clone, PartialEq)]
pub enum Declaration {
    /// Constant definition: def name {u...} (params) : type := body
    Definition {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        params: Vec<Parameter>,
        ty: Term,
        body: Term,
    },

    /// Axiom: axiom name {u...} : type
    Axiom {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        ty: Term,
    },

    /// Inductive type: inductive Name {u...} (params) : Type := constructors
    Inductive {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        params: Vec<Parameter>,
        ty: Term,
        constructors: Vec<Constructor>,
    },

    /// Structure (single constructor inductive): structure Name {u...} :=
    /// (fields)
    Structure {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        params: Vec<Parameter>,
        ty: Term,
        fields: Vec<Field>,
    },
}
/// Function parameters
#[derive(Debug, Clone, PartialEq)]
pub struct Parameter {
    pub name: String,
    pub ty: Term,
    pub implicit: bool, // for {x : A} vs (x : A)
}
/// Constructor for inductive types
#[derive(Debug, Clone, PartialEq)]
pub struct Constructor {
    pub name: String,
    pub ty: Term,
}
/// Structure fields
#[derive(Debug, Clone, PartialEq)]
pub struct Field {
    pub name: String,
    pub ty: Term,
}
}

Inductive declarations specify a type name, optional universe parameters for universe polymorphism, type parameters for generic types, a result type specification, and a collection of constructor definitions. This structure enables complex inductive types ranging from simple enumerations to universe-polymorphic families of types.
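As a concrete illustration, the Color declaration from earlier can be built as an AST value. This sketch uses simplified local mirrors of the book's types (params, the other declaration forms, and most Term variants are omitted for brevity), so the shapes are illustrative rather than the real definitions:

```rust
#[derive(Debug, Clone, PartialEq)]
enum Universe {
    Const(u32),
}

#[derive(Debug, Clone, PartialEq)]
enum Term {
    Var(String),
    Sort(Universe),
}

#[derive(Debug, Clone, PartialEq)]
struct Constructor {
    name: String,
    ty: Term,
}

#[derive(Debug, Clone, PartialEq)]
enum Declaration {
    Inductive {
        name: String,
        universe_params: Vec<String>,
        ty: Term,
        constructors: Vec<Constructor>,
    },
}

fn main() {
    // inductive Color : Type with | Red | Green | Blue,
    // where each constructor simply returns the inductive type itself.
    let color = Declaration::Inductive {
        name: "Color".to_string(),
        universe_params: Vec::new(),
        ty: Term::Sort(Universe::Const(1)), // Type = Sort 1 in this encoding
        constructors: ["Red", "Green", "Blue"]
            .iter()
            .map(|c| Constructor {
                name: c.to_string(),
                ty: Term::Var("Color".to_string()),
            })
            .collect(),
    };

    if let Declaration::Inductive { name, constructors, .. } = &color {
        assert_eq!(name, "Color");
        assert_eq!(constructors.len(), 3);
        println!("declared {} with {} constructors", name, constructors.len());
    }
}
```

Each constructor's stored type is just the inductive type itself because Color has no parameters; parameterized types fold their parameters into the constructor types, as discussed below.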

#![allow(unused)]
fn main() {
/// Constructor for inductive types
#[derive(Debug, Clone, PartialEq)]
pub struct Constructor {
    pub name: String,
    pub ty: Term,
}
}

Constructor definitions associate names with their type signatures, enabling the type checker to verify that constructor applications respect the declared interfaces. Each constructor must produce a value of the inductive type it belongs to, maintaining the logical coherence of the type system.
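Concretely, for the parameterized BinaryTree type above, the type parameter becomes a leading Π binder in each stored constructor type, so the declared constructors elaborate to signatures like:

Leaf : (A : Type) → BinaryTree A
Node : (A : Type) → A → BinaryTree A → BinaryTree A → BinaryTree A

These leading binders are what must be instantiated with concrete type arguments whenever a constructor is used at a specific type.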

Constructor Type Specialization

When inductive types include parameters, constructor types must be specialized appropriately for each use context:

#![allow(unused)]
fn main() {
/// Specialize a constructor type by instantiating type parameters to match
/// the expected type
fn specialize_constructor_type(
    &mut self,
    ctor_type: &Term,
    expected_type: &Term,
    _ctx: &Context,
) -> TypeResult<Term> {
    // For a constructor like nil : (A : Type) → List A and an expected
    // type List B, we need to instantiate A with B.

    // Collect the leading type parameters (Pi binders whose domain is a
    // sort); these are the variables we will substitute for.
    let mut current_type = ctor_type.clone();
    let mut substitutions = Vec::new();
    while let Term::Pi(param_name, param_ty, body, _implicit) = &current_type {
        if matches!(param_ty.as_ref(), Term::Sort(_)) {
            substitutions.push(param_name.clone());
            current_type = body.as_ref().clone();
        } else {
            break;
        }
    }

    // No type parameters: the constructor type is already concrete.
    if substitutions.is_empty() {
        return Ok(ctor_type.clone());
    }

    // Extract the type arguments from the expected type. For an applied
    // type like Pair A B this collects [A, B] in left-to-right order.
    let mut type_args = Vec::new();
    let mut current_expected = expected_type;
    while let Term::App(f, arg) = current_expected {
        type_args.push(arg.as_ref().clone());
        current_expected = f;
    }
    type_args.reverse();

    // Substitute each type argument for its parameter, peeling one Pi
    // binder per substitution. More complex cases (partial application,
    // or arguments that are themselves metavariables) would need proper
    // unification.
    let mut specialized = ctor_type.clone();
    for (param, arg) in substitutions.iter().zip(type_args.iter()) {
        if let Term::Pi(p, _, body, _) = &specialized {
            if p == param {
                specialized = self.substitute(param, arg, body);
            }
        }
    }
    Ok(specialized)
}
}

Constructor specialization handles the complex process of instantiating generic constructor types with specific type arguments. The algorithm traverses constructor type signatures and replaces type parameters with concrete arguments while preserving the constructor’s structural properties. This process enables generic inductive types like List A to work correctly when instantiated with specific element types.

The specialization process maintains universe polymorphism by properly handling universe parameters in constructor types. When a constructor belongs to a universe-polymorphic inductive type, the specialization algorithm ensures that universe constraints are preserved throughout the instantiation process.
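To make the traversal concrete, here is a runnable, self-contained sketch of the same peel-and-substitute idea on a stripped-down term language. The Term here is a hypothetical four-variant mirror, not the book's full AST, and the substitution is deliberately capture-naive (fine for this demo, where type arguments are closed terms):

```rust
#[derive(Debug, Clone, PartialEq)]
enum Term {
    Var(String),
    App(Box<Term>, Box<Term>),
    Pi(String, Box<Term>, Box<Term>),
    Sort, // stands in for Type
}

/// Capture-naive substitution of `replacement` for `var` in `term`.
fn substitute(var: &str, replacement: &Term, term: &Term) -> Term {
    match term {
        Term::Var(x) if x == var => replacement.clone(),
        Term::Var(_) | Term::Sort => term.clone(),
        Term::App(f, a) => Term::App(
            Box::new(substitute(var, replacement, f)),
            Box::new(substitute(var, replacement, a)),
        ),
        Term::Pi(x, dom, body) => {
            let new_dom = substitute(var, replacement, dom);
            let new_body = if x == var {
                body.as_ref().clone() // binder shadows var: stop here
            } else {
                substitute(var, replacement, body)
            };
            Term::Pi(x.clone(), Box::new(new_dom), Box::new(new_body))
        }
    }
}

fn specialize(ctor_type: &Term, expected: &Term) -> Term {
    // 1. Peel the leading type parameters: Pi binders whose domain is a sort.
    let mut params = Vec::new();
    let mut body = ctor_type.clone();
    while let Term::Pi(name, dom, cod) = &body {
        if matches!(dom.as_ref(), Term::Sort) {
            params.push(name.clone());
            body = cod.as_ref().clone();
        } else {
            break;
        }
    }
    // 2. Extract type arguments from the expected type: List Nat yields [Nat].
    let mut args = Vec::new();
    let mut head = expected;
    while let Term::App(f, a) = head {
        args.push(a.as_ref().clone());
        head = f;
    }
    args.reverse();
    // 3. Substitute each argument for its parameter in the peeled body.
    for (p, a) in params.iter().zip(args.iter()) {
        body = substitute(p, a, &body);
    }
    body
}

fn main() {
    // nil : (A : Type) → List A, checked against the expected type List Nat.
    let list = |elem: Term| Term::App(Box::new(Term::Var("List".into())), Box::new(elem));
    let ctor = Term::Pi(
        "A".into(),
        Box::new(Term::Sort),
        Box::new(list(Term::Var("A".into()))),
    );
    let expected = list(Term::Var("Nat".into()));
    assert_eq!(specialize(&ctor, &expected), expected);
    println!("specialized: {:?}", specialize(&ctor, &expected));
}
```

The real implementation differs in that it works over the full Term language, tracks universe constraints, and returns a TypeResult, but the shape of the traversal is the same.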

Pattern Matching Implementation

Pattern matching provides the elimination principle for inductive types, allowing programs to analyze inductive values by case analysis:

#![allow(unused)]
fn main() {
use std::fmt;
/// Core terms in the Calculus of Constructions
#[derive(Debug, Clone, PartialEq)]
pub enum Term {
    /// Variables: x, y, z
    Var(String),

    /// Application: f a
    App(Box<Term>, Box<Term>),

    /// Lambda abstraction: λ x : A, t  (written as fun x : A => t)
    Abs(String, Box<Term>, Box<Term>),

    /// Dependent product: Π x : A, B  (written as (x : A) → B or {x : A} → B)
    Pi(String, Box<Term>, Box<Term>, bool), // bool = implicit

    /// Sort/Type: Sort u, Type, Prop
    Sort(Universe),

    /// Let binding: let x := t in s
    Let(String, Box<Term>, Box<Term>, Box<Term>),

    /// Match expression with patterns
    Match(Box<Term>, Vec<MatchArm>),

    /// Inductive type constructor
    Constructor(String, Vec<Term>),

    /// Local constant (for definitions and axioms)
    Const(String),

    /// Meta-variable for type inference
    Meta(String),

    /// Field projection: term.field
    Proj(Box<Term>, String),

    /// Dependent pair type (Sigma type): Σ (x : A), B
    Sigma(String, Box<Term>, Box<Term>),

    /// Pair constructor: ⟨a, b⟩
    Pair(Box<Term>, Box<Term>),

    /// First projection: π₁
    Fst(Box<Term>),

    /// Second projection: π₂
    Snd(Box<Term>),
}
/// Universe levels for type theory
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Universe {
    Const(u32),                         // Concrete level: 0, 1, 2, ...
    ScopedVar(String, String),          // Scoped universe variable: (scope_name, var_name)
    Meta(u32),                          // Universe metavariable: ?u₀, ?u₁, ...
    Add(Box<Universe>, u32),            // Level + n
    Max(Box<Universe>, Box<Universe>),  // max(u, v)
    IMax(Box<Universe>, Box<Universe>), // imax(u, v)
}
/// Universe level constraints for polymorphism
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum UniverseConstraint {
    /// u ≤ v
    LessEq(Universe, Universe),
    /// u = v
    Equal(Universe, Universe),
}
/// Context for universe level variables
#[derive(Debug, Clone, PartialEq)]
pub struct UniverseContext {
    /// Currently bound universe variables
    pub variables: Vec<String>,
    /// Constraints on universe levels
    pub constraints: Vec<UniverseConstraint>,
}
/// Pattern matching arms
#[derive(Debug, Clone, PartialEq)]
pub struct MatchArm {
    pub pattern: Pattern,
    pub body: Term,
}
impl fmt::Display for Pattern {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Pattern::Var(x) => write!(f, "{}", x),
            Pattern::Constructor(name, args) => {
                write!(f, "{}", name)?;
                for arg in args {
                    write!(f, " {}", arg)?;
                }
                Ok(())
            }
            Pattern::Wildcard => write!(f, "_"),
        }
    }
}
impl Term {
    /// Check if variable occurs free in term
    pub fn occurs_free(var: &str, term: &Term) -> bool {
        match term {
            Term::Var(x) => x == var,
            Term::App(t1, t2) => Self::occurs_free(var, t1) || Self::occurs_free(var, t2),
            Term::Abs(x, ty, body) => {
                Self::occurs_free(var, ty) || (x != var && Self::occurs_free(var, body))
            }
            Term::Pi(x, ty, body, _) => {
                Self::occurs_free(var, ty) || (x != var && Self::occurs_free(var, body))
            }
            Term::Sort(_) => false,
            Term::Let(x, ty, val, body) => {
                Self::occurs_free(var, ty)
                    || Self::occurs_free(var, val)
                    || (x != var && Self::occurs_free(var, body))
            }
            Term::Match(scrutinee, arms) => {
                // Conservative: variables bound by arm patterns are not
                // excluded, so this can over-report free occurrences
                Self::occurs_free(var, scrutinee)
                    || arms.iter().any(|arm| Self::occurs_free(var, &arm.body))
            }
            Term::Constructor(_, args) => args.iter().any(|arg| Self::occurs_free(var, arg)),
            Term::Const(_) => false,
            Term::Meta(_) => false,
            Term::Proj(term, _) => Self::occurs_free(var, term),
            Term::Sigma(x, domain, codomain) => {
                Self::occurs_free(var, domain) || (x != var && Self::occurs_free(var, codomain))
            }
            Term::Pair(fst, snd) => Self::occurs_free(var, fst) || Self::occurs_free(var, snd),
            Term::Fst(pair) | Term::Snd(pair) => Self::occurs_free(var, pair),
        }
    }
}
impl UniverseContext {
    pub fn new() -> Self {
        UniverseContext {
            variables: Vec::new(),
            constraints: Vec::new(),
        }
    }

    /// Add a universe variable to the context
    pub fn add_var(&mut self, name: String) {
        if !self.variables.contains(&name) {
            self.variables.push(name);
        }
    }

    /// Add a constraint to the context
    pub fn add_constraint(&mut self, constraint: UniverseConstraint) {
        self.constraints.push(constraint);
    }

    /// Check if a universe variable is bound in this context
    pub fn contains(&self, name: &str) -> bool {
        self.variables.contains(&name.to_string())
    }

    /// Extend context with variables from another context
    pub fn extend(&self, other: &UniverseContext) -> Self {
        let mut new_ctx = self.clone();
        for var in &other.variables {
            new_ctx.add_var(var.clone());
        }
        for constraint in &other.constraints {
            new_ctx.add_constraint(constraint.clone());
        }
        new_ctx
    }

    /// Generate a fresh universe variable name
    pub fn fresh_var(&self, base: &str) -> String {
        let mut counter = 0;
        loop {
            let name = if counter == 0 {
                base.to_string()
            } else {
                format!("{}{}", base, counter)
            };
            if !self.contains(&name) {
                return name;
            }
            counter += 1;
        }
    }
}
impl Default for UniverseContext {
    fn default() -> Self {
        Self::new()
    }
}
impl Universe {
    /// Create a scoped universe variable
    pub fn scoped_var(scope: String, name: String) -> Self {
        Universe::ScopedVar(scope, name)
    }
}
/// Patterns for match expressions
#[derive(Debug, Clone, PartialEq)]
pub enum Pattern {
    /// Variable pattern: x
    Var(String),

    /// Constructor pattern: Cons x xs
    Constructor(String, Vec<Pattern>),

    /// Wildcard pattern: _
    Wildcard,
}
}

The pattern system supports constructor patterns that destructure inductive values, variable patterns that bind matched components to names, and wildcard patterns that match any value without binding. Together these three forms suffice for complete case analysis over inductive data.
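As a quick illustration, here is a self-contained sketch (duplicating the `Pattern` enum and its `Display` instance from the listing above) that builds and prints a constructor pattern:

```rust
use std::fmt;

// Standalone copy of the Pattern enum and its printer, just for this demo.
#[derive(Debug, Clone, PartialEq)]
enum Pattern {
    Var(String),
    Constructor(String, Vec<Pattern>),
    Wildcard,
}

impl fmt::Display for Pattern {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Pattern::Var(x) => write!(f, "{}", x),
            Pattern::Constructor(name, args) => {
                write!(f, "{}", name)?;
                for arg in args {
                    write!(f, " {}", arg)?;
                }
                Ok(())
            }
            Pattern::Wildcard => write!(f, "_"),
        }
    }
}

fn main() {
    // Cons x _ : destructure a list cell, bind the head, ignore the tail
    let pat = Pattern::Constructor(
        "Cons".to_string(),
        vec![Pattern::Var("x".to_string()), Pattern::Wildcard],
    );
    assert_eq!(pat.to_string(), "Cons x _");
}
```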

#![allow(unused)]
fn main() {
/// Pattern matching arms
#[derive(Debug, Clone, PartialEq)]
pub struct MatchArm {
    pub pattern: Pattern,
    pub body: Term,
}
}

Match arms associate patterns with result expressions, creating the case analysis structure that defines how pattern matching behaves. Each arm must produce a result of the same type, ensuring that pattern matching expressions have well-defined types regardless of which case is selected at runtime.
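The "all arms must agree" requirement can be sketched in isolation. The toy below is a hypothetical stand-in, not the project's checker: real arms carry patterns and term bodies, and each body's type comes from the type checker rather than being stored on the arm.

```rust
// Toy stand-ins: `Ty` replaces real terms, and an Arm records only the type
// the checker would infer for its body.
#[derive(Debug, Clone, PartialEq)]
enum Ty {
    Nat,
    Bool,
}

struct Arm {
    // pattern and body elided; only the body's inferred type matters here
    body_ty: Ty,
}

// Every arm must produce the same type; that type becomes the match's type.
fn check_arms(arms: &[Arm]) -> Result<Ty, String> {
    let mut it = arms.iter();
    let first = it.next().ok_or("match with no arms")?.body_ty.clone();
    for arm in it {
        if arm.body_ty != first {
            return Err(format!(
                "arm has type {:?}, expected {:?}",
                arm.body_ty, first
            ));
        }
    }
    Ok(first)
}

fn main() {
    let same = [Arm { body_ty: Ty::Nat }, Arm { body_ty: Ty::Nat }];
    assert_eq!(check_arms(&same), Ok(Ty::Nat));

    let mixed = [Arm { body_ty: Ty::Nat }, Arm { body_ty: Ty::Bool }];
    assert!(check_arms(&mixed).is_err());
}
```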

Pattern Type Checking

The type checking algorithm for pattern matching ensures that patterns are consistent with the types being matched and that all cases produce compatible result types:

#![allow(unused)]
fn main() {
/// Check a pattern against a type and return extended context with pattern
/// variables
fn check_pattern(
    &mut self,
    pattern: &Pattern,
    expected_type: &Term,
    ctx: &Context,
) -> TypeResult<Context> {
    match pattern {
        Pattern::Var(x) => {
            // Check if this is actually a constructor with no arguments
            if ctx.lookup_constructor(x).is_some() {
                // It's actually a constructor pattern with no arguments
                return self.check_pattern(
                    &Pattern::Constructor(x.clone(), vec![]),
                    expected_type,
                    ctx,
                );
            }
            // Variable pattern binds the scrutinee to the variable
            Ok(ctx.extend(x.clone(), expected_type.clone()))
        }

        Pattern::Wildcard => {
            // Wildcard matches anything, adds no bindings
            Ok(ctx.clone())
        }

        Pattern::Constructor(ctor_name, ctor_args) => {
            // Look up constructor type - check both constructors and definitions
            let (ctor_type, is_instantiated) =
                if let Some(ty) = ctx.lookup_constructor(ctor_name) {
                    (ty.clone(), false)
                } else if let Some(def) = ctx.lookup_definition(ctor_name) {
                    // Instantiate universe parameters for universe-polymorphic constructors
                    let instantiated = self.instantiate_universe_params(
                        &def.ty,
                        &def.universe_params,
                        &def.name,
                        ctx,
                    )?;
                    (instantiated, true)
                } else {
                    return Err(TypeError::UnknownConstructor {
                        name: ctor_name.clone(),
                    });
                };

            if is_instantiated {
                // For instantiated constructors, use constraint-based approach
                // The constructor type already has metavariables that can unify correctly
                let return_type = extract_constructor_return_type(&ctor_type);

                // Use constraint solving to check the return type matches expected type
                let mut solver = Solver::new(ctx.clone());
                let constraint_id = solver.new_constraint_id();
                let constraint = crate::solver::Constraint::Unify {
                    id: constraint_id,
                    left: return_type.clone(),
                    right: expected_type.clone(),
                    strength: ConstraintStrength::Required,
                };
                solver.add_constraint(constraint);
                let _subst = solver.solve()?; // This will error if unification fails

                // Extract argument types directly from instantiated constructor type
                let arg_types = extract_constructor_arg_types(&ctor_type);

                // Check sub-patterns against argument types
                if arg_types.len() != ctor_args.len() {
                    return Err(TypeError::ConstructorArityMismatch {
                        name: ctor_name.clone(),
                        expected: arg_types.len(),
                        actual: ctor_args.len(),
                    });
                }

                let mut extended_ctx = ctx.clone();
                for (arg_pattern, arg_type) in ctor_args.iter().zip(arg_types.iter()) {
                    extended_ctx = self.check_pattern(arg_pattern, arg_type, &extended_ctx)?;
                }
                Ok(extended_ctx)
            } else {
                // For regular constructors, use the existing specialization approach
                let specialized_ctor_type =
                    self.specialize_constructor_type(&ctor_type, expected_type, ctx)?;
                let return_type = extract_constructor_return_type(&specialized_ctor_type);

                if !self.definitionally_equal(&return_type, expected_type, ctx)? {
                    return Err(TypeError::TypeMismatch {
                        expected: expected_type.clone(),
                        actual: return_type,
                    });
                }

                // Extract argument types from specialized constructor type and check
                // sub-patterns
                let arg_types = extract_constructor_arg_types(&specialized_ctor_type);

                if ctor_args.len() != arg_types.len() {
                    return Err(TypeError::ConstructorArityMismatch {
                        name: ctor_name.clone(),
                        expected: arg_types.len(),
                        actual: ctor_args.len(),
                    });
                }

                let mut extended_ctx = ctx.clone();
                for (arg_pattern, arg_type) in ctor_args.iter().zip(arg_types.iter()) {
                    extended_ctx = self.check_pattern(arg_pattern, arg_type, &extended_ctx)?;
                }

                Ok(extended_ctx)
            }
        }
    }
}
}

Pattern type checking validates that constructor patterns match the structure of their corresponding constructors, that variable patterns receive appropriate types from the matched context, and that wildcard patterns are used correctly. The algorithm maintains a typing context that tracks the types of pattern variables for use in result expressions.

The pattern checker handles universe polymorphism by ensuring that constructor patterns properly instantiate universe-polymorphic constructors. When a pattern matches against a constructor from a universe-polymorphic inductive type, the checker verifies that universe constraints are satisfied throughout the pattern matching process.
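To make the universe-level bookkeeping concrete, here is a minimal standalone sketch, in the spirit of the `substitute_universe` and `normalize_universe` helpers shown later in this section. It uses a simplified `Universe` with bare named variables (an assumption of this sketch; the real type uses scoped variables) and shows how instantiating a universe parameter at a concrete level collapses `max`/`+` expressions:

```rust
use std::collections::HashMap;

// Simplified Universe for this sketch: scoped variables collapsed to names.
#[derive(Debug, Clone, PartialEq)]
enum Universe {
    Const(u32),
    Var(String),
    Add(Box<Universe>, u32),
    Max(Box<Universe>, Box<Universe>),
}

// Replace variables by their assigned levels, then simplify the result.
fn substitute(u: &Universe, subst: &HashMap<String, Universe>) -> Universe {
    let out = match u {
        Universe::Var(v) => subst.get(v).cloned().unwrap_or_else(|| u.clone()),
        Universe::Const(n) => Universe::Const(*n),
        Universe::Add(base, n) => Universe::Add(Box::new(substitute(base, subst)), *n),
        Universe::Max(a, b) => Universe::Max(
            Box::new(substitute(a, subst)),
            Box::new(substitute(b, subst)),
        ),
    };
    normalize(&out)
}

// Fold constant arithmetic: Const(m)+n => Const(m+n), max of constants => max.
fn normalize(u: &Universe) -> Universe {
    match u {
        Universe::Add(base, n) => match normalize(base) {
            Universe::Const(m) => Universe::Const(m + n),
            other => Universe::Add(Box::new(other), *n),
        },
        Universe::Max(a, b) => match (normalize(a), normalize(b)) {
            (Universe::Const(x), Universe::Const(y)) => Universe::Const(x.max(y)),
            (a2, b2) => Universe::Max(Box::new(a2), Box::new(b2)),
        },
        other => other.clone(),
    }
}

fn main() {
    // max(u+1, 2) with u := 0 collapses to the concrete level 2
    let expr = Universe::Max(
        Box::new(Universe::Add(Box::new(Universe::Var("u".to_string())), 1)),
        Box::new(Universe::Const(2)),
    );
    let mut subst = HashMap::new();
    subst.insert("u".to_string(), Universe::Const(0));
    assert_eq!(substitute(&expr, &subst), Universe::Const(2));
}
```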

Constructor Type Inference

Constructor applications in terms require specialized type inference to handle the interaction between constructor types and their arguments:

#![allow(unused)]
fn main() {
/// Infer the type of a term
pub fn infer(&mut self, term: &Term, ctx: &Context) -> TypeResult<Term> {
    self.with_context(term, |checker| checker.infer_impl(term, ctx))
}
}

Constructor type inference looks up constructor types from the context, specializes them with appropriate type arguments, and validates that the constructor is applied to arguments of the correct types. The algorithm handles both simple constructors and constructors that belong to universe-polymorphic inductive types.
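A reduced sketch of that extraction step, mirroring `extract_constructor_return_type` and `extract_constructor_arg_types` from the utilities below, but over a pared-down `Term` with only the variants they inspect (universes are collapsed to a bare level here, an assumption of this sketch):

```rust
// Pared-down Term: just enough variants to walk a constructor's Pi telescope.
#[derive(Debug, Clone, PartialEq)]
enum Term {
    Var(String),
    Const(String),
    App(Box<Term>, Box<Term>),
    Pi(String, Box<Term>, Box<Term>, bool),
    Sort(u32), // universes collapsed to a level for this sketch
}

/// Final codomain after stripping all Pi binders.
fn return_type(ctor: &Term) -> Term {
    match ctor {
        Term::Pi(_, _, codomain, _) => return_type(codomain),
        other => other.clone(),
    }
}

/// Argument types, skipping Sort-typed (i.e. type) parameters.
fn arg_types(ctor: &Term) -> Vec<Term> {
    let mut out = Vec::new();
    let mut current = ctor;
    while let Term::Pi(_, domain, codomain, _) = current {
        if !matches!(domain.as_ref(), Term::Sort(_)) {
            out.push(domain.as_ref().clone());
        }
        current = codomain;
    }
    out
}

fn main() {
    // Cons : (A : Type) -> A -> List A -> List A
    let list_a = Term::App(
        Box::new(Term::Const("List".to_string())),
        Box::new(Term::Var("A".to_string())),
    );
    let cons_ty = Term::Pi(
        "A".to_string(),
        Box::new(Term::Sort(1)),
        Box::new(Term::Pi(
            "x".to_string(),
            Box::new(Term::Var("A".to_string())),
            Box::new(Term::Pi(
                "xs".to_string(),
                Box::new(list_a.clone()),
                Box::new(list_a.clone()),
                false,
            )),
            false,
        )),
        false,
    );
    // Return type is List A; the Sort-typed parameter A is skipped.
    assert_eq!(return_type(&cons_ty), list_a);
    assert_eq!(
        arg_types(&cons_ty),
        vec![Term::Var("A".to_string()), list_a]
    );
}
```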

Pattern Matching Utilities

Several utility functions support the pattern matching implementation by providing operations on patterns and constructor types:

#![allow(unused)]
fn main() {
use std::collections::HashMap;
use crate::ast::{MatchArm, Pattern, Term, Universe};
/// Substitute universe variables in a term
pub fn substitute_universes(term: &Term, subst: &HashMap<String, Universe>) -> Term {
    match term {
        Term::Sort(u) => Term::Sort(substitute_universe(u, subst)),
        Term::Var(x) => Term::Var(x.clone()),
        Term::Const(c) => Term::Const(c.clone()),
        Term::App(f, arg) => Term::App(
            Box::new(substitute_universes(f, subst)),
            Box::new(substitute_universes(arg, subst)),
        ),
        Term::Abs(x, ty, body) => Term::Abs(
            x.clone(),
            Box::new(substitute_universes(ty, subst)),
            Box::new(substitute_universes(body, subst)),
        ),
        Term::Pi(x, ty, body, implicit) => Term::Pi(
            x.clone(),
            Box::new(substitute_universes(ty, subst)),
            Box::new(substitute_universes(body, subst)),
            *implicit,
        ),
        Term::Let(x, ty, val, body) => Term::Let(
            x.clone(),
            Box::new(substitute_universes(ty, subst)),
            Box::new(substitute_universes(val, subst)),
            Box::new(substitute_universes(body, subst)),
        ),
        _ => term.clone(), // Remaining variants are returned untraversed
    }
}
/// Substitute universe variables in a universe expression
pub fn substitute_universe(u: &Universe, subst: &HashMap<String, Universe>) -> Universe {
    let result = match u {
        Universe::ScopedVar(_scope, var) => {
            // For scoped variables, try to substitute using just the variable name
            // This allows substitution within the correct scope
            subst.get(var).cloned().unwrap_or_else(|| u.clone())
        }
        Universe::Const(n) => Universe::Const(*n),
        Universe::Add(u, n) => {
            let base = substitute_universe(u, subst);
            Universe::Add(Box::new(base), *n)
        }
        Universe::Max(u1, u2) => Universe::Max(
            Box::new(substitute_universe(u1, subst)),
            Box::new(substitute_universe(u2, subst)),
        ),
        Universe::IMax(u1, u2) => Universe::IMax(
            Box::new(substitute_universe(u1, subst)),
            Box::new(substitute_universe(u2, subst)),
        ),
        _ => u.clone(),
    };
    normalize_universe(&result)
}
/// Normalize a universe expression by simplifying it
pub fn normalize_universe(u: &Universe) -> Universe {
    match u {
        Universe::Add(base, n) => match normalize_universe(base) {
            Universe::Const(m) => Universe::Const(m + n),
            base_norm => Universe::Add(Box::new(base_norm), *n),
        },
        Universe::Max(u1, u2) => {
            let u1_norm = normalize_universe(u1);
            let u2_norm = normalize_universe(u2);
            match (&u1_norm, &u2_norm) {
                (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
            }
        }
        _ => u.clone(),
    }
}
/// Rename variable in term
pub fn rename_var(old: &str, new: &str, term: &Term) -> Term {
    match term {
        Term::Var(x) if x == old => Term::Var(new.to_string()),
        Term::Var(_) => term.clone(),
        Term::App(f, arg) => Term::App(
            Box::new(rename_var(old, new, f)),
            Box::new(rename_var(old, new, arg)),
        ),
        Term::Abs(x, ty, body) if x == old => {
            // `old` is shadowed by the binder: rename only in the type
            // annotation, keeping the binder and body unchanged
            Term::Abs(
                x.clone(),
                Box::new(rename_var(old, new, ty)),
                Box::new(body.as_ref().clone()),
            )
        }
        Term::Abs(x, ty, body) => Term::Abs(
            x.clone(),
            Box::new(rename_var(old, new, ty)),
            Box::new(rename_var(old, new, body)),
        ),
        Term::Pi(x, ty, body, implicit) if x == old => Term::Pi(
            x.clone(), // `old` is shadowed: keep binder and body unchanged
            Box::new(rename_var(old, new, ty)),
            Box::new(body.as_ref().clone()),
            *implicit,
        ),
        Term::Pi(x, ty, body, implicit) => Term::Pi(
            x.clone(),
            Box::new(rename_var(old, new, ty)),
            Box::new(rename_var(old, new, body)),
            *implicit,
        ),
        Term::Let(x, ty, val, body) if x == old => Term::Let(
            x.clone(), // `old` is shadowed in the body; ty and val still see it
            Box::new(rename_var(old, new, ty)),
            Box::new(rename_var(old, new, val)),
            Box::new(body.as_ref().clone()),
        ),
        Term::Let(x, ty, val, body) => Term::Let(
            x.clone(),
            Box::new(rename_var(old, new, ty)),
            Box::new(rename_var(old, new, val)),
            Box::new(rename_var(old, new, body)),
        ),
        Term::Sigma(x, domain, codomain) if x == old => Term::Sigma(
            x.clone(), // `old` is shadowed in the codomain
            Box::new(rename_var(old, new, domain)),
            Box::new(codomain.as_ref().clone()),
        ),
        Term::Sigma(x, domain, codomain) => Term::Sigma(
            x.clone(),
            Box::new(rename_var(old, new, domain)),
            Box::new(rename_var(old, new, codomain)),
        ),
        Term::Pair(fst, snd) => Term::Pair(
            Box::new(rename_var(old, new, fst)),
            Box::new(rename_var(old, new, snd)),
        ),
        Term::Fst(pair) => Term::Fst(Box::new(rename_var(old, new, pair))),
        Term::Snd(pair) => Term::Snd(Box::new(rename_var(old, new, pair))),
        Term::Proj(term, field) => Term::Proj(Box::new(rename_var(old, new, term)), field.clone()),
        Term::Match(scrutinee, arms) => {
            let scrutinee_renamed = rename_var(old, new, scrutinee);
            let arms_renamed = arms
                .iter()
                .map(|arm| {
                    // For match arms, we need to be careful about pattern variable bindings
                    let body_renamed = if pattern_binds_var(&arm.pattern, old) {
                        // Variable is bound by pattern, don't rename in body
                        arm.body.clone()
                    } else {
                        rename_var(old, new, &arm.body)
                    };

                    MatchArm {
                        pattern: arm.pattern.clone(), // Patterns don't contain renameable terms
                        body: body_renamed,
                    }
                })
                .collect();

            Term::Match(Box::new(scrutinee_renamed), arms_renamed)
        }
        _ => term.clone(),
    }
}
/// Check if a pattern binds a specific variable
pub fn pattern_binds_var(pattern: &Pattern, var: &str) -> bool {
    match pattern {
        Pattern::Var(x) => x == var,
        Pattern::Wildcard => false,
        Pattern::Constructor(_, args) => args.iter().any(|arg| pattern_binds_var(arg, var)),
    }
}
}

This utility determines whether a pattern introduces a binding for a specific variable, enabling the type checker to track variable scopes correctly across pattern matching expressions.
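To see the binding check in isolation, here is a minimal, self-contained sketch; the `Pattern` type below is a stand-alone stand-in for the crate's `ast::Pattern`, not the book's actual module:

```rust
// Stand-in for the crate's ast::Pattern.
enum Pattern {
    Var(String),
    Wildcard,
    Constructor(String, Vec<Pattern>),
}

// A pattern binds `var` if any variable position in it names `var`.
fn pattern_binds_var(pattern: &Pattern, var: &str) -> bool {
    match pattern {
        Pattern::Var(x) => x == var,
        Pattern::Wildcard => false,
        Pattern::Constructor(_, args) => args.iter().any(|a| pattern_binds_var(a, var)),
    }
}

fn main() {
    // `succ(m)` binds `m` but not `n`.
    let p = Pattern::Constructor("succ".into(), vec![Pattern::Var("m".into())]);
    assert!(pattern_binds_var(&p, "m"));
    assert!(!pattern_binds_var(&p, "n"));
    // A wildcard binds nothing.
    assert!(!pattern_binds_var(&Pattern::Wildcard, "m"));
    println!("ok");
}
```

Because the check recurses through constructor arguments, it correctly handles nested patterns like `cons(x, cons(y, rest))`.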

#![allow(unused)]
fn main() {
use crate::ast::Term;
/// Extract the return type from a constructor type (the final codomain after
/// stripping Pi types)
pub fn extract_constructor_return_type(ctor_type: &Term) -> Term {
    match ctor_type {
        Term::Pi(_, _, codomain, _) => extract_constructor_return_type(codomain),
        other => other.clone(),
    }
}
}

Constructor return type extraction analyzes constructor type signatures to determine the result type produced by constructor applications. This operation is essential for validating that constructor uses are type-correct.

#![allow(unused)]
fn main() {
use crate::ast::Term;
/// Extract argument types from a constructor type, skipping type parameters
pub fn extract_constructor_arg_types(ctor_type: &Term) -> Vec<Term> {
    let mut arg_types = Vec::new();
    let mut current = ctor_type;

    while let Term::Pi(_, domain, codomain, _) = current {
        // Skip type parameters (those whose type is a Sort)
        if !matches!(domain.as_ref(), Term::Sort(_)) {
            arg_types.push(domain.as_ref().clone());
        }
        current = codomain;
    }

    arg_types
}
}

Argument type extraction decomposes constructor type signatures to identify the types expected for constructor arguments. This information guides type checking of constructor applications and pattern matching expressions.
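Both decompositions walk the same spine of Pi types. The following self-contained sketch (with a simplified stand-in for the crate's `Term`) shows them applied to a constructor like `cons : (A : Type) -> A -> List A`:

```rust
// Simplified stand-in for the crate's ast::Term.
#[derive(Clone, Debug, PartialEq)]
enum Term {
    Const(String),
    Var(String),
    Sort(u32),
    Pi(String, Box<Term>, Box<Term>, bool), // (name, domain, codomain, implicit)
    App(Box<Term>, Box<Term>),
}

// The return type is the final codomain after stripping all Pi binders.
fn extract_constructor_return_type(t: &Term) -> Term {
    match t {
        Term::Pi(_, _, cod, _) => extract_constructor_return_type(cod),
        other => other.clone(),
    }
}

// Argument types are the Pi domains, skipping Sort-typed type parameters.
fn extract_constructor_arg_types(t: &Term) -> Vec<Term> {
    let mut args = Vec::new();
    let mut cur = t;
    while let Term::Pi(_, dom, cod, _) = cur {
        if !matches!(dom.as_ref(), Term::Sort(_)) {
            args.push(dom.as_ref().clone());
        }
        cur = cod;
    }
    args
}

fn main() {
    // cons : (A : Type) -> A -> List A
    let cons_ty = Term::Pi(
        "A".into(),
        Box::new(Term::Sort(0)),
        Box::new(Term::Pi(
            "x".into(),
            Box::new(Term::Var("A".into())),
            Box::new(Term::App(
                Box::new(Term::Const("List".into())),
                Box::new(Term::Var("A".into())),
            )),
            false,
        )),
        false,
    );
    // Return type is the final codomain: List A.
    assert_eq!(
        extract_constructor_return_type(&cons_ty),
        Term::App(Box::new(Term::Const("List".into())), Box::new(Term::Var("A".into())))
    );
    // The Sort-typed parameter A is skipped; only the data argument A remains.
    assert_eq!(extract_constructor_arg_types(&cons_ty), vec![Term::Var("A".into())]);
    println!("ok");
}
```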

Context Integration

Constructor information is maintained in the typing context to support constructor lookups during type checking:

#![allow(unused)]
fn main() {
/// Add constructor definition
pub fn add_constructor(&mut self, name: String, ty: Term) {
    self.constructors.insert(name, ty);
}
}

Adding constructors to the context makes them available for type checking and pattern matching operations. The context maintains the mapping from constructor names to their type signatures, enabling efficient lookup during type inference.

#![allow(unused)]
fn main() {
/// Look up constructor type
pub fn lookup_constructor(&self, name: &str) -> Option<&Term> {
    self.constructors.get(name)
}
}

Constructor lookup retrieves type information for constructor names encountered in terms or patterns. This operation is fundamental to constructor type checking and pattern matching validation.
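Taken together, the two context methods amount to a name-to-type table. A minimal sketch of this slice of the context, with `Term` stubbed as a string for brevity (the real context stores full terms):

```rust
use std::collections::HashMap;

// Stub: the real context stores ast::Term values here.
type Term = String;

struct Context {
    constructors: HashMap<String, Term>,
}

impl Context {
    fn new() -> Self {
        Context { constructors: HashMap::new() }
    }

    /// Add constructor definition.
    fn add_constructor(&mut self, name: String, ty: Term) {
        self.constructors.insert(name, ty);
    }

    /// Look up constructor type.
    fn lookup_constructor(&self, name: &str) -> Option<&Term> {
        self.constructors.get(name)
    }
}

fn main() {
    let mut ctx = Context::new();
    ctx.add_constructor("succ".into(), "Nat -> Nat".into());
    assert_eq!(ctx.lookup_constructor("succ").map(String::as_str), Some("Nat -> Nat"));
    assert!(ctx.lookup_constructor("cons").is_none());
    println!("ok");
}
```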

Universe Polymorphic Inductive Types

The implementation supports universe polymorphic inductive types that can exist at different universe levels:

def id (A : Type) (x : A) : A :=
  x

def compose_poly (A : Type) (B : Type) (C : Type) (g : B -> C) (f : A -> B) (x : A) : C :=
  g (f x)

def apply_twice (A : Type) (f : A -> A) (x : A) : A :=
  f (f x)

These universe polymorphic definitions demonstrate the flexibility of the Calculus of Constructions’ universe system. Each can be instantiated at different universe levels while maintaining its structure, enabling generic programming patterns that work across the entire universe hierarchy.

Basic Inductive Type Examples

Simple inductive types like natural numbers and booleans provide concrete examples of the inductive type system in action:

inductive Bool : Type with
| true : Bool
| false : Bool

inductive Nat : Type with
| zero : Nat
| succ : Nat -> Nat

def is_zero (n : Nat) : Bool :=
  match n with
  case zero => true
  case succ(_) => false

These examples demonstrate basic inductive type declarations with simple constructors and straightforward pattern matching. The Bool type shows enumeration-style inductive types, while Nat demonstrates recursive inductive types with constructor parameters.

Advanced Pattern Matching

More pattern matching examples illustrate the expressive power of dependent pattern matching:

inductive Bool : Type with
| true : Bool
| false : Bool

inductive Nat : Type with
| zero : Nat
| succ : Nat -> Nat

def predecessor (n : Nat) : Nat :=
  match n with
  case zero => zero
  case succ(m) => m

def not_bool (b : Bool) : Bool :=
  match b with
  case true => false
  case false => true

Advanced pattern matching shows how constructors with parameters can be destructured through pattern matching, with pattern variables receiving appropriate types derived from the constructor signatures. The predecessor function demonstrates recursive constructor patterns, while the not_bool function shows simple enumeration pattern matching.

Dependent Inductive Types

The implementation supports dependent inductive types where constructor types can depend on term arguments, enabling data structures whose types carry term-level information:

inductive Nat : Type with
  | zero : Nat
  | succ : Nat -> Nat

inductive Vec (A : Type) : Nat -> Type with
  | nil : Vec A zero
  | cons : (n : Nat) -> A -> Vec A n -> Vec A (succ n)

Dependent inductive types represent the full power of inductive types in dependent type theory. These types enable length-indexed vectors, well-typed abstract syntax trees, and other data structures where types carry computational information about their contents.

Type Rules

The Calculus of Constructions features an exceptionally rich type system that unifies terms, types, and kinds into a single syntactic category. The typing rules presented here capture the essence of dependent types, universe polymorphism, and the constraint solving that makes our implementation both powerful and complex.

Unlike simpler type systems, the CoC typing rules must handle multiple layers of abstraction simultaneously. Terms can depend on other terms (functions), types can depend on terms (dependent types), types can depend on types (polymorphism), and even kinds can depend on terms through universe constraints.

Symbol Glossary

The Calculus of Constructions uses extensive formal notation to capture the complex relationships between terms, types, and universes. This glossary provides a reference for understanding the symbols and concepts used throughout the type rules.

Core Symbols

  • \( \Gamma \) (Gamma): The context or environment, which tracks what variables are in scope and their types
  • \( \vdash \) (turnstile): The judgment symbol, read as “proves” or “entails”
  • \( \Pi \) (Pi): Dependent function type (generalization of A -> B where the result type can depend on the input value)
  • \( \lambda \) (lambda): Function abstraction (creating a function)
  • \( \equiv \): Definitional equality (two terms that are “the same” according to computation rules)
  • \( \doteq \): Unification constraint (asking if two terms can be made equal)
  • \( \leadsto \): Constraint solving (producing a solution to constraints)

Judgment Forms

  • \( \Gamma \vdash t : T \): “In context Gamma, term t has type T”
  • \( \Gamma \vdash T : \text{Type}_i \): “T is a type in universe level i”
  • \( \Gamma \vdash C \): “Constraint C holds in context Gamma”
  • \( \Gamma \vdash s \equiv t : T \): “Terms s and t are convertible at type T”

Type System Concepts

  • Universe hierarchy: Types live in universes (Type_0, Type_1, etc.) to avoid logical paradoxes
  • Dependent types: Types that can depend on values (like “a list of length n”)
  • Meta-variables: Unknowns that our constraint solver tries to figure out (written as \( ?\alpha \))
  • Substitution: Replacing variables with values, written as t[s/x] (replace x with s in t)
  • Positivity: A restriction on inductive types to ensure they’re well-founded

Notation Conventions

  • Overlines: \( \overline{x} \) means “a sequence of x’s” (like x₁, x₂, x₃, …)
  • Brackets: \( [s/x] \) means substitution of s for x
  • Subscripts: \( \text{Type}_i \) refers to universe levels
  • Fresh variables: When we say “fresh(α)”, we mean a brand new variable that hasn’t been used yet

Core Judgment Forms

The CoC type system uses several judgment forms that interact in complex ways:

Typing Judgments: \( \Gamma \vdash t : T \) asserts that term \( t \) has type \( T \) in context \( \Gamma \)

Universe Judgments: \( \Gamma \vdash T : \text{Type}_i \) asserts that \( T \) is a type in universe \( i \)

Constraint Judgments: \( \Gamma \vdash C \) asserts that constraint \( C \) holds in context \( \Gamma \)

Conversion Judgments: \( \Gamma \vdash s \equiv t : T \) asserts that \( s \) and \( t \) are convertible at type \( T \)

Variable and Constant Rules

Variable lookup in the context:

\[ \frac{x : T \in \Gamma}{\Gamma \vdash x : T} \text{(T-Var)} \]

Universe hierarchy with explicit level constraints:

\[ \frac{i < j}{\Gamma \vdash \text{Type}_i : \text{Type}_j} \text{(T-Univ)} \]

Primitive constants with their types:

\[ \frac{}{\Gamma \vdash \text{Nat} : \text{Type}_0} \text{(T-Nat)} \]

\[ \frac{}{\Gamma \vdash 0 : \text{Nat}} \text{(T-Zero)} \]

\[ \frac{}{\Gamma \vdash \text{succ} : \text{Nat} \to \text{Nat}} \text{(T-Succ)} \]

Function Types and Abstractions

Dependent function types (Pi types):

\[ \frac{\Gamma \vdash A : \text{Type}_i \quad \Gamma, x : A \vdash B : \text{Type}_j}{\Gamma \vdash \Pi x : A. B : \text{Type}_{\text{max}(i,j)}} \text{(T-Pi)} \]

Lambda abstraction with dependent types:

\[ \frac{\Gamma, x : A \vdash t : B \quad \Gamma \vdash \Pi x : A. B : \text{Type}_i}{\Gamma \vdash \lambda x : A. t : \Pi x : A. B} \text{(T-Lam)} \]

Function application with substitution:

\[ \frac{\Gamma \vdash f : \Pi x : A. B \quad \Gamma \vdash a : A}{\Gamma \vdash f \, a : B[a/x]} \text{(T-App)} \]
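As a concrete instance of the application rule, applying the polymorphic identity function to \( \text{Nat} \) substitutes the argument into the dependent codomain:

\[ \frac{\Gamma \vdash \text{id} : \Pi A : \text{Type}_0.\ A \to A \quad \Gamma \vdash \text{Nat} : \text{Type}_0}{\Gamma \vdash \text{id} \, \text{Nat} : \text{Nat} \to \text{Nat}} \]

since \( (A \to A)[\text{Nat}/A] = \text{Nat} \to \text{Nat} \).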

Inductive Types

Inductive type formation with universe constraints:

\[ \frac{\overline{\Gamma \vdash C_i : A_i \to T \, \overline{\text{params}}} \quad \Gamma \vdash T : \text{Type}_j}{\Gamma \vdash \text{data } T \text{ where } \overline{C_i : A_i} : \text{Type}_j} \text{(T-Data)} \]

Constructor typing with positivity constraints:

\[ \frac{\Gamma \vdash I : \text{Type}_i \quad \text{Positive}(I, A)}{\Gamma \vdash c : A \to I} \text{(T-Constr)} \]

Pattern matching with dependent elimination:

\[ \frac{\begin{array}{c} \Gamma \vdash t : I \, \overline{p} \\ \Gamma \vdash P : \Pi \overline{x : A}.\ I \, \overline{x} \to \text{Type}_k \\ \overline{\Gamma \vdash f_i : \Pi \overline{y : B_i}.\ P \, (c_i \, \overline{y})} \end{array}}{\Gamma \vdash \text{match } t \text{ return } P \text{ with } \overline{c_i \, \overline{y} \Rightarrow f_i \, \overline{y}} : P \, t} \text{(T-Match)} \]

Universe Polymorphism

Universe variables in types:

\[ \frac{\alpha \in \text{UVars}}{\Gamma \vdash \text{Type}_\alpha : \text{Type}_{\alpha+1}} \text{(T-UVar)} \]

Universe level constraints:

\[ \frac{\Gamma \vdash C_1 \quad \Gamma \vdash C_2}{\Gamma \vdash C_1 \land C_2} \text{(T-Conj)} \]

\[ \frac{i \leq j}{\Gamma \vdash i \leq j} \text{(T-Leq)} \]

Universe maximum operation:

\[ \frac{\Gamma \vdash i \leq k \quad \Gamma \vdash j \leq k}{\Gamma \vdash \text{max}(i,j) \leq k} \text{(T-Max)} \]

Conversion and Definitional Equality

Beta reduction for function application:

\[ \frac{}{\Gamma \vdash (\lambda x : A.\ t) \, s \equiv t[s/x] : B[s/x]} \text{(Conv-Beta)} \]

Eta conversion for function types:

\[ \frac{\Gamma \vdash f : \Pi x : A. B \quad x \notin \text{FV}(f)}{\Gamma \vdash f \equiv \lambda x : A.\ f \, x : \Pi x : A. B} \text{(Conv-Eta)} \]

Iota reduction for pattern matching:

\[ \frac{}{\Gamma \vdash \text{match } (c_i \, \overline{a}) \text{ return } P \text{ with } \overline{c_i \, \overline{y} \Rightarrow f_i \, \overline{y}} \equiv f_i \, \overline{a} : P \, (c_i \, \overline{a})} \text{(Conv-Iota)} \]
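For instance, iota reduction is what computes the predecessor function from the earlier examples:

\[ \Gamma \vdash \text{match } (\text{succ} \, \text{zero}) \text{ with } \{\text{zero} \Rightarrow \text{zero};\ \text{succ} \, m \Rightarrow m\} \equiv \text{zero} : \text{Nat} \]

The scrutinee’s head constructor selects the \( \text{succ} \) arm, and its argument \( \text{zero} \) is substituted for \( m \).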

Congruence rules for structural conversion:

\[ \frac{\Gamma \vdash s_1 \equiv t_1 : A \to B \quad \Gamma \vdash s_2 \equiv t_2 : A}{\Gamma \vdash s_1 \, s_2 \equiv t_1 \, t_2 : B} \text{(Conv-App)} \]

Type Conversion and Subsumption

Conversion allows definitionally equal types to be used interchangeably:

\[ \frac{\Gamma \vdash t : A \quad \Gamma \vdash A \equiv B : \text{Type}_i}{\Gamma \vdash t : B} \text{(T-Conv)} \]

Meta-Variable and Constraint Rules

Meta-variable introduction for inference:

\[ \frac{\text{fresh}(\alpha) \quad \Gamma \vdash T : \text{Type}_i}{\Gamma \vdash ?\alpha : T} \text{(T-Meta)} \]

Constraint solving with unification:

\[ \frac{\Gamma \vdash s : T \quad \Gamma \vdash t : T \quad \text{Unify}(s, t, \sigma)}{\Gamma \vdash s \doteq t : T \leadsto \sigma} \text{(T-Unify)} \]

Constraint propagation through substitution:

\[ \frac{\Gamma \vdash C[\sigma] \quad \text{Dom}(\sigma) \subseteq \text{MetaVars}(C)}{\Gamma \vdash C \leadsto \sigma} \text{(T-Subst)} \]

These rules capture the interplay between dependent types, universe constraints, and meta-variable unification that makes the Calculus of Constructions both expressive and challenging to implement. The key insight is that type checking and constraint solving must proceed hand-in-hand, with each phase informing and constraining the other.

Universe Polymorphism

Universe polymorphism enables definitions to abstract over universe levels, creating truly generic constructions that work across the entire universe hierarchy. Our implementation includes a dedicated universe constraint solver that handles the complex arithmetic and constraint relationships that arise in polymorphic universe contexts.

Universe Constraint Solver

#![allow(unused)]
fn main() {
use std::collections::{HashMap, HashSet};
/// Universe level constraint solver for polymorphic universe levels
use crate::ast::{Universe, UniverseConstraint};
impl UniverseSolver {
    pub fn new() -> Self {
        UniverseSolver {
            substitutions: HashMap::new(),
        }
    }

    /// Solve universe constraints and return substitutions
    pub fn solve(&mut self, constraints: &[UniverseConstraint]) -> Result<(), String> {
        // Simple constraint solver - in a full system this would be much more
        // sophisticated
        for constraint in constraints {
            self.solve_constraint(constraint)?;
        }
        Ok(())
    }

    /// Solve a single constraint
    fn solve_constraint(&mut self, constraint: &UniverseConstraint) -> Result<(), String> {
        match constraint {
            UniverseConstraint::Equal(u1, u2) => self.unify_universes(u1, u2),
            UniverseConstraint::LessEq(u1, u2) => {
                // For now, treat less-equal as equality (simplified)
                self.unify_universes(u1, u2)
            }
        }
    }

    /// Unify two universe levels
    fn unify_universes(&mut self, u1: &Universe, u2: &Universe) -> Result<(), String> {
        let u1_subst = self.apply_substitution(u1);
        let u2_subst = self.apply_substitution(u2);

        match (&u1_subst, &u2_subst) {
            (Universe::ScopedVar(_, x), u) | (u, Universe::ScopedVar(_, x)) => {
                if let Universe::ScopedVar(_, y) = u {
                    if x == y {
                        return Ok(());
                    }
                }

                // Occurs check
                if self.occurs_check(x, u) {
                    return Err(format!(
                        "Universe variable {} occurs in {}",
                        x,
                        self.universe_to_string(u)
                    ));
                }

                self.substitutions.insert(x.clone(), u.clone());
                Ok(())
            }

            (Universe::Const(n1), Universe::Const(n2)) if n1 == n2 => Ok(()),
            (Universe::Const(_), Universe::Const(_)) => {
                Err("Cannot unify different constants".to_string())
            }

            (Universe::Add(u1, n1), Universe::Add(u2, n2)) if n1 == n2 => {
                self.unify_universes(u1, u2)
            }
            (Universe::Add(_, _), Universe::Add(_, _)) => {
                Err("Cannot unify different additions".to_string())
            }

            (Universe::Max(u1, v1), Universe::Max(u2, v2)) => {
                self.unify_universes(u1, u2)?;
                self.unify_universes(v1, v2)
            }

            (Universe::IMax(u1, v1), Universe::IMax(u2, v2)) => {
                self.unify_universes(u1, u2)?;
                self.unify_universes(v1, v2)
            }

            _ => Err(format!(
                "Cannot unify universe levels {} and {}",
                self.universe_to_string(&u1_subst),
                self.universe_to_string(&u2_subst)
            )),
        }
    }

    /// Apply substitutions to a universe level
    fn apply_substitution(&self, u: &Universe) -> Universe {
        match u {
            Universe::ScopedVar(_scope, x) => {
                if let Some(subst) = self.substitutions.get(x) {
                    self.apply_substitution(subst)
                } else {
                    u.clone()
                }
            }
            Universe::Const(n) => Universe::Const(*n),
            Universe::Add(u, n) => Universe::Add(Box::new(self.apply_substitution(u)), *n),
            Universe::Max(u, v) => Universe::Max(
                Box::new(self.apply_substitution(u)),
                Box::new(self.apply_substitution(v)),
            ),
            Universe::IMax(u, v) => Universe::IMax(
                Box::new(self.apply_substitution(u)),
                Box::new(self.apply_substitution(v)),
            ),
            _ => u.clone(),
        }
    }

    /// Check if a universe variable occurs in a universe level
    #[allow(clippy::only_used_in_recursion)]
    fn occurs_check(&self, var: &str, u: &Universe) -> bool {
        match u {
            Universe::ScopedVar(_, x) => x == var,
            Universe::Add(u, _) => self.occurs_check(var, u),
            Universe::Max(u, v) | Universe::IMax(u, v) => {
                self.occurs_check(var, u) || self.occurs_check(var, v)
            }
            Universe::Const(_) => false,
            Universe::Meta(_) => false, // Meta variables don't contain named variables
        }
    }

    /// Convert universe to string representation
    #[allow(clippy::only_used_in_recursion)]
    fn universe_to_string(&self, u: &Universe) -> String {
        match u {
            Universe::Const(n) => n.to_string(),
            Universe::ScopedVar(scope, var) => format!("{}::{}", scope, var),
            Universe::Meta(id) => format!("?u{}", id),
            Universe::Add(u, n) => format!("({} + {})", self.universe_to_string(u), n),
            Universe::Max(u, v) => format!(
                "max({}, {})",
                self.universe_to_string(u),
                self.universe_to_string(v)
            ),
            Universe::IMax(u, v) => format!(
                "imax({}, {})",
                self.universe_to_string(u),
                self.universe_to_string(v)
            ),
        }
    }

    /// Get the final substitution for a universe variable
    pub fn get_substitution(&self, var: &str) -> Option<Universe> {
        self.substitutions
            .get(var)
            .map(|u| self.apply_substitution(u))
    }

    /// Apply all substitutions to a universe level
    pub fn substitute_universe(&self, u: &Universe) -> Universe {
        self.apply_substitution(u)
    }

    /// Get all substitutions
    pub fn get_all_substitutions(&self) -> &HashMap<String, Universe> {
        &self.substitutions
    }

    /// Check if universe level constraints are satisfiable
    pub fn is_satisfiable(&self, constraints: &[UniverseConstraint]) -> bool {
        let mut solver = self.clone();
        solver.solve(constraints).is_ok()
    }

    /// Generate fresh universe variable
    pub fn fresh_universe_var(&self, base: &str, avoid: &HashSet<String>) -> String {
        let mut counter = 0;
        loop {
            let name = if counter == 0 {
                base.to_string()
            } else {
                format!("{}{}", base, counter)
            };
            if !avoid.contains(&name) && !self.substitutions.contains_key(&name) {
                return name;
            }
            counter += 1;
        }
    }
}
impl Default for UniverseSolver {
    fn default() -> Self {
        Self::new()
    }
}
/// Universe level constraint solver
#[derive(Debug, Clone)]
pub struct UniverseSolver {
    /// Substitutions for universe variables
    substitutions: HashMap<String, Universe>,
}
}

The UniverseSolver manages universe-level constraints independently from the main constraint solver, enabling specialized algorithms for universe arithmetic and level unification. This separation allows for efficient handling of universe polymorphism without complicating the main type checking algorithms.

Universe Constraint Types

Universe constraints capture the relationships between universe levels that must be maintained for logical consistency:

#![allow(unused)]
fn main() {
use std::fmt;
/// Core terms in the Calculus of Constructions
#[derive(Debug, Clone, PartialEq)]
pub enum Term {
    /// Variables: x, y, z
    Var(String),

    /// Application: f a
    App(Box<Term>, Box<Term>),

    /// Lambda abstraction: λ x : A, t  (written as fun x : A => t)
    Abs(String, Box<Term>, Box<Term>),

    /// Dependent product: Π x : A, B  (written as (x : A) → B or {x : A} → B)
    Pi(String, Box<Term>, Box<Term>, bool), // bool = implicit

    /// Sort/Type: Sort u, Type, Prop
    Sort(Universe),

    /// Let binding: let x := t in s
    Let(String, Box<Term>, Box<Term>, Box<Term>),

    /// Match expression with patterns
    Match(Box<Term>, Vec<MatchArm>),

    /// Inductive type constructor
    Constructor(String, Vec<Term>),

    /// Local constant (for definitions and axioms)
    Const(String),

    /// Meta-variable for type inference
    Meta(String),

    /// Field projection: term.field
    Proj(Box<Term>, String),

    /// Dependent pair type (Sigma type): Σ (x : A), B
    Sigma(String, Box<Term>, Box<Term>),

    /// Pair constructor: ⟨a, b⟩
    Pair(Box<Term>, Box<Term>),

    /// First projection: π₁
    Fst(Box<Term>),

    /// Second projection: π₂
    Snd(Box<Term>),
}
/// Universe levels for type theory
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Universe {
    Const(u32),                         // Concrete level: 0, 1, 2, ...
    ScopedVar(String, String),          // Scoped universe variable: (scope_name, var_name)
    Meta(u32),                          // Universe metavariable: ?u₀, ?u₁, ...
    Add(Box<Universe>, u32),            // Level + n
    Max(Box<Universe>, Box<Universe>),  // max(u, v)
    IMax(Box<Universe>, Box<Universe>), // imax(u, v)
}
/// Context for universe level variables
#[derive(Debug, Clone, PartialEq)]
pub struct UniverseContext {
    /// Currently bound universe variables
    pub variables: Vec<String>,
    /// Constraints on universe levels
    pub constraints: Vec<UniverseConstraint>,
}
/// Pattern matching arms
#[derive(Debug, Clone, PartialEq)]
pub struct MatchArm {
    pub pattern: Pattern,
    pub body: Term,
}
/// Patterns for match expressions
#[derive(Debug, Clone, PartialEq)]
pub enum Pattern {
    /// Variable pattern: x
    Var(String),

    /// Constructor pattern: Cons x xs
    Constructor(String, Vec<Pattern>),

    /// Wildcard pattern: _
    Wildcard,
}
/// Top-level declarations
#[derive(Debug, Clone, PartialEq)]
pub enum Declaration {
    /// Constant definition: def name {u...} (params) : type := body
    Definition {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        params: Vec<Parameter>,
        ty: Term,
        body: Term,
    },

    /// Axiom: axiom name {u...} : type
    Axiom {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        ty: Term,
    },

    /// Inductive type: inductive Name {u...} (params) : Type := constructors
    Inductive {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        params: Vec<Parameter>,
        ty: Term,
        constructors: Vec<Constructor>,
    },

    /// Structure (single constructor inductive): structure Name {u...} :=
    /// (fields)
    Structure {
        name: String,
        universe_params: Vec<String>, // Universe level parameters
        params: Vec<Parameter>,
        ty: Term,
        fields: Vec<Field>,
    },
}
/// Function parameters
#[derive(Debug, Clone, PartialEq)]
pub struct Parameter {
    pub name: String,
    pub ty: Term,
    pub implicit: bool, // for {x : A} vs (x : A)
}
/// Constructor for inductive types
#[derive(Debug, Clone, PartialEq)]
pub struct Constructor {
    pub name: String,
    pub ty: Term,
}
/// Structure fields
#[derive(Debug, Clone, PartialEq)]
pub struct Field {
    pub name: String,
    pub ty: Term,
}
/// Module containing multiple declarations
#[derive(Debug, Clone, PartialEq)]
pub struct Module {
    pub declarations: Vec<Declaration>,
}
impl fmt::Display for Universe {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Universe::Const(n) => write!(f, "{}", n),
            Universe::ScopedVar(scope, var) => write!(f, "{}::{}", scope, var),
            Universe::Meta(id) => write!(f, "?u{}", id),
            Universe::Add(u, n) => write!(f, "{}+{}", u, n),
            Universe::Max(u, v) => write!(f, "max({}, {})", u, v),
            Universe::IMax(u, v) => write!(f, "imax({}, {})", u, v),
        }
    }
}
impl fmt::Display for Term {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Term::Var(x) => write!(f, "{}", x),
            Term::App(t1, t2) => write!(f, "({} {})", t1, t2),
            Term::Abs(x, ty, body) => write!(f, "(fun {} : {} => {})", x, ty, body),
            Term::Pi(x, ty, body, implicit) => {
                if Self::occurs_free(x, body) {
                    if *implicit {
                        write!(f, "{{{} : {}}} → {}", x, ty, body)
                    } else {
                        write!(f, "({} : {}) → {}", x, ty, body)
                    }
                } else {
                    write!(f, "{} → {}", ty, body)
                }
            }
            Term::Sort(u) => match u {
                Universe::Const(0) => write!(f, "Prop"),
                Universe::Const(n) => {
                    if *n == 1 {
                        write!(f, "Type")
                    } else {
                        write!(f, "Type {}", n - 1)
                    }
                }
                Universe::Add(ref base, n) => {
                    if let Universe::Const(1) = **base {
                        write!(f, "Type {}", n)
                    } else {
                        write!(f, "Sort {}", u)
                    }
                }
                _ => write!(f, "Sort {}", u),
            },
            Term::Let(x, ty, val, body) => write!(f, "(let {} : {} := {} in {})", x, ty, val, body),
            Term::Match(scrutinee, arms) => {
                writeln!(f, "match {} with", scrutinee)?;
                for arm in arms {
                    writeln!(f, "| {} => {}", arm.pattern, arm.body)?;
                }
                Ok(())
            }
            Term::Constructor(name, args) => {
                write!(f, "{}", name)?;
                for arg in args {
                    write!(f, " {}", arg)?;
                }
                Ok(())
            }
            Term::Const(name) => write!(f, "{}", name),
            Term::Meta(name) => write!(f, "?{}", name),
            Term::Proj(term, field) => write!(f, "{}.{}", term, field),
            Term::Sigma(x, domain, codomain) => write!(f, "Σ ({} : {}), {}", x, domain, codomain),
            Term::Pair(fst, snd) => write!(f, "⟨{}, {}⟩", fst, snd),
            Term::Fst(pair) => write!(f, "π₁({})", pair),
            Term::Snd(pair) => write!(f, "π₂({})", pair),
        }
    }
}
impl fmt::Display for Pattern {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Pattern::Var(x) => write!(f, "{}", x),
            Pattern::Constructor(name, args) => {
                write!(f, "{}", name)?;
                for arg in args {
                    write!(f, " {}", arg)?;
                }
                Ok(())
            }
            Pattern::Wildcard => write!(f, "_"),
        }
    }
}
impl Term {
    /// Check if variable occurs free in term
    pub fn occurs_free(var: &str, term: &Term) -> bool {
        match term {
            Term::Var(x) => x == var,
            Term::App(t1, t2) => Self::occurs_free(var, t1) || Self::occurs_free(var, t2),
            Term::Abs(x, ty, body) => {
                Self::occurs_free(var, ty) || (x != var && Self::occurs_free(var, body))
            }
            Term::Pi(x, ty, body, _) => {
                Self::occurs_free(var, ty) || (x != var && Self::occurs_free(var, body))
            }
            Term::Sort(_) => false,
            Term::Let(x, ty, val, body) => {
                Self::occurs_free(var, ty)
                    || Self::occurs_free(var, val)
                    || (x != var && Self::occurs_free(var, body))
            }
            Term::Match(scrutinee, arms) => {
                Self::occurs_free(var, scrutinee)
                    || arms.iter().any(|arm| Self::occurs_free(var, &arm.body))
            }
            Term::Constructor(_, args) => args.iter().any(|arg| Self::occurs_free(var, arg)),
            Term::Const(_) => false,
            Term::Meta(_) => false,
            Term::Proj(term, _) => Self::occurs_free(var, term),
            Term::Sigma(x, domain, codomain) => {
                Self::occurs_free(var, domain) || (x != var && Self::occurs_free(var, codomain))
            }
            Term::Pair(fst, snd) => Self::occurs_free(var, fst) || Self::occurs_free(var, snd),
            Term::Fst(pair) | Term::Snd(pair) => Self::occurs_free(var, pair),
        }
    }
}
impl UniverseContext {
    pub fn new() -> Self {
        UniverseContext {
            variables: Vec::new(),
            constraints: Vec::new(),
        }
    }

    /// Add a universe variable to the context
    pub fn add_var(&mut self, name: String) {
        if !self.variables.contains(&name) {
            self.variables.push(name);
        }
    }

    /// Add a constraint to the context
    pub fn add_constraint(&mut self, constraint: UniverseConstraint) {
        self.constraints.push(constraint);
    }

    /// Check if a universe variable is bound in this context
    pub fn contains(&self, name: &str) -> bool {
        self.variables.contains(&name.to_string())
    }

    /// Extend context with variables from another context
    pub fn extend(&self, other: &UniverseContext) -> Self {
        let mut new_ctx = self.clone();
        for var in &other.variables {
            new_ctx.add_var(var.clone());
        }
        for constraint in &other.constraints {
            new_ctx.add_constraint(constraint.clone());
        }
        new_ctx
    }

    /// Generate a fresh universe variable name
    pub fn fresh_var(&self, base: &str) -> String {
        let mut counter = 0;
        loop {
            let name = if counter == 0 {
                base.to_string()
            } else {
                format!("{}{}", base, counter)
            };
            if !self.contains(&name) {
                return name;
            }
            counter += 1;
        }
    }
}
impl Default for UniverseContext {
    fn default() -> Self {
        Self::new()
    }
}
impl Universe {
    /// Create a scoped universe variable
    pub fn scoped_var(scope: String, name: String) -> Self {
        Universe::ScopedVar(scope, name)
    }
}
/// Universe level constraints for polymorphism
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum UniverseConstraint {
    /// u ≤ v
    LessEq(Universe, Universe),
    /// u = v
    Equal(Universe, Universe),
}
}

The constraint system handles two fundamental relationships:

Equality Constraints (Equal) require two universe levels to be identical, arising from type equality requirements in dependent contexts where universe levels must match exactly.

Ordering Constraints (LessEq) ensure that one universe level is less than or equal to another, maintaining the predicativity requirements that prevent logical paradoxes.

Universe Constraint Solving

#![allow(unused)]
fn main() {
/// Solve universe constraints and return substitutions
pub fn solve(&mut self, constraints: &[UniverseConstraint]) -> Result<(), String> {
    // Simple constraint solver - in a full system this would be much more
    // sophisticated
    for constraint in constraints {
        self.solve_constraint(constraint)?;
    }
    Ok(())
}
}

The main solving algorithm iterates over the constraint list, resolving each constraint in turn so that all universe relationships are satisfied consistently:

#![allow(unused)]
fn main() {
/// Solve a single constraint
fn solve_constraint(&mut self, constraint: &UniverseConstraint) -> Result<(), String> {
    match constraint {
        UniverseConstraint::Equal(u1, u2) => self.unify_universes(u1, u2),
        UniverseConstraint::LessEq(u1, u2) => {
            // For now, treat less-equal as equality (simplified)
            self.unify_universes(u1, u2)
        }
    }
}
}

Individual constraint solving dispatches on the constraint type. Equality constraints go straight to unification; ordering constraints would require a dedicated inequality solver to keep the universe hierarchy consistent, but as the code comment notes, this implementation conservatively approximates u ≤ v as an equality. That is stricter than necessary, so it rejects some valid universe-polymorphic programs, but it never accepts an inconsistent one.
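To see the resolve-then-dispatch shape of the loop concretely, here is a stripped-down, self-contained sketch. This is not the book's solver: the `Universe` here has only constants and named variables, and only equality constraints are handled.

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
enum Universe {
    Const(u32),
    Var(String),
}

enum Constraint {
    Equal(Universe, Universe),
}

/// Chase a variable through the accumulated substitution.
fn resolve(u: &Universe, subst: &HashMap<String, Universe>) -> Universe {
    match u {
        Universe::Var(x) => match subst.get(x) {
            Some(t) => resolve(t, subst),
            None => u.clone(),
        },
        _ => u.clone(),
    }
}

fn solve(constraints: &[Constraint]) -> Result<HashMap<String, Universe>, String> {
    let mut subst = HashMap::new();
    for Constraint::Equal(a, b) in constraints {
        match (resolve(a, &subst), resolve(b, &subst)) {
            // Same variable on both sides: trivially satisfied.
            (Universe::Var(x), Universe::Var(y)) if x == y => {}
            // A variable unifies with anything else by recording a substitution.
            (Universe::Var(x), u) | (u, Universe::Var(x)) => {
                subst.insert(x, u);
            }
            // Constants only unify with identical constants.
            (Universe::Const(m), Universe::Const(n)) if m == n => {}
            (a, b) => return Err(format!("cannot unify {:?} and {:?}", a, b)),
        }
    }
    Ok(subst)
}

fn main() {
    let cs = [
        Constraint::Equal(Universe::Var("u".into()), Universe::Const(1)),
        Constraint::Equal(Universe::Var("v".into()), Universe::Var("u".into())),
    ];
    let subst = solve(&cs).unwrap();
    assert_eq!(subst["u"], Universe::Const(1));
    assert_eq!(subst["v"], Universe::Const(1)); // v was chained through u
    println!("solved: {:?}", subst);
}
```

Note how the second constraint resolves `u` to `Const(1)` before dispatching, so `v` is bound directly to the constant rather than to another variable.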

Universe Unification Algorithm

#![allow(unused)]
fn main() {
/// Unify two universe levels
fn unify_universes(&mut self, u1: &Universe, u2: &Universe) -> Result<(), String> {
    let u1_subst = self.apply_substitution(u1);
    let u2_subst = self.apply_substitution(u2);

    match (&u1_subst, &u2_subst) {
        (Universe::ScopedVar(_, x), u) | (u, Universe::ScopedVar(_, x)) => {
            if let Universe::ScopedVar(_, y) = u {
                if x == y {
                    return Ok(());
                }
            }

            // Occurs check
            if self.occurs_check(x, u) {
                return Err(format!(
                    "Universe variable {} occurs in {}",
                    x,
                    self.universe_to_string(u)
                ));
            }

            self.substitutions.insert(x.clone(), u.clone());
            Ok(())
        }

        (Universe::Const(n1), Universe::Const(n2)) if n1 == n2 => Ok(()),
        (Universe::Const(_), Universe::Const(_)) => {
            Err("Cannot unify different constants".to_string())
        }

        (Universe::Add(u1, n1), Universe::Add(u2, n2)) if n1 == n2 => {
            self.unify_universes(u1, u2)
        }
        (Universe::Add(_, _), Universe::Add(_, _)) => {
            Err("Cannot unify different additions".to_string())
        }

        (Universe::Max(u1, v1), Universe::Max(u2, v2)) => {
            self.unify_universes(u1, u2)?;
            self.unify_universes(v1, v2)
        }

        (Universe::IMax(u1, v1), Universe::IMax(u2, v2)) => {
            self.unify_universes(u1, u2)?;
            self.unify_universes(v1, v2)
        }

        _ => Err(format!(
            "Cannot unify universe levels {} and {}",
            self.universe_to_string(&u1_subst),
            self.universe_to_string(&u2_subst)
        )),
    }
}
}

Universe unification demonstrates the complexity of working with universe-level arithmetic. The algorithm handles multiple universe expression forms:

  1. Variable Unification: When unifying a universe variable with any expression, we create a substitution mapping after performing occurs checking to prevent infinite universe expressions.

  2. Constant Unification: Universe constants can only unify with identical constants, ensuring that concrete levels like Type 0 and Type 1 remain distinct.

  3. Arithmetic Expressions: Universe expressions like u + 1 and max(u, v) require structural decomposition where we recursively unify the component universe expressions.

Substitution and Normalization

#![allow(unused)]
fn main() {
/// Apply substitutions to a universe level
fn apply_substitution(&self, u: &Universe) -> Universe {
    match u {
        Universe::ScopedVar(_scope, x) => {
            if let Some(subst) = self.substitutions.get(x) {
                self.apply_substitution(subst)
            } else {
                u.clone()
            }
        }
        Universe::Const(n) => Universe::Const(*n),
        Universe::Add(u, n) => Universe::Add(Box::new(self.apply_substitution(u)), *n),
        Universe::Max(u, v) => Universe::Max(
            Box::new(self.apply_substitution(u)),
            Box::new(self.apply_substitution(v)),
        ),
        Universe::IMax(u, v) => Universe::IMax(
            Box::new(self.apply_substitution(u)),
            Box::new(self.apply_substitution(v)),
        ),
        _ => u.clone(),
    }
}
}

Universe substitution application demonstrates the recursive nature of universe expressions. When substituting into compound expressions like Add or Max, we must recursively apply substitutions to all subcomponents while maintaining the arithmetic structure.
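Two behaviours matter here: substitutions are chased transitively (a variable's solution may itself mention a solved variable), and they are pushed under arithmetic constructors. A condensed, standalone sketch (just the `Const`/`Var`/`Add` variants, not the book's full enum):

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
enum Universe {
    Const(u32),
    Var(String),
    Add(Box<Universe>, u32),
}

fn apply(subst: &HashMap<String, Universe>, u: &Universe) -> Universe {
    match u {
        // Chase chains: u ↦ v ↦ 0 resolves all the way to 0.
        Universe::Var(x) => match subst.get(x) {
            Some(t) => apply(subst, t),
            None => u.clone(),
        },
        // Descend structurally, preserving the arithmetic shape.
        Universe::Add(base, n) => Universe::Add(Box::new(apply(subst, base)), *n),
        Universe::Const(n) => Universe::Const(*n),
    }
}

fn main() {
    let mut s = HashMap::new();
    s.insert("u".to_string(), Universe::Var("v".to_string()));
    s.insert("v".to_string(), Universe::Const(0));
    // u resolves through v down to the constant 0.
    assert_eq!(apply(&s, &Universe::Var("u".to_string())), Universe::Const(0));
    // The substitution reaches inside u + 1, yielding 0 + 1.
    assert_eq!(
        apply(&s, &Universe::Add(Box::new(Universe::Var("u".to_string())), 1)),
        Universe::Add(Box::new(Universe::Const(0)), 1)
    );
    println!("ok");
}
```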

Occurs Check for Universe Variables

#![allow(unused)]
fn main() {
/// Check if a universe variable occurs in a universe level
#[allow(clippy::only_used_in_recursion)]
fn occurs_check(&self, var: &str, u: &Universe) -> bool {
    match u {
        Universe::ScopedVar(_, x) => x == var,
        Universe::Add(u, _) => self.occurs_check(var, u),
        Universe::Max(u, v) | Universe::IMax(u, v) => {
            self.occurs_check(var, u) || self.occurs_check(var, v)
        }
        Universe::Const(_) => false,
        Universe::Meta(_) => false, // Meta variables don't contain named variables
    }
}
}

The occurs check prevents infinite universe expressions by ensuring that universe variables don’t appear within their own solutions. This check must traverse the entire structure of universe expressions, including arithmetic operations and maximum expressions.
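To make the failure mode concrete: attempting to solve u = max(u, v) must be rejected, because substituting the solution back into itself would unfold forever. A minimal standalone demonstration (condensed `Universe`, not the book's full enum):

```rust
#[derive(Debug, Clone, PartialEq)]
enum Universe {
    Const(u32),
    Var(String),
    Max(Box<Universe>, Box<Universe>),
}

/// Does `var` occur anywhere inside `u`?
fn occurs(var: &str, u: &Universe) -> bool {
    match u {
        Universe::Var(x) => x == var,
        Universe::Max(a, b) => occurs(var, a) || occurs(var, b),
        Universe::Const(_) => false,
    }
}

fn main() {
    // Candidate solution for u: max(u, v). The occurs check catches the
    // self-reference, which would otherwise unfold to max(max(..., v), v).
    let rhs = Universe::Max(
        Box::new(Universe::Var("u".to_string())),
        Box::new(Universe::Var("v".to_string())),
    );
    assert!(occurs("u", &rhs)); // u occurs in its own candidate solution: reject
    assert!(!occurs("w", &rhs)); // an unrelated variable is fine
    println!("ok");
}
```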

Universe Expression Normalization

The universe solver includes normalization capabilities that simplify universe expressions to canonical forms:

#![allow(unused)]
fn main() {
match u {
    Universe::Add(base, n) => {
        let base_norm = self.normalize_universe_static(base);
        match base_norm {
            Universe::Const(m) => Universe::Const(m + n),
            _ => Universe::Add(Box::new(base_norm), *n),
        }
    }
    Universe::Max(u1, u2) => {
        let u1_norm = self.normalize_universe_static(u1);
        let u2_norm = self.normalize_universe_static(u2);
        match (&u1_norm, &u2_norm) {
            (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
            _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
        }
    }
    _ => u.clone(),
}
}

This normalization process:

  • Arithmetic Simplification: Combines constants in addition expressions like Const(2) + 3 becoming Const(5)
  • Maximum Computation: Evaluates maximum expressions between constants
  • Canonical Forms: Maintains normalized expressions that improve unification success
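The same folding logic can be exercised standalone. This sketch mirrors the normalization arms above on a condensed `Universe` (just `Const`, `Add`, and `Max`):

```rust
#[derive(Debug, Clone, PartialEq)]
enum Universe {
    Const(u32),
    Add(Box<Universe>, u32),
    Max(Box<Universe>, Box<Universe>),
}

fn normalize(u: &Universe) -> Universe {
    match u {
        // Fold constant additions: Const(m) + n becomes Const(m + n).
        Universe::Add(base, n) => match normalize(base) {
            Universe::Const(m) => Universe::Const(m + n),
            b => Universe::Add(Box::new(b), *n),
        },
        // Evaluate maxima of constants; keep symbolic maxima otherwise.
        Universe::Max(a, b) => match (normalize(a), normalize(b)) {
            (Universe::Const(n1), Universe::Const(n2)) => Universe::Const(n1.max(n2)),
            (a, b) => Universe::Max(Box::new(a), Box::new(b)),
        },
        _ => u.clone(),
    }
}

fn main() {
    // 2 + 3 normalizes to the constant 5
    let u = Universe::Add(Box::new(Universe::Const(2)), 3);
    assert_eq!(normalize(&u), Universe::Const(5));
    // max(1 + 1, 3) normalizes to 3
    let v = Universe::Max(
        Box::new(Universe::Add(Box::new(Universe::Const(1)), 1)),
        Box::new(Universe::Const(3)),
    );
    assert_eq!(normalize(&v), Universe::Const(3));
    println!("ok");
}
```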

Universe Polymorphic Definitions

Universe polymorphism enables definitions like:

def id.{u} (A : Sort u) (x : A) : A := x

The .{u} syntax introduces a universe parameter that can be instantiated at different levels:

-- id instantiated at Type 0 (= Sort 1), since Nat : Type 0
id_nat : Nat → Nat := id.{1} Nat

-- id instantiated at Type 1 (= Sort 2), since Type : Type 1
id_type : Type → Type := id.{2} Type

Fresh Variable Generation

#![allow(unused)]
fn main() {
/// Generate fresh universe variable
pub fn fresh_universe_var(&self, base: &str, avoid: &HashSet<String>) -> String {
    let mut counter = 0;
    loop {
        let name = if counter == 0 {
            base.to_string()
        } else {
            format!("{}{}", base, counter)
        };
        if !avoid.contains(&name) && !self.substitutions.contains_key(&name) {
            return name;
        }
        counter += 1;
    }
}
}

Fresh universe variable generation ensures that each universe abstraction gets unique variable names, preventing conflicts in complex polymorphic definitions. The algorithm:

  1. Base Name Generation: Starts with a descriptive base name
  2. Conflict Avoidance: Checks against existing variables and substitutions
  3. Counter Extension: Adds numeric suffixes when conflicts occur
  4. Uniqueness Guarantee: Ensures the returned name is unused among the avoided variables and the solver’s existing substitutions
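Running the same search loop standalone makes the suffixing behaviour visible (the avoid set here is arbitrary, chosen only for illustration):

```rust
use std::collections::HashSet;

/// Same loop as fresh_universe_var, minus the solver's substitution check.
fn fresh_var(base: &str, avoid: &HashSet<String>) -> String {
    let mut counter = 0u32;
    loop {
        let name = if counter == 0 {
            base.to_string()
        } else {
            format!("{}{}", base, counter)
        };
        if !avoid.contains(&name) {
            return name;
        }
        counter += 1;
    }
}

fn main() {
    let avoid: HashSet<String> = ["u", "u1"].iter().map(|s| s.to_string()).collect();
    assert_eq!(fresh_var("u", &avoid), "u2"); // "u" and "u1" are taken
    assert_eq!(fresh_var("v", &avoid), "v"); // base name free, returned as-is
    println!("ok");
}
```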

Substitution Management

#![allow(unused)]
fn main() {
/// Get the final substitution for a universe variable
pub fn get_substitution(&self, var: &str) -> Option<Universe> {
    self.substitutions
        .get(var)
        .map(|u| self.apply_substitution(u))
}
}

The solver provides access to resolved universe substitutions, enabling the main type checker to apply universe-level solutions throughout the type checking process.

#![allow(unused)]
fn main() {
/// Apply all substitutions to a universe level
pub fn substitute_universe(&self, u: &Universe) -> Universe {
    self.apply_substitution(u)
}
}

Universe substitution application handles the complete resolution of universe expressions, recursively applying all accumulated substitutions to produce fully resolved universe levels.

Integration with Type Checking

The universe solver integrates with the main type checking algorithm at several points:

Type Formation: When checking that types are well-formed, the universe solver ensures that universe level constraints are satisfied.

Polymorphic Instantiation: When instantiating polymorphic definitions, the universe solver generates fresh universe variables and maintains constraints between them.

Definitional Equality: When checking definitional equality between types with universe polymorphism, the universe solver ensures that universe relationships are preserved.

Constraint Satisfaction Checking

#![allow(unused)]
fn main() {
/// Check if universe level constraints are satisfiable
pub fn is_satisfiable(&self, constraints: &[UniverseConstraint]) -> bool {
    let mut solver = self.clone();
    solver.solve(constraints).is_ok()
}
}

The satisfiability checker enables the type checker to verify that universe constraint sets have solutions before committing to particular type assignments. This early checking prevents backtracking in complex type inference scenarios.

Universe Polymorphism Examples

Our implementation supports several forms of universe polymorphic definitions:

Polymorphic Data Types

structure Pair.{u, v} (A : Sort u) (B : Sort v) : Sort (max u v) :=
  (fst : A)
  (snd : B)

The Pair type is polymorphic over two universe levels, with the result type living at the maximum of the argument universe levels.

Polymorphic Functions

def const.{u, v} (A : Sort u) (B : Sort v) (x : A) (y : B) : A := x

The const function ignores its second argument and returns the first, working at any universe levels for both argument types.

Universe Level Arithmetic

def lift.{u} (A : Sort u) : Sort (u + 1) := A

The lift operation moves a type from universe u to universe u + 1, demonstrating universe level arithmetic in type expressions.

Failure modes

There are a couple of failure cases we handle with custom errors. The error messages rely on a pretty-printer that renders universe expressions in readable form:

#![allow(unused)]
fn main() {
/// Convert universe to string representation
#[allow(clippy::only_used_in_recursion)]
fn universe_to_string(&self, u: &Universe) -> String {
    match u {
        Universe::Const(n) => n.to_string(),
        Universe::ScopedVar(scope, var) => format!("{}::{}", scope, var),
        Universe::Meta(id) => format!("?u{}", id),
        Universe::Add(u, n) => format!("({} + {})", self.universe_to_string(u), n),
        Universe::Max(u, v) => format!(
            "max({}, {})",
            self.universe_to_string(u),
            self.universe_to_string(v)
        ),
        Universe::IMax(u, v) => format!(
            "imax({}, {})",
            self.universe_to_string(u),
            self.universe_to_string(v)
        ),
    }
}
}

The failures are categorized into three main types:

  • Unification Failures: Show the specific universe levels that couldn’t be unified
  • Occurs Check Violations: Identify infinite universe expressions
  • Arithmetic Inconsistencies: Point out invalid universe level arithmetic

Constraint Solving

The constraint solver forms the beating heart of our Calculus of Constructions implementation, handling the complex task of resolving unknown terms, types, and universe levels through systematic constraint propagation and unification.

This is a non-trivial piece of software that requires quite a bit of care and thought. While it is a difficult piece of code, it is still far simpler than the 100k+ LOC implementations in Coq and Lean. This was the motivating example for this project: a small CoC implementation that a sufficiently motivated (and probably caffeinated) undergraduate could read through in an afternoon.

So grab an espresso (or two or three) and buckle up!

Core Data Structures

The solver operates on several fundamental data structures that capture the essence of constraint-based type inference in dependent type systems.

Meta-Variable Management

#![allow(unused)]
fn main() {
use std::collections::{HashMap, HashSet, VecDeque};
use std::fmt;
use crate::ast::{Term, Universe};
use crate::context::Context;
use crate::errors::TypeError;
/// Meta-variable identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct MetaId(pub u64);
impl fmt::Display for MetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?m{}", self.0)
    }
}
/// Universe meta-variable identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct UniverseMetaId(pub u64);
impl fmt::Display for UniverseMetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?u{}", self.0)
    }
}
/// Constraint identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct ConstraintId(pub u64);
/// Constraint types
#[derive(Debug, Clone)]
pub enum Constraint {
    /// Unification: t1 ≡ t2
    Unify {
        id: ConstraintId,
        left: Term,
        right: Term,
        strength: ConstraintStrength,
    },
    /// Type constraint: term : type
    HasType {
        id: ConstraintId,
        term: Term,
        expected_type: Term,
    },
    /// Universe unification: u1 ≡ u2
    UnifyUniverse {
        id: ConstraintId,
        left: Universe,
        right: Universe,
        strength: ConstraintStrength,
    },
    /// Universe level constraint: u1 ≤ u2
    UniverseLevel {
        id: ConstraintId,
        left: Universe,
        right: Universe,
    },
    /// Delayed constraint (for complex patterns)
    Delayed {
        id: ConstraintId,
        constraint: Box<Constraint>,
        waiting_on: HashSet<MetaId>,
    },
}
/// Constraint strength for prioritization
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ConstraintStrength {
    Required,  // Must be solved
    Preferred, // Should be solved if possible
    Weak,      // Can be postponed
}
/// Substitution mapping meta-variables to terms
#[derive(Debug, Clone, Default)]
pub struct Substitution {
    mapping: HashMap<MetaId, Term>,
    universe_mapping: HashMap<u32, Universe>,
}
impl Substitution {
    pub fn new() -> Self {
        Self::default()
    }

    pub fn insert(&mut self, meta: MetaId, term: Term) {
        self.mapping.insert(meta, term);
    }

    pub fn get(&self, meta: &MetaId) -> Option<&Term> {
        self.mapping.get(meta)
    }

    pub fn insert_universe(&mut self, meta_id: u32, universe: Universe) {
        self.universe_mapping.insert(meta_id, universe);
    }

    pub fn get_universe(&self, meta_id: &u32) -> Option<&Universe> {
        self.universe_mapping.get(meta_id)
    }

    /// Apply substitution to a term
    pub fn apply(&self, term: &Term) -> Term {
        match term {
            Term::Meta(name) => {
                // Try to parse meta ID and look up substitution
                if let Some(meta_id) = self.parse_meta_name(name) {
                    if let Some(subst) = self.mapping.get(&meta_id) {
                        // Recursively apply substitution
                        return self.apply(subst);
                    }
                }
                term.clone()
            }
            Term::App(f, arg) => Term::App(Box::new(self.apply(f)), Box::new(self.apply(arg))),
            Term::Abs(x, ty, body) => Term::Abs(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
            ),
            Term::Pi(x, ty, body, implicit) => Term::Pi(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
                *implicit,
            ),
            Term::Let(x, ty, val, body) => Term::Let(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(val)),
                Box::new(self.apply(body)),
            ),
            Term::Sort(u) => Term::Sort(self.apply_universe(u)),
            // Other cases remain unchanged
            _ => term.clone(),
        }
    }

    /// Apply substitution to a universe
    pub fn apply_universe(&self, universe: &Universe) -> Universe {
        let result = match universe {
            Universe::Meta(id) => {
                if let Some(subst) = self.universe_mapping.get(id) {
                    // Recursively apply substitution
                    self.apply_universe(subst)
                } else {
                    universe.clone()
                }
            }
            Universe::Add(base, n) => Universe::Add(Box::new(self.apply_universe(base)), *n),
            Universe::Max(u, v) => Universe::Max(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            Universe::IMax(u, v) => Universe::IMax(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            // Constants, variables, and scoped variables remain unchanged during substitution
            Universe::Const(_) | Universe::ScopedVar(_, _) => universe.clone(),
        };
        // Normalize the result to simplify expressions like Const(0) + 1 -> Const(1)
        self.normalize_universe_static(&result)
    }

    /// Static normalization (doesn't require &self)
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe_static(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe_static(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe_static(u1);
                let u2_norm = self.normalize_universe_static(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }
}
/// Advanced constraint solver with dependency tracking
pub struct Solver {
    /// Meta-variables
    metas: HashMap<MetaId, MetaInfo>,
    /// Active constraints
    constraints: HashMap<ConstraintId, Constraint>,
    /// Constraint queue (ordered by priority)
    queue: VecDeque<ConstraintId>,
    /// Solved substitutions
    substitution: Substitution,
    /// Next meta ID
    next_meta_id: u64,
    /// Next universe meta ID
    next_universe_meta_id: u32,
    /// Next constraint ID
    next_constraint_id: u64,
    /// Type checking context
    context: Context,
    /// Dependency graph: meta -> constraints that depend on it
    dependencies: HashMap<MetaId, HashSet<ConstraintId>>,
    /// Delayed constraints that couldn't be solved immediately
    delayed_constraints: HashSet<ConstraintId>,
    /// Recursion depth counter to prevent stack overflow
    recursion_depth: u32,
    /// Flag to enable Miller pattern solving (advanced)
    enable_miller_patterns: bool,
    /// Flag to enable HasType constraint solving (advanced)
    enable_has_type_solving: bool,
}
impl Solver {
    pub fn new(context: Context) -> Self {
        Self {
            metas: HashMap::new(),
            constraints: HashMap::new(),
            queue: VecDeque::new(),
            substitution: Substitution::new(),
            next_meta_id: 0,
            next_universe_meta_id: 0,
            next_constraint_id: 0,
            context,
            dependencies: HashMap::new(),
            delayed_constraints: HashSet::new(),
            recursion_depth: 0,
            enable_miller_patterns: false,  // Advanced feature
            enable_has_type_solving: false, // Advanced feature
        }
    }

    /// Enable Miller pattern solving (advanced)
    pub fn enable_miller_patterns(&mut self) {
        self.enable_miller_patterns = true;
    }

    /// Check if Miller pattern solving is enabled
    pub fn miller_patterns_enabled(&self) -> bool {
        self.enable_miller_patterns
    }

    /// Enable HasType constraint solving (advanced)
    pub fn enable_has_type_solving(&mut self) {
        self.enable_has_type_solving = true;
    }

    /// Check if HasType constraint solving is enabled
    pub fn has_type_solving_enabled(&self) -> bool {
        self.enable_has_type_solving
    }

    /// Create a fresh meta-variable
    pub fn fresh_meta(&mut self, ctx_vars: Vec<String>, expected_type: Option<Term>) -> MetaId {
        let id = MetaId(self.next_meta_id);
        self.next_meta_id += 1;

        let info = MetaInfo {
            id,
            context: ctx_vars,
            expected_type,
            solution: None,
            dependencies: HashSet::new(),
            occurrences: Vec::new(),
        };

        self.metas.insert(id, info);
        id
    }

    /// Create a fresh universe meta-variable
    pub fn fresh_universe_meta(&mut self) -> Universe {
        let id = self.next_universe_meta_id;
        self.next_universe_meta_id += 1;
        Universe::Meta(id)
    }

    /// Add a constraint to the solver
    pub fn add_constraint(&mut self, constraint: Constraint) -> ConstraintId {
        let id = match &constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        // Track which metas appear in this constraint
        self.track_meta_occurrences(&constraint);

        self.constraints.insert(id, constraint);
        self.queue.push_back(id);
        id
    }

    /// Create a new constraint ID
    pub fn new_constraint_id(&mut self) -> ConstraintId {
        let id = ConstraintId(self.next_constraint_id);
        self.next_constraint_id += 1;
        id
    }

    /// Track meta-variable occurrences in constraints
    fn track_meta_occurrences(&mut self, constraint: &Constraint) {
        let metas = self.collect_metas_in_constraint(constraint);
        let constraint_id = match constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        for meta_id in metas {
            if let Some(info) = self.metas.get_mut(&meta_id) {
                info.occurrences.push(constraint_id);
            }
            self.dependencies
                .entry(meta_id)
                .or_default()
                .insert(constraint_id);
        }
    }

    /// Collect all meta-variables in a constraint
    fn collect_metas_in_constraint(&self, constraint: &Constraint) -> HashSet<MetaId> {
        match constraint {
            Constraint::Unify { left, right, .. } => {
                let mut metas = self.collect_metas_in_term(left);
                metas.extend(self.collect_metas_in_term(right));
                metas
            }
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                let mut metas = self.collect_metas_in_term(term);
                metas.extend(self.collect_metas_in_term(expected_type));
                metas
            }
            Constraint::UnifyUniverse { .. } => {
                // Universe constraints don't directly contain term meta-variables
                // but they might contain universe meta-variables (which we track separately)
                HashSet::new()
            }
            Constraint::UniverseLevel { .. } => {
                // Same as above
                HashSet::new()
            }
            Constraint::Delayed {
                constraint,
                waiting_on,
                ..
            } => {
                let mut metas = waiting_on.clone();
                metas.extend(self.collect_metas_in_constraint(constraint));
                metas
            }
        }
    }

    /// Collect all meta-variables in a term
    fn collect_metas_in_term(&self, term: &Term) -> HashSet<MetaId> {
        let mut metas = HashSet::new();
        self.collect_metas_recursive(term, &mut metas);
        metas
    }

    fn collect_metas_recursive(&self, term: &Term, metas: &mut HashSet<MetaId>) {
        match term {
            Term::Meta(name) => {
                if let Some(id) = self.parse_meta_name(name) {
                    metas.insert(id);
                }
            }
            Term::App(f, arg) => {
                self.collect_metas_recursive(f, metas);
                self.collect_metas_recursive(arg, metas);
            }
            Term::Abs(_, ty, body) | Term::Pi(_, ty, body, _) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(body, metas);
            }
            Term::Let(_, ty, val, body) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(val, metas);
                self.collect_metas_recursive(body, metas);
            }
            _ => {}
        }
    }

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }

    /// Main solving loop
    pub fn solve(&mut self) -> Result<Substitution, TypeError> {
        let max_iterations = 1000;
        let mut iterations = 0;

        while !self.queue.is_empty() && iterations < max_iterations {
            iterations += 1;

            // Pick the best constraint to solve
            if let Some(constraint_id) = self.pick_constraint() {
                let constraint = self.constraints.remove(&constraint_id).unwrap();

                match self.solve_constraint(constraint.clone()) {
                    Ok(progress) => {
                        if progress {
                            // Propagate the solution
                            self.propagate_solution()?;

                            // Wake up delayed constraints that might now be solvable
                            let woken_constraints = self.wake_delayed_constraints()?;

                            // Add woken constraints back to queue with high priority
                            for woken_id in woken_constraints {
                                self.queue.push_front(woken_id);
                            }
                        }
                    }
                    Err(e) => {
                        // Try to delay the constraint if possible
                        if self.can_delay(&constraint_id) {
                            // Determine which meta-variables to wait for
                            let waiting_on = match &constraint {
                                Constraint::Unify { left, right, .. } => {
                                    let mut metas = self.collect_metas_in_term(left);
                                    metas.extend(self.collect_metas_in_term(right));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                Constraint::HasType {
                                    term,
                                    expected_type,
                                    ..
                                } => {
                                    let mut metas = self.collect_metas_in_term(term);
                                    metas.extend(self.collect_metas_in_term(expected_type));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                _ => HashSet::new(),
                            };

                            if !waiting_on.is_empty() {
                                // Put constraint back and delay it
                                self.constraints.insert(constraint_id, constraint);
                                self.delay_constraint(constraint_id, waiting_on)?;
                            } else {
                                // Can't delay, return error
                                return Err(e);
                            }
                        } else {
                            return Err(e);
                        }
                    }
                }
            }
        }

        if iterations >= max_iterations {
            return Err(TypeError::Internal {
                message: "Constraint solving did not converge".to_string(),
            });
        }

        // Check for unsolved required constraints
        for constraint in self.constraints.values() {
            if let Constraint::Unify {
                strength: ConstraintStrength::Required,
                ..
            } = constraint
            {
                return Err(TypeError::Internal {
                    message: "Unsolved required constraints remain".to_string(),
                });
            }
        }

        Ok(self.substitution.clone())
    }

    /// Pick the next constraint to solve based on heuristics
    fn pick_constraint(&mut self) -> Option<ConstraintId> {
        // Priority order:
        // 1. Constraints with no meta-variables
        // 2. Constraints with only solved meta-variables
        // 3. Simple pattern constraints (Miller patterns)
        // 4. Other constraints

        let mut best_constraint = None;
        let mut best_score = i32::MAX;

        for &id in &self.queue {
            if let Some(constraint) = self.constraints.get(&id) {
                let score = self.score_constraint(constraint);
                if score < best_score {
                    best_score = score;
                    best_constraint = Some(id);
                }
            }
        }

        if let Some(id) = best_constraint {
            self.queue.retain(|&x| x != id);
            Some(id)
        } else {
            self.queue.pop_front()
        }
    }

    /// Score a constraint for solving priority (lower is better)
    fn score_constraint(&self, constraint: &Constraint) -> i32 {
        let metas = self.collect_metas_in_constraint(constraint);

        if metas.is_empty() {
            return 0; // No metas, can solve immediately
        }

        let unsolved_count = metas
            .iter()
            .filter(|m| {
                self.metas
                    .get(m)
                    .and_then(|info| info.solution.as_ref())
                    .is_none()
            })
            .count();

        if unsolved_count == 0 {
            return 1; // All metas solved
        }

        // Check for Miller patterns
        if let Constraint::Unify { left, right, .. } = constraint {
            if self.is_miller_pattern_unification(left, right)
                || self.is_miller_pattern_unification(right, left)
            {
                return 10 + unsolved_count as i32;
            }
        }

        100 + unsolved_count as i32
    }

    /// Check if a unification is a Miller pattern (?M x1 ... xn = t)
    fn is_miller_pattern_unification(&self, left: &Term, right: &Term) -> bool {
        self.is_simple_miller_pattern(left, right)
    }

    /// Check if left is a simple Miller pattern and right is safe to solve with
    pub fn is_simple_miller_pattern(&self, left: &Term, right: &Term) -> bool {
        let (head, args) = self.decompose_application(left);

        if let Term::Meta(meta_name) = head {
            // Must have at least one argument to be interesting
            if args.is_empty() {
                return false;
            }

            // Check if all arguments are distinct variables
            let mut seen = HashSet::new();
            for arg in &args {
                if let Term::Var(x) = arg {
                    if !seen.insert(x.clone()) {
                        return false; // Not distinct
                    }
                } else {
                    return false; // Not a variable
                }
            }

            // Check occurs check - right must not contain the meta-variable
            if let Some(meta_id) = self.parse_meta_name(meta_name) {
                if self.collect_metas_in_term(right).contains(&meta_id) {
                    return false; // Occurs check failure
                }
            }

            // For now, only handle simple cases where right is a variable or constant
            match right {
                Term::Var(_) | Term::Const(_) | Term::Sort(_) => true,
                Term::Meta(_) => {
                    // Allow meta-to-meta if they're different
                    if let Term::Meta(right_name) = right {
                        meta_name != right_name
                    } else {
                        false
                    }
                }
                _ => false, // More complex cases need careful handling
            }
        } else {
            false
        }
    }

    /// Decompose an application into head and arguments
    fn decompose_application<'a>(&self, term: &'a Term) -> (&'a Term, Vec<&'a Term>) {
        let mut current = term;
        let mut args = Vec::new();

        while let Term::App(f, arg) = current {
            args.push(arg.as_ref());
            current = f;
        }

        args.reverse();
        (current, args)
    }

    /// Solve a single constraint
    fn solve_constraint(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 100;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in solve_constraint",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.solve_constraint_impl(constraint);
        self.recursion_depth -= 1;
        result
    }

    fn solve_constraint_impl(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        match constraint {
            Constraint::Unify {
                left,
                right,
                strength,
                ..
            } => self.unify(&left, &right, strength),
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                if self.enable_has_type_solving {
                    self.solve_has_type_constraint(&term, &expected_type)
                } else {
                    // Default behavior: don't solve HasType constraints
                    Ok(false)
                }
            }
            Constraint::UnifyUniverse {
                left,
                right,
                strength,
                ..
            } => self.unify_universe(&left, &right, strength),
            Constraint::UniverseLevel { left, right, .. } => {
                // For now, treat level constraints as unification constraints
                self.unify_universe(&left, &right, ConstraintStrength::Preferred)
            }
            Constraint::Delayed { constraint, .. } => {
                // Re-queue the delayed constraint
                let inner = *constraint;
                self.add_constraint(inner);
                Ok(false)
            }
        }
    }

    /// Unify two terms
    fn unify(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 50;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in unify",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.unify_impl(left, right, strength);
        self.recursion_depth -= 1;
        result
    }

    fn unify_impl(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply(left);
        let right = self.substitution.apply(right);

        // Check if already equal
        if self.alpha_equal(&left, &right) {
            return Ok(false);
        }

        // Miller pattern detection and solving (if enabled)
        if self.enable_miller_patterns {
            if self.is_simple_miller_pattern(&left, &right) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&left) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &right);
                }
            } else if self.is_simple_miller_pattern(&right, &left) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&right) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &left);
                }
            }
        }

        match (&left, &right) {
            // Meta-variable cases
            (Term::Meta(m), t) | (t, Term::Meta(m)) => {
                if let Some(meta_id) = self.parse_meta_name(m) {
                    self.solve_meta(meta_id, t)
                } else {
                    Err(TypeError::Internal {
                        message: format!("Invalid meta-variable: {}", m),
                    })
                }
            }

            // Structural cases
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *f1.clone(),
                    right: *f2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: *a1.clone(),
                    right: *a2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                // Rename bound variables to be consistent
                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            // Universe (Sort) cases
            (Term::Sort(u1), Term::Sort(u2)) => self.unify_universe(u1, u2, strength),

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::TypeMismatch {
                        expected: left,
                        actual: right,
                    })
                } else {
                    Ok(false) // Can't solve, but not an error for weak
                              // constraints
                }
            }
        }
    }

    /// Unify two universes
    fn unify_universe(
        &mut self,
        left: &Universe,
        right: &Universe,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply_universe(left);
        let right = self.substitution.apply_universe(right);

        // Normalize universes (simplify expressions like 0+1 -> 1)
        let left = self.normalize_universe(&left);
        let right = self.normalize_universe(&right);

        // Check if already equal
        if left == right {
            return Ok(false);
        }

        match (&left, &right) {
            // Same constants
            (Universe::Const(n1), Universe::Const(n2)) if n1 == n2 => Ok(false),

            // Universe variable cases
            (Universe::ScopedVar(s1, v1), Universe::ScopedVar(s2, v2)) if s1 == s2 && v1 == v2 => {
                Ok(false)
            }
            (Universe::ScopedVar(_, _), Universe::ScopedVar(_, _)) => {
                // Different universe variables cannot be unified
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!(
                            "Cannot unify distinct universe variables {} and {}",
                            left, right
                        ),
                    })
                } else {
                    Ok(false)
                }
            }

            // Meta-variable cases
            (Universe::Meta(id), u) | (u, Universe::Meta(id)) => self.solve_universe_meta(*id, u),

            // Structural cases
            (Universe::Add(base1, n1), Universe::Add(base2, n2)) if n1 == n2 => {
                let constraint_id = self.new_constraint_id();
                self.add_constraint(Constraint::UnifyUniverse {
                    id: constraint_id,
                    left: *base1.clone(),
                    right: *base2.clone(),
                    strength,
                });
                Ok(true)
            }

            // Arithmetic constraints: ?u+n = m means ?u = m-n
            // (e.g. ?u+1 ≡ 3 solves ?u := 2, while ?u+3 ≡ 1 is unsolvable,
            // since universe levels cannot be negative)
            (Universe::Add(base, n), Universe::Const(m))
            | (Universe::Const(m), Universe::Add(base, n)) => {
                if *m >= *n {
                    if let Universe::Meta(id) = base.as_ref() {
                        self.solve_universe_meta(*id, &Universe::Const(m - n))
                    } else {
                        // More complex case - generate constraint for base
                        let constraint_id = self.new_constraint_id();
                        self.add_constraint(Constraint::UnifyUniverse {
                            id: constraint_id,
                            left: *base.clone(),
                            right: Universe::Const(m - n),
                            strength,
                        });
                        Ok(true)
                    }
                } else {
                    Err(TypeError::Internal {
                        message: format!("Cannot solve constraint: {} cannot equal {} (would require negative universe)",
                                       if matches!(&left, Universe::Add(_, _)) { &left } else { &right },
                                       if matches!(&left, Universe::Const(_)) { &left } else { &right }),
                    })
                }
            }

            (Universe::Max(u1, v1), Universe::Max(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Universe::IMax(u1, v1), Universe::IMax(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!("Cannot unify universes {} and {}", left, right),
                    })
                } else {
                    Ok(false)
                }
            }
        }
    }

    /// Normalize a universe expression by constant-folding `Add` and `Max`,
    /// e.g. `0+1 -> 1` and `max(1, 2) -> 2`
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe(u1);
                let u2_norm = self.normalize_universe(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }

    /// Solve a universe meta-variable
    fn solve_universe_meta(
        &mut self,
        meta_id: u32,
        solution: &Universe,
    ) -> Result<bool, TypeError> {
        // Occurs check for universe meta-variables
        if self.occurs_in_universe(meta_id, solution) {
            return Err(TypeError::Internal {
                message: format!("Universe occurs check failed for ?u{}", meta_id),
            });
        }

        // Record solution
        self.substitution.insert_universe(meta_id, solution.clone());
        Ok(true)
    }

    /// Check if a universe meta-variable occurs in a universe expression
    #[allow(clippy::only_used_in_recursion)]
    fn occurs_in_universe(&self, meta_id: u32, universe: &Universe) -> bool {
        match universe {
            Universe::Meta(id) => *id == meta_id,
            Universe::Add(base, _) => self.occurs_in_universe(meta_id, base),
            Universe::Max(u, v) | Universe::IMax(u, v) => {
                self.occurs_in_universe(meta_id, u) || self.occurs_in_universe(meta_id, v)
            }
            _ => false,
        }
    }

    /// Solve a meta-variable
    fn solve_meta(&mut self, meta_id: MetaId, solution: &Term) -> Result<bool, TypeError> {
        // Occurs check
        if self.collect_metas_in_term(solution).contains(&meta_id) {
            return Err(TypeError::Internal {
                message: format!("Occurs check failed for ?m{}", meta_id.0),
            });
        }

        // Check solution is well-scoped
        if let Some(meta_info) = self.metas.get(&meta_id) {
            if !self.is_well_scoped(solution, &meta_info.context) {
                return Err(TypeError::Internal {
                    message: format!("Solution for ?m{} is not well-scoped", meta_id.0),
                });
            }
        }

        // Record solution
        self.substitution.insert(meta_id, solution.clone());

        if let Some(info) = self.metas.get_mut(&meta_id) {
            info.solution = Some(solution.clone());
        }

        Ok(true)
    }

    /// Check if a term is well-scoped in a context
    fn is_well_scoped(&self, term: &Term, context: &[String]) -> bool {
        match term {
            Term::Var(x) => {
                // Check if it's a bound variable or a global constant
                context.contains(x)
                    || self.context.lookup(x).is_some()
                    || self.context.lookup_axiom(x).is_some()
                    || self.context.lookup_constructor(x).is_some()
            }
            Term::App(f, arg) => {
                self.is_well_scoped(f, context) && self.is_well_scoped(arg, context)
            }
            Term::Abs(x, ty, body) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Pi(x, ty, body, _) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Const(name) => {
                // Constants should be in the global context
                self.context.lookup_axiom(name).is_some()
                    || self.context.lookup_constructor(name).is_some()
            }
            _ => true, // Sorts, meta-variables, etc.
        }
    }

    /// Alpha equality check
    fn alpha_equal(&self, t1: &Term, t2: &Term) -> bool {
        self.alpha_equal_aux(t1, t2, &mut vec![])
    }

    #[allow(clippy::only_used_in_recursion)]
    fn alpha_equal_aux(&self, t1: &Term, t2: &Term, renamings: &mut Vec<(String, String)>) -> bool {
        match (t1, t2) {
            (Term::Var(x), Term::Var(y)) => {
                // Check if this is a bound variable that was renamed
                for (a, b) in renamings.iter() {
                    if a == x && b == y {
                        return true;
                    }
                }
                x == y
            }
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                self.alpha_equal_aux(f1, f2, renamings) && self.alpha_equal_aux(a1, a2, renamings)
            }
            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Sort(u1), Term::Sort(u2)) => u1 == u2,
            (Term::Const(c1), Term::Const(c2)) => c1 == c2,
            (Term::Meta(m1), Term::Meta(m2)) => m1 == m2,
            _ => false,
        }
    }

    /// Rename a variable in a term
    #[allow(clippy::only_used_in_recursion)]
    fn rename_var(&self, old: &str, new: &str, term: &Term) -> Term {
        match term {
            Term::Var(x) if x == old => Term::Var(new.to_string()),
            Term::Var(x) => Term::Var(x.clone()),
            Term::App(f, arg) => Term::App(
                Box::new(self.rename_var(old, new, f)),
                Box::new(self.rename_var(old, new, arg)),
            ),
            Term::Abs(x, ty, body) if x != old => Term::Abs(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                Box::new(self.rename_var(old, new, body)),
            ),
            Term::Pi(x, ty, body, implicit) if x != old => Term::Pi(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                Box::new(self.rename_var(old, new, body)),
                *implicit,
            ),
            _ => term.clone(),
        }
    }

    /// Propagate solutions to dependent constraints
    fn propagate_solution(&mut self) -> Result<(), TypeError> {
        // Apply substitution to all remaining constraints
        let constraints: Vec<_> = self.constraints.drain().collect();

        for (id, constraint) in constraints {
            let updated = self.apply_subst_to_constraint(constraint);
            self.constraints.insert(id, updated);
        }

        Ok(())
    }

    /// Apply substitution to a constraint
    fn apply_subst_to_constraint(&self, constraint: Constraint) -> Constraint {
        match constraint {
            Constraint::Unify {
                id,
                left,
                right,
                strength,
            } => Constraint::Unify {
                id,
                left: self.substitution.apply(&left),
                right: self.substitution.apply(&right),
                strength,
            },
            Constraint::HasType {
                id,
                term,
                expected_type,
            } => Constraint::HasType {
                id,
                term: self.substitution.apply(&term),
                expected_type: self.substitution.apply(&expected_type),
            },
            Constraint::UnifyUniverse {
                id,
                left,
                right,
                strength,
            } => Constraint::UnifyUniverse {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
                strength,
            },
            Constraint::UniverseLevel { id, left, right } => Constraint::UniverseLevel {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
            },
            Constraint::Delayed {
                id,
                constraint,
                waiting_on,
            } => Constraint::Delayed {
                id,
                constraint: Box::new(self.apply_subst_to_constraint(*constraint)),
                waiting_on,
            },
        }
    }

    /// Solve a HasType constraint by type inference and unification
    fn solve_has_type_constraint(
        &mut self,
        term: &Term,
        expected_type: &Term,
    ) -> Result<bool, TypeError> {
        match term {
            Term::Meta(meta_name) => {
                // If the term is a meta-variable, we can potentially solve it
                if let Some(meta_id) = self.parse_meta_name(meta_name) {
                    if let Some(meta_info) = self.metas.get(&meta_id) {
                        if meta_info.solution.is_none() {
                            // Try to construct a term that has the expected type
                            if let Some(solution) = self.synthesize_term_of_type(expected_type)? {
                                return self.solve_meta(meta_id, &solution);
                            }
                        }
                    }
                }
                // If we can't solve the meta, try unifying with expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }

            Term::App(f, arg) => {
                // For applications, we need to ensure the function type is compatible
                // Create a fresh meta for the function type: ?F : ?A -> ?B
                let arg_type_meta_id = self.fresh_meta(vec![], None);
                let result_type_meta_id = self.fresh_meta(vec![], None);
                let arg_type_meta = Term::Meta(format!("?m{}", arg_type_meta_id.0));
                let result_type_meta = Term::Meta(format!("?m{}", result_type_meta_id.0));
                let func_type = Term::Pi(
                    "_".to_string(),
                    Box::new(arg_type_meta.clone()),
                    Box::new(result_type_meta.clone()),
                    false,
                );

                // Add constraints:
                // 1. f : ?A -> ?B
                // 2. arg : ?A
                // 3. ?B ≡ expected_type
                let f_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: f.as_ref().clone(),
                    expected_type: func_type,
                };
                let arg_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: arg.as_ref().clone(),
                    expected_type: arg_type_meta,
                };
                let result_constraint = Constraint::Unify {
                    id: self.new_constraint_id(),
                    left: result_type_meta,
                    right: expected_type.clone(),
                    strength: ConstraintStrength::Required,
                };

                self.add_constraint(f_constraint);
                self.add_constraint(arg_constraint);
                self.add_constraint(result_constraint);

                Ok(true)
            }

            Term::Abs(param, param_type, body) => {
                // For lambda abstractions, expected type should be a Pi type
                match expected_type {
                    Term::Pi(_, expected_param_type, expected_body_type, _) => {
                        // Unify parameter types and check body type
                        let param_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: param_type.as_ref().clone(),
                            right: expected_param_type.as_ref().clone(),
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: expected_body_type.as_ref().clone(),
                        };

                        self.add_constraint(param_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    Term::Meta(_) => {
                        // Expected type is a meta-variable, create a Pi type for it
                        let body_type_meta_id = self.fresh_meta(vec![param.clone()], None);
                        let body_type_meta = Term::Meta(format!("?m{}", body_type_meta_id.0));
                        let pi_type = Term::Pi(
                            param.clone(),
                            param_type.clone(),
                            Box::new(body_type_meta.clone()),
                            false,
                        );

                        let type_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: expected_type.clone(),
                            right: pi_type,
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: body_type_meta,
                        };

                        self.add_constraint(type_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    _ => {
                        // Type mismatch - lambda can't have non-function type
                        Err(TypeError::TypeMismatch {
                            expected: expected_type.clone(),
                            actual: Term::Pi(
                                param.clone(),
                                param_type.clone(),
                                Box::new(Term::Meta("?unknown".to_string())),
                                false,
                            ),
                        })
                    }
                }
            }

            _ => {
                // For other terms, just unify with the expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }
        }
    }

    /// Attempt to synthesize a term of a given type (basic heuristics)
    fn synthesize_term_of_type(&self, expected_type: &Term) -> Result<Option<Term>, TypeError> {
        match expected_type {
            Term::Sort(_) => {
                // For Sort types, we could return a fresh meta or a simple type
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
            Term::Pi(param, param_type, _body_type, _implicit) => {
                // For Pi types, synthesize a lambda that matches the Pi structure
                // Use a fresh meta-variable for the body to avoid infinite recursion
                let body_term = Term::Meta(format!("?body{}", self.next_meta_id));

                Ok(Some(Term::Abs(
                    param.clone(),
                    param_type.clone(),
                    Box::new(body_term),
                )))
            }
            Term::Meta(_) => {
                // Can't synthesize for unknown types
                Ok(None)
            }
            _ => {
                // For concrete types, return a meta-variable
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
        }
    }

    /// Check if a term is a Miller pattern
    /// Miller patterns are terms of the form: ?M x1 x2 ... xn where the xi are
    /// distinct bound variables. For example, `?M x y` is a pattern, while
    /// `?M x x` (repeated variable) and `?M (f x)` (non-variable argument)
    /// are not.
    fn is_miller_pattern(&self, term: &Term) -> bool {
        let (head, args) = self.decompose_application(term);

        match head {
            Term::Meta(_) => {
                // Check that all arguments are distinct variables
                let mut seen = HashSet::new();
                for arg in args {
                    if let Term::Var(x) = arg {
                        if !seen.insert(x.clone()) {
                            return false; // Not distinct
                        }
                    } else {
                        return false; // Not a variable
                    }
                }
                true
            }
            _ => false,
        }
    }

    /// Extract the spine of a Miller pattern (the meta-variable and its
    /// arguments)
    fn extract_miller_spine(&self, term: &Term) -> Option<(String, Vec<String>)> {
        // First validate it's actually a Miller pattern
        if !self.is_miller_pattern(term) {
            return None;
        }

        let (head, args) = self.decompose_application(term);

        if let Term::Meta(meta_name) = head {
            // Collect variable names (we already know they're all variables from
            // is_miller_pattern)
            let var_names: Vec<String> = args
                .iter()
                .map(|arg| {
                    if let Term::Var(name) = arg {
                        name.clone()
                    } else {
                        unreachable!()
                    }
                })
                .collect();

            Some((meta_name.clone(), var_names))
        } else {
            None
        }
    }

    /// Solve simple Miller pattern unification
    /// Only handles simple cases
    fn solve_miller_pattern(
        &mut self,
        meta_name: &str,
        spine: &[String],
        other: &Term,
    ) -> Result<bool, TypeError> {
        // Check if we can abstract over the spine variables
        if !Self::can_abstract_over_variables(other, spine) {
            // Can't abstract, fall back to regular unification
            return self.unify(
                &Term::Meta(meta_name.to_string()),
                other,
                ConstraintStrength::Required,
            );
        }

        if let Some(meta_id) = self.parse_meta_name(meta_name) {
            // Only handle simple cases to avoid complexity
            match other {
                // Case 1: ?M x ≡ y (different variable) -> ?M := λx. y
                Term::Var(y) if spine.len() == 1 && spine[0] != *y => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))), /* Type inferred later */
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                // Case 2: ?M x ≡ c (constant) -> ?M := λx. c
                Term::Const(_) | Term::Sort(_) if spine.len() == 1 => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))),
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                _ => {
                    // For more complex cases, we could build the full lambda
                    // abstraction but for now, fall back to
                    // regular unification
                }
            }
        }

        // Fall back to regular unification
        self.unify(
            &Term::Meta(meta_name.to_string()),
            other,
            ConstraintStrength::Required,
        )
    }

    /// Check if a term can be abstracted over the given variables
    fn can_abstract_over_variables(term: &Term, vars: &[String]) -> bool {
        // For now, implement a simple check
        // In a full implementation, this would check that the term only uses variables
        // that are either in the spine or are bound locally
        match term {
            Term::Var(v) => vars.contains(v), // Variable must be in spine
            Term::App(f, arg) => {
                Self::can_abstract_over_variables(f, vars)
                    && Self::can_abstract_over_variables(arg, vars)
            }
            Term::Abs(param, ty, body) => {
                // Extend the scope with the bound parameter
                let mut extended_vars = vars.to_vec();
                extended_vars.push(param.clone());
                Self::can_abstract_over_variables(ty, vars)
                    && Self::can_abstract_over_variables(body, &extended_vars)
            }
            Term::Meta(_) => true,  // Meta-variables are always abstractable
            Term::Const(_) => true, // Constants are always abstractable
            Term::Sort(_) => true,  // Sorts are always abstractable
            _ => false,             // Conservative for complex constructs
        }
    }

    /// Delay a constraint for later processing
    fn delay_constraint(
        &mut self,
        constraint_id: ConstraintId,
        waiting_on: HashSet<MetaId>,
    ) -> Result<(), TypeError> {
        if let Some(constraint) = self.constraints.remove(&constraint_id) {
            let delayed = Constraint::Delayed {
                id: constraint_id,
                constraint: Box::new(constraint),
                waiting_on,
            };
            self.constraints.insert(constraint_id, delayed);
            self.delayed_constraints.insert(constraint_id);
        }
        Ok(())
    }

    /// Wake up delayed constraints that are no longer blocked
    fn wake_delayed_constraints(&mut self) -> Result<Vec<ConstraintId>, TypeError> {
        let mut woken = Vec::new();
        let mut to_wake = Vec::new();

        // Check which delayed constraints can now be woken up
        for constraint_id in &self.delayed_constraints {
            if let Some(Constraint::Delayed {
                waiting_on,
                constraint,
                ..
            }) = self.constraints.get(constraint_id)
            {
                // Check if all blocking metas are now solved
                let all_solved = waiting_on.iter().all(|meta_id| {
                    self.metas
                        .get(meta_id)
                        .is_some_and(|info| info.solution.is_some())
                });

                if all_solved {
                    to_wake.push((*constraint_id, constraint.as_ref().clone()));
                }
            }
        }

        // Wake up the constraints
        for (constraint_id, original_constraint) in to_wake {
            self.constraints.insert(constraint_id, original_constraint);
            self.delayed_constraints.remove(&constraint_id);
            woken.push(constraint_id);
        }

        Ok(woken)
    }

    /// Check if we can delay a constraint
    fn can_delay(&self, constraint_id: &ConstraintId) -> bool {
        if let Some(constraint) = self.constraints.get(constraint_id) {
            match constraint {
                Constraint::Unify {
                    left,
                    right,
                    strength,
                    ..
                } => {
                    // We can delay weak constraints that involve unsolved metas
                    if *strength == ConstraintStrength::Weak {
                        let left_metas = self.collect_metas_in_term(left);
                        let right_metas = self.collect_metas_in_term(right);

                        // Check if any involved metas are unsolved
                        for meta_id in left_metas.iter().chain(right_metas.iter()) {
                            if self.is_unsolved_meta(meta_id) {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::HasType {
                    term,
                    expected_type,
                    ..
                } => {
                    // Can delay HasType constraints involving unsolved metas
                    let term_metas = self.collect_metas_in_term(term);
                    let type_metas = self.collect_metas_in_term(expected_type);

                    for meta_id in term_metas.iter().chain(type_metas.iter()) {
                        if let Some(meta_info) = self.metas.get(meta_id) {
                            if meta_info.solution.is_none() {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::Delayed { .. } => true, // Already delayed
                _ => false,
            }
        } else {
            false
        }
    }

    /// Check if a meta-variable is unsolved
    fn is_unsolved_meta(&self, meta_id: &MetaId) -> bool {
        self.metas
            .get(meta_id)
            .is_some_and(|info| info.solution.is_none())
    }
}
/// Information about a meta-variable
#[derive(Debug, Clone)]
pub struct MetaInfo {
    pub id: MetaId,
    pub context: Vec<String>, // Variables in scope when meta was created
    pub expected_type: Option<Term>, // Expected type of the meta-variable
    pub solution: Option<Term>, // Solution if solved
    pub dependencies: HashSet<MetaId>, // Meta-variables this depends on
    pub occurrences: Vec<ConstraintId>, // Constraints this meta appears in
}
}
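The Miller-pattern machinery above has a pleasantly small core: a pattern `?M x1 ... xn` whose arguments are distinct variables has a unique most-general solution against a right-hand side whose free variables lie in the spine, namely `?M := λx1. ... λxn. rhs`. A minimal standalone sketch of that idea, using its own toy `Tm` type and helper names (not the book's `Term` or solver API):

```rust
use std::collections::HashSet;

// Toy term type for illustration only: no type annotations, no Pi types.
#[derive(Debug, Clone, PartialEq)]
enum Tm {
    Var(String),
    Const(String),
    Abs(String, Box<Tm>),
}

/// The Miller condition: every spine entry is a variable, and all are distinct.
fn is_miller_spine(args: &[Tm]) -> bool {
    let mut seen = HashSet::new();
    args.iter()
        .all(|a| matches!(a, Tm::Var(x) if seen.insert(x.clone())))
}

/// Solve `?M x1 ... xn ≡ rhs` by abstracting the spine over `rhs`,
/// folding from the innermost binder outward.
fn solve(spine: &[Tm], rhs: &Tm) -> Option<Tm> {
    if !is_miller_spine(spine) {
        return None; // not a Miller pattern: no unique solution guaranteed
    }
    let mut sol = rhs.clone();
    for v in spine.iter().rev() {
        if let Tm::Var(x) = v {
            sol = Tm::Abs(x.clone(), Box::new(sol));
        }
    }
    Some(sol)
}

fn main() {
    // ?M x y ≡ c  solves to  ?M := λx. λy. c
    let spine = vec![Tm::Var("x".into()), Tm::Var("y".into())];
    let sol = solve(&spine, &Tm::Const("c".into())).unwrap();
    assert_eq!(
        sol,
        Tm::Abs(
            "x".into(),
            Box::new(Tm::Abs("y".into(), Box::new(Tm::Const("c".into()))))
        )
    );
    // ?M x x is not a Miller pattern: the spine variables are not distinct.
    let bad = vec![Tm::Var("x".into()), Tm::Var("x".into())];
    assert!(solve(&bad, &Tm::Const("c".into())).is_none());
}
```

The real solver above only handles the one-variable cases of this scheme and falls back to regular unification otherwise, but the shape of the solution is the same.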

Meta-variables represent unknown terms that must be resolved through constraint solving. Our implementation maintains comprehensive metadata for each meta-variable, enabling dependency tracking and solution propagation:
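The essential discipline this metadata supports is: record each solution in a substitution, reject cyclic solutions with an occurs check (`?m := f ?m` would make applying the substitution loop forever), and chase solved metas recursively when applying. A toy, self-contained sketch of that loop, with its own mini `Tm` type and hypothetical `solve_meta`/`apply` helpers rather than the book's:

```rust
use std::collections::HashMap;

// Toy term type for illustration only.
#[derive(Debug, Clone, PartialEq)]
enum Tm {
    Const(String),
    App(Box<Tm>, Box<Tm>),
    Meta(u32),
}

/// Does meta-variable `meta` occur anywhere in `t`?
fn occurs(meta: u32, t: &Tm) -> bool {
    match t {
        Tm::Meta(m) => *m == meta,
        Tm::App(f, a) => occurs(meta, f) || occurs(meta, a),
        Tm::Const(_) => false,
    }
}

/// Record `?meta := solution`, rejecting cyclic solutions.
fn solve_meta(subst: &mut HashMap<u32, Tm>, meta: u32, solution: &Tm) -> Result<(), String> {
    if occurs(meta, solution) {
        return Err(format!("occurs check failed for ?m{}", meta));
    }
    subst.insert(meta, solution.clone());
    Ok(())
}

/// Apply the substitution, chasing solved metas recursively.
fn apply(subst: &HashMap<u32, Tm>, t: &Tm) -> Tm {
    match t {
        Tm::Meta(m) => match subst.get(m) {
            Some(s) => apply(subst, s),
            None => t.clone(),
        },
        Tm::App(f, a) => Tm::App(Box::new(apply(subst, f)), Box::new(apply(subst, a))),
        Tm::Const(_) => t.clone(),
    }
}

fn main() {
    let mut subst = HashMap::new();
    // ?m0 := Nat is fine; ?m1 := f ?m1 is rejected by the occurs check.
    solve_meta(&mut subst, 0, &Tm::Const("Nat".into())).unwrap();
    let cyclic = Tm::App(Box::new(Tm::Const("f".into())), Box::new(Tm::Meta(1)));
    assert!(solve_meta(&mut subst, 1, &cyclic).is_err());
    assert_eq!(apply(&subst, &Tm::Meta(0)), Tm::Const("Nat".into()));
}
```

The real implementation below adds what the sketch omits: scope tracking for well-scopedness checks, universe metas, and the bookkeeping (dependencies, occurrences) needed to wake delayed constraints.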

#![allow(unused)]
fn main() {
use std::collections::{HashMap, HashSet, VecDeque};
use std::fmt;
use crate::ast::{Term, Universe};
use crate::context::Context;
use crate::errors::TypeError;
impl fmt::Display for MetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?m{}", self.0)
    }
}
/// Universe meta-variable identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct UniverseMetaId(pub u64);
impl fmt::Display for UniverseMetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?u{}", self.0)
    }
}
/// Information about a meta-variable
#[derive(Debug, Clone)]
pub struct MetaInfo {
    pub id: MetaId,
    pub context: Vec<String>, // Variables in scope when meta was created
    pub expected_type: Option<Term>, // Expected type of the meta-variable
    pub solution: Option<Term>, // Solution if solved
    pub dependencies: HashSet<MetaId>, // Meta-variables this depends on
    pub occurrences: Vec<ConstraintId>, // Constraints this meta appears in
}
/// Constraint identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct ConstraintId(pub u64);
/// Constraint types
#[derive(Debug, Clone)]
pub enum Constraint {
    /// Unification: t1 ≡ t2
    Unify {
        id: ConstraintId,
        left: Term,
        right: Term,
        strength: ConstraintStrength,
    },
    /// Type constraint: term : type
    HasType {
        id: ConstraintId,
        term: Term,
        expected_type: Term,
    },
    /// Universe unification: u1 ≡ u2
    UnifyUniverse {
        id: ConstraintId,
        left: Universe,
        right: Universe,
        strength: ConstraintStrength,
    },
    /// Universe level constraint: u1 ≤ u2
    UniverseLevel {
        id: ConstraintId,
        left: Universe,
        right: Universe,
    },
    /// Delayed constraint (for complex patterns)
    Delayed {
        id: ConstraintId,
        constraint: Box<Constraint>,
        waiting_on: HashSet<MetaId>,
    },
}
/// Constraint strength for prioritization
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ConstraintStrength {
    Required,  // Must be solved
    Preferred, // Should be solved if possible
    Weak,      // Can be postponed
}
/// Substitution mapping meta-variables to terms
#[derive(Debug, Clone, Default)]
pub struct Substitution {
    mapping: HashMap<MetaId, Term>,
    universe_mapping: HashMap<u32, Universe>,
}
impl Substitution {
    pub fn new() -> Self {
        Self::default()
    }

    pub fn insert(&mut self, meta: MetaId, term: Term) {
        self.mapping.insert(meta, term);
    }

    pub fn get(&self, meta: &MetaId) -> Option<&Term> {
        self.mapping.get(meta)
    }

    pub fn insert_universe(&mut self, meta_id: u32, universe: Universe) {
        self.universe_mapping.insert(meta_id, universe);
    }

    pub fn get_universe(&self, meta_id: &u32) -> Option<&Universe> {
        self.universe_mapping.get(meta_id)
    }

    /// Apply substitution to a term
    pub fn apply(&self, term: &Term) -> Term {
        match term {
            Term::Meta(name) => {
                // Try to parse meta ID and look up substitution
                if let Some(meta_id) = self.parse_meta_name(name) {
                    if let Some(subst) = self.mapping.get(&meta_id) {
                        // Recursively apply substitution
                        return self.apply(subst);
                    }
                }
                term.clone()
            }
            Term::App(f, arg) => Term::App(Box::new(self.apply(f)), Box::new(self.apply(arg))),
            Term::Abs(x, ty, body) => Term::Abs(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
            ),
            Term::Pi(x, ty, body, implicit) => Term::Pi(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
                *implicit,
            ),
            Term::Let(x, ty, val, body) => Term::Let(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(val)),
                Box::new(self.apply(body)),
            ),
            Term::Sort(u) => Term::Sort(self.apply_universe(u)),
            // Other cases remain unchanged
            _ => term.clone(),
        }
    }

    /// Apply substitution to a universe
    pub fn apply_universe(&self, universe: &Universe) -> Universe {
        let result = match universe {
            Universe::Meta(id) => {
                if let Some(subst) = self.universe_mapping.get(id) {
                    // Recursively apply substitution
                    self.apply_universe(subst)
                } else {
                    universe.clone()
                }
            }
            Universe::Add(base, n) => Universe::Add(Box::new(self.apply_universe(base)), *n),
            Universe::Max(u, v) => Universe::Max(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            Universe::IMax(u, v) => Universe::IMax(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            // Constants and scoped universe variables are left unchanged by substitution
            Universe::Const(_) | Universe::ScopedVar(_, _) => universe.clone(),
        };
        // Normalize the result to simplify expressions like Const(0) + 1 -> Const(1)
        self.normalize_universe_static(&result)
    }

    /// Static normalization (doesn't require &self)
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe_static(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe_static(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe_static(u1);
                let u2_norm = self.normalize_universe_static(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }
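
    // For instance, `Add(Const(0), 1)` normalizes to `Const(1)`, and
    // `Max(Const(1), Add(Const(1), 1))` normalizes to `Const(2)`.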

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }
}
/// Advanced constraint solver with dependency tracking
pub struct Solver {
    /// Meta-variables
    metas: HashMap<MetaId, MetaInfo>,
    /// Active constraints
    constraints: HashMap<ConstraintId, Constraint>,
    /// Constraint queue (ordered by priority)
    queue: VecDeque<ConstraintId>,
    /// Solved substitutions
    substitution: Substitution,
    /// Next meta ID
    next_meta_id: u64,
    /// Next universe meta ID
    next_universe_meta_id: u32,
    /// Next constraint ID
    next_constraint_id: u64,
    /// Type checking context
    context: Context,
    /// Dependency graph: meta -> constraints that depend on it
    dependencies: HashMap<MetaId, HashSet<ConstraintId>>,
    /// Delayed constraints that couldn't be solved immediately
    delayed_constraints: HashSet<ConstraintId>,
    /// Recursion depth counter to prevent stack overflow
    recursion_depth: u32,
    /// Flag to enable Miller pattern solving (advanced)
    enable_miller_patterns: bool,
    /// Flag to enable HasType constraint solving (advanced)
    enable_has_type_solving: bool,
}
impl Solver {
    pub fn new(context: Context) -> Self {
        Self {
            metas: HashMap::new(),
            constraints: HashMap::new(),
            queue: VecDeque::new(),
            substitution: Substitution::new(),
            next_meta_id: 0,
            next_universe_meta_id: 0,
            next_constraint_id: 0,
            context,
            dependencies: HashMap::new(),
            delayed_constraints: HashSet::new(),
            recursion_depth: 0,
            enable_miller_patterns: false,  // Advanced feature
            enable_has_type_solving: false, // Advanced feature
        }
    }

    /// Enable Miller pattern solving (advanced)
    pub fn enable_miller_patterns(&mut self) {
        self.enable_miller_patterns = true;
    }

    /// Check if Miller pattern solving is enabled
    pub fn miller_patterns_enabled(&self) -> bool {
        self.enable_miller_patterns
    }

    /// Enable HasType constraint solving (advanced)
    pub fn enable_has_type_solving(&mut self) {
        self.enable_has_type_solving = true;
    }

    /// Check if HasType constraint solving is enabled
    pub fn has_type_solving_enabled(&self) -> bool {
        self.enable_has_type_solving
    }

    /// Create a fresh meta-variable
    pub fn fresh_meta(&mut self, ctx_vars: Vec<String>, expected_type: Option<Term>) -> MetaId {
        let id = MetaId(self.next_meta_id);
        self.next_meta_id += 1;

        let info = MetaInfo {
            id,
            context: ctx_vars,
            expected_type,
            solution: None,
            dependencies: HashSet::new(),
            occurrences: Vec::new(),
        };

        self.metas.insert(id, info);
        id
    }

    /// Create a fresh universe meta-variable
    pub fn fresh_universe_meta(&mut self) -> Universe {
        let id = self.next_universe_meta_id;
        self.next_universe_meta_id += 1;
        Universe::Meta(id)
    }

    /// Add a constraint to the solver
    pub fn add_constraint(&mut self, constraint: Constraint) -> ConstraintId {
        let id = match &constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        // Track which metas appear in this constraint
        self.track_meta_occurrences(&constraint);

        self.constraints.insert(id, constraint);
        self.queue.push_back(id);
        id
    }

    /// Create a new constraint ID
    pub fn new_constraint_id(&mut self) -> ConstraintId {
        let id = ConstraintId(self.next_constraint_id);
        self.next_constraint_id += 1;
        id
    }

    /// Track meta-variable occurrences in constraints
    fn track_meta_occurrences(&mut self, constraint: &Constraint) {
        let metas = self.collect_metas_in_constraint(constraint);
        let constraint_id = match constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        for meta_id in metas {
            if let Some(info) = self.metas.get_mut(&meta_id) {
                info.occurrences.push(constraint_id);
            }
            self.dependencies
                .entry(meta_id)
                .or_default()
                .insert(constraint_id);
        }
    }

    /// Collect all meta-variables in a constraint
    fn collect_metas_in_constraint(&self, constraint: &Constraint) -> HashSet<MetaId> {
        match constraint {
            Constraint::Unify { left, right, .. } => {
                let mut metas = self.collect_metas_in_term(left);
                metas.extend(self.collect_metas_in_term(right));
                metas
            }
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                let mut metas = self.collect_metas_in_term(term);
                metas.extend(self.collect_metas_in_term(expected_type));
                metas
            }
            Constraint::UnifyUniverse { .. } => {
                // Universe constraints don't directly contain term meta-variables
                // but they might contain universe meta-variables (which we track separately)
                HashSet::new()
            }
            Constraint::UniverseLevel { .. } => {
                // Same as above
                HashSet::new()
            }
            Constraint::Delayed {
                constraint,
                waiting_on,
                ..
            } => {
                let mut metas = waiting_on.clone();
                metas.extend(self.collect_metas_in_constraint(constraint));
                metas
            }
        }
    }

    /// Collect all meta-variables in a term
    fn collect_metas_in_term(&self, term: &Term) -> HashSet<MetaId> {
        let mut metas = HashSet::new();
        self.collect_metas_recursive(term, &mut metas);
        metas
    }

    fn collect_metas_recursive(&self, term: &Term, metas: &mut HashSet<MetaId>) {
        match term {
            Term::Meta(name) => {
                if let Some(id) = self.parse_meta_name(name) {
                    metas.insert(id);
                }
            }
            Term::App(f, arg) => {
                self.collect_metas_recursive(f, metas);
                self.collect_metas_recursive(arg, metas);
            }
            Term::Abs(_, ty, body) | Term::Pi(_, ty, body, _) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(body, metas);
            }
            Term::Let(_, ty, val, body) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(val, metas);
                self.collect_metas_recursive(body, metas);
            }
            _ => {}
        }
    }

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }

    /// Main solving loop
    pub fn solve(&mut self) -> Result<Substitution, TypeError> {
        let max_iterations = 1000;
        let mut iterations = 0;

        while !self.queue.is_empty() && iterations < max_iterations {
            iterations += 1;

            // Pick the best constraint to solve
            if let Some(constraint_id) = self.pick_constraint() {
                // A stale queue entry may outlive its constraint (e.g. after the
                // pop_front fallback in pick_constraint); skip it rather than unwrap.
                let Some(constraint) = self.constraints.remove(&constraint_id) else {
                    continue;
                };

                match self.solve_constraint(constraint.clone()) {
                    Ok(progress) => {
                        if progress {
                            // Propagate the solution
                            self.propagate_solution()?;

                            // Wake up delayed constraints that might now be solvable
                            let woken_constraints = self.wake_delayed_constraints()?;

                            // Add woken constraints back to queue with high priority
                            for woken_id in woken_constraints {
                                self.queue.push_front(woken_id);
                            }
                        }
                    }
                    Err(e) => {
                        // Try to delay the constraint if possible
                        if self.can_delay(&constraint_id) {
                            // Determine which meta-variables to wait for
                            let waiting_on = match &constraint {
                                Constraint::Unify { left, right, .. } => {
                                    let mut metas = self.collect_metas_in_term(left);
                                    metas.extend(self.collect_metas_in_term(right));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                Constraint::HasType {
                                    term,
                                    expected_type,
                                    ..
                                } => {
                                    let mut metas = self.collect_metas_in_term(term);
                                    metas.extend(self.collect_metas_in_term(expected_type));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                _ => HashSet::new(),
                            };

                            if !waiting_on.is_empty() {
                                // Put constraint back and delay it
                                self.constraints.insert(constraint_id, constraint);
                                self.delay_constraint(constraint_id, waiting_on)?;
                            } else {
                                // Can't delay, return error
                                return Err(e);
                            }
                        } else {
                            return Err(e);
                        }
                    }
                }
            }
        }

        if iterations >= max_iterations {
            return Err(TypeError::Internal {
                message: "Constraint solving did not converge".to_string(),
            });
        }

        // Check for unsolved required constraints
        for constraint in self.constraints.values() {
            if let Constraint::Unify {
                strength: ConstraintStrength::Required,
                ..
            } = constraint
            {
                return Err(TypeError::Internal {
                    message: "Unsolved required constraints remain".to_string(),
                });
            }
        }

        Ok(self.substitution.clone())
    }

    /// Pick the next constraint to solve based on heuristics
    fn pick_constraint(&mut self) -> Option<ConstraintId> {
        // Priority order:
        // 1. Constraints with no meta-variables
        // 2. Constraints with only solved meta-variables
        // 3. Simple pattern constraints (Miller patterns)
        // 4. Other constraints

        let mut best_constraint = None;
        let mut best_score = i32::MAX;

        for &id in &self.queue {
            if let Some(constraint) = self.constraints.get(&id) {
                let score = self.score_constraint(constraint);
                if score < best_score {
                    best_score = score;
                    best_constraint = Some(id);
                }
            }
        }

        if let Some(id) = best_constraint {
            self.queue.retain(|&x| x != id);
            Some(id)
        } else {
            self.queue.pop_front()
        }
    }

    /// Score a constraint for solving priority (lower is better)
    fn score_constraint(&self, constraint: &Constraint) -> i32 {
        let metas = self.collect_metas_in_constraint(constraint);

        if metas.is_empty() {
            return 0; // No metas, can solve immediately
        }

        let unsolved_count = metas
            .iter()
            .filter(|m| {
                self.metas
                    .get(m)
                    .and_then(|info| info.solution.as_ref())
                    .is_none()
            })
            .count();

        if unsolved_count == 0 {
            return 1; // All metas solved
        }

        // Check for Miller patterns
        if let Constraint::Unify { left, right, .. } = constraint {
            if self.is_miller_pattern_unification(left, right)
                || self.is_miller_pattern_unification(right, left)
            {
                return 10 + unsolved_count as i32;
            }
        }

        100 + unsolved_count as i32
    }

    /// Check if a unification is a Miller pattern (?M x1 ... xn = t)
    fn is_miller_pattern_unification(&self, left: &Term, right: &Term) -> bool {
        self.is_simple_miller_pattern(left, right)
    }

    /// Check if left is a simple Miller pattern and right is safe to solve with
    pub fn is_simple_miller_pattern(&self, left: &Term, right: &Term) -> bool {
        let (head, args) = self.decompose_application(left);

        if let Term::Meta(meta_name) = head {
            // Must have at least one argument to be interesting
            if args.is_empty() {
                return false;
            }

            // Check if all arguments are distinct variables
            let mut seen = HashSet::new();
            for arg in &args {
                if let Term::Var(x) = arg {
                    if !seen.insert(x.clone()) {
                        return false; // Not distinct
                    }
                } else {
                    return false; // Not a variable
                }
            }

            // Check occurs check - right must not contain the meta-variable
            if let Some(meta_id) = self.parse_meta_name(meta_name) {
                if self.collect_metas_in_term(right).contains(&meta_id) {
                    return false; // Occurs check failure
                }
            }

            // For now, only handle simple cases where right is a variable or constant
            match right {
                Term::Var(_) | Term::Const(_) | Term::Sort(_) => true,
                Term::Meta(right_name) => {
                    // Allow meta-to-meta unification only between distinct metas
                    meta_name != right_name
                }
                _ => false, // More complex cases need careful handling
            }
        } else {
            false
        }
    }
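
    // Illustration: the constraint `?m0 x y ≡ z` is a Miller pattern because the
    // head is a meta, the spine `x y` consists of distinct variables, and `z` does
    // not mention ?m0; its unique solution is ?m0 := λx. λy. z. By contrast,
    // `?m0 x x ≡ z` (repeated variable) and `?m0 x ≡ ?m0` (occurs check) are
    // both rejected by the checks above.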

    /// Decompose an application into head and arguments
    fn decompose_application<'a>(&self, term: &'a Term) -> (&'a Term, Vec<&'a Term>) {
        let mut current = term;
        let mut args = Vec::new();

        while let Term::App(f, arg) = current {
            args.push(arg.as_ref());
            current = f;
        }

        args.reverse();
        (current, args)
    }

    /// Solve a single constraint
    fn solve_constraint(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 100;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in solve_constraint",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.solve_constraint_impl(constraint);
        self.recursion_depth -= 1;
        result
    }

    fn solve_constraint_impl(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        match constraint {
            Constraint::Unify {
                left,
                right,
                strength,
                ..
            } => self.unify(&left, &right, strength),
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                if self.enable_has_type_solving {
                    self.solve_has_type_constraint(&term, &expected_type)
                } else {
                    // Default behavior: don't solve HasType constraints
                    Ok(false)
                }
            }
            Constraint::UnifyUniverse {
                left,
                right,
                strength,
                ..
            } => self.unify_universe(&left, &right, strength),
            Constraint::UniverseLevel { left, right, .. } => {
                // For now, treat level constraints as unification constraints
                self.unify_universe(&left, &right, ConstraintStrength::Preferred)
            }
            Constraint::Delayed { constraint, .. } => {
                // Re-queue the delayed constraint
                let inner = *constraint;
                self.add_constraint(inner);
                Ok(false)
            }
        }
    }

    /// Unify two terms
    fn unify(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 50;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in unify",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.unify_impl(left, right, strength);
        self.recursion_depth -= 1;
        result
    }

    fn unify_impl(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply(left);
        let right = self.substitution.apply(right);

        // Check if already equal
        if self.alpha_equal(&left, &right) {
            return Ok(false);
        }

        // Miller pattern detection and solving (if enabled)
        if self.enable_miller_patterns {
            if self.is_simple_miller_pattern(&left, &right) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&left) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &right);
                }
            } else if self.is_simple_miller_pattern(&right, &left) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&right) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &left);
                }
            }
        }

        match (&left, &right) {
            // Meta-variable cases
            (Term::Meta(m), t) | (t, Term::Meta(m)) => {
                if let Some(meta_id) = self.parse_meta_name(m) {
                    self.solve_meta(meta_id, t)
                } else {
                    Err(TypeError::Internal {
                        message: format!("Invalid meta-variable: {}", m),
                    })
                }
            }

            // Structural cases
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *f1.clone(),
                    right: *f2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: *a1.clone(),
                    right: *a2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                // Rename bound variables to be consistent
                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            // Universe (Sort) cases
            (Term::Sort(u1), Term::Sort(u2)) => self.unify_universe(u1, u2, strength),

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::TypeMismatch {
                        expected: left,
                        actual: right,
                    })
                } else {
                    // Can't solve, but not an error for weak constraints
                    Ok(false)
                }
            }
        }
    }

    /// Unify two universes
    fn unify_universe(
        &mut self,
        left: &Universe,
        right: &Universe,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply_universe(left);
        let right = self.substitution.apply_universe(right);

        // Normalize universes (simplify expressions like 0+1 -> 1)
        let left = self.normalize_universe(&left);
        let right = self.normalize_universe(&right);

        // Check if already equal
        if left == right {
            return Ok(false);
        }

        match (&left, &right) {
            // Same constants
            (Universe::Const(n1), Universe::Const(n2)) if n1 == n2 => Ok(false),

            // Universe variable cases
            (Universe::ScopedVar(s1, v1), Universe::ScopedVar(s2, v2)) if s1 == s2 && v1 == v2 => {
                Ok(false)
            }
            (Universe::ScopedVar(_, _), Universe::ScopedVar(_, _)) => {
                // Different universe variables cannot be unified
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!(
                            "Cannot unify distinct universe variables {} and {}",
                            left, right
                        ),
                    })
                } else {
                    Ok(false)
                }
            }

            // Meta-variable cases
            (Universe::Meta(id), u) | (u, Universe::Meta(id)) => self.solve_universe_meta(*id, u),

            // Structural cases
            (Universe::Add(base1, n1), Universe::Add(base2, n2)) if n1 == n2 => {
                let constraint_id = self.new_constraint_id();
                self.add_constraint(Constraint::UnifyUniverse {
                    id: constraint_id,
                    left: *base1.clone(),
                    right: *base2.clone(),
                    strength,
                });
                Ok(true)
            }

            // Arithmetic constraints: ?u+n = m means ?u = m-n
            (Universe::Add(base, n), Universe::Const(m))
            | (Universe::Const(m), Universe::Add(base, n)) => {
                if *m >= *n {
                    if let Universe::Meta(id) = base.as_ref() {
                        self.solve_universe_meta(*id, &Universe::Const(m - n))
                    } else {
                        // More complex case - generate constraint for base
                        let constraint_id = self.new_constraint_id();
                        self.add_constraint(Constraint::UnifyUniverse {
                            id: constraint_id,
                            left: *base.clone(),
                            right: Universe::Const(m - n),
                            strength,
                        });
                        Ok(true)
                    }
                } else {
                    Err(TypeError::Internal {
                        message: format!("Cannot solve constraint: {} cannot equal {} (would require negative universe)",
                                       if matches!(&left, Universe::Add(_, _)) { &left } else { &right },
                                       if matches!(&left, Universe::Const(_)) { &left } else { &right }),
                    })
                }
            }
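
            // Example: `Add(?u0, 1) ≡ Const(2)` solves to `?u0 := Const(1)`,
            // while `Add(u, 2) ≡ Const(1)` is rejected because it would force
            // a negative universe level.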

            (Universe::Max(u1, v1), Universe::Max(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Universe::IMax(u1, v1), Universe::IMax(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!("Cannot unify universes {} and {}", left, right),
                    })
                } else {
                    Ok(false)
                }
            }
        }
    }

    /// Normalize a universe expression
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe(u1);
                let u2_norm = self.normalize_universe(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }

    /// Solve a universe meta-variable
    fn solve_universe_meta(
        &mut self,
        meta_id: u32,
        solution: &Universe,
    ) -> Result<bool, TypeError> {
        // Occurs check for universe meta-variables
        if self.occurs_in_universe(meta_id, solution) {
            return Err(TypeError::Internal {
                message: format!("Universe occurs check failed for ?u{}", meta_id),
            });
        }

        // Record solution
        self.substitution.insert_universe(meta_id, solution.clone());
        Ok(true)
    }

    /// Check if a universe meta-variable occurs in a universe expression
    #[allow(clippy::only_used_in_recursion)]
    fn occurs_in_universe(&self, meta_id: u32, universe: &Universe) -> bool {
        match universe {
            Universe::Meta(id) => *id == meta_id,
            Universe::Add(base, _) => self.occurs_in_universe(meta_id, base),
            Universe::Max(u, v) | Universe::IMax(u, v) => {
                self.occurs_in_universe(meta_id, u) || self.occurs_in_universe(meta_id, v)
            }
            _ => false,
        }
    }

    /// Solve a meta-variable
    fn solve_meta(&mut self, meta_id: MetaId, solution: &Term) -> Result<bool, TypeError> {
        // Occurs check
        if self.collect_metas_in_term(solution).contains(&meta_id) {
            return Err(TypeError::Internal {
                message: format!("Occurs check failed for ?m{}", meta_id.0),
            });
        }

        // Check solution is well-scoped
        if let Some(meta_info) = self.metas.get(&meta_id) {
            if !self.is_well_scoped(solution, &meta_info.context) {
                return Err(TypeError::Internal {
                    message: format!("Solution for ?m{} is not well-scoped", meta_id.0),
                });
            }
        }

        // Record solution
        self.substitution.insert(meta_id, solution.clone());

        if let Some(info) = self.metas.get_mut(&meta_id) {
            info.solution = Some(solution.clone());
        }

        Ok(true)
    }

    /// Check if a term is well-scoped in a context
    fn is_well_scoped(&self, term: &Term, context: &[String]) -> bool {
        match term {
            Term::Var(x) => {
                // Check if it's a bound variable or a global constant
                context.contains(x)
                    || self.context.lookup(x).is_some()
                    || self.context.lookup_axiom(x).is_some()
                    || self.context.lookup_constructor(x).is_some()
            }
            Term::App(f, arg) => {
                self.is_well_scoped(f, context) && self.is_well_scoped(arg, context)
            }
            Term::Abs(x, ty, body) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Pi(x, ty, body, _) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Const(name) => {
                // Constants should be in the global context
                self.context.lookup_axiom(name).is_some()
                    || self.context.lookup_constructor(name).is_some()
            }
            _ => true, // Sorts, meta-variables, etc.
        }
    }

    /// Alpha equality check
    fn alpha_equal(&self, t1: &Term, t2: &Term) -> bool {
        self.alpha_equal_aux(t1, t2, &mut vec![])
    }

    #[allow(clippy::only_used_in_recursion)]
    fn alpha_equal_aux(&self, t1: &Term, t2: &Term, renamings: &mut Vec<(String, String)>) -> bool {
        match (t1, t2) {
            (Term::Var(x), Term::Var(y)) => {
                // Check if this is a bound variable that was renamed
                for (a, b) in renamings.iter() {
                    if a == x && b == y {
                        return true;
                    }
                }
                x == y
            }
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                self.alpha_equal_aux(f1, f2, renamings) && self.alpha_equal_aux(a1, a2, renamings)
            }
            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Sort(u1), Term::Sort(u2)) => u1 == u2,
            (Term::Const(c1), Term::Const(c2)) => c1 == c2,
            (Term::Meta(m1), Term::Meta(m2)) => m1 == m2,
            _ => false,
        }
    }

    /// Rename a variable in a term
    #[allow(clippy::only_used_in_recursion)]
    fn rename_var(&self, old: &str, new: &str, term: &Term) -> Term {
        match term {
            Term::Var(x) if x == old => Term::Var(new.to_string()),
            Term::Var(x) => Term::Var(x.clone()),
            Term::App(f, arg) => Term::App(
                Box::new(self.rename_var(old, new, f)),
                Box::new(self.rename_var(old, new, arg)),
            ),
            Term::Abs(x, ty, body) if x != old => Term::Abs(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                Box::new(self.rename_var(old, new, body)),
            ),
            Term::Pi(x, ty, body, implicit) if x != old => Term::Pi(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                Box::new(self.rename_var(old, new, body)),
                *implicit,
            ),
            _ => term.clone(),
        }
    }

    /// Propagate solutions to dependent constraints
    fn propagate_solution(&mut self) -> Result<(), TypeError> {
        // Apply substitution to all remaining constraints
        let constraints: Vec<_> = self.constraints.drain().collect();

        for (id, constraint) in constraints {
            let updated = self.apply_subst_to_constraint(constraint);
            self.constraints.insert(id, updated);
        }

        Ok(())
    }

    /// Apply substitution to a constraint
    fn apply_subst_to_constraint(&self, constraint: Constraint) -> Constraint {
        match constraint {
            Constraint::Unify {
                id,
                left,
                right,
                strength,
            } => Constraint::Unify {
                id,
                left: self.substitution.apply(&left),
                right: self.substitution.apply(&right),
                strength,
            },
            Constraint::HasType {
                id,
                term,
                expected_type,
            } => Constraint::HasType {
                id,
                term: self.substitution.apply(&term),
                expected_type: self.substitution.apply(&expected_type),
            },
            Constraint::UnifyUniverse {
                id,
                left,
                right,
                strength,
            } => Constraint::UnifyUniverse {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
                strength,
            },
            Constraint::UniverseLevel { id, left, right } => Constraint::UniverseLevel {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
            },
            Constraint::Delayed {
                id,
                constraint,
                waiting_on,
            } => Constraint::Delayed {
                id,
                constraint: Box::new(self.apply_subst_to_constraint(*constraint)),
                waiting_on,
            },
        }
    }

    /// Solve a HasType constraint by type inference and unification
    fn solve_has_type_constraint(
        &mut self,
        term: &Term,
        expected_type: &Term,
    ) -> Result<bool, TypeError> {
        match term {
            Term::Meta(meta_name) => {
                // If the term is a meta-variable, we can potentially solve it
                if let Some(meta_id) = self.parse_meta_name(meta_name) {
                    if let Some(meta_info) = self.metas.get(&meta_id) {
                        if meta_info.solution.is_none() {
                            // Try to construct a term that has the expected type
                            if let Some(solution) = self.synthesize_term_of_type(expected_type)? {
                                return self.solve_meta(meta_id, &solution);
                            }
                        }
                    }
                }
                // If we can't solve the meta, try unifying with expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }

            Term::App(f, arg) => {
                // For applications, we need to ensure the function type is compatible
                // Create a fresh meta for the function type: ?F : ?A -> ?B
                let arg_type_meta_id = self.fresh_meta(vec![], None);
                let result_type_meta_id = self.fresh_meta(vec![], None);
                let arg_type_meta = Term::Meta(format!("?m{}", arg_type_meta_id.0));
                let result_type_meta = Term::Meta(format!("?m{}", result_type_meta_id.0));
                let func_type = Term::Pi(
                    "_".to_string(),
                    Box::new(arg_type_meta.clone()),
                    Box::new(result_type_meta.clone()),
                    false,
                );

                // Add constraints:
                // 1. f : ?A -> ?B
                // 2. arg : ?A
                // 3. ?B ≡ expected_type
                let f_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: f.as_ref().clone(),
                    expected_type: func_type,
                };
                let arg_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: arg.as_ref().clone(),
                    expected_type: arg_type_meta,
                };
                let result_constraint = Constraint::Unify {
                    id: self.new_constraint_id(),
                    left: result_type_meta,
                    right: expected_type.clone(),
                    strength: ConstraintStrength::Required,
                };

                self.add_constraint(f_constraint);
                self.add_constraint(arg_constraint);
                self.add_constraint(result_constraint);

                Ok(true)
            }

            Term::Abs(param, param_type, body) => {
                // For lambda abstractions, expected type should be a Pi type
                match expected_type {
                    Term::Pi(_, expected_param_type, expected_body_type, _) => {
                        // Unify parameter types and check body type
                        let param_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: param_type.as_ref().clone(),
                            right: expected_param_type.as_ref().clone(),
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: expected_body_type.as_ref().clone(),
                        };

                        self.add_constraint(param_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    Term::Meta(_) => {
                        // Expected type is a meta-variable, create a Pi type for it
                        let body_type_meta_id = self.fresh_meta(vec![param.clone()], None);
                        let body_type_meta = Term::Meta(format!("?m{}", body_type_meta_id.0));
                        let pi_type = Term::Pi(
                            param.clone(),
                            param_type.clone(),
                            Box::new(body_type_meta.clone()),
                            false,
                        );

                        let type_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: expected_type.clone(),
                            right: pi_type,
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: body_type_meta,
                        };

                        self.add_constraint(type_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    _ => {
                        // Type mismatch - lambda can't have non-function type
                        Err(TypeError::TypeMismatch {
                            expected: expected_type.clone(),
                            actual: Term::Pi(
                                param.clone(),
                                param_type.clone(),
                                Box::new(Term::Meta("?unknown".to_string())),
                                false,
                            ),
                        })
                    }
                }
            }

            _ => {
                // For other terms, just unify with the expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }
        }
    }

    /// Attempt to synthesize a term of a given type (basic heuristics)
    fn synthesize_term_of_type(&self, expected_type: &Term) -> Result<Option<Term>, TypeError> {
        match expected_type {
            Term::Sort(_) => {
                // For Sort types, we could return a fresh meta or a simple type
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
            Term::Pi(param, param_type, _body_type, _implicit) => {
                // For Pi types, synthesize a lambda that matches the Pi structure
                // Use a fresh meta-variable for the body to avoid infinite recursion
                let body_term = Term::Meta(format!("?body{}", self.next_meta_id));

                Ok(Some(Term::Abs(
                    param.clone(),
                    param_type.clone(),
                    Box::new(body_term),
                )))
            }
            Term::Meta(_) => {
                // Can't synthesize for unknown types
                Ok(None)
            }
            _ => {
                // For concrete types, return a meta-variable
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
        }
    }

    /// Check if a term is a Miller pattern
    /// Miller patterns are terms of the form: ?M x1 x2 ... xn where xi are
    /// distinct bound variables
    fn is_miller_pattern(&self, term: &Term) -> bool {
        let (head, args) = self.decompose_application(term);

        match head {
            Term::Meta(_) => {
                // Check that all arguments are distinct variables
                let mut seen = HashSet::new();
                for arg in args {
                    if let Term::Var(x) = arg {
                        if !seen.insert(x.clone()) {
                            return false; // Not distinct
                        }
                    } else {
                        return false; // Not a variable
                    }
                }
                true
            }
            _ => false,
        }
    }

    /// Extract the spine of a Miller pattern (the meta-variable and its
    /// arguments)
    fn extract_miller_spine(&self, term: &Term) -> Option<(String, Vec<String>)> {
        // First validate it's actually a Miller pattern
        if !self.is_miller_pattern(term) {
            return None;
        }

        let (head, args) = self.decompose_application(term);

        if let Term::Meta(meta_name) = head {
            // Collect variable names (we already know they're all variables from
            // is_miller_pattern)
            let var_names: Vec<String> = args
                .iter()
                .map(|arg| {
                    if let Term::Var(name) = arg {
                        name.clone()
                    } else {
                        unreachable!()
                    }
                })
                .collect();

            Some((meta_name.clone(), var_names))
        } else {
            None
        }
    }

    /// Solve simple Miller pattern unification
    /// Only handles simple cases
    fn solve_miller_pattern(
        &mut self,
        meta_name: &str,
        spine: &[String],
        other: &Term,
    ) -> Result<bool, TypeError> {
        // Check if we can abstract over the spine variables
        if !Self::can_abstract_over_variables(other, spine) {
            // Can't abstract, fall back to regular unification
            return self.unify(
                &Term::Meta(meta_name.to_string()),
                other,
                ConstraintStrength::Required,
            );
        }

        if let Some(meta_id) = self.parse_meta_name(meta_name) {
            // Only handle simple cases to avoid complexity
            match other {
                // Case 1: ?M x ≡ y (different variable) -> ?M := λx. y
                Term::Var(y) if spine.len() == 1 && spine[0] != *y => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))), /* Type inferred later */
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                // Case 2: ?M x ≡ c (constant) -> ?M := λx. c
                Term::Const(_) | Term::Sort(_) if spine.len() == 1 => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))),
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                _ => {
                    // For more complex cases we could build the full lambda
                    // abstraction, but for now fall back to regular unification.
                }
            }
        }

        // Fall back to regular unification
        self.unify(
            &Term::Meta(meta_name.to_string()),
            other,
            ConstraintStrength::Required,
        )
    }

    /// Check if a term can be abstracted over the given variables
    fn can_abstract_over_variables(term: &Term, vars: &[String]) -> bool {
        // For now, implement a simple check
        // In a full implementation, this would check that the term only uses variables
        // that are either in the spine or are bound locally
        match term {
            Term::Var(v) => vars.contains(v), // Variable must be in spine
            Term::App(f, arg) => {
                Self::can_abstract_over_variables(f, vars)
                    && Self::can_abstract_over_variables(arg, vars)
            }
            Term::Abs(param, ty, body) => {
                // Extend the scope with the bound parameter
                let mut extended_vars = vars.to_vec();
                extended_vars.push(param.clone());
                Self::can_abstract_over_variables(ty, vars)
                    && Self::can_abstract_over_variables(body, &extended_vars)
            }
            Term::Meta(_) => true,  // Meta-variables are always abstractable
            Term::Const(_) => true, // Constants are always abstractable
            Term::Sort(_) => true,  // Sorts are always abstractable
            _ => false,             // Conservative for complex constructs
        }
    }

    /// Delay a constraint for later processing
    fn delay_constraint(
        &mut self,
        constraint_id: ConstraintId,
        waiting_on: HashSet<MetaId>,
    ) -> Result<(), TypeError> {
        if let Some(constraint) = self.constraints.remove(&constraint_id) {
            let delayed = Constraint::Delayed {
                id: constraint_id,
                constraint: Box::new(constraint),
                waiting_on,
            };
            self.constraints.insert(constraint_id, delayed);
            self.delayed_constraints.insert(constraint_id);
        }
        Ok(())
    }

    /// Wake up delayed constraints that are no longer blocked
    fn wake_delayed_constraints(&mut self) -> Result<Vec<ConstraintId>, TypeError> {
        let mut woken = Vec::new();
        let mut to_wake = Vec::new();

        // Check which delayed constraints can now be woken up
        for constraint_id in &self.delayed_constraints {
            if let Some(Constraint::Delayed {
                waiting_on,
                constraint,
                ..
            }) = self.constraints.get(constraint_id)
            {
                // Check if all blocking metas are now solved
                let all_solved = waiting_on.iter().all(|meta_id| {
                    self.metas
                        .get(meta_id)
                        .is_some_and(|info| info.solution.is_some())
                });

                if all_solved {
                    to_wake.push((*constraint_id, constraint.as_ref().clone()));
                }
            }
        }

        // Wake up the constraints
        for (constraint_id, original_constraint) in to_wake {
            self.constraints.insert(constraint_id, original_constraint);
            self.delayed_constraints.remove(&constraint_id);
            woken.push(constraint_id);
        }

        Ok(woken)
    }

    /// Check if we can delay a constraint
    fn can_delay(&self, constraint_id: &ConstraintId) -> bool {
        if let Some(constraint) = self.constraints.get(constraint_id) {
            match constraint {
                Constraint::Unify {
                    left,
                    right,
                    strength,
                    ..
                } => {
                    // We can delay weak constraints that involve unsolved metas
                    if *strength == ConstraintStrength::Weak {
                        let left_metas = self.collect_metas_in_term(left);
                        let right_metas = self.collect_metas_in_term(right);

                        // Check if any involved metas are unsolved
                        for meta_id in left_metas.iter().chain(right_metas.iter()) {
                            if self.is_unsolved_meta(meta_id) {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::HasType {
                    term,
                    expected_type,
                    ..
                } => {
                    // Can delay HasType constraints involving unsolved metas
                    let term_metas = self.collect_metas_in_term(term);
                    let type_metas = self.collect_metas_in_term(expected_type);

                    for meta_id in term_metas.iter().chain(type_metas.iter()) {
                        if let Some(meta_info) = self.metas.get(meta_id) {
                            if meta_info.solution.is_none() {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::Delayed { .. } => true, // Already delayed
                _ => false,
            }
        } else {
            false
        }
    }

    /// Check if a meta-variable is unsolved
    fn is_unsolved_meta(&self, meta_id: &MetaId) -> bool {
        self.metas
            .get(meta_id)
            .is_some_and(|info| info.solution.is_none())
    }
}
}

The MetaInfo structure demonstrates how we track the complete lifecycle of unknown terms. The context field preserves the variable scope at the meta-variable’s creation point, ensuring that solutions respect lexical scoping. The dependencies field tracks which other meta-variables must be resolved before this one can be solved, enabling topological ordering of constraint resolution.
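The scoping discipline that `context` enforces can be illustrated outside the solver. Below is a minimal, self-contained sketch (the `Term` type here is a stripped-down stand-in, not the book's AST): a candidate solution for a meta-variable is rejected if it mentions a variable that was not in scope at the meta-variable's creation point.

```rust
use std::collections::HashSet;

// Toy term language: just variables and application.
enum Term {
    Var(String),
    App(Box<Term>, Box<Term>),
}

/// A solution is well-scoped if it only mentions variables that were
/// in scope when the meta-variable was created.
fn well_scoped(term: &Term, ctx: &HashSet<String>) -> bool {
    match term {
        Term::Var(x) => ctx.contains(x),
        Term::App(f, a) => well_scoped(f, ctx) && well_scoped(a, ctx),
    }
}

fn main() {
    // Suppose ?m was created under the context [x], e.g. inside λx. ?m.
    let ctx: HashSet<String> = ["x".to_string()].into_iter().collect();

    // The bound x may appear in a solution for ?m...
    assert!(well_scoped(&Term::Var("x".into()), &ctx));

    // ...but a variable from some other scope must not escape into it.
    let bad = Term::App(
        Box::new(Term::Var("x".into())),
        Box::new(Term::Var("y".into())),
    );
    assert!(!well_scoped(&bad, &ctx));

    println!("scope checks passed");
}
```

The real `is_well_scoped` in the solver additionally admits global constants, axioms, and constructors, and extends the context when it descends under binders.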

Constraint Representation

#![allow(unused)]
fn main() {
use std::collections::{HashMap, HashSet, VecDeque};
use std::fmt;
use crate::ast::{Term, Universe};
use crate::context::Context;
use crate::errors::TypeError;
/// Meta-variable identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct MetaId(pub u64);
impl fmt::Display for MetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?m{}", self.0)
    }
}
/// Universe meta-variable identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct UniverseMetaId(pub u64);
impl fmt::Display for UniverseMetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?u{}", self.0)
    }
}
/// Information about a meta-variable
#[derive(Debug, Clone)]
pub struct MetaInfo {
    pub id: MetaId,
    pub context: Vec<String>, // Variables in scope when meta was created
    pub expected_type: Option<Term>, // Expected type of the meta-variable
    pub solution: Option<Term>, // Solution if solved
    pub dependencies: HashSet<MetaId>, // Meta-variables this depends on
    pub occurrences: Vec<ConstraintId>, // Constraints this meta appears in
}
/// Constraint identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct ConstraintId(pub u64);
/// Constraint strength for prioritization
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ConstraintStrength {
    Required,  // Must be solved
    Preferred, // Should be solved if possible
    Weak,      // Can be postponed
}
/// Substitution mapping meta-variables to terms
#[derive(Debug, Clone, Default)]
pub struct Substitution {
    mapping: HashMap<MetaId, Term>,
    universe_mapping: HashMap<u32, Universe>,
}
impl Substitution {
    pub fn new() -> Self {
        Self::default()
    }

    pub fn insert(&mut self, meta: MetaId, term: Term) {
        self.mapping.insert(meta, term);
    }

    pub fn get(&self, meta: &MetaId) -> Option<&Term> {
        self.mapping.get(meta)
    }

    pub fn insert_universe(&mut self, meta_id: u32, universe: Universe) {
        self.universe_mapping.insert(meta_id, universe);
    }

    pub fn get_universe(&self, meta_id: &u32) -> Option<&Universe> {
        self.universe_mapping.get(meta_id)
    }

    /// Apply substitution to a term
    pub fn apply(&self, term: &Term) -> Term {
        match term {
            Term::Meta(name) => {
                // Try to parse meta ID and look up substitution
                if let Some(meta_id) = self.parse_meta_name(name) {
                    if let Some(subst) = self.mapping.get(&meta_id) {
                        // Recursively apply substitution
                        return self.apply(subst);
                    }
                }
                term.clone()
            }
            Term::App(f, arg) => Term::App(Box::new(self.apply(f)), Box::new(self.apply(arg))),
            Term::Abs(x, ty, body) => Term::Abs(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
            ),
            Term::Pi(x, ty, body, implicit) => Term::Pi(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
                *implicit,
            ),
            Term::Let(x, ty, val, body) => Term::Let(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(val)),
                Box::new(self.apply(body)),
            ),
            Term::Sort(u) => Term::Sort(self.apply_universe(u)),
            // Other cases remain unchanged
            _ => term.clone(),
        }
    }

    /// Apply substitution to a universe
    pub fn apply_universe(&self, universe: &Universe) -> Universe {
        let result = match universe {
            Universe::Meta(id) => {
                if let Some(subst) = self.universe_mapping.get(id) {
                    // Recursively apply substitution
                    self.apply_universe(subst)
                } else {
                    universe.clone()
                }
            }
            Universe::Add(base, n) => Universe::Add(Box::new(self.apply_universe(base)), *n),
            Universe::Max(u, v) => Universe::Max(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            Universe::IMax(u, v) => Universe::IMax(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            // Constants and scoped variables remain unchanged during substitution
            Universe::Const(_) | Universe::ScopedVar(_, _) => universe.clone(),
        };
        // Normalize the result to simplify expressions like Const(0) + 1 -> Const(1)
        self.normalize_universe_static(&result)
    }

    /// Normalize a universe expression (duplicates `Solver::normalize_universe`
    /// so that `Substitution` can normalize results on its own)
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe_static(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe_static(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe_static(u1);
                let u2_norm = self.normalize_universe_static(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }
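The normalizer above folds constant arithmetic bottom-up: `Const(0) + 1` becomes `Const(1)`, and a `Max` of two constants collapses to the larger one. Here is a self-contained miniature of that logic, using a local `U` enum standing in for `Universe` (only the `Const`/`Add`/`Max` variants are modeled):

```rust
// Miniature of the universe normalizer: fold constants bottom-up.
#[derive(Debug, Clone, PartialEq)]
enum U {
    Const(u32),
    Add(Box<U>, u32),
    Max(Box<U>, Box<U>),
}

fn normalize(u: &U) -> U {
    match u {
        U::Add(base, n) => match normalize(base) {
            // Const(m) + n folds to Const(m + n)
            U::Const(m) => U::Const(m + n),
            b => U::Add(Box::new(b), *n),
        },
        U::Max(a, b) => match (normalize(a), normalize(b)) {
            // max of two constants collapses to the larger one
            (U::Const(x), U::Const(y)) => U::Const(x.max(y)),
            (a, b) => U::Max(Box::new(a), Box::new(b)),
        },
        other => other.clone(),
    }
}

fn main() {
    // Const(0) + 1 normalizes to Const(1)
    assert_eq!(normalize(&U::Add(Box::new(U::Const(0)), 1)), U::Const(1));
    // max(Const(1), Const(2)) collapses to Const(2)
    assert_eq!(
        normalize(&U::Max(Box::new(U::Const(1)), Box::new(U::Const(2)))),
        U::Const(2)
    );
}
```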

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }
}
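Meta-variables are stored by name using the `?mN` convention, and `parse_meta_name` recovers the numeric ID by stripping the prefix. A standalone sketch of just that parsing step (with a bare `u64` standing in for `MetaId`):

```rust
// Sketch of the `?mN` naming convention: strip the prefix, parse the digits.
fn parse_meta_name(name: &str) -> Option<u64> {
    name.strip_prefix("?m")?.parse::<u64>().ok()
}

fn main() {
    assert_eq!(parse_meta_name("?m42"), Some(42));
    assert_eq!(parse_meta_name("?m"), None); // prefix but no digits
    assert_eq!(parse_meta_name("x42"), None); // wrong prefix
}
```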
/// Advanced constraint solver with dependency tracking
pub struct Solver {
    /// Meta-variables
    metas: HashMap<MetaId, MetaInfo>,
    /// Active constraints
    constraints: HashMap<ConstraintId, Constraint>,
    /// Constraint queue (ordered by priority)
    queue: VecDeque<ConstraintId>,
    /// Solved substitutions
    substitution: Substitution,
    /// Next meta ID
    next_meta_id: u64,
    /// Next universe meta ID
    next_universe_meta_id: u32,
    /// Next constraint ID
    next_constraint_id: u64,
    /// Type checking context
    context: Context,
    /// Dependency graph: meta -> constraints that depend on it
    dependencies: HashMap<MetaId, HashSet<ConstraintId>>,
    /// Delayed constraints that couldn't be solved immediately
    delayed_constraints: HashSet<ConstraintId>,
    /// Recursion depth counter to prevent stack overflow
    recursion_depth: u32,
    /// Flag to enable Miller pattern solving (advanced)
    enable_miller_patterns: bool,
    /// Flag to enable HasType constraint solving (advanced)
    enable_has_type_solving: bool,
}
impl Solver {
    pub fn new(context: Context) -> Self {
        Self {
            metas: HashMap::new(),
            constraints: HashMap::new(),
            queue: VecDeque::new(),
            substitution: Substitution::new(),
            next_meta_id: 0,
            next_universe_meta_id: 0,
            next_constraint_id: 0,
            context,
            dependencies: HashMap::new(),
            delayed_constraints: HashSet::new(),
            recursion_depth: 0,
            enable_miller_patterns: false,  // Advanced feature
            enable_has_type_solving: false, // Advanced feature
        }
    }

    /// Enable Miller pattern solving (advanced)
    pub fn enable_miller_patterns(&mut self) {
        self.enable_miller_patterns = true;
    }

    /// Check if Miller pattern solving is enabled
    pub fn miller_patterns_enabled(&self) -> bool {
        self.enable_miller_patterns
    }

    /// Enable HasType constraint solving (advanced)
    pub fn enable_has_type_solving(&mut self) {
        self.enable_has_type_solving = true;
    }

    /// Check if HasType constraint solving is enabled
    pub fn has_type_solving_enabled(&self) -> bool {
        self.enable_has_type_solving
    }

    /// Create a fresh meta-variable
    pub fn fresh_meta(&mut self, ctx_vars: Vec<String>, expected_type: Option<Term>) -> MetaId {
        let id = MetaId(self.next_meta_id);
        self.next_meta_id += 1;

        let info = MetaInfo {
            id,
            context: ctx_vars,
            expected_type,
            solution: None,
            dependencies: HashSet::new(),
            occurrences: Vec::new(),
        };

        self.metas.insert(id, info);
        id
    }

    /// Create a fresh universe meta-variable
    pub fn fresh_universe_meta(&mut self) -> Universe {
        let id = self.next_universe_meta_id;
        self.next_universe_meta_id += 1;
        Universe::Meta(id)
    }

    /// Add a constraint to the solver
    pub fn add_constraint(&mut self, constraint: Constraint) -> ConstraintId {
        let id = match &constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        // Track which metas appear in this constraint
        self.track_meta_occurrences(&constraint);

        self.constraints.insert(id, constraint);
        self.queue.push_back(id);
        id
    }

    /// Create a new constraint ID
    pub fn new_constraint_id(&mut self) -> ConstraintId {
        let id = ConstraintId(self.next_constraint_id);
        self.next_constraint_id += 1;
        id
    }

    /// Track meta-variable occurrences in constraints
    fn track_meta_occurrences(&mut self, constraint: &Constraint) {
        let metas = self.collect_metas_in_constraint(constraint);
        let constraint_id = match constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        for meta_id in metas {
            if let Some(info) = self.metas.get_mut(&meta_id) {
                info.occurrences.push(constraint_id);
            }
            self.dependencies
                .entry(meta_id)
                .or_default()
                .insert(constraint_id);
        }
    }

    /// Collect all meta-variables in a constraint
    fn collect_metas_in_constraint(&self, constraint: &Constraint) -> HashSet<MetaId> {
        match constraint {
            Constraint::Unify { left, right, .. } => {
                let mut metas = self.collect_metas_in_term(left);
                metas.extend(self.collect_metas_in_term(right));
                metas
            }
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                let mut metas = self.collect_metas_in_term(term);
                metas.extend(self.collect_metas_in_term(expected_type));
                metas
            }
            Constraint::UnifyUniverse { .. } => {
                // Universe constraints don't directly contain term meta-variables
                // but they might contain universe meta-variables (which we track separately)
                HashSet::new()
            }
            Constraint::UniverseLevel { .. } => {
                // Same as above
                HashSet::new()
            }
            Constraint::Delayed {
                constraint,
                waiting_on,
                ..
            } => {
                let mut metas = waiting_on.clone();
                metas.extend(self.collect_metas_in_constraint(constraint));
                metas
            }
        }
    }

    /// Collect all meta-variables in a term
    fn collect_metas_in_term(&self, term: &Term) -> HashSet<MetaId> {
        let mut metas = HashSet::new();
        self.collect_metas_recursive(term, &mut metas);
        metas
    }

    fn collect_metas_recursive(&self, term: &Term, metas: &mut HashSet<MetaId>) {
        match term {
            Term::Meta(name) => {
                if let Some(id) = self.parse_meta_name(name) {
                    metas.insert(id);
                }
            }
            Term::App(f, arg) => {
                self.collect_metas_recursive(f, metas);
                self.collect_metas_recursive(arg, metas);
            }
            Term::Abs(_, ty, body) | Term::Pi(_, ty, body, _) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(body, metas);
            }
            Term::Let(_, ty, val, body) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(val, metas);
                self.collect_metas_recursive(body, metas);
            }
            _ => {}
        }
    }

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }

    /// Main solving loop
    pub fn solve(&mut self) -> Result<Substitution, TypeError> {
        let max_iterations = 1000;
        let mut iterations = 0;

        while !self.queue.is_empty() && iterations < max_iterations {
            iterations += 1;

            // Pick the best constraint to solve
            if let Some(constraint_id) = self.pick_constraint() {
                let constraint = self.constraints.remove(&constraint_id).unwrap();

                match self.solve_constraint(constraint.clone()) {
                    Ok(progress) => {
                        if progress {
                            // Propagate the solution
                            self.propagate_solution()?;

                            // Wake up delayed constraints that might now be solvable
                            let woken_constraints = self.wake_delayed_constraints()?;

                            // Add woken constraints back to queue with high priority
                            for woken_id in woken_constraints {
                                self.queue.push_front(woken_id);
                            }
                        }
                    }
                    Err(e) => {
                        // Try to delay the constraint if possible
                        if self.can_delay(&constraint_id) {
                            // Determine which meta-variables to wait for
                            let waiting_on = match &constraint {
                                Constraint::Unify { left, right, .. } => {
                                    let mut metas = self.collect_metas_in_term(left);
                                    metas.extend(self.collect_metas_in_term(right));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                Constraint::HasType {
                                    term,
                                    expected_type,
                                    ..
                                } => {
                                    let mut metas = self.collect_metas_in_term(term);
                                    metas.extend(self.collect_metas_in_term(expected_type));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                _ => HashSet::new(),
                            };

                            if !waiting_on.is_empty() {
                                // Put constraint back and delay it
                                self.constraints.insert(constraint_id, constraint);
                                self.delay_constraint(constraint_id, waiting_on)?;
                            } else {
                                // Can't delay, return error
                                return Err(e);
                            }
                        } else {
                            return Err(e);
                        }
                    }
                }
            }
        }

        // Only report non-convergence if we ran out of iterations with work still pending
        if iterations >= max_iterations && !self.queue.is_empty() {
            return Err(TypeError::Internal {
                message: "Constraint solving did not converge".to_string(),
            });
        }

        // Check for unsolved required constraints
        for constraint in self.constraints.values() {
            if let Constraint::Unify {
                strength: ConstraintStrength::Required,
                ..
            } = constraint
            {
                return Err(TypeError::Internal {
                    message: "Unsolved required constraints remain".to_string(),
                });
            }
        }

        Ok(self.substitution.clone())
    }

    /// Pick the next constraint to solve based on heuristics
    fn pick_constraint(&mut self) -> Option<ConstraintId> {
        // Priority order:
        // 1. Constraints with no meta-variables
        // 2. Constraints with only solved meta-variables
        // 3. Simple pattern constraints (Miller patterns)
        // 4. Other constraints

        let mut best_constraint = None;
        let mut best_score = i32::MAX;

        for &id in &self.queue {
            if let Some(constraint) = self.constraints.get(&id) {
                let score = self.score_constraint(constraint);
                if score < best_score {
                    best_score = score;
                    best_constraint = Some(id);
                }
            }
        }

        if let Some(id) = best_constraint {
            self.queue.retain(|&x| x != id);
            Some(id)
        } else {
            self.queue.pop_front()
        }
    }

    /// Score a constraint for solving priority (lower is better)
    fn score_constraint(&self, constraint: &Constraint) -> i32 {
        let metas = self.collect_metas_in_constraint(constraint);

        if metas.is_empty() {
            return 0; // No metas, can solve immediately
        }

        let unsolved_count = metas
            .iter()
            .filter(|m| {
                self.metas
                    .get(m)
                    .and_then(|info| info.solution.as_ref())
                    .is_none()
            })
            .count();

        if unsolved_count == 0 {
            return 1; // All metas solved
        }

        // Check for Miller patterns
        if let Constraint::Unify { left, right, .. } = constraint {
            if self.is_miller_pattern_unification(left, right)
                || self.is_miller_pattern_unification(right, left)
            {
                return 10 + unsolved_count as i32;
            }
        }

        100 + unsolved_count as i32
    }

    /// Check if a unification is a Miller pattern (?M x1 ... xn = t)
    fn is_miller_pattern_unification(&self, left: &Term, right: &Term) -> bool {
        self.is_simple_miller_pattern(left, right)
    }

    /// Check if left is a simple Miller pattern and right is safe to solve with
    pub fn is_simple_miller_pattern(&self, left: &Term, right: &Term) -> bool {
        let (head, args) = self.decompose_application(left);

        if let Term::Meta(meta_name) = head {
            // Must have at least one argument to be interesting
            if args.is_empty() {
                return false;
            }

            // Check if all arguments are distinct variables
            let mut seen = HashSet::new();
            for arg in &args {
                if let Term::Var(x) = arg {
                    if !seen.insert(x.clone()) {
                        return false; // Not distinct
                    }
                } else {
                    return false; // Not a variable
                }
            }

            // Check occurs check - right must not contain the meta-variable
            if let Some(meta_id) = self.parse_meta_name(meta_name) {
                if self.collect_metas_in_term(right).contains(&meta_id) {
                    return false; // Occurs check failure
                }
            }

            // For now, only handle simple cases where right is a variable or constant
            match right {
                Term::Var(_) | Term::Const(_) | Term::Sort(_) => true,
                // Allow meta-to-meta unification only between distinct metas
                Term::Meta(right_name) => meta_name != right_name,
                _ => false, // More complex cases need careful handling
            }
        } else {
            false
        }
    }
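The heart of the Miller pattern check is that every argument in the spine `?M x1 ... xn` must be a *distinct* bound variable; that restriction is what makes the unification problem have a unique most-general solution. The distinctness check reduces to a `HashSet` insertion loop, sketched here on plain variable names (illustrative only):

```rust
use std::collections::HashSet;

// The "all spine arguments are distinct variables" condition that makes
// `?M x1 ... xn = t` a solvable Miller pattern.
fn all_distinct_vars(args: &[&str]) -> bool {
    let mut seen = HashSet::new();
    // `insert` returns false on a duplicate, failing the check
    args.iter().all(|x| seen.insert(*x))
}

fn main() {
    assert!(all_distinct_vars(&["x", "y", "z"])); // ?M x y z = t: a pattern
    assert!(!all_distinct_vars(&["x", "x"])); // repeated variable: not a pattern
}
```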

    /// Decompose an application into head and arguments
    fn decompose_application<'a>(&self, term: &'a Term) -> (&'a Term, Vec<&'a Term>) {
        let mut current = term;
        let mut args = Vec::new();

        while let Term::App(f, arg) = current {
            args.push(arg.as_ref());
            current = f;
        }

        args.reverse();
        (current, args)
    }
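Since applications are left-nested, `decompose_application` walks down the function position and then reverses the collected arguments, so `((f a) b)` yields head `f` with spine `[a, b]`. A self-contained miniature on a two-variant term type:

```rust
// Spine decomposition on a minimal term type: unwind left-nested
// applications, then reverse so arguments appear in application order.
#[derive(Debug, Clone, PartialEq)]
enum T {
    Var(String),
    App(Box<T>, Box<T>),
}

fn decompose(term: &T) -> (&T, Vec<&T>) {
    let mut current = term;
    let mut args = Vec::new();
    while let T::App(f, arg) = current {
        args.push(arg.as_ref());
        current = f;
    }
    args.reverse(); // innermost argument first
    (current, args)
}

fn main() {
    let v = |s: &str| T::Var(s.to_string());
    // ((f a) b)
    let t = T::App(
        Box::new(T::App(Box::new(v("f")), Box::new(v("a")))),
        Box::new(v("b")),
    );
    let (head, args) = decompose(&t);
    assert_eq!(head, &v("f"));
    assert_eq!(args, vec![&v("a"), &v("b")]);
}
```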

    /// Solve a single constraint
    fn solve_constraint(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 100;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in solve_constraint",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.solve_constraint_impl(constraint);
        self.recursion_depth -= 1;
        result
    }

    fn solve_constraint_impl(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        match constraint {
            Constraint::Unify {
                left,
                right,
                strength,
                ..
            } => self.unify(&left, &right, strength),
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                if self.enable_has_type_solving {
                    self.solve_has_type_constraint(&term, &expected_type)
                } else {
                    // Default behavior: don't solve HasType constraints
                    Ok(false)
                }
            }
            Constraint::UnifyUniverse {
                left,
                right,
                strength,
                ..
            } => self.unify_universe(&left, &right, strength),
            Constraint::UniverseLevel { left, right, .. } => {
                // For now, treat level constraints as unification constraints
                self.unify_universe(&left, &right, ConstraintStrength::Preferred)
            }
            Constraint::Delayed { constraint, .. } => {
                // Re-queue the delayed constraint
                let inner = *constraint;
                self.add_constraint(inner);
                Ok(false)
            }
        }
    }

    /// Unify two terms
    fn unify(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 50;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in unify",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.unify_impl(left, right, strength);
        self.recursion_depth -= 1;
        result
    }

    fn unify_impl(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply(left);
        let right = self.substitution.apply(right);

        // Check if already equal
        if self.alpha_equal(&left, &right) {
            return Ok(false);
        }

        // Miller pattern detection and solving (if enabled)
        if self.enable_miller_patterns {
            if self.is_simple_miller_pattern(&left, &right) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&left) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &right);
                }
            } else if self.is_simple_miller_pattern(&right, &left) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&right) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &left);
                }
            }
        }

        match (&left, &right) {
            // Meta-variable cases
            (Term::Meta(m), t) | (t, Term::Meta(m)) => {
                if let Some(meta_id) = self.parse_meta_name(m) {
                    self.solve_meta(meta_id, t)
                } else {
                    Err(TypeError::Internal {
                        message: format!("Invalid meta-variable: {}", m),
                    })
                }
            }

            // Structural cases
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *f1.clone(),
                    right: *f2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: *a1.clone(),
                    right: *a2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                // Rename bound variables to be consistent
                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            // Universe (Sort) cases
            (Term::Sort(u1), Term::Sort(u2)) => self.unify_universe(u1, u2, strength),

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::TypeMismatch {
                        expected: left,
                        actual: right,
                    })
                } else {
                    // Can't solve, but not an error for weak constraints
                    Ok(false)
                }
            }
        }
    }

    /// Unify two universes
    fn unify_universe(
        &mut self,
        left: &Universe,
        right: &Universe,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply_universe(left);
        let right = self.substitution.apply_universe(right);

        // Normalize universes (simplify expressions like 0+1 -> 1)
        let left = self.normalize_universe(&left);
        let right = self.normalize_universe(&right);

        // Check if already equal
        if left == right {
            return Ok(false);
        }

        match (&left, &right) {
            // Same constants
            (Universe::Const(n1), Universe::Const(n2)) if n1 == n2 => Ok(false),

            // Universe variable cases
            (Universe::ScopedVar(s1, v1), Universe::ScopedVar(s2, v2)) if s1 == s2 && v1 == v2 => {
                Ok(false)
            }
            (Universe::ScopedVar(_, _), Universe::ScopedVar(_, _)) => {
                // Different universe variables cannot be unified
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!(
                            "Cannot unify distinct universe variables {} and {}",
                            left, right
                        ),
                    })
                } else {
                    Ok(false)
                }
            }

            // Meta-variable cases
            (Universe::Meta(id), u) | (u, Universe::Meta(id)) => self.solve_universe_meta(*id, u),

            // Structural cases
            (Universe::Add(base1, n1), Universe::Add(base2, n2)) if n1 == n2 => {
                let constraint_id = self.new_constraint_id();
                self.add_constraint(Constraint::UnifyUniverse {
                    id: constraint_id,
                    left: *base1.clone(),
                    right: *base2.clone(),
                    strength,
                });
                Ok(true)
            }

            // Arithmetic constraints: ?u+n = m means ?u = m-n
            (Universe::Add(base, n), Universe::Const(m))
            | (Universe::Const(m), Universe::Add(base, n)) => {
                if *m >= *n {
                    if let Universe::Meta(id) = base.as_ref() {
                        self.solve_universe_meta(*id, &Universe::Const(m - n))
                    } else {
                        // More complex case - generate constraint for base
                        let constraint_id = self.new_constraint_id();
                        self.add_constraint(Constraint::UnifyUniverse {
                            id: constraint_id,
                            left: *base.clone(),
                            right: Universe::Const(m - n),
                            strength,
                        });
                        Ok(true)
                    }
                } else {
                    Err(TypeError::Internal {
                        message: format!("Cannot solve constraint: {} cannot equal {} (would require negative universe)",
                                       if matches!(&left, Universe::Add(_, _)) { &left } else { &right },
                                       if matches!(&left, Universe::Const(_)) { &left } else { &right }),
                    })
                }
            }

            (Universe::Max(u1, v1), Universe::Max(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Universe::IMax(u1, v1), Universe::IMax(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!("Cannot unify universes {} and {}", left, right),
                    })
                } else {
                    Ok(false)
                }
            }
        }
    }
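The arithmetic case in `unify_universe` rearranges `?u + n = m` into `?u = m - n`, rejecting the constraint when `m < n` since universe levels cannot go negative. That guard maps neatly onto checked subtraction, sketched here with `None` standing in for the "would require negative universe" error:

```rust
// The universe arithmetic rule: `?u + n = m` solves to `?u = m - n`,
// but only when m >= n (universe levels are non-negative).
fn solve_add_const(n: u32, m: u32) -> Option<u32> {
    m.checked_sub(n) // None models the negative-universe error
}

fn main() {
    assert_eq!(solve_add_const(1, 3), Some(2)); // ?u + 1 = 3  =>  ?u = 2
    assert_eq!(solve_add_const(3, 1), None); // ?u + 3 = 1  =>  unsolvable
}
```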

    /// Normalize a universe expression
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe(u1);
                let u2_norm = self.normalize_universe(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }

    /// Solve a universe meta-variable
    fn solve_universe_meta(
        &mut self,
        meta_id: u32,
        solution: &Universe,
    ) -> Result<bool, TypeError> {
        // Occurs check for universe meta-variables
        if self.occurs_in_universe(meta_id, solution) {
            return Err(TypeError::Internal {
                message: format!("Universe occurs check failed for ?u{}", meta_id),
            });
        }

        // Record solution
        self.substitution.insert_universe(meta_id, solution.clone());
        Ok(true)
    }

    /// Check if a universe meta-variable occurs in a universe expression
    #[allow(clippy::only_used_in_recursion)]
    fn occurs_in_universe(&self, meta_id: u32, universe: &Universe) -> bool {
        match universe {
            Universe::Meta(id) => *id == meta_id,
            Universe::Add(base, _) => self.occurs_in_universe(meta_id, base),
            Universe::Max(u, v) | Universe::IMax(u, v) => {
                self.occurs_in_universe(meta_id, u) || self.occurs_in_universe(meta_id, v)
            }
            _ => false,
        }
    }

    /// Solve a meta-variable
    fn solve_meta(&mut self, meta_id: MetaId, solution: &Term) -> Result<bool, TypeError> {
        // Occurs check
        if self.collect_metas_in_term(solution).contains(&meta_id) {
            return Err(TypeError::Internal {
                message: format!("Occurs check failed for ?m{}", meta_id.0),
            });
        }

        // Check solution is well-scoped
        if let Some(meta_info) = self.metas.get(&meta_id) {
            if !self.is_well_scoped(solution, &meta_info.context) {
                return Err(TypeError::Internal {
                    message: format!("Solution for ?m{} is not well-scoped", meta_id.0),
                });
            }
        }

        // Record solution
        self.substitution.insert(meta_id, solution.clone());

        if let Some(info) = self.metas.get_mut(&meta_id) {
            info.solution = Some(solution.clone());
        }

        Ok(true)
    }

    /// Check if a term is well-scoped in a context
    fn is_well_scoped(&self, term: &Term, context: &[String]) -> bool {
        match term {
            Term::Var(x) => {
                // Check if it's a bound variable or a global constant
                context.contains(x)
                    || self.context.lookup(x).is_some()
                    || self.context.lookup_axiom(x).is_some()
                    || self.context.lookup_constructor(x).is_some()
            }
            Term::App(f, arg) => {
                self.is_well_scoped(f, context) && self.is_well_scoped(arg, context)
            }
            Term::Abs(x, ty, body) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Pi(x, ty, body, _) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Const(name) => {
                // Constants should be in the global context
                self.context.lookup_axiom(name).is_some()
                    || self.context.lookup_constructor(name).is_some()
            }
            _ => true, // Sorts, meta-variables, etc.
        }
    }

    /// Alpha equality check
    fn alpha_equal(&self, t1: &Term, t2: &Term) -> bool {
        self.alpha_equal_aux(t1, t2, &mut vec![])
    }

    #[allow(clippy::only_used_in_recursion)]
    fn alpha_equal_aux(&self, t1: &Term, t2: &Term, renamings: &mut Vec<(String, String)>) -> bool {
        match (t1, t2) {
            (Term::Var(x), Term::Var(y)) => {
                // Walk renamings from innermost to outermost so shadowing is
                // respected: the most recent binder mentioning either name
                // decides, and a renamed variable must map to its counterpart.
                for (a, b) in renamings.iter().rev() {
                    if a == x || b == y {
                        return a == x && b == y;
                    }
                }
                x == y
            }
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                self.alpha_equal_aux(f1, f2, renamings) && self.alpha_equal_aux(a1, a2, renamings)
            }
            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Sort(u1), Term::Sort(u2)) => u1 == u2,
            (Term::Const(c1), Term::Const(c2)) => c1 == c2,
            (Term::Meta(m1), Term::Meta(m2)) => m1 == m2,
            _ => false,
        }
    }

    /// Rename a variable in a term
    #[allow(clippy::only_used_in_recursion)]
    fn rename_var(&self, old: &str, new: &str, term: &Term) -> Term {
        match term {
            Term::Var(x) if x == old => Term::Var(new.to_string()),
            Term::Var(x) => Term::Var(x.clone()),
            Term::App(f, arg) => Term::App(
                Box::new(self.rename_var(old, new, f)),
                Box::new(self.rename_var(old, new, arg)),
            ),
            Term::Abs(x, ty, body) if x != old => Term::Abs(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                Box::new(self.rename_var(old, new, body)),
            ),
            Term::Pi(x, ty, body, implicit) if x != old => Term::Pi(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                Box::new(self.rename_var(old, new, body)),
                *implicit,
            ),
            _ => term.clone(),
        }
    }

    /// Propagate solutions to dependent constraints
    fn propagate_solution(&mut self) -> Result<(), TypeError> {
        // Apply substitution to all remaining constraints
        let constraints: Vec<_> = self.constraints.drain().collect();

        for (id, constraint) in constraints {
            let updated = self.apply_subst_to_constraint(constraint);
            self.constraints.insert(id, updated);
        }

        Ok(())
    }

    /// Apply substitution to a constraint
    fn apply_subst_to_constraint(&self, constraint: Constraint) -> Constraint {
        match constraint {
            Constraint::Unify {
                id,
                left,
                right,
                strength,
            } => Constraint::Unify {
                id,
                left: self.substitution.apply(&left),
                right: self.substitution.apply(&right),
                strength,
            },
            Constraint::HasType {
                id,
                term,
                expected_type,
            } => Constraint::HasType {
                id,
                term: self.substitution.apply(&term),
                expected_type: self.substitution.apply(&expected_type),
            },
            Constraint::UnifyUniverse {
                id,
                left,
                right,
                strength,
            } => Constraint::UnifyUniverse {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
                strength,
            },
            Constraint::UniverseLevel { id, left, right } => Constraint::UniverseLevel {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
            },
            Constraint::Delayed {
                id,
                constraint,
                waiting_on,
            } => Constraint::Delayed {
                id,
                constraint: Box::new(self.apply_subst_to_constraint(*constraint)),
                waiting_on,
            },
        }
    }

    /// Solve a HasType constraint by type inference and unification
    fn solve_has_type_constraint(
        &mut self,
        term: &Term,
        expected_type: &Term,
    ) -> Result<bool, TypeError> {
        match term {
            Term::Meta(meta_name) => {
                // If the term is a meta-variable, we can potentially solve it
                if let Some(meta_id) = self.parse_meta_name(meta_name) {
                    if let Some(meta_info) = self.metas.get(&meta_id) {
                        if meta_info.solution.is_none() {
                            // Try to construct a term that has the expected type
                            if let Some(solution) = self.synthesize_term_of_type(expected_type)? {
                                return self.solve_meta(meta_id, &solution);
                            }
                        }
                    }
                }
                // If we can't solve the meta, try unifying with expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }

            Term::App(f, arg) => {
                // For applications, we need to ensure the function type is compatible
                // Create a fresh meta for the function type: ?F : ?A -> ?B
                let arg_type_meta_id = self.fresh_meta(vec![], None);
                let result_type_meta_id = self.fresh_meta(vec![], None);
                let arg_type_meta = Term::Meta(format!("?m{}", arg_type_meta_id.0));
                let result_type_meta = Term::Meta(format!("?m{}", result_type_meta_id.0));
                let func_type = Term::Pi(
                    "_".to_string(),
                    Box::new(arg_type_meta.clone()),
                    Box::new(result_type_meta.clone()),
                    false,
                );

                // Add constraints:
                // 1. f : ?A -> ?B
                // 2. arg : ?A
                // 3. ?B ≡ expected_type
                let f_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: f.as_ref().clone(),
                    expected_type: func_type,
                };
                let arg_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: arg.as_ref().clone(),
                    expected_type: arg_type_meta,
                };
                let result_constraint = Constraint::Unify {
                    id: self.new_constraint_id(),
                    left: result_type_meta,
                    right: expected_type.clone(),
                    strength: ConstraintStrength::Required,
                };

                self.add_constraint(f_constraint);
                self.add_constraint(arg_constraint);
                self.add_constraint(result_constraint);

                Ok(true)
            }

            Term::Abs(param, param_type, body) => {
                // For lambda abstractions, expected type should be a Pi type
                match expected_type {
                    Term::Pi(_, expected_param_type, expected_body_type, _) => {
                        // Unify parameter types and check body type
                        let param_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: param_type.as_ref().clone(),
                            right: expected_param_type.as_ref().clone(),
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: expected_body_type.as_ref().clone(),
                        };

                        self.add_constraint(param_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    Term::Meta(_) => {
                        // Expected type is a meta-variable, create a Pi type for it
                        let body_type_meta_id = self.fresh_meta(vec![param.clone()], None);
                        let body_type_meta = Term::Meta(format!("?m{}", body_type_meta_id.0));
                        let pi_type = Term::Pi(
                            param.clone(),
                            param_type.clone(),
                            Box::new(body_type_meta.clone()),
                            false,
                        );

                        let type_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: expected_type.clone(),
                            right: pi_type,
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: body_type_meta,
                        };

                        self.add_constraint(type_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    _ => {
                        // Type mismatch - lambda can't have non-function type
                        Err(TypeError::TypeMismatch {
                            expected: expected_type.clone(),
                            actual: Term::Pi(
                                param.clone(),
                                param_type.clone(),
                                Box::new(Term::Meta("?unknown".to_string())),
                                false,
                            ),
                        })
                    }
                }
            }

            _ => {
                // For other terms, just unify with the expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }
        }
    }

    /// Attempt to synthesize a term of a given type (basic heuristics)
    fn synthesize_term_of_type(&self, expected_type: &Term) -> Result<Option<Term>, TypeError> {
        match expected_type {
            Term::Sort(_) => {
                // For Sort types, we could return a fresh meta or a simple type
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
            Term::Pi(param, param_type, _body_type, _implicit) => {
                // For Pi types, synthesize a lambda that matches the Pi structure
                // Use a fresh meta-variable for the body to avoid infinite recursion
                let body_term = Term::Meta(format!("?body{}", self.next_meta_id));

                Ok(Some(Term::Abs(
                    param.clone(),
                    param_type.clone(),
                    Box::new(body_term),
                )))
            }
            Term::Meta(_) => {
                // Can't synthesize for unknown types
                Ok(None)
            }
            _ => {
                // For concrete types, return a meta-variable
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
        }
    }

    /// Check if a term is a Miller pattern
    /// Miller patterns are terms of the form: ?M x1 x2 ... xn where xi are
    /// distinct bound variables
    fn is_miller_pattern(&self, term: &Term) -> bool {
        let (head, args) = self.decompose_application(term);

        match head {
            Term::Meta(_) => {
                // Check that all arguments are distinct variables
                let mut seen = HashSet::new();
                for arg in args {
                    if let Term::Var(x) = arg {
                        if !seen.insert(x.clone()) {
                            return false; // Not distinct
                        }
                    } else {
                        return false; // Not a variable
                    }
                }
                true
            }
            _ => false,
        }
    }

    /// Extract the spine of a Miller pattern (the meta-variable and its
    /// arguments)
    fn extract_miller_spine(&self, term: &Term) -> Option<(String, Vec<String>)> {
        // First validate it's actually a Miller pattern
        if !self.is_miller_pattern(term) {
            return None;
        }

        let (head, args) = self.decompose_application(term);

        if let Term::Meta(meta_name) = head {
            // Collect variable names (we already know they're all variables from
            // is_miller_pattern)
            let var_names: Vec<String> = args
                .iter()
                .map(|arg| {
                    if let Term::Var(name) = arg {
                        name.clone()
                    } else {
                        unreachable!()
                    }
                })
                .collect();

            Some((meta_name.clone(), var_names))
        } else {
            None
        }
    }

    /// Solve simple Miller pattern unification
    /// Only handles simple cases
    fn solve_miller_pattern(
        &mut self,
        meta_name: &str,
        spine: &[String],
        other: &Term,
    ) -> Result<bool, TypeError> {
        // Check if we can abstract over the spine variables
        if !Self::can_abstract_over_variables(other, spine) {
            // Can't abstract, fall back to regular unification
            return self.unify(
                &Term::Meta(meta_name.to_string()),
                other,
                ConstraintStrength::Required,
            );
        }

        if let Some(meta_id) = self.parse_meta_name(meta_name) {
            // Only handle simple cases to avoid complexity
            match other {
                // Case 1: ?M x ≡ y (different variable) -> ?M := λx. y
                Term::Var(y) if spine.len() == 1 && spine[0] != *y => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))), /* Type inferred later */
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                // Case 2: ?M x ≡ c (constant) -> ?M := λx. c
                Term::Const(_) | Term::Sort(_) if spine.len() == 1 => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))),
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                _ => {
                    // For more complex cases, we could build the full lambda
                    // abstraction but for now, fall back to
                    // regular unification
                }
            }
        }

        // Fall back to regular unification
        self.unify(
            &Term::Meta(meta_name.to_string()),
            other,
            ConstraintStrength::Required,
        )
    }

    /// Check if a term can be abstracted over the given variables
    fn can_abstract_over_variables(term: &Term, vars: &[String]) -> bool {
        // For now, implement a simple check
        // In a full implementation, this would check that the term only uses variables
        // that are either in the spine or are bound locally
        match term {
            Term::Var(v) => vars.contains(v), // Variable must be in spine
            Term::App(f, arg) => {
                Self::can_abstract_over_variables(f, vars)
                    && Self::can_abstract_over_variables(arg, vars)
            }
            Term::Abs(param, ty, body) => {
                // Extend the scope with the bound parameter
                let mut extended_vars = vars.to_vec();
                extended_vars.push(param.clone());
                Self::can_abstract_over_variables(ty, vars)
                    && Self::can_abstract_over_variables(body, &extended_vars)
            }
            Term::Meta(_) => true,  // Meta-variables are always abstractable
            Term::Const(_) => true, // Constants are always abstractable
            Term::Sort(_) => true,  // Sorts are always abstractable
            _ => false,             // Conservative for complex constructs
        }
    }

    /// Delay a constraint for later processing
    fn delay_constraint(
        &mut self,
        constraint_id: ConstraintId,
        waiting_on: HashSet<MetaId>,
    ) -> Result<(), TypeError> {
        if let Some(constraint) = self.constraints.remove(&constraint_id) {
            let delayed = Constraint::Delayed {
                id: constraint_id,
                constraint: Box::new(constraint),
                waiting_on,
            };
            self.constraints.insert(constraint_id, delayed);
            self.delayed_constraints.insert(constraint_id);
        }
        Ok(())
    }

    /// Wake up delayed constraints that are no longer blocked
    fn wake_delayed_constraints(&mut self) -> Result<Vec<ConstraintId>, TypeError> {
        let mut woken = Vec::new();
        let mut to_wake = Vec::new();

        // Check which delayed constraints can now be woken up
        for constraint_id in &self.delayed_constraints {
            if let Some(Constraint::Delayed {
                waiting_on,
                constraint,
                ..
            }) = self.constraints.get(constraint_id)
            {
                // Check if all blocking metas are now solved
                let all_solved = waiting_on.iter().all(|meta_id| {
                    self.metas
                        .get(meta_id)
                        .is_some_and(|info| info.solution.is_some())
                });

                if all_solved {
                    to_wake.push((*constraint_id, constraint.as_ref().clone()));
                }
            }
        }

        // Wake up the constraints
        for (constraint_id, original_constraint) in to_wake {
            self.constraints.insert(constraint_id, original_constraint);
            self.delayed_constraints.remove(&constraint_id);
            woken.push(constraint_id);
        }

        Ok(woken)
    }

    /// Check if we can delay a constraint
    fn can_delay(&self, constraint_id: &ConstraintId) -> bool {
        if let Some(constraint) = self.constraints.get(constraint_id) {
            match constraint {
                Constraint::Unify {
                    left,
                    right,
                    strength,
                    ..
                } => {
                    // We can delay weak constraints that involve unsolved metas
                    if *strength == ConstraintStrength::Weak {
                        let left_metas = self.collect_metas_in_term(left);
                        let right_metas = self.collect_metas_in_term(right);

                        // Check if any involved metas are unsolved
                        for meta_id in left_metas.iter().chain(right_metas.iter()) {
                            if self.is_unsolved_meta(meta_id) {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::HasType {
                    term,
                    expected_type,
                    ..
                } => {
                    // Can delay HasType constraints involving unsolved metas
                    let term_metas = self.collect_metas_in_term(term);
                    let type_metas = self.collect_metas_in_term(expected_type);

                    for meta_id in term_metas.iter().chain(type_metas.iter()) {
                        if let Some(meta_info) = self.metas.get(meta_id) {
                            if meta_info.solution.is_none() {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::Delayed { .. } => true, // Already delayed
                _ => false,
            }
        } else {
            false
        }
    }

    /// Check if a meta-variable is unsolved
    fn is_unsolved_meta(&self, meta_id: &MetaId) -> bool {
        self.metas
            .get(meta_id)
            .is_some_and(|info| info.solution.is_none())
    }
}
/// Constraint types
#[derive(Debug, Clone)]
pub enum Constraint {
    /// Unification: t1 ≡ t2
    Unify {
        id: ConstraintId,
        left: Term,
        right: Term,
        strength: ConstraintStrength,
    },
    /// Type constraint: term : type
    HasType {
        id: ConstraintId,
        term: Term,
        expected_type: Term,
    },
    /// Universe unification: u1 ≡ u2
    UnifyUniverse {
        id: ConstraintId,
        left: Universe,
        right: Universe,
        strength: ConstraintStrength,
    },
    /// Universe level constraint: u1 ≤ u2
    UniverseLevel {
        id: ConstraintId,
        left: Universe,
        right: Universe,
    },
    /// Delayed constraint (for complex patterns)
    Delayed {
        id: ConstraintId,
        constraint: Box<Constraint>,
        waiting_on: HashSet<MetaId>,
    },
}
}

Our constraint system supports multiple categories of relationships that arise during dependent type checking:

Unification Constraints (Unify) represent the core requirement that two terms must be equal, with strength indicating solving priority. These constraints drive the primary unification algorithm and handle most structural equality requirements.

Type Constraints (HasType) ensure that terms inhabit their expected types, enabling type-directed constraint generation that guides the solver toward meaningful solutions.

Universe Constraints handle the complex relationships between universe levels, supporting both equality (UnifyUniverse) and ordering (UniverseLevel) requirements that maintain the hierarchy’s consistency.

Delayed Constraints represent patterns that cannot be solved immediately; they record the specific meta-variables they are blocked on and are retried once those metas have been solved.

Constraint Strength and Prioritization

#![allow(unused)]
fn main() {
use std::collections::{HashMap, HashSet, VecDeque};
use std::fmt;
use crate::ast::{Term, Universe};
use crate::context::Context;
use crate::errors::TypeError;
/// Meta-variable identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct MetaId(pub u64);
impl fmt::Display for MetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?m{}", self.0)
    }
}
/// Universe meta-variable identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct UniverseMetaId(pub u64);
impl fmt::Display for UniverseMetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?u{}", self.0)
    }
}
/// Information about a meta-variable
#[derive(Debug, Clone)]
pub struct MetaInfo {
    pub id: MetaId,
    pub context: Vec<String>, // Variables in scope when meta was created
    pub expected_type: Option<Term>, // Expected type of the meta-variable
    pub solution: Option<Term>, // Solution if solved
    pub dependencies: HashSet<MetaId>, // Meta-variables this depends on
    pub occurrences: Vec<ConstraintId>, // Constraints this meta appears in
}
/// Constraint identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct ConstraintId(pub u64);
/// Constraint types
#[derive(Debug, Clone)]
pub enum Constraint {
    /// Unification: t1 ≡ t2
    Unify {
        id: ConstraintId,
        left: Term,
        right: Term,
        strength: ConstraintStrength,
    },
    /// Type constraint: term : type
    HasType {
        id: ConstraintId,
        term: Term,
        expected_type: Term,
    },
    /// Universe unification: u1 ≡ u2
    UnifyUniverse {
        id: ConstraintId,
        left: Universe,
        right: Universe,
        strength: ConstraintStrength,
    },
    /// Universe level constraint: u1 ≤ u2
    UniverseLevel {
        id: ConstraintId,
        left: Universe,
        right: Universe,
    },
    /// Delayed constraint (for complex patterns)
    Delayed {
        id: ConstraintId,
        constraint: Box<Constraint>,
        waiting_on: HashSet<MetaId>,
    },
}
/// Substitution mapping meta-variables to terms
#[derive(Debug, Clone, Default)]
pub struct Substitution {
    mapping: HashMap<MetaId, Term>,
    universe_mapping: HashMap<u32, Universe>,
}
impl Substitution {
    pub fn new() -> Self {
        Self::default()
    }

    pub fn insert(&mut self, meta: MetaId, term: Term) {
        self.mapping.insert(meta, term);
    }

    pub fn get(&self, meta: &MetaId) -> Option<&Term> {
        self.mapping.get(meta)
    }

    pub fn insert_universe(&mut self, meta_id: u32, universe: Universe) {
        self.universe_mapping.insert(meta_id, universe);
    }

    pub fn get_universe(&self, meta_id: &u32) -> Option<&Universe> {
        self.universe_mapping.get(meta_id)
    }

    /// Apply substitution to a term
    pub fn apply(&self, term: &Term) -> Term {
        match term {
            Term::Meta(name) => {
                // Try to parse meta ID and look up substitution
                if let Some(meta_id) = self.parse_meta_name(name) {
                    if let Some(subst) = self.mapping.get(&meta_id) {
                        // Recursively apply substitution
                        return self.apply(subst);
                    }
                }
                term.clone()
            }
            Term::App(f, arg) => Term::App(Box::new(self.apply(f)), Box::new(self.apply(arg))),
            Term::Abs(x, ty, body) => Term::Abs(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
            ),
            Term::Pi(x, ty, body, implicit) => Term::Pi(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
                *implicit,
            ),
            Term::Let(x, ty, val, body) => Term::Let(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(val)),
                Box::new(self.apply(body)),
            ),
            Term::Sort(u) => Term::Sort(self.apply_universe(u)),
            // Other cases remain unchanged
            _ => term.clone(),
        }
    }

    /// Apply substitution to a universe
    pub fn apply_universe(&self, universe: &Universe) -> Universe {
        let result = match universe {
            Universe::Meta(id) => {
                if let Some(subst) = self.universe_mapping.get(id) {
                    // Recursively apply substitution
                    self.apply_universe(subst)
                } else {
                    universe.clone()
                }
            }
            Universe::Add(base, n) => Universe::Add(Box::new(self.apply_universe(base)), *n),
            Universe::Max(u, v) => Universe::Max(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            Universe::IMax(u, v) => Universe::IMax(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            // Constants, variables, and scoped variables remain unchanged during substitution
            Universe::Const(_) | Universe::ScopedVar(_, _) => universe.clone(),
        };
        // Normalize the result to simplify expressions like Const(0) + 1 -> Const(1)
        self.normalize_universe_static(&result)
    }

    /// Static normalization (doesn't require &self)
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe_static(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe_static(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe_static(u1);
                let u2_norm = self.normalize_universe_static(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }
}
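
// Standalone illustration (not used by the solver): the constant folding
// performed by `normalize_universe_static` above, replayed on a minimal
// local mirror of the `Universe` shape. Everything named `Mini*` below is
// hypothetical and exists only for this sketch.
#[derive(Clone, Debug, PartialEq)]
enum MiniUniverse {
    Const(u32),
    Add(Box<MiniUniverse>, u32),
    Max(Box<MiniUniverse>, Box<MiniUniverse>),
}

fn normalize_mini(u: &MiniUniverse) -> MiniUniverse {
    match u {
        // Fold `Const(m) + n` into `Const(m + n)`
        MiniUniverse::Add(base, n) => match normalize_mini(base) {
            MiniUniverse::Const(m) => MiniUniverse::Const(m + n),
            b => MiniUniverse::Add(Box::new(b), *n),
        },
        // Fold `max(Const(a), Const(b))` into `Const(max(a, b))`
        MiniUniverse::Max(a, b) => match (normalize_mini(a), normalize_mini(b)) {
            (MiniUniverse::Const(x), MiniUniverse::Const(y)) => {
                MiniUniverse::Const(x.max(y))
            }
            (a, b) => MiniUniverse::Max(Box::new(a), Box::new(b)),
        },
        other => other.clone(),
    }
}
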
/// Advanced constraint solver with dependency tracking
pub struct Solver {
    /// Meta-variables
    metas: HashMap<MetaId, MetaInfo>,
    /// Active constraints
    constraints: HashMap<ConstraintId, Constraint>,
    /// Constraint queue (ordered by priority)
    queue: VecDeque<ConstraintId>,
    /// Solved substitutions
    substitution: Substitution,
    /// Next meta ID
    next_meta_id: u64,
    /// Next universe meta ID
    next_universe_meta_id: u32,
    /// Next constraint ID
    next_constraint_id: u64,
    /// Type checking context
    context: Context,
    /// Dependency graph: meta -> constraints that depend on it
    dependencies: HashMap<MetaId, HashSet<ConstraintId>>,
    /// Delayed constraints that couldn't be solved immediately
    delayed_constraints: HashSet<ConstraintId>,
    /// Recursion depth counter to prevent stack overflow
    recursion_depth: u32,
    /// Flag to enable Miller pattern solving (advanced)
    enable_miller_patterns: bool,
    /// Flag to enable HasType constraint solving (advanced)
    enable_has_type_solving: bool,
}
impl Solver {
    pub fn new(context: Context) -> Self {
        Self {
            metas: HashMap::new(),
            constraints: HashMap::new(),
            queue: VecDeque::new(),
            substitution: Substitution::new(),
            next_meta_id: 0,
            next_universe_meta_id: 0,
            next_constraint_id: 0,
            context,
            dependencies: HashMap::new(),
            delayed_constraints: HashSet::new(),
            recursion_depth: 0,
            enable_miller_patterns: false,  // Advanced feature
            enable_has_type_solving: false, // Advanced feature
        }
    }

    /// Enable Miller pattern solving (advanced)
    pub fn enable_miller_patterns(&mut self) {
        self.enable_miller_patterns = true;
    }

    /// Check if Miller pattern solving is enabled
    pub fn miller_patterns_enabled(&self) -> bool {
        self.enable_miller_patterns
    }

    /// Enable HasType constraint solving (advanced)
    pub fn enable_has_type_solving(&mut self) {
        self.enable_has_type_solving = true;
    }

    /// Check if HasType constraint solving is enabled
    pub fn has_type_solving_enabled(&self) -> bool {
        self.enable_has_type_solving
    }

    /// Create a fresh meta-variable
    pub fn fresh_meta(&mut self, ctx_vars: Vec<String>, expected_type: Option<Term>) -> MetaId {
        let id = MetaId(self.next_meta_id);
        self.next_meta_id += 1;

        let info = MetaInfo {
            id,
            context: ctx_vars,
            expected_type,
            solution: None,
            dependencies: HashSet::new(),
            occurrences: Vec::new(),
        };

        self.metas.insert(id, info);
        id
    }

    /// Create a fresh universe meta-variable
    pub fn fresh_universe_meta(&mut self) -> Universe {
        let id = self.next_universe_meta_id;
        self.next_universe_meta_id += 1;
        Universe::Meta(id)
    }

    /// Add a constraint to the solver
    pub fn add_constraint(&mut self, constraint: Constraint) -> ConstraintId {
        let id = match &constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        // Track which metas appear in this constraint
        self.track_meta_occurrences(&constraint);

        self.constraints.insert(id, constraint);
        self.queue.push_back(id);
        id
    }

    /// Create a new constraint ID
    pub fn new_constraint_id(&mut self) -> ConstraintId {
        let id = ConstraintId(self.next_constraint_id);
        self.next_constraint_id += 1;
        id
    }

    /// Track meta-variable occurrences in constraints
    fn track_meta_occurrences(&mut self, constraint: &Constraint) {
        let metas = self.collect_metas_in_constraint(constraint);
        let constraint_id = match constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        for meta_id in metas {
            if let Some(info) = self.metas.get_mut(&meta_id) {
                info.occurrences.push(constraint_id);
            }
            self.dependencies
                .entry(meta_id)
                .or_default()
                .insert(constraint_id);
        }
    }

    /// Collect all meta-variables in a constraint
    fn collect_metas_in_constraint(&self, constraint: &Constraint) -> HashSet<MetaId> {
        match constraint {
            Constraint::Unify { left, right, .. } => {
                let mut metas = self.collect_metas_in_term(left);
                metas.extend(self.collect_metas_in_term(right));
                metas
            }
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                let mut metas = self.collect_metas_in_term(term);
                metas.extend(self.collect_metas_in_term(expected_type));
                metas
            }
            Constraint::UnifyUniverse { .. } => {
                // Universe constraints don't directly contain term meta-variables
                // but they might contain universe meta-variables (which we track separately)
                HashSet::new()
            }
            Constraint::UniverseLevel { .. } => {
                // Same as above
                HashSet::new()
            }
            Constraint::Delayed {
                constraint,
                waiting_on,
                ..
            } => {
                let mut metas = waiting_on.clone();
                metas.extend(self.collect_metas_in_constraint(constraint));
                metas
            }
        }
    }

    /// Collect all meta-variables in a term
    fn collect_metas_in_term(&self, term: &Term) -> HashSet<MetaId> {
        let mut metas = HashSet::new();
        self.collect_metas_recursive(term, &mut metas);
        metas
    }

    fn collect_metas_recursive(&self, term: &Term, metas: &mut HashSet<MetaId>) {
        match term {
            Term::Meta(name) => {
                if let Some(id) = self.parse_meta_name(name) {
                    metas.insert(id);
                }
            }
            Term::App(f, arg) => {
                self.collect_metas_recursive(f, metas);
                self.collect_metas_recursive(arg, metas);
            }
            Term::Abs(_, ty, body) | Term::Pi(_, ty, body, _) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(body, metas);
            }
            Term::Let(_, ty, val, body) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(val, metas);
                self.collect_metas_recursive(body, metas);
            }
            _ => {}
        }
    }

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }

    /// Main solving loop
    pub fn solve(&mut self) -> Result<Substitution, TypeError> {
        let max_iterations = 1000;
        let mut iterations = 0;

        while !self.queue.is_empty() && iterations < max_iterations {
            iterations += 1;

            // Pick the best constraint to solve
            if let Some(constraint_id) = self.pick_constraint() {
                // The queue can hold stale IDs for constraints that were
                // already removed (e.g. delayed); skip them instead of panicking.
                let Some(constraint) = self.constraints.remove(&constraint_id) else {
                    continue;
                };

                match self.solve_constraint(constraint.clone()) {
                    Ok(progress) => {
                        if progress {
                            // Propagate the solution
                            self.propagate_solution()?;

                            // Wake up delayed constraints that might now be solvable
                            let woken_constraints = self.wake_delayed_constraints()?;

                            // Add woken constraints back to queue with high priority
                            for woken_id in woken_constraints {
                                self.queue.push_front(woken_id);
                            }
                        }
                    }
                    Err(e) => {
                        // Try to delay the constraint if possible
                        if self.can_delay(&constraint_id) {
                            // Determine which meta-variables to wait for
                            let waiting_on = match &constraint {
                                Constraint::Unify { left, right, .. } => {
                                    let mut metas = self.collect_metas_in_term(left);
                                    metas.extend(self.collect_metas_in_term(right));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                Constraint::HasType {
                                    term,
                                    expected_type,
                                    ..
                                } => {
                                    let mut metas = self.collect_metas_in_term(term);
                                    metas.extend(self.collect_metas_in_term(expected_type));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                _ => HashSet::new(),
                            };

                            if !waiting_on.is_empty() {
                                // Put constraint back and delay it
                                self.constraints.insert(constraint_id, constraint);
                                self.delay_constraint(constraint_id, waiting_on)?;
                            } else {
                                // Can't delay, return error
                                return Err(e);
                            }
                        } else {
                            return Err(e);
                        }
                    }
                }
            }
        }

        if iterations >= max_iterations {
            return Err(TypeError::Internal {
                message: "Constraint solving did not converge".to_string(),
            });
        }

        // Check for unsolved required constraints
        for constraint in self.constraints.values() {
            if let Constraint::Unify {
                strength: ConstraintStrength::Required,
                ..
            } = constraint
            {
                return Err(TypeError::Internal {
                    message: "Unsolved required constraints remain".to_string(),
                });
            }
        }

        Ok(self.substitution.clone())
    }

    /// Pick the next constraint to solve based on heuristics
    fn pick_constraint(&mut self) -> Option<ConstraintId> {
        // Priority order:
        // 1. Constraints with no meta-variables
        // 2. Constraints with only solved meta-variables
        // 3. Simple pattern constraints (Miller patterns)
        // 4. Other constraints

        let mut best_constraint = None;
        let mut best_score = i32::MAX;

        for &id in &self.queue {
            if let Some(constraint) = self.constraints.get(&id) {
                let score = self.score_constraint(constraint);
                if score < best_score {
                    best_score = score;
                    best_constraint = Some(id);
                }
            }
        }

        if let Some(id) = best_constraint {
            self.queue.retain(|&x| x != id);
            Some(id)
        } else {
            self.queue.pop_front()
        }
    }

    /// Score a constraint for solving priority (lower is better)
    fn score_constraint(&self, constraint: &Constraint) -> i32 {
        let metas = self.collect_metas_in_constraint(constraint);

        if metas.is_empty() {
            return 0; // No metas, can solve immediately
        }

        let unsolved_count = metas
            .iter()
            .filter(|m| {
                self.metas
                    .get(m)
                    .and_then(|info| info.solution.as_ref())
                    .is_none()
            })
            .count();

        if unsolved_count == 0 {
            return 1; // All metas solved
        }

        // Check for Miller patterns
        if let Constraint::Unify { left, right, .. } = constraint {
            if self.is_miller_pattern_unification(left, right)
                || self.is_miller_pattern_unification(right, left)
            {
                return 10 + unsolved_count as i32;
            }
        }

        100 + unsolved_count as i32
    }

    /// Check if a unification is a Miller pattern (?M x1 ... xn = t)
    fn is_miller_pattern_unification(&self, left: &Term, right: &Term) -> bool {
        self.is_simple_miller_pattern(left, right)
    }

    /// Check if `left` is a simple Miller pattern (`?M x1 ... xn` with
    /// distinct variable arguments) whose right-hand side is safe to solve against
    pub fn is_simple_miller_pattern(&self, left: &Term, right: &Term) -> bool {
        let (head, args) = self.decompose_application(left);

        if let Term::Meta(meta_name) = head {
            // Must have at least one argument to be interesting
            if args.is_empty() {
                return false;
            }

            // Check if all arguments are distinct variables
            let mut seen = HashSet::new();
            for arg in &args {
                if let Term::Var(x) = arg {
                    if !seen.insert(x.clone()) {
                        return false; // Not distinct
                    }
                } else {
                    return false; // Not a variable
                }
            }

            // Check occurs check - right must not contain the meta-variable
            if let Some(meta_id) = self.parse_meta_name(meta_name) {
                if self.collect_metas_in_term(right).contains(&meta_id) {
                    return false; // Occurs check failure
                }
            }

            // For now, only handle simple cases where right is a variable or constant
            match right {
                Term::Var(_) | Term::Const(_) | Term::Sort(_) => true,
                // Allow meta-to-meta solving only between distinct metas
                Term::Meta(right_name) => meta_name != right_name,
                _ => false, // More complex cases need careful handling
            }
        } else {
            false
        }
    }
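
    /// Illustration only (hypothetical helper, not referenced by the solver):
    /// the heart of the Miller pattern check above is that the spine
    /// arguments must be pairwise-distinct variables, e.g. `?M x y` is a
    /// pattern while `?M x x` is not.
    #[allow(dead_code)]
    fn miller_distinctness_demo() -> (bool, bool) {
        fn distinct(args: &[&str]) -> bool {
            let mut seen = std::collections::HashSet::new();
            args.iter().all(|a| seen.insert(*a))
        }
        (distinct(&["x", "y"]), distinct(&["x", "x"]))
    }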

    /// Decompose an application into head and arguments
    fn decompose_application<'a>(&self, term: &'a Term) -> (&'a Term, Vec<&'a Term>) {
        let mut current = term;
        let mut args = Vec::new();

        while let Term::App(f, arg) = current {
            args.push(arg.as_ref());
            current = f;
        }

        args.reverse();
        (current, args)
    }
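
    /// Illustration only (hypothetical helper, not referenced by the solver):
    /// `decompose_application` walks the left spine, so a left-nested
    /// application `((f a) b)` yields head `f` with arguments `[a, b]`.
    #[allow(dead_code)]
    fn spine_demo() -> (String, Vec<String>) {
        enum T {
            Var(String),
            App(Box<T>, Box<T>),
        }
        fn name(t: &T) -> String {
            match t {
                T::Var(x) => x.clone(),
                T::App(_, _) => "<app>".to_string(),
            }
        }
        // Build ((f a) b), then peel arguments off the spine.
        let t = T::App(
            Box::new(T::App(
                Box::new(T::Var("f".into())),
                Box::new(T::Var("a".into())),
            )),
            Box::new(T::Var("b".into())),
        );
        let (mut cur, mut args) = (&t, Vec::new());
        while let T::App(f, arg) = cur {
            args.push(name(arg));
            cur = f;
        }
        args.reverse();
        (name(cur), args)
    }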

    /// Solve a single constraint
    fn solve_constraint(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 100;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in solve_constraint",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.solve_constraint_impl(constraint);
        self.recursion_depth -= 1;
        result
    }

    fn solve_constraint_impl(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        match constraint {
            Constraint::Unify {
                left,
                right,
                strength,
                ..
            } => self.unify(&left, &right, strength),
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                if self.enable_has_type_solving {
                    self.solve_has_type_constraint(&term, &expected_type)
                } else {
                    // Default behavior: don't solve HasType constraints
                    Ok(false)
                }
            }
            Constraint::UnifyUniverse {
                left,
                right,
                strength,
                ..
            } => self.unify_universe(&left, &right, strength),
            Constraint::UniverseLevel { left, right, .. } => {
                // For now, treat level constraints as unification constraints
                self.unify_universe(&left, &right, ConstraintStrength::Preferred)
            }
            Constraint::Delayed { constraint, .. } => {
                // Re-queue the delayed constraint
                let inner = *constraint;
                self.add_constraint(inner);
                Ok(false)
            }
        }
    }

    /// Unify two terms
    fn unify(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 50;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in unify",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.unify_impl(left, right, strength);
        self.recursion_depth -= 1;
        result
    }

    fn unify_impl(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply(left);
        let right = self.substitution.apply(right);

        // Check if already equal
        if self.alpha_equal(&left, &right) {
            return Ok(false);
        }

        // Miller pattern detection and solving (if enabled)
        if self.enable_miller_patterns {
            if self.is_simple_miller_pattern(&left, &right) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&left) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &right);
                }
            } else if self.is_simple_miller_pattern(&right, &left) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&right) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &left);
                }
            }
        }

        match (&left, &right) {
            // Meta-variable cases
            (Term::Meta(m), t) | (t, Term::Meta(m)) => {
                if let Some(meta_id) = self.parse_meta_name(m) {
                    self.solve_meta(meta_id, t)
                } else {
                    Err(TypeError::Internal {
                        message: format!("Invalid meta-variable: {}", m),
                    })
                }
            }

            // Structural cases
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *f1.clone(),
                    right: *f2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: *a1.clone(),
                    right: *a2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                // Rename bound variables to be consistent
                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            // Universe (Sort) cases
            (Term::Sort(u1), Term::Sort(u2)) => self.unify_universe(u1, u2, strength),

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::TypeMismatch {
                        expected: left,
                        actual: right,
                    })
                } else {
                    // Can't solve, but not an error for weak constraints
                    Ok(false)
                }
            }
        }
    }

    /// Unify two universes
    fn unify_universe(
        &mut self,
        left: &Universe,
        right: &Universe,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply_universe(left);
        let right = self.substitution.apply_universe(right);

        // Normalize universes (simplify expressions like 0+1 -> 1)
        let left = self.normalize_universe(&left);
        let right = self.normalize_universe(&right);

        // Check if already equal
        if left == right {
            return Ok(false);
        }

        match (&left, &right) {
            // Same constants
            (Universe::Const(n1), Universe::Const(n2)) if n1 == n2 => Ok(false),

            // Universe variable cases
            (Universe::ScopedVar(s1, v1), Universe::ScopedVar(s2, v2)) if s1 == s2 && v1 == v2 => {
                Ok(false)
            }
            (Universe::ScopedVar(_, _), Universe::ScopedVar(_, _)) => {
                // Different universe variables cannot be unified
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!(
                            "Cannot unify distinct universe variables {} and {}",
                            left, right
                        ),
                    })
                } else {
                    Ok(false)
                }
            }

            // Meta-variable cases
            (Universe::Meta(id), u) | (u, Universe::Meta(id)) => self.solve_universe_meta(*id, u),

            // Structural cases
            (Universe::Add(base1, n1), Universe::Add(base2, n2)) if n1 == n2 => {
                let constraint_id = self.new_constraint_id();
                self.add_constraint(Constraint::UnifyUniverse {
                    id: constraint_id,
                    left: *base1.clone(),
                    right: *base2.clone(),
                    strength,
                });
                Ok(true)
            }

            // Arithmetic constraints: ?u+n = m means ?u = m-n
            (Universe::Add(base, n), Universe::Const(m))
            | (Universe::Const(m), Universe::Add(base, n)) => {
                if *m >= *n {
                    if let Universe::Meta(id) = base.as_ref() {
                        self.solve_universe_meta(*id, &Universe::Const(m - n))
                    } else {
                        // More complex case - generate constraint for base
                        let constraint_id = self.new_constraint_id();
                        self.add_constraint(Constraint::UnifyUniverse {
                            id: constraint_id,
                            left: *base.clone(),
                            right: Universe::Const(m - n),
                            strength,
                        });
                        Ok(true)
                    }
                } else {
                    Err(TypeError::Internal {
                        message: format!("Cannot solve constraint: {} cannot equal {} (would require negative universe)",
                                       if matches!(&left, Universe::Add(_, _)) { &left } else { &right },
                                       if matches!(&left, Universe::Const(_)) { &left } else { &right }),
                    })
                }
            }

            (Universe::Max(u1, v1), Universe::Max(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Universe::IMax(u1, v1), Universe::IMax(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!("Cannot unify universes {} and {}", left, right),
                    })
                } else {
                    Ok(false)
                }
            }
        }
    }

    /// Normalize a universe expression
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe(u1);
                let u2_norm = self.normalize_universe(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }

    /// Solve a universe meta-variable
    fn solve_universe_meta(
        &mut self,
        meta_id: u32,
        solution: &Universe,
    ) -> Result<bool, TypeError> {
        // Occurs check for universe meta-variables
        if self.occurs_in_universe(meta_id, solution) {
            return Err(TypeError::Internal {
                message: format!("Universe occurs check failed for ?u{}", meta_id),
            });
        }

        // Record solution
        self.substitution.insert_universe(meta_id, solution.clone());
        Ok(true)
    }

    /// Check if a universe meta-variable occurs in a universe expression
    #[allow(clippy::only_used_in_recursion)]
    fn occurs_in_universe(&self, meta_id: u32, universe: &Universe) -> bool {
        match universe {
            Universe::Meta(id) => *id == meta_id,
            Universe::Add(base, _) => self.occurs_in_universe(meta_id, base),
            Universe::Max(u, v) | Universe::IMax(u, v) => {
                self.occurs_in_universe(meta_id, u) || self.occurs_in_universe(meta_id, v)
            }
            _ => false,
        }
    }

    /// Solve a meta-variable
    fn solve_meta(&mut self, meta_id: MetaId, solution: &Term) -> Result<bool, TypeError> {
        // Occurs check
        if self.collect_metas_in_term(solution).contains(&meta_id) {
            return Err(TypeError::Internal {
                message: format!("Occurs check failed for ?m{}", meta_id.0),
            });
        }

        // Check solution is well-scoped
        if let Some(meta_info) = self.metas.get(&meta_id) {
            if !self.is_well_scoped(solution, &meta_info.context) {
                return Err(TypeError::Internal {
                    message: format!("Solution for ?m{} is not well-scoped", meta_id.0),
                });
            }
        }

        // Record solution
        self.substitution.insert(meta_id, solution.clone());

        if let Some(info) = self.metas.get_mut(&meta_id) {
            info.solution = Some(solution.clone());
        }

        Ok(true)
    }

    /// Check if a term is well-scoped in a context
    fn is_well_scoped(&self, term: &Term, context: &[String]) -> bool {
        match term {
            Term::Var(x) => {
                // Check if it's a bound variable or a global constant
                context.contains(x)
                    || self.context.lookup(x).is_some()
                    || self.context.lookup_axiom(x).is_some()
                    || self.context.lookup_constructor(x).is_some()
            }
            Term::App(f, arg) => {
                self.is_well_scoped(f, context) && self.is_well_scoped(arg, context)
            }
            Term::Abs(x, ty, body) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Pi(x, ty, body, _) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Const(name) => {
                // Constants should be in the global context
                self.context.lookup_axiom(name).is_some()
                    || self.context.lookup_constructor(name).is_some()
            }
            _ => true, // Sorts, meta-variables, etc.
        }
    }

    /// Alpha equality check
    fn alpha_equal(&self, t1: &Term, t2: &Term) -> bool {
        self.alpha_equal_aux(t1, t2, &mut vec![])
    }

    #[allow(clippy::only_used_in_recursion)]
    fn alpha_equal_aux(&self, t1: &Term, t2: &Term, renamings: &mut Vec<(String, String)>) -> bool {
        match (t1, t2) {
            (Term::Var(x), Term::Var(y)) => {
                // Check if this is a bound variable that was renamed
                for (a, b) in renamings.iter() {
                    if a == x && b == y {
                        return true;
                    }
                }
                x == y
            }
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                self.alpha_equal_aux(f1, f2, renamings) && self.alpha_equal_aux(a1, a2, renamings)
            }
            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Sort(u1), Term::Sort(u2)) => u1 == u2,
            (Term::Const(c1), Term::Const(c2)) => c1 == c2,
            (Term::Meta(m1), Term::Meta(m2)) => m1 == m2,
            _ => false,
        }
    }

    /// Rename a variable in a term
    #[allow(clippy::only_used_in_recursion)]
    fn rename_var(&self, old: &str, new: &str, term: &Term) -> Term {
        match term {
            Term::Var(x) if x == old => Term::Var(new.to_string()),
            Term::Var(x) => Term::Var(x.clone()),
            Term::App(f, arg) => Term::App(
                Box::new(self.rename_var(old, new, f)),
                Box::new(self.rename_var(old, new, arg)),
            ),
            Term::Abs(x, ty, body) if x != old => Term::Abs(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                Box::new(self.rename_var(old, new, body)),
            ),
            Term::Pi(x, ty, body, implicit) if x != old => Term::Pi(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                Box::new(self.rename_var(old, new, body)),
                *implicit,
            ),
            _ => term.clone(),
        }
    }

    /// Propagate solutions to dependent constraints
    fn propagate_solution(&mut self) -> Result<(), TypeError> {
        // Apply substitution to all remaining constraints
        let constraints: Vec<_> = self.constraints.drain().collect();

        for (id, constraint) in constraints {
            let updated = self.apply_subst_to_constraint(constraint);
            self.constraints.insert(id, updated);
        }

        Ok(())
    }

    /// Apply substitution to a constraint
    fn apply_subst_to_constraint(&self, constraint: Constraint) -> Constraint {
        match constraint {
            Constraint::Unify {
                id,
                left,
                right,
                strength,
            } => Constraint::Unify {
                id,
                left: self.substitution.apply(&left),
                right: self.substitution.apply(&right),
                strength,
            },
            Constraint::HasType {
                id,
                term,
                expected_type,
            } => Constraint::HasType {
                id,
                term: self.substitution.apply(&term),
                expected_type: self.substitution.apply(&expected_type),
            },
            Constraint::UnifyUniverse {
                id,
                left,
                right,
                strength,
            } => Constraint::UnifyUniverse {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
                strength,
            },
            Constraint::UniverseLevel { id, left, right } => Constraint::UniverseLevel {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
            },
            Constraint::Delayed {
                id,
                constraint,
                waiting_on,
            } => Constraint::Delayed {
                id,
                constraint: Box::new(self.apply_subst_to_constraint(*constraint)),
                waiting_on,
            },
        }
    }

    /// Solve a HasType constraint by type inference and unification
    fn solve_has_type_constraint(
        &mut self,
        term: &Term,
        expected_type: &Term,
    ) -> Result<bool, TypeError> {
        match term {
            Term::Meta(meta_name) => {
                // If the term is a meta-variable, we can potentially solve it
                if let Some(meta_id) = self.parse_meta_name(meta_name) {
                    if let Some(meta_info) = self.metas.get(&meta_id) {
                        if meta_info.solution.is_none() {
                            // Try to construct a term that has the expected type
                            if let Some(solution) = self.synthesize_term_of_type(expected_type)? {
                                return self.solve_meta(meta_id, &solution);
                            }
                        }
                    }
                }
                // If we can't solve the meta, try unifying with expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }

            Term::App(f, arg) => {
                // For applications, we need to ensure the function type is compatible
                // Create a fresh meta for the function type: ?F : ?A -> ?B
                let arg_type_meta_id = self.fresh_meta(vec![], None);
                let result_type_meta_id = self.fresh_meta(vec![], None);
                let arg_type_meta = Term::Meta(format!("?m{}", arg_type_meta_id.0));
                let result_type_meta = Term::Meta(format!("?m{}", result_type_meta_id.0));
                let func_type = Term::Pi(
                    "_".to_string(),
                    Box::new(arg_type_meta.clone()),
                    Box::new(result_type_meta.clone()),
                    false,
                );

                // Add constraints:
                // 1. f : ?A -> ?B
                // 2. arg : ?A
                // 3. ?B ≡ expected_type
                let f_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: f.as_ref().clone(),
                    expected_type: func_type,
                };
                let arg_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: arg.as_ref().clone(),
                    expected_type: arg_type_meta,
                };
                let result_constraint = Constraint::Unify {
                    id: self.new_constraint_id(),
                    left: result_type_meta,
                    right: expected_type.clone(),
                    strength: ConstraintStrength::Required,
                };

                self.add_constraint(f_constraint);
                self.add_constraint(arg_constraint);
                self.add_constraint(result_constraint);

                Ok(true)
            }

            Term::Abs(param, param_type, body) => {
                // For lambda abstractions, expected type should be a Pi type
                match expected_type {
                    Term::Pi(_, expected_param_type, expected_body_type, _) => {
                        // Unify parameter types and check body type
                        let param_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: param_type.as_ref().clone(),
                            right: expected_param_type.as_ref().clone(),
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: expected_body_type.as_ref().clone(),
                        };

                        self.add_constraint(param_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    Term::Meta(_) => {
                        // Expected type is a meta-variable, create a Pi type for it
                        let body_type_meta_id = self.fresh_meta(vec![param.clone()], None);
                        let body_type_meta = Term::Meta(format!("?m{}", body_type_meta_id.0));
                        let pi_type = Term::Pi(
                            param.clone(),
                            param_type.clone(),
                            Box::new(body_type_meta.clone()),
                            false,
                        );

                        let type_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: expected_type.clone(),
                            right: pi_type,
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: body_type_meta,
                        };

                        self.add_constraint(type_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    _ => {
                        // Type mismatch - lambda can't have non-function type
                        Err(TypeError::TypeMismatch {
                            expected: expected_type.clone(),
                            actual: Term::Pi(
                                param.clone(),
                                param_type.clone(),
                                Box::new(Term::Meta("?unknown".to_string())),
                                false,
                            ),
                        })
                    }
                }
            }

            _ => {
                // For other terms, just unify with the expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }
        }
    }

    /// Attempt to synthesize a term of a given type (basic heuristics)
    fn synthesize_term_of_type(&self, expected_type: &Term) -> Result<Option<Term>, TypeError> {
        match expected_type {
            Term::Sort(_) => {
                // For Sort types, we could return a fresh meta or a simple type
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
            Term::Pi(param, param_type, _body_type, _implicit) => {
                // For Pi types, synthesize a lambda that matches the Pi structure
                // Use a fresh meta-variable for the body to avoid infinite recursion
                let body_term = Term::Meta(format!("?body{}", self.next_meta_id));

                Ok(Some(Term::Abs(
                    param.clone(),
                    param_type.clone(),
                    Box::new(body_term),
                )))
            }
            Term::Meta(_) => {
                // Can't synthesize for unknown types
                Ok(None)
            }
            _ => {
                // For concrete types, return a meta-variable
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
        }
    }

    /// Check if a term is a Miller pattern
    /// Miller patterns are terms of the form: ?M x1 x2 ... xn where xi are
    /// distinct bound variables
    fn is_miller_pattern(&self, term: &Term) -> bool {
        let (head, args) = self.decompose_application(term);

        match head {
            Term::Meta(_) => {
                // Check that all arguments are distinct variables
                let mut seen = HashSet::new();
                for arg in args {
                    if let Term::Var(x) = arg {
                        if !seen.insert(x.clone()) {
                            return false; // Not distinct
                        }
                    } else {
                        return false; // Not a variable
                    }
                }
                true
            }
            _ => false,
        }
    }

    /// Extract the spine of a Miller pattern (the meta-variable and its
    /// arguments)
    fn extract_miller_spine(&self, term: &Term) -> Option<(String, Vec<String>)> {
        // First validate it's actually a Miller pattern
        if !self.is_miller_pattern(term) {
            return None;
        }

        let (head, args) = self.decompose_application(term);

        if let Term::Meta(meta_name) = head {
            // Collect variable names (we already know they're all variables from
            // is_miller_pattern)
            let var_names: Vec<String> = args
                .iter()
                .map(|arg| {
                    if let Term::Var(name) = arg {
                        name.clone()
                    } else {
                        unreachable!()
                    }
                })
                .collect();

            Some((meta_name.clone(), var_names))
        } else {
            None
        }
    }

    /// Solve simple Miller pattern unification
    /// Only handles simple cases
    fn solve_miller_pattern(
        &mut self,
        meta_name: &str,
        spine: &[String],
        other: &Term,
    ) -> Result<bool, TypeError> {
        // Check if we can abstract over the spine variables
        if !Self::can_abstract_over_variables(other, spine) {
            // Can't abstract, fall back to regular unification
            return self.unify(
                &Term::Meta(meta_name.to_string()),
                other,
                ConstraintStrength::Required,
            );
        }

        if let Some(meta_id) = self.parse_meta_name(meta_name) {
            // Only handle simple cases to avoid complexity
            match other {
                // Case 1: ?M x ≡ y (different variable) -> ?M := λx. y
                Term::Var(y) if spine.len() == 1 && spine[0] != *y => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))), /* Type inferred later */
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                // Case 2: ?M x ≡ c (constant) -> ?M := λx. c
                Term::Const(_) | Term::Sort(_) if spine.len() == 1 => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))),
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                _ => {
                    // For more complex cases, we could build the full lambda
                    // abstraction but for now, fall back to
                    // regular unification
                }
            }
        }

        // Fall back to regular unification
        self.unify(
            &Term::Meta(meta_name.to_string()),
            other,
            ConstraintStrength::Required,
        )
    }

    /// Check if a term can be abstracted over the given variables
    fn can_abstract_over_variables(term: &Term, vars: &[String]) -> bool {
        // For now, implement a simple check
        // In a full implementation, this would check that the term only uses variables
        // that are either in the spine or are bound locally
        match term {
            Term::Var(v) => vars.contains(v), // Variable must be in spine
            Term::App(f, arg) => {
                Self::can_abstract_over_variables(f, vars)
                    && Self::can_abstract_over_variables(arg, vars)
            }
            Term::Abs(param, ty, body) => {
                // Extend the scope with the bound parameter
                let mut extended_vars = vars.to_vec();
                extended_vars.push(param.clone());
                Self::can_abstract_over_variables(ty, vars)
                    && Self::can_abstract_over_variables(body, &extended_vars)
            }
            Term::Meta(_) => true,  // Meta-variables are always abstractable
            Term::Const(_) => true, // Constants are always abstractable
            Term::Sort(_) => true,  // Sorts are always abstractable
            _ => false,             // Conservative for complex constructs
        }
    }

    /// Delay a constraint for later processing
    fn delay_constraint(
        &mut self,
        constraint_id: ConstraintId,
        waiting_on: HashSet<MetaId>,
    ) -> Result<(), TypeError> {
        if let Some(constraint) = self.constraints.remove(&constraint_id) {
            let delayed = Constraint::Delayed {
                id: constraint_id,
                constraint: Box::new(constraint),
                waiting_on,
            };
            self.constraints.insert(constraint_id, delayed);
            self.delayed_constraints.insert(constraint_id);
        }
        Ok(())
    }

    /// Wake up delayed constraints that are no longer blocked
    fn wake_delayed_constraints(&mut self) -> Result<Vec<ConstraintId>, TypeError> {
        let mut woken = Vec::new();
        let mut to_wake = Vec::new();

        // Check which delayed constraints can now be woken up
        for constraint_id in &self.delayed_constraints {
            if let Some(Constraint::Delayed {
                waiting_on,
                constraint,
                ..
            }) = self.constraints.get(constraint_id)
            {
                // Check if all blocking metas are now solved
                let all_solved = waiting_on.iter().all(|meta_id| {
                    self.metas
                        .get(meta_id)
                        .is_some_and(|info| info.solution.is_some())
                });

                if all_solved {
                    to_wake.push((*constraint_id, constraint.as_ref().clone()));
                }
            }
        }

        // Wake up the constraints
        for (constraint_id, original_constraint) in to_wake {
            self.constraints.insert(constraint_id, original_constraint);
            self.delayed_constraints.remove(&constraint_id);
            woken.push(constraint_id);
        }

        Ok(woken)
    }

    /// Check if we can delay a constraint
    fn can_delay(&self, constraint_id: &ConstraintId) -> bool {
        if let Some(constraint) = self.constraints.get(constraint_id) {
            match constraint {
                Constraint::Unify {
                    left,
                    right,
                    strength,
                    ..
                } => {
                    // We can delay weak constraints that involve unsolved metas
                    if *strength == ConstraintStrength::Weak {
                        let left_metas = self.collect_metas_in_term(left);
                        let right_metas = self.collect_metas_in_term(right);

                        // Check if any involved metas are unsolved
                        for meta_id in left_metas.iter().chain(right_metas.iter()) {
                            if self.is_unsolved_meta(meta_id) {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::HasType {
                    term,
                    expected_type,
                    ..
                } => {
                    // Can delay HasType constraints involving unsolved metas
                    let term_metas = self.collect_metas_in_term(term);
                    let type_metas = self.collect_metas_in_term(expected_type);

                    for meta_id in term_metas.iter().chain(type_metas.iter()) {
                        if let Some(meta_info) = self.metas.get(meta_id) {
                            if meta_info.solution.is_none() {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::Delayed { .. } => true, // Already delayed
                _ => false,
            }
        } else {
            false
        }
    }

    /// Check if a meta-variable is unsolved
    fn is_unsolved_meta(&self, meta_id: &MetaId) -> bool {
        self.metas
            .get(meta_id)
            .is_some_and(|info| info.solution.is_none())
    }
}
/// Constraint strength for prioritization
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ConstraintStrength {
    Required,  // Must be solved
    Preferred, // Should be solved if possible
    Weak,      // Can be postponed
}
}

The constraint strength system imposes a three-tier priority hierarchy on the solving order, maximizing the chances of finding solutions. Required constraints represent fundamental relationships that must hold for type checking to succeed, and they receive the highest priority during constraint resolution. These constraints typically arise from explicit type annotations or structural requirements that cannot be compromised.

Preferred constraints should be solved when possible but may be postponed to make progress on more critical constraints; they often encode desirable properties or optimizations that improve the quality of solutions without being strictly necessary for correctness. Weak constraints merely guide the solver toward preferred solutions and never block progress if they prove unsolvable, so the algorithm can still find a workable solution even when an ideal one is unavailable.

Constraint Solver

#![allow(unused)]
fn main() {
use std::collections::{HashMap, HashSet, VecDeque};
use std::fmt;
use crate::ast::{Term, Universe};
use crate::context::Context;
use crate::errors::TypeError;
/// Meta-variable identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct MetaId(pub u64);
impl fmt::Display for MetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?m{}", self.0)
    }
}
/// Universe meta-variable identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct UniverseMetaId(pub u64);
impl fmt::Display for UniverseMetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?u{}", self.0)
    }
}
/// Information about a meta-variable
#[derive(Debug, Clone)]
pub struct MetaInfo {
    pub id: MetaId,
    pub context: Vec<String>, // Variables in scope when meta was created
    pub expected_type: Option<Term>, // Expected type of the meta-variable
    pub solution: Option<Term>, // Solution if solved
    pub dependencies: HashSet<MetaId>, // Meta-variables this depends on
    pub occurrences: Vec<ConstraintId>, // Constraints this meta appears in
}
/// Constraint identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct ConstraintId(pub u64);
/// Constraint types
#[derive(Debug, Clone)]
pub enum Constraint {
    /// Unification: t1 ≡ t2
    Unify {
        id: ConstraintId,
        left: Term,
        right: Term,
        strength: ConstraintStrength,
    },
    /// Type constraint: term : type
    HasType {
        id: ConstraintId,
        term: Term,
        expected_type: Term,
    },
    /// Universe unification: u1 ≡ u2
    UnifyUniverse {
        id: ConstraintId,
        left: Universe,
        right: Universe,
        strength: ConstraintStrength,
    },
    /// Universe level constraint: u1 ≤ u2
    UniverseLevel {
        id: ConstraintId,
        left: Universe,
        right: Universe,
    },
    /// Delayed constraint (for complex patterns)
    Delayed {
        id: ConstraintId,
        constraint: Box<Constraint>,
        waiting_on: HashSet<MetaId>,
    },
}
/// Constraint strength for prioritization
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ConstraintStrength {
    Required,  // Must be solved
    Preferred, // Should be solved if possible
    Weak,      // Can be postponed
}
/// Substitution mapping meta-variables to terms
#[derive(Debug, Clone, Default)]
pub struct Substitution {
    mapping: HashMap<MetaId, Term>,
    universe_mapping: HashMap<u32, Universe>,
}
impl Substitution {
    pub fn new() -> Self {
        Self::default()
    }

    pub fn insert(&mut self, meta: MetaId, term: Term) {
        self.mapping.insert(meta, term);
    }

    pub fn get(&self, meta: &MetaId) -> Option<&Term> {
        self.mapping.get(meta)
    }

    pub fn insert_universe(&mut self, meta_id: u32, universe: Universe) {
        self.universe_mapping.insert(meta_id, universe);
    }

    pub fn get_universe(&self, meta_id: &u32) -> Option<&Universe> {
        self.universe_mapping.get(meta_id)
    }

    /// Apply substitution to a term
    pub fn apply(&self, term: &Term) -> Term {
        match term {
            Term::Meta(name) => {
                // Try to parse meta ID and look up substitution
                if let Some(meta_id) = self.parse_meta_name(name) {
                    if let Some(subst) = self.mapping.get(&meta_id) {
                        // Recursively apply substitution
                        return self.apply(subst);
                    }
                }
                term.clone()
            }
            Term::App(f, arg) => Term::App(Box::new(self.apply(f)), Box::new(self.apply(arg))),
            Term::Abs(x, ty, body) => Term::Abs(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
            ),
            Term::Pi(x, ty, body, implicit) => Term::Pi(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
                *implicit,
            ),
            Term::Let(x, ty, val, body) => Term::Let(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(val)),
                Box::new(self.apply(body)),
            ),
            Term::Sort(u) => Term::Sort(self.apply_universe(u)),
            // Other cases remain unchanged
            _ => term.clone(),
        }
    }

    /// Apply substitution to a universe
    pub fn apply_universe(&self, universe: &Universe) -> Universe {
        let result = match universe {
            Universe::Meta(id) => {
                if let Some(subst) = self.universe_mapping.get(id) {
                    // Recursively apply substitution
                    self.apply_universe(subst)
                } else {
                    universe.clone()
                }
            }
            Universe::Add(base, n) => Universe::Add(Box::new(self.apply_universe(base)), *n),
            Universe::Max(u, v) => Universe::Max(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            Universe::IMax(u, v) => Universe::IMax(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            // Constants, variables, and scoped variables remain unchanged during substitution
            Universe::Const(_) | Universe::ScopedVar(_, _) => universe.clone(),
        };
        // Normalize the result to simplify expressions like Const(0) + 1 -> Const(1)
        self.normalize_universe_static(&result)
    }

    /// Static normalization (doesn't require &self)
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe_static(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe_static(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe_static(u1);
                let u2_norm = self.normalize_universe_static(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }
}
impl Solver {
    pub fn new(context: Context) -> Self {
        Self {
            metas: HashMap::new(),
            constraints: HashMap::new(),
            queue: VecDeque::new(),
            substitution: Substitution::new(),
            next_meta_id: 0,
            next_universe_meta_id: 0,
            next_constraint_id: 0,
            context,
            dependencies: HashMap::new(),
            delayed_constraints: HashSet::new(),
            recursion_depth: 0,
            enable_miller_patterns: false,  // Advanced feature
            enable_has_type_solving: false, // Advanced feature
        }
    }

    /// Enable Miller pattern solving (advanced)
    pub fn enable_miller_patterns(&mut self) {
        self.enable_miller_patterns = true;
    }

    /// Check if Miller pattern solving is enabled
    pub fn miller_patterns_enabled(&self) -> bool {
        self.enable_miller_patterns
    }

    /// Enable HasType constraint solving (advanced)
    pub fn enable_has_type_solving(&mut self) {
        self.enable_has_type_solving = true;
    }

    /// Check if HasType constraint solving is enabled
    pub fn has_type_solving_enabled(&self) -> bool {
        self.enable_has_type_solving
    }

    /// Create a fresh meta-variable
    pub fn fresh_meta(&mut self, ctx_vars: Vec<String>, expected_type: Option<Term>) -> MetaId {
        let id = MetaId(self.next_meta_id);
        self.next_meta_id += 1;

        let info = MetaInfo {
            id,
            context: ctx_vars,
            expected_type,
            solution: None,
            dependencies: HashSet::new(),
            occurrences: Vec::new(),
        };

        self.metas.insert(id, info);
        id
    }

    /// Create a fresh universe meta-variable
    pub fn fresh_universe_meta(&mut self) -> Universe {
        let id = self.next_universe_meta_id;
        self.next_universe_meta_id += 1;
        Universe::Meta(id)
    }

    /// Add a constraint to the solver
    pub fn add_constraint(&mut self, constraint: Constraint) -> ConstraintId {
        let id = match &constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        // Track which metas appear in this constraint
        self.track_meta_occurrences(&constraint);

        self.constraints.insert(id, constraint);
        self.queue.push_back(id);
        id
    }

    /// Create a new constraint ID
    pub fn new_constraint_id(&mut self) -> ConstraintId {
        let id = ConstraintId(self.next_constraint_id);
        self.next_constraint_id += 1;
        id
    }

    /// Track meta-variable occurrences in constraints
    fn track_meta_occurrences(&mut self, constraint: &Constraint) {
        let metas = self.collect_metas_in_constraint(constraint);
        let constraint_id = match constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        for meta_id in metas {
            if let Some(info) = self.metas.get_mut(&meta_id) {
                info.occurrences.push(constraint_id);
            }
            self.dependencies
                .entry(meta_id)
                .or_default()
                .insert(constraint_id);
        }
    }

    /// Collect all meta-variables in a constraint
    fn collect_metas_in_constraint(&self, constraint: &Constraint) -> HashSet<MetaId> {
        match constraint {
            Constraint::Unify { left, right, .. } => {
                let mut metas = self.collect_metas_in_term(left);
                metas.extend(self.collect_metas_in_term(right));
                metas
            }
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                let mut metas = self.collect_metas_in_term(term);
                metas.extend(self.collect_metas_in_term(expected_type));
                metas
            }
            Constraint::UnifyUniverse { .. } => {
                // Universe constraints don't directly contain term meta-variables
                // but they might contain universe meta-variables (which we track separately)
                HashSet::new()
            }
            Constraint::UniverseLevel { .. } => {
                // Same as above
                HashSet::new()
            }
            Constraint::Delayed {
                constraint,
                waiting_on,
                ..
            } => {
                let mut metas = waiting_on.clone();
                metas.extend(self.collect_metas_in_constraint(constraint));
                metas
            }
        }
    }

    /// Collect all meta-variables in a term
    fn collect_metas_in_term(&self, term: &Term) -> HashSet<MetaId> {
        let mut metas = HashSet::new();
        self.collect_metas_recursive(term, &mut metas);
        metas
    }

    fn collect_metas_recursive(&self, term: &Term, metas: &mut HashSet<MetaId>) {
        match term {
            Term::Meta(name) => {
                if let Some(id) = self.parse_meta_name(name) {
                    metas.insert(id);
                }
            }
            Term::App(f, arg) => {
                self.collect_metas_recursive(f, metas);
                self.collect_metas_recursive(arg, metas);
            }
            Term::Abs(_, ty, body) | Term::Pi(_, ty, body, _) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(body, metas);
            }
            Term::Let(_, ty, val, body) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(val, metas);
                self.collect_metas_recursive(body, metas);
            }
            _ => {}
        }
    }

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }

    /// Main solving loop
    pub fn solve(&mut self) -> Result<Substitution, TypeError> {
        let max_iterations = 1000;
        let mut iterations = 0;

        while !self.queue.is_empty() && iterations < max_iterations {
            iterations += 1;

            // Pick the best constraint to solve
            if let Some(constraint_id) = self.pick_constraint() {
                let constraint = match self.constraints.remove(&constraint_id) {
                    Some(c) => c,
                    None => continue, // Stale queue entry; constraint was already consumed
                };

                match self.solve_constraint(constraint.clone()) {
                    Ok(progress) => {
                        if progress {
                            // Propagate the solution
                            self.propagate_solution()?;

                            // Wake up delayed constraints that might now be solvable
                            let woken_constraints = self.wake_delayed_constraints()?;

                            // Add woken constraints back to queue with high priority
                            for woken_id in woken_constraints {
                                self.queue.push_front(woken_id);
                            }
                        }
                    }
                    Err(e) => {
                        // Try to delay the constraint if possible
                        if self.can_delay(&constraint_id) {
                            // Determine which meta-variables to wait for
                            let waiting_on = match &constraint {
                                Constraint::Unify { left, right, .. } => {
                                    let mut metas = self.collect_metas_in_term(left);
                                    metas.extend(self.collect_metas_in_term(right));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                Constraint::HasType {
                                    term,
                                    expected_type,
                                    ..
                                } => {
                                    let mut metas = self.collect_metas_in_term(term);
                                    metas.extend(self.collect_metas_in_term(expected_type));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                _ => HashSet::new(),
                            };

                            if !waiting_on.is_empty() {
                                // Put constraint back and delay it
                                self.constraints.insert(constraint_id, constraint);
                                self.delay_constraint(constraint_id, waiting_on)?;
                            } else {
                                // Can't delay, return error
                                return Err(e);
                            }
                        } else {
                            return Err(e);
                        }
                    }
                }
            }
        }

        if iterations >= max_iterations {
            return Err(TypeError::Internal {
                message: "Constraint solving did not converge".to_string(),
            });
        }

        // Check for unsolved required constraints
        for constraint in self.constraints.values() {
            if let Constraint::Unify {
                strength: ConstraintStrength::Required,
                ..
            } = constraint
            {
                return Err(TypeError::Internal {
                    message: "Unsolved required constraints remain".to_string(),
                });
            }
        }

        Ok(self.substitution.clone())
    }

    /// Pick the next constraint to solve based on heuristics
    fn pick_constraint(&mut self) -> Option<ConstraintId> {
        // Priority order:
        // 1. Constraints with no meta-variables
        // 2. Constraints with only solved meta-variables
        // 3. Simple pattern constraints (Miller patterns)
        // 4. Other constraints

        let mut best_constraint = None;
        let mut best_score = i32::MAX;

        for &id in &self.queue {
            if let Some(constraint) = self.constraints.get(&id) {
                let score = self.score_constraint(constraint);
                if score < best_score {
                    best_score = score;
                    best_constraint = Some(id);
                }
            }
        }

        if let Some(id) = best_constraint {
            self.queue.retain(|&x| x != id);
            Some(id)
        } else {
            self.queue.pop_front()
        }
    }

    /// Score a constraint for solving priority (lower is better)
    fn score_constraint(&self, constraint: &Constraint) -> i32 {
        let metas = self.collect_metas_in_constraint(constraint);

        if metas.is_empty() {
            return 0; // No metas, can solve immediately
        }

        let unsolved_count = metas
            .iter()
            .filter(|m| {
                self.metas
                    .get(m)
                    .and_then(|info| info.solution.as_ref())
                    .is_none()
            })
            .count();

        if unsolved_count == 0 {
            return 1; // All metas solved
        }

        // Check for Miller patterns
        if let Constraint::Unify { left, right, .. } = constraint {
            if self.is_miller_pattern_unification(left, right)
                || self.is_miller_pattern_unification(right, left)
            {
                return 10 + unsolved_count as i32;
            }
        }

        100 + unsolved_count as i32
    }

    /// Check if a unification is a Miller pattern (?M x1 ... xn = t)
    fn is_miller_pattern_unification(&self, left: &Term, right: &Term) -> bool {
        self.is_simple_miller_pattern(left, right)
    }

    /// Check if left is a simple Miller pattern and right is safe to solve with
    pub fn is_simple_miller_pattern(&self, left: &Term, right: &Term) -> bool {
        let (head, args) = self.decompose_application(left);

        if let Term::Meta(meta_name) = head {
            // Must have at least one argument to be interesting
            if args.is_empty() {
                return false;
            }

            // Check if all arguments are distinct variables
            let mut seen = HashSet::new();
            for arg in &args {
                if let Term::Var(x) = arg {
                    if !seen.insert(x.clone()) {
                        return false; // Not distinct
                    }
                } else {
                    return false; // Not a variable
                }
            }

            // Check occurs check - right must not contain the meta-variable
            if let Some(meta_id) = self.parse_meta_name(meta_name) {
                if self.collect_metas_in_term(right).contains(&meta_id) {
                    return false; // Occurs check failure
                }
            }

            // For now, only handle simple cases where right is a variable or constant
            match right {
                Term::Var(_) | Term::Const(_) | Term::Sort(_) => true,
                Term::Meta(right_name) => {
                    // Allow meta-to-meta only if they're different
                    meta_name != right_name
                }
                _ => false, // More complex cases need careful handling
            }
        } else {
            false
        }
    }
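
    // A hedged illustration (not from the code above): the pattern check
    // accepts `?m0 x y =?= z` because the spine `x y` consists of distinct
    // variables and `z` does not mention `?m0`, but it rejects both
    // `?m0 x x =?= z` (repeated spine variable) and `?m0 x =?= ?m0`
    // (occurs-check failure):
    //
    //     let lhs = app(app(meta("?m0"), var("x")), var("y"));
    //     assert!(solver.is_simple_miller_pattern(&lhs, &var("z")));
    //
    //     let bad = app(app(meta("?m0"), var("x")), var("x"));
    //     assert!(!solver.is_simple_miller_pattern(&bad, &var("z")));
    //
    // where `app`, `var`, and `meta` are assumed helper constructors for
    // `Term::App`, `Term::Var`, and `Term::Meta`.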

    /// Decompose an application into head and arguments
    fn decompose_application<'a>(&self, term: &'a Term) -> (&'a Term, Vec<&'a Term>) {
        let mut current = term;
        let mut args = Vec::new();

        while let Term::App(f, arg) = current {
            args.push(arg.as_ref());
            current = f;
        }

        args.reverse();
        (current, args)
    }
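
    // For instance, the nested application `((f x) y)` decomposes into the
    // head `f` and the argument spine `[x, y]` — a sketch assuming a `var`
    // helper for `Term::Var`:
    //
    //     let t = Term::App(
    //         Box::new(Term::App(Box::new(var("f")), Box::new(var("x")))),
    //         Box::new(var("y")),
    //     );
    //     let (head, args) = solver.decompose_application(&t);
    //     // head is `f`; args are `[x, y]` in application order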

    /// Solve a single constraint
    fn solve_constraint(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 100;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in solve_constraint",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.solve_constraint_impl(constraint);
        self.recursion_depth -= 1;
        result
    }

    fn solve_constraint_impl(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        match constraint {
            Constraint::Unify {
                left,
                right,
                strength,
                ..
            } => self.unify(&left, &right, strength),
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                if self.enable_has_type_solving {
                    self.solve_has_type_constraint(&term, &expected_type)
                } else {
                    // Default behavior: don't solve HasType constraints
                    Ok(false)
                }
            }
            Constraint::UnifyUniverse {
                left,
                right,
                strength,
                ..
            } => self.unify_universe(&left, &right, strength),
            Constraint::UniverseLevel { left, right, .. } => {
                // For now, treat level constraints as unification constraints
                self.unify_universe(&left, &right, ConstraintStrength::Preferred)
            }
            Constraint::Delayed { constraint, .. } => {
                // Re-queue the delayed constraint
                let inner = *constraint;
                self.add_constraint(inner);
                Ok(false)
            }
        }
    }

    /// Unify two terms
    fn unify(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 50;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in unify",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.unify_impl(left, right, strength);
        self.recursion_depth -= 1;
        result
    }

    fn unify_impl(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply(left);
        let right = self.substitution.apply(right);

        // Check if already equal
        if self.alpha_equal(&left, &right) {
            return Ok(false);
        }

        // Miller pattern detection and solving (if enabled)
        if self.enable_miller_patterns {
            if self.is_simple_miller_pattern(&left, &right) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&left) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &right);
                }
            } else if self.is_simple_miller_pattern(&right, &left) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&right) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &left);
                }
            }
        }

        match (&left, &right) {
            // Meta-variable cases
            (Term::Meta(m), t) | (t, Term::Meta(m)) => {
                if let Some(meta_id) = self.parse_meta_name(m) {
                    self.solve_meta(meta_id, t)
                } else {
                    Err(TypeError::Internal {
                        message: format!("Invalid meta-variable: {}", m),
                    })
                }
            }

            // Structural cases
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *f1.clone(),
                    right: *f2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: *a1.clone(),
                    right: *a2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                // Rename both bound variables to a common fresh name so the bodies compare directly
                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            // Universe (Sort) cases
            (Term::Sort(u1), Term::Sort(u2)) => self.unify_universe(u1, u2, strength),

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::TypeMismatch {
                        expected: left,
                        actual: right,
                    })
                } else {
                    Ok(false) // Can't solve, but not an error for weak constraints
                }
            }
        }
    }

    /// Unify two universes
    fn unify_universe(
        &mut self,
        left: &Universe,
        right: &Universe,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply_universe(left);
        let right = self.substitution.apply_universe(right);

        // Normalize universes (simplify expressions like 0+1 -> 1)
        let left = self.normalize_universe(&left);
        let right = self.normalize_universe(&right);

        // Check if already equal
        if left == right {
            return Ok(false);
        }

        match (&left, &right) {
            // Same constants
            (Universe::Const(n1), Universe::Const(n2)) if n1 == n2 => Ok(false),

            // Universe variable cases
            (Universe::ScopedVar(s1, v1), Universe::ScopedVar(s2, v2)) if s1 == s2 && v1 == v2 => {
                Ok(false)
            }
            (Universe::ScopedVar(_, _), Universe::ScopedVar(_, _)) => {
                // Different universe variables cannot be unified
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!(
                            "Cannot unify distinct universe variables {} and {}",
                            left, right
                        ),
                    })
                } else {
                    Ok(false)
                }
            }

            // Meta-variable cases
            (Universe::Meta(id), u) | (u, Universe::Meta(id)) => self.solve_universe_meta(*id, u),

            // Structural cases
            (Universe::Add(base1, n1), Universe::Add(base2, n2)) if n1 == n2 => {
                let constraint_id = self.new_constraint_id();
                self.add_constraint(Constraint::UnifyUniverse {
                    id: constraint_id,
                    left: *base1.clone(),
                    right: *base2.clone(),
                    strength,
                });
                Ok(true)
            }

            // Arithmetic constraints: ?u+n = m means ?u = m-n
            (Universe::Add(base, n), Universe::Const(m))
            | (Universe::Const(m), Universe::Add(base, n)) => {
                if *m >= *n {
                    if let Universe::Meta(id) = base.as_ref() {
                        self.solve_universe_meta(*id, &Universe::Const(m - n))
                    } else {
                        // More complex case - generate constraint for base
                        let constraint_id = self.new_constraint_id();
                        self.add_constraint(Constraint::UnifyUniverse {
                            id: constraint_id,
                            left: *base.clone(),
                            right: Universe::Const(m - n),
                            strength,
                        });
                        Ok(true)
                    }
                } else {
                    Err(TypeError::Internal {
                        message: format!(
                            "Cannot solve universe constraint {} + {} = {} (would require a negative universe level)",
                            base, n, m
                        ),
                    })
                }
            }

            (Universe::Max(u1, v1), Universe::Max(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Universe::IMax(u1, v1), Universe::IMax(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!("Cannot unify universes {} and {}", left, right),
                    })
                } else {
                    Ok(false)
                }
            }
        }
    }

    /// Normalize a universe expression
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe(u1);
                let u2_norm = self.normalize_universe(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }
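
    // As a quick sketch of the normalizer: `0 + 1` collapses to the constant
    // `1`, and a `Max` of two constants collapses to the larger one:
    //
    //     let u = Universe::Max(
    //         Box::new(Universe::Const(2)),
    //         Box::new(Universe::Add(Box::new(Universe::Const(0)), 1)),
    //     );
    //     assert_eq!(solver.normalize_universe(&u), Universe::Const(2));
    //
    // (assuming `Universe` derives `PartialEq`, as the `left == right` check
    // in `unify_universe` suggests).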

    /// Solve a universe meta-variable
    fn solve_universe_meta(
        &mut self,
        meta_id: u32,
        solution: &Universe,
    ) -> Result<bool, TypeError> {
        // Occurs check for universe meta-variables
        if self.occurs_in_universe(meta_id, solution) {
            return Err(TypeError::Internal {
                message: format!("Universe occurs check failed for ?u{}", meta_id),
            });
        }

        // Record solution
        self.substitution.insert_universe(meta_id, solution.clone());
        Ok(true)
    }

    /// Check if a universe meta-variable occurs in a universe expression
    #[allow(clippy::only_used_in_recursion)]
    fn occurs_in_universe(&self, meta_id: u32, universe: &Universe) -> bool {
        match universe {
            Universe::Meta(id) => *id == meta_id,
            Universe::Add(base, _) => self.occurs_in_universe(meta_id, base),
            Universe::Max(u, v) | Universe::IMax(u, v) => {
                self.occurs_in_universe(meta_id, u) || self.occurs_in_universe(meta_id, v)
            }
            _ => false,
        }
    }

    /// Solve a meta-variable
    fn solve_meta(&mut self, meta_id: MetaId, solution: &Term) -> Result<bool, TypeError> {
        // Occurs check
        if self.collect_metas_in_term(solution).contains(&meta_id) {
            return Err(TypeError::Internal {
                message: format!("Occurs check failed for ?m{}", meta_id.0),
            });
        }

        // Check solution is well-scoped
        if let Some(meta_info) = self.metas.get(&meta_id) {
            if !self.is_well_scoped(solution, &meta_info.context) {
                return Err(TypeError::Internal {
                    message: format!("Solution for ?m{} is not well-scoped", meta_id.0),
                });
            }
        }

        // Record solution
        self.substitution.insert(meta_id, solution.clone());

        if let Some(info) = self.metas.get_mut(&meta_id) {
            info.solution = Some(solution.clone());
        }

        Ok(true)
    }

    /// Check if a term is well-scoped in a context
    fn is_well_scoped(&self, term: &Term, context: &[String]) -> bool {
        match term {
            Term::Var(x) => {
                // Check if it's a bound variable or a global constant
                context.contains(x)
                    || self.context.lookup(x).is_some()
                    || self.context.lookup_axiom(x).is_some()
                    || self.context.lookup_constructor(x).is_some()
            }
            Term::App(f, arg) => {
                self.is_well_scoped(f, context) && self.is_well_scoped(arg, context)
            }
            Term::Abs(x, ty, body) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Pi(x, ty, body, _) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Const(name) => {
                // Constants should be in the global context
                self.context.lookup_axiom(name).is_some()
                    || self.context.lookup_constructor(name).is_some()
            }
            _ => true, // Sorts, meta-variables, etc.
        }
    }

    /// Alpha equality check
    fn alpha_equal(&self, t1: &Term, t2: &Term) -> bool {
        self.alpha_equal_aux(t1, t2, &mut vec![])
    }

    #[allow(clippy::only_used_in_recursion)]
    fn alpha_equal_aux(&self, t1: &Term, t2: &Term, renamings: &mut Vec<(String, String)>) -> bool {
        match (t1, t2) {
            (Term::Var(x), Term::Var(y)) => {
                // Check if this is a bound variable that was renamed
                for (a, b) in renamings.iter() {
                    if a == x && b == y {
                        return true;
                    }
                }
                x == y
            }
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                self.alpha_equal_aux(f1, f2, renamings) && self.alpha_equal_aux(a1, a2, renamings)
            }
            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Sort(u1), Term::Sort(u2)) => u1 == u2,
            (Term::Const(c1), Term::Const(c2)) => c1 == c2,
            (Term::Meta(m1), Term::Meta(m2)) => m1 == m2,
            _ => false,
        }
    }
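
    // So, for example, `λx:A. x` and `λy:A. y` are alpha-equal: the binder
    // pair `(x, y)` is pushed onto `renamings` before the bodies are compared.
    // A hedged sketch, assuming `abs`/`var` helpers for `Term::Abs`/`Term::Var`:
    //
    //     let id1 = abs("x", Term::Const("A".into()), var("x"));
    //     let id2 = abs("y", Term::Const("A".into()), var("y"));
    //     assert!(solver.alpha_equal(&id1, &id2));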

    /// Rename a variable in a term
    #[allow(clippy::only_used_in_recursion)]
    fn rename_var(&self, old: &str, new: &str, term: &Term) -> Term {
        match term {
            Term::Var(x) if x == old => Term::Var(new.to_string()),
            Term::Var(x) => Term::Var(x.clone()),
            Term::App(f, arg) => Term::App(
                Box::new(self.rename_var(old, new, f)),
                Box::new(self.rename_var(old, new, arg)),
            ),
            // A binder does not scope over its own annotation, so the type is
            // always renamed; the body is renamed only when the binder does
            // not shadow `old`.
            Term::Abs(x, ty, body) => Term::Abs(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                Box::new(if x == old {
                    body.as_ref().clone()
                } else {
                    self.rename_var(old, new, body)
                }),
            ),
            Term::Pi(x, ty, body, implicit) => Term::Pi(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                Box::new(if x == old {
                    body.as_ref().clone()
                } else {
                    self.rename_var(old, new, body)
                }),
                *implicit,
            ),
            _ => term.clone(),
        }
    }

    /// Propagate solutions to dependent constraints
    fn propagate_solution(&mut self) -> Result<(), TypeError> {
        // Apply substitution to all remaining constraints
        let constraints: Vec<_> = self.constraints.drain().collect();

        for (id, constraint) in constraints {
            let updated = self.apply_subst_to_constraint(constraint);
            self.constraints.insert(id, updated);
        }

        Ok(())
    }

    /// Apply substitution to a constraint
    fn apply_subst_to_constraint(&self, constraint: Constraint) -> Constraint {
        match constraint {
            Constraint::Unify {
                id,
                left,
                right,
                strength,
            } => Constraint::Unify {
                id,
                left: self.substitution.apply(&left),
                right: self.substitution.apply(&right),
                strength,
            },
            Constraint::HasType {
                id,
                term,
                expected_type,
            } => Constraint::HasType {
                id,
                term: self.substitution.apply(&term),
                expected_type: self.substitution.apply(&expected_type),
            },
            Constraint::UnifyUniverse {
                id,
                left,
                right,
                strength,
            } => Constraint::UnifyUniverse {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
                strength,
            },
            Constraint::UniverseLevel { id, left, right } => Constraint::UniverseLevel {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
            },
            Constraint::Delayed {
                id,
                constraint,
                waiting_on,
            } => Constraint::Delayed {
                id,
                constraint: Box::new(self.apply_subst_to_constraint(*constraint)),
                waiting_on,
            },
        }
    }

    /// Solve a HasType constraint by type inference and unification
    fn solve_has_type_constraint(
        &mut self,
        term: &Term,
        expected_type: &Term,
    ) -> Result<bool, TypeError> {
        match term {
            Term::Meta(meta_name) => {
                // If the term is a meta-variable, we can potentially solve it
                if let Some(meta_id) = self.parse_meta_name(meta_name) {
                    if let Some(meta_info) = self.metas.get(&meta_id) {
                        if meta_info.solution.is_none() {
                            // Try to construct a term that has the expected type
                            if let Some(solution) = self.synthesize_term_of_type(expected_type)? {
                                return self.solve_meta(meta_id, &solution);
                            }
                        }
                    }
                }
                // If we can't solve the meta, try unifying with expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }

            Term::App(f, arg) => {
                // For applications, we need to ensure the function type is compatible
                // Create a fresh meta for the function type: ?F : ?A -> ?B
                let arg_type_meta_id = self.fresh_meta(vec![], None);
                let result_type_meta_id = self.fresh_meta(vec![], None);
                let arg_type_meta = Term::Meta(format!("?m{}", arg_type_meta_id.0));
                let result_type_meta = Term::Meta(format!("?m{}", result_type_meta_id.0));
                let func_type = Term::Pi(
                    "_".to_string(),
                    Box::new(arg_type_meta.clone()),
                    Box::new(result_type_meta.clone()),
                    false,
                );

                // Add constraints:
                // 1. f : ?A -> ?B
                // 2. arg : ?A
                // 3. ?B ≡ expected_type
                let f_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: f.as_ref().clone(),
                    expected_type: func_type,
                };
                let arg_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: arg.as_ref().clone(),
                    expected_type: arg_type_meta,
                };
                let result_constraint = Constraint::Unify {
                    id: self.new_constraint_id(),
                    left: result_type_meta,
                    right: expected_type.clone(),
                    strength: ConstraintStrength::Required,
                };

                self.add_constraint(f_constraint);
                self.add_constraint(arg_constraint);
                self.add_constraint(result_constraint);

                Ok(true)
            }

            Term::Abs(param, param_type, body) => {
                // For lambda abstractions, expected type should be a Pi type
                match expected_type {
                    Term::Pi(_, expected_param_type, expected_body_type, _) => {
                        // Unify parameter types and check body type
                        let param_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: param_type.as_ref().clone(),
                            right: expected_param_type.as_ref().clone(),
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: expected_body_type.as_ref().clone(),
                        };

                        self.add_constraint(param_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    Term::Meta(_) => {
                        // Expected type is a meta-variable, create a Pi type for it
                        let body_type_meta_id = self.fresh_meta(vec![param.clone()], None);
                        let body_type_meta = Term::Meta(format!("?m{}", body_type_meta_id.0));
                        let pi_type = Term::Pi(
                            param.clone(),
                            param_type.clone(),
                            Box::new(body_type_meta.clone()),
                            false,
                        );

                        let type_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: expected_type.clone(),
                            right: pi_type,
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: body_type_meta,
                        };

                        self.add_constraint(type_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    _ => {
                        // Type mismatch - lambda can't have non-function type
                        Err(TypeError::TypeMismatch {
                            expected: expected_type.clone(),
                            actual: Term::Pi(
                                param.clone(),
                                param_type.clone(),
                                Box::new(Term::Meta("?unknown".to_string())),
                                false,
                            ),
                        })
                    }
                }
            }

            _ => {
                // For other terms, just unify with the expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }
        }
    }

    /// Attempt to synthesize a term of a given type (basic heuristics)
    fn synthesize_term_of_type(&self, expected_type: &Term) -> Result<Option<Term>, TypeError> {
        match expected_type {
            Term::Sort(_) => {
                // For Sort types, we could return a fresh meta or a simple type
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
            Term::Pi(param, param_type, _body_type, _implicit) => {
                // For Pi types, synthesize a lambda that matches the Pi structure
                // Use a fresh meta-variable for the body to avoid infinite recursion
                let body_term = Term::Meta(format!("?body{}", self.next_meta_id));

                Ok(Some(Term::Abs(
                    param.clone(),
                    param_type.clone(),
                    Box::new(body_term),
                )))
            }
            Term::Meta(_) => {
                // Can't synthesize for unknown types
                Ok(None)
            }
            _ => {
                // For concrete types, return a meta-variable
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
        }
    }

    /// Check if a term is a Miller pattern
    /// Miller patterns are terms of the form: ?M x1 x2 ... xn where xi are
    /// distinct bound variables
    fn is_miller_pattern(&self, term: &Term) -> bool {
        let (head, args) = self.decompose_application(term);

        match head {
            Term::Meta(_) => {
                // Check that all arguments are distinct variables
                let mut seen = HashSet::new();
                for arg in args {
                    if let Term::Var(x) = arg {
                        if !seen.insert(x.clone()) {
                            return false; // Not distinct
                        }
                    } else {
                        return false; // Not a variable
                    }
                }
                true
            }
            _ => false,
        }
    }

    /// Extract the spine of a Miller pattern (the meta-variable and its
    /// arguments)
    fn extract_miller_spine(&self, term: &Term) -> Option<(String, Vec<String>)> {
        // First validate it's actually a Miller pattern
        if !self.is_miller_pattern(term) {
            return None;
        }

        let (head, args) = self.decompose_application(term);

        if let Term::Meta(meta_name) = head {
            // Collect variable names (we already know they're all variables from
            // is_miller_pattern)
            let var_names: Vec<String> = args
                .iter()
                .map(|arg| {
                    if let Term::Var(name) = arg {
                        name.clone()
                    } else {
                        unreachable!()
                    }
                })
                .collect();

            Some((meta_name.clone(), var_names))
        } else {
            None
        }
    }

    /// Solve simple Miller pattern unification
    /// Only handles simple cases
    fn solve_miller_pattern(
        &mut self,
        meta_name: &str,
        spine: &[String],
        other: &Term,
    ) -> Result<bool, TypeError> {
        // Check if we can abstract over the spine variables
        if !Self::can_abstract_over_variables(other, spine) {
            // Can't abstract, fall back to regular unification
            return self.unify(
                &Term::Meta(meta_name.to_string()),
                other,
                ConstraintStrength::Required,
            );
        }

        if let Some(meta_id) = self.parse_meta_name(meta_name) {
            // Only handle simple cases to avoid complexity
            match other {
                // Case 1: ?M x ≡ y (different variable) -> ?M := λx. y
                Term::Var(y) if spine.len() == 1 && spine[0] != *y => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))), /* Type inferred later */
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                // Case 2: ?M x ≡ c (constant) -> ?M := λx. c
                Term::Const(_) | Term::Sort(_) if spine.len() == 1 => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))),
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                _ => {
                    // For more complex cases, we could build the full lambda
                    // abstraction but for now, fall back to
                    // regular unification
                }
            }
        }

        // Fall back to regular unification
        self.unify(
            &Term::Meta(meta_name.to_string()),
            other,
            ConstraintStrength::Required,
        )
    }

    /// Check if a term can be abstracted over the given variables
    fn can_abstract_over_variables(term: &Term, vars: &[String]) -> bool {
        // For now, implement a simple check
        // In a full implementation, this would check that the term only uses variables
        // that are either in the spine or are bound locally
        match term {
            Term::Var(v) => vars.contains(v), // Variable must be in spine
            Term::App(f, arg) => {
                Self::can_abstract_over_variables(f, vars)
                    && Self::can_abstract_over_variables(arg, vars)
            }
            Term::Abs(param, ty, body) => {
                // Extend the scope with the bound parameter
                let mut extended_vars = vars.to_vec();
                extended_vars.push(param.clone());
                Self::can_abstract_over_variables(ty, vars)
                    && Self::can_abstract_over_variables(body, &extended_vars)
            }
            Term::Meta(_) => true,  // Meta-variables are always abstractable
            Term::Const(_) => true, // Constants are always abstractable
            Term::Sort(_) => true,  // Sorts are always abstractable
            _ => false,             // Conservative for complex constructs
        }
    }

    /// Delay a constraint for later processing
    fn delay_constraint(
        &mut self,
        constraint_id: ConstraintId,
        waiting_on: HashSet<MetaId>,
    ) -> Result<(), TypeError> {
        if let Some(constraint) = self.constraints.remove(&constraint_id) {
            let delayed = Constraint::Delayed {
                id: constraint_id,
                constraint: Box::new(constraint),
                waiting_on,
            };
            self.constraints.insert(constraint_id, delayed);
            self.delayed_constraints.insert(constraint_id);
        }
        Ok(())
    }

    /// Wake up delayed constraints that are no longer blocked
    fn wake_delayed_constraints(&mut self) -> Result<Vec<ConstraintId>, TypeError> {
        let mut woken = Vec::new();
        let mut to_wake = Vec::new();

        // Check which delayed constraints can now be woken up
        for constraint_id in &self.delayed_constraints {
            if let Some(Constraint::Delayed {
                waiting_on,
                constraint,
                ..
            }) = self.constraints.get(constraint_id)
            {
                // Check if all blocking metas are now solved
                let all_solved = waiting_on.iter().all(|meta_id| {
                    self.metas
                        .get(meta_id)
                        .is_some_and(|info| info.solution.is_some())
                });

                if all_solved {
                    to_wake.push((*constraint_id, constraint.as_ref().clone()));
                }
            }
        }

        // Wake up the constraints
        for (constraint_id, original_constraint) in to_wake {
            self.constraints.insert(constraint_id, original_constraint);
            self.delayed_constraints.remove(&constraint_id);
            woken.push(constraint_id);
        }

        Ok(woken)
    }

    /// Check if we can delay a constraint
    fn can_delay(&self, constraint_id: &ConstraintId) -> bool {
        if let Some(constraint) = self.constraints.get(constraint_id) {
            match constraint {
                Constraint::Unify {
                    left,
                    right,
                    strength,
                    ..
                } => {
                    // We can delay weak constraints that involve unsolved metas
                    if *strength == ConstraintStrength::Weak {
                        let left_metas = self.collect_metas_in_term(left);
                        let right_metas = self.collect_metas_in_term(right);

                        // Check if any involved metas are unsolved
                        for meta_id in left_metas.iter().chain(right_metas.iter()) {
                            if self.is_unsolved_meta(meta_id) {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::HasType {
                    term,
                    expected_type,
                    ..
                } => {
                    // Can delay HasType constraints involving unsolved metas
                    let term_metas = self.collect_metas_in_term(term);
                    let type_metas = self.collect_metas_in_term(expected_type);

                    for meta_id in term_metas.iter().chain(type_metas.iter()) {
                        if let Some(meta_info) = self.metas.get(meta_id) {
                            if meta_info.solution.is_none() {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::Delayed { .. } => true, // Already delayed
                _ => false,
            }
        } else {
            false
        }
    }

    /// Check if a meta-variable is unsolved
    fn is_unsolved_meta(&self, meta_id: &MetaId) -> bool {
        self.metas
            .get(meta_id)
            .is_some_and(|info| info.solution.is_none())
    }
}
/// Advanced constraint solver with dependency tracking
pub struct Solver {
    /// Meta-variables
    metas: HashMap<MetaId, MetaInfo>,
    /// Active constraints
    constraints: HashMap<ConstraintId, Constraint>,
    /// Constraint queue (ordered by priority)
    queue: VecDeque<ConstraintId>,
    /// Solved substitutions
    substitution: Substitution,
    /// Next meta ID
    next_meta_id: u64,
    /// Next universe meta ID
    next_universe_meta_id: u32,
    /// Next constraint ID
    next_constraint_id: u64,
    /// Type checking context
    context: Context,
    /// Dependency graph: meta -> constraints that depend on it
    dependencies: HashMap<MetaId, HashSet<ConstraintId>>,
    /// Delayed constraints that couldn't be solved immediately
    delayed_constraints: HashSet<ConstraintId>,
    /// Recursion depth counter to prevent stack overflow
    recursion_depth: u32,
    /// Flag to enable Miller pattern solving (advanced)
    enable_miller_patterns: bool,
    /// Flag to enable HasType constraint solving (advanced)
    enable_has_type_solving: bool,
}
}

The Solver represents the culmination of our constraint solving approach, integrating multiple algorithms into a unified framework. The solver maintains several critical data structures:

Meta-variable Registry (metas) tracks all unknown terms with their complete metadata and dependency relationships.

Constraint Management (constraints, queue) maintains active constraints in a priority queue that enables intelligent solving order.

Dependency Tracking (dependencies) builds a graph of relationships between meta-variables and constraints, enabling topological solving and cycle detection.

Advanced Features (enable_miller_patterns, enable_has_type_solving) can be activated for handling higher-order unification patterns and complex type constraints.
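As a taste of the first of these features, here is a minimal standalone sketch — using a simplified, untyped `Term` rather than the chapter's AST — of the easy Miller-pattern cases the solver handles: a meta applied to a single bound variable, `?M x ≡ t`, is solved by abstraction as `?M := λx. t` when `t` is a variable or constant. (The chapter's `solve_miller_pattern` additionally requires the right-hand variable to differ from the spine variable.)

```rust
#![allow(dead_code)]

#[derive(Debug, Clone, PartialEq)]
enum Term {
    Var(String),
    Const(String),
    Meta(String),
    App(Box<Term>, Box<Term>),
    Abs(String, Box<Term>), // untyped binder, for brevity
}

/// Solve `?M x ≡ rhs` by abstraction when `rhs` is a variable or constant.
fn solve_pattern(spine_var: &str, rhs: &Term) -> Option<Term> {
    match rhs {
        Term::Var(_) | Term::Const(_) => {
            Some(Term::Abs(spine_var.to_string(), Box::new(rhs.clone())))
        }
        _ => None, // more complex right-hand sides fall back to unification
    }
}

fn main() {
    // ?M x ≡ c  solved as  ?M := λx. c
    let sol = solve_pattern("x", &Term::Const("c".into())).unwrap();
    assert_eq!(sol, Term::Abs("x".into(), Box::new(Term::Const("c".into()))));

    // An application on the right is not a simple case: no direct solution.
    let app = Term::App(Box::new(Term::Var("f".into())), Box::new(Term::Var("x".into())));
    assert!(solve_pattern("x", &app).is_none());
    println!("miller pattern cases ok");
}
```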

Meta-Variable Creation and Tracking

#![allow(unused)]
fn main() {
/// Create a fresh meta-variable
pub fn fresh_meta(&mut self, ctx_vars: Vec<String>, expected_type: Option<Term>) -> MetaId {
    let id = MetaId(self.next_meta_id);
    self.next_meta_id += 1;

    let info = MetaInfo {
        id,
        context: ctx_vars,
        expected_type,
        solution: None,
        dependencies: HashSet::new(),
        occurrences: Vec::new(),
    };

    self.metas.insert(id, info);
    id
}
}

Fresh meta-variable generation demonstrates how we maintain proper scoping and context information. Each meta-variable records the variables that were in scope at its creation point, so the solver can later check that a proposed solution does not mention variables outside that scope.
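The scope discipline this enables can be illustrated with a toy standalone sketch (the `MetaInfo` here is a simplification, not the chapter's type): a candidate solution is admissible only if every free variable it mentions was in scope when the meta was created.

```rust
use std::collections::HashSet;

// Simplified stand-in for the solver's MetaInfo: just the captured context
// and an (optional) textual solution.
struct MetaInfo {
    context: Vec<String>,
    solution: Option<String>,
}

/// Admit `candidate` as a solution only if all of its free variables were in
/// scope when the meta was created.
fn try_solve(meta: &mut MetaInfo, candidate: &str, free_vars: &[&str]) -> bool {
    let scope: HashSet<&str> = meta.context.iter().map(|s| s.as_str()).collect();
    if free_vars.iter().all(|v| scope.contains(v)) {
        meta.solution = Some(candidate.to_string());
        true
    } else {
        false
    }
}

fn main() {
    // ?m was created under the binder `x`: it may mention `x` but not `y`.
    let mut m = MetaInfo { context: vec!["x".into()], solution: None };
    assert!(try_solve(&mut m, "x", &["x"])); // solving ?m := x is fine
    assert_eq!(m.solution.as_deref(), Some("x"));

    let mut esc = MetaInfo { context: vec!["x".into()], solution: None };
    assert!(!try_solve(&mut esc, "y", &["y"])); // `y` would escape its scope
    assert!(esc.solution.is_none());
    println!("scope checks ok");
}
```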

#![allow(unused)]
fn main() {
/// Create a fresh universe meta-variable
pub fn fresh_universe_meta(&mut self) -> Universe {
    let id = self.next_universe_meta_id;
    self.next_universe_meta_id += 1;
    Universe::Meta(id)
}
}

Universe meta-variables represent unknown universe levels that get resolved through the universe constraint solver, enabling flexible universe polymorphism.

Constraint Management and Dependencies

#![allow(unused)]
fn main() {
/// Add a constraint to the solver
pub fn add_constraint(&mut self, constraint: Constraint) -> ConstraintId {
    let id = match &constraint {
        Constraint::Unify { id, .. }
        | Constraint::HasType { id, .. }
        | Constraint::UnifyUniverse { id, .. }
        | Constraint::UniverseLevel { id, .. }
        | Constraint::Delayed { id, .. } => *id,
    };

    // Track which metas appear in this constraint
    self.track_meta_occurrences(&constraint);

    self.constraints.insert(id, constraint);
    self.queue.push_back(id);
    id
}
}

Adding constraints to the solver involves dependency tracking that identifies which meta-variables appear in each constraint:

#![allow(unused)]
fn main() {
/// Track meta-variable occurrences in constraints
fn track_meta_occurrences(&mut self, constraint: &Constraint) {
    let metas = self.collect_metas_in_constraint(constraint);
    let constraint_id = match constraint {
        Constraint::Unify { id, .. }
        | Constraint::HasType { id, .. }
        | Constraint::UnifyUniverse { id, .. }
        | Constraint::UniverseLevel { id, .. }
        | Constraint::Delayed { id, .. } => *id,
    };

    for meta_id in metas {
        if let Some(info) = self.metas.get_mut(&meta_id) {
            info.occurrences.push(constraint_id);
        }
        self.dependencies
            .entry(meta_id)
            .or_default()
            .insert(constraint_id);
    }
}
}

This dependency tracking enables several capabilities that keep constraint resolution efficient and correct. When a meta-variable is solved, the solver can wake any constraints that were blocked on it, making them active and potentially solvable again. Processing constraints in dependency order also ensures that simpler constraints are discharged before the more complex ones that rely on their solutions.

The dependency tracking also enables detection of circular dependencies that might indicate unsolvable constraint systems. When the solver identifies cycles in the dependency graph, it can report these as errors rather than entering infinite loops, providing users with meaningful feedback about problematic constraint patterns.

#![allow(unused)]
fn main() {
/// Collect all meta-variables in a constraint
fn collect_metas_in_constraint(&self, constraint: &Constraint) -> HashSet<MetaId> {
    match constraint {
        Constraint::Unify { left, right, .. } => {
            let mut metas = self.collect_metas_in_term(left);
            metas.extend(self.collect_metas_in_term(right));
            metas
        }
        Constraint::HasType {
            term,
            expected_type,
            ..
        } => {
            let mut metas = self.collect_metas_in_term(term);
            metas.extend(self.collect_metas_in_term(expected_type));
            metas
        }
        Constraint::UnifyUniverse { .. } => {
            // Universe constraints don't directly contain term meta-variables
            // but they might contain universe meta-variables (which we track separately)
            HashSet::new()
        }
        Constraint::UniverseLevel { .. } => {
            // Same as above
            HashSet::new()
        }
        Constraint::Delayed {
            constraint,
            waiting_on,
            ..
        } => {
            let mut metas = waiting_on.clone();
            metas.extend(self.collect_metas_in_constraint(constraint));
            metas
        }
    }
}
}

The meta-variable collection algorithm recursively traverses constraint structures to identify all dependencies, building the complete dependency graph that guides solving order.

Substitution System

#![allow(unused)]
fn main() {
use std::collections::{HashMap, HashSet, VecDeque};
use std::fmt;
use crate::ast::{Term, Universe};
use crate::context::Context;
use crate::errors::TypeError;
/// Meta-variable identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct MetaId(pub u64);
impl fmt::Display for MetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?m{}", self.0)
    }
}
/// Universe meta-variable identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct UniverseMetaId(pub u64);
impl fmt::Display for UniverseMetaId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "?u{}", self.0)
    }
}
/// Information about a meta-variable
#[derive(Debug, Clone)]
pub struct MetaInfo {
    pub id: MetaId,
    pub context: Vec<String>, // Variables in scope when meta was created
    pub expected_type: Option<Term>, // Expected type of the meta-variable
    pub solution: Option<Term>, // Solution if solved
    pub dependencies: HashSet<MetaId>, // Meta-variables this depends on
    pub occurrences: Vec<ConstraintId>, // Constraints this meta appears in
}
/// Constraint identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct ConstraintId(pub u64);
/// Constraint types
#[derive(Debug, Clone)]
pub enum Constraint {
    /// Unification: t1 ≡ t2
    Unify {
        id: ConstraintId,
        left: Term,
        right: Term,
        strength: ConstraintStrength,
    },
    /// Type constraint: term : type
    HasType {
        id: ConstraintId,
        term: Term,
        expected_type: Term,
    },
    /// Universe unification: u1 ≡ u2
    UnifyUniverse {
        id: ConstraintId,
        left: Universe,
        right: Universe,
        strength: ConstraintStrength,
    },
    /// Universe level constraint: u1 ≤ u2
    UniverseLevel {
        id: ConstraintId,
        left: Universe,
        right: Universe,
    },
    /// Delayed constraint (for complex patterns)
    Delayed {
        id: ConstraintId,
        constraint: Box<Constraint>,
        waiting_on: HashSet<MetaId>,
    },
}
/// Constraint strength for prioritization
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ConstraintStrength {
    Required,  // Must be solved
    Preferred, // Should be solved if possible
    Weak,      // Can be postponed
}
/// Accumulated solutions: term meta-variables to terms, and universe
/// meta-variables to universe levels
#[derive(Debug, Clone, Default)]
pub struct Substitution {
    mapping: HashMap<MetaId, Term>,
    universe_mapping: HashMap<u32, Universe>,
}
impl Substitution {
    pub fn new() -> Self {
        Self::default()
    }

    pub fn insert(&mut self, meta: MetaId, term: Term) {
        self.mapping.insert(meta, term);
    }

    pub fn get(&self, meta: &MetaId) -> Option<&Term> {
        self.mapping.get(meta)
    }

    pub fn insert_universe(&mut self, meta_id: u32, universe: Universe) {
        self.universe_mapping.insert(meta_id, universe);
    }

    pub fn get_universe(&self, meta_id: &u32) -> Option<&Universe> {
        self.universe_mapping.get(meta_id)
    }

    /// Apply substitution to a term
    pub fn apply(&self, term: &Term) -> Term {
        match term {
            Term::Meta(name) => {
                // Try to parse meta ID and look up substitution
                if let Some(meta_id) = self.parse_meta_name(name) {
                    if let Some(subst) = self.mapping.get(&meta_id) {
                        // Recursively apply substitution
                        return self.apply(subst);
                    }
                }
                term.clone()
            }
            Term::App(f, arg) => Term::App(Box::new(self.apply(f)), Box::new(self.apply(arg))),
            Term::Abs(x, ty, body) => Term::Abs(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
            ),
            Term::Pi(x, ty, body, implicit) => Term::Pi(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(body)),
                *implicit,
            ),
            Term::Let(x, ty, val, body) => Term::Let(
                x.clone(),
                Box::new(self.apply(ty)),
                Box::new(self.apply(val)),
                Box::new(self.apply(body)),
            ),
            Term::Sort(u) => Term::Sort(self.apply_universe(u)),
            // Other cases remain unchanged
            _ => term.clone(),
        }
    }

    /// Apply substitution to a universe
    pub fn apply_universe(&self, universe: &Universe) -> Universe {
        let result = match universe {
            Universe::Meta(id) => {
                if let Some(subst) = self.universe_mapping.get(id) {
                    // Recursively apply substitution
                    self.apply_universe(subst)
                } else {
                    universe.clone()
                }
            }
            Universe::Add(base, n) => Universe::Add(Box::new(self.apply_universe(base)), *n),
            Universe::Max(u, v) => Universe::Max(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            Universe::IMax(u, v) => Universe::IMax(
                Box::new(self.apply_universe(u)),
                Box::new(self.apply_universe(v)),
            ),
            // Constants, variables, and scoped variables remain unchanged during substitution
            Universe::Const(_) | Universe::ScopedVar(_, _) => universe.clone(),
        };
        // Normalize the result to simplify expressions like Const(0) + 1 -> Const(1)
        self.normalize_universe_static(&result)
    }

    /// Static normalization (doesn't require &self)
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe_static(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe_static(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe_static(u1);
                let u2_norm = self.normalize_universe_static(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }
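    // Worked examples (illustrative, not part of the implementation):
    // normalization folds constant-only expressions and leaves symbolic ones intact:
    //   Add(Const(0), 1)          ~> Const(1)
    //   Max(Const(1), Const(2))   ~> Const(2)
    //   Max(Meta(0), Const(1))    ~> Max(Meta(0), Const(1))  (unchanged)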

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }
}

/// Advanced constraint solver with dependency tracking
pub struct Solver {
    /// Meta-variables
    metas: HashMap<MetaId, MetaInfo>,
    /// Active constraints
    constraints: HashMap<ConstraintId, Constraint>,
    /// Constraint queue (ordered by priority)
    queue: VecDeque<ConstraintId>,
    /// Solved substitutions
    substitution: Substitution,
    /// Next meta ID
    next_meta_id: u64,
    /// Next universe meta ID
    next_universe_meta_id: u32,
    /// Next constraint ID
    next_constraint_id: u64,
    /// Type checking context
    context: Context,
    /// Dependency graph: meta -> constraints that depend on it
    dependencies: HashMap<MetaId, HashSet<ConstraintId>>,
    /// Delayed constraints that couldn't be solved immediately
    delayed_constraints: HashSet<ConstraintId>,
    /// Recursion depth counter to prevent stack overflow
    recursion_depth: u32,
    /// Flag to enable Miller pattern solving (advanced)
    enable_miller_patterns: bool,
    /// Flag to enable HasType constraint solving (advanced)
    enable_has_type_solving: bool,
}

impl Solver {
    pub fn new(context: Context) -> Self {
        Self {
            metas: HashMap::new(),
            constraints: HashMap::new(),
            queue: VecDeque::new(),
            substitution: Substitution::new(),
            next_meta_id: 0,
            next_universe_meta_id: 0,
            next_constraint_id: 0,
            context,
            dependencies: HashMap::new(),
            delayed_constraints: HashSet::new(),
            recursion_depth: 0,
            enable_miller_patterns: false,  // Advanced feature
            enable_has_type_solving: false, // Advanced feature
        }
    }

    /// Enable Miller pattern solving (advanced)
    pub fn enable_miller_patterns(&mut self) {
        self.enable_miller_patterns = true;
    }

    /// Check if Miller pattern solving is enabled
    pub fn miller_patterns_enabled(&self) -> bool {
        self.enable_miller_patterns
    }

    /// Enable HasType constraint solving (advanced)
    pub fn enable_has_type_solving(&mut self) {
        self.enable_has_type_solving = true;
    }

    /// Check if HasType constraint solving is enabled
    pub fn has_type_solving_enabled(&self) -> bool {
        self.enable_has_type_solving
    }

    /// Create a fresh meta-variable
    pub fn fresh_meta(&mut self, ctx_vars: Vec<String>, expected_type: Option<Term>) -> MetaId {
        let id = MetaId(self.next_meta_id);
        self.next_meta_id += 1;

        let info = MetaInfo {
            id,
            context: ctx_vars,
            expected_type,
            solution: None,
            dependencies: HashSet::new(),
            occurrences: Vec::new(),
        };

        self.metas.insert(id, info);
        id
    }

    /// Create a fresh universe meta-variable
    pub fn fresh_universe_meta(&mut self) -> Universe {
        let id = self.next_universe_meta_id;
        self.next_universe_meta_id += 1;
        Universe::Meta(id)
    }

    /// Add a constraint to the solver
    pub fn add_constraint(&mut self, constraint: Constraint) -> ConstraintId {
        let id = match &constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        // Track which metas appear in this constraint
        self.track_meta_occurrences(&constraint);

        self.constraints.insert(id, constraint);
        self.queue.push_back(id);
        id
    }

    /// Create a new constraint ID
    pub fn new_constraint_id(&mut self) -> ConstraintId {
        let id = ConstraintId(self.next_constraint_id);
        self.next_constraint_id += 1;
        id
    }

    /// Track meta-variable occurrences in constraints
    fn track_meta_occurrences(&mut self, constraint: &Constraint) {
        let metas = self.collect_metas_in_constraint(constraint);
        let constraint_id = match constraint {
            Constraint::Unify { id, .. }
            | Constraint::HasType { id, .. }
            | Constraint::UnifyUniverse { id, .. }
            | Constraint::UniverseLevel { id, .. }
            | Constraint::Delayed { id, .. } => *id,
        };

        for meta_id in metas {
            if let Some(info) = self.metas.get_mut(&meta_id) {
                info.occurrences.push(constraint_id);
            }
            self.dependencies
                .entry(meta_id)
                .or_default()
                .insert(constraint_id);
        }
    }

    /// Collect all meta-variables in a constraint
    fn collect_metas_in_constraint(&self, constraint: &Constraint) -> HashSet<MetaId> {
        match constraint {
            Constraint::Unify { left, right, .. } => {
                let mut metas = self.collect_metas_in_term(left);
                metas.extend(self.collect_metas_in_term(right));
                metas
            }
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                let mut metas = self.collect_metas_in_term(term);
                metas.extend(self.collect_metas_in_term(expected_type));
                metas
            }
            Constraint::UnifyUniverse { .. } => {
                // Universe constraints don't directly contain term meta-variables
                // but they might contain universe meta-variables (which we track separately)
                HashSet::new()
            }
            Constraint::UniverseLevel { .. } => {
                // Same as above
                HashSet::new()
            }
            Constraint::Delayed {
                constraint,
                waiting_on,
                ..
            } => {
                let mut metas = waiting_on.clone();
                metas.extend(self.collect_metas_in_constraint(constraint));
                metas
            }
        }
    }

    /// Collect all meta-variables in a term
    fn collect_metas_in_term(&self, term: &Term) -> HashSet<MetaId> {
        let mut metas = HashSet::new();
        self.collect_metas_recursive(term, &mut metas);
        metas
    }

    fn collect_metas_recursive(&self, term: &Term, metas: &mut HashSet<MetaId>) {
        match term {
            Term::Meta(name) => {
                if let Some(id) = self.parse_meta_name(name) {
                    metas.insert(id);
                }
            }
            Term::App(f, arg) => {
                self.collect_metas_recursive(f, metas);
                self.collect_metas_recursive(arg, metas);
            }
            Term::Abs(_, ty, body) | Term::Pi(_, ty, body, _) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(body, metas);
            }
            Term::Let(_, ty, val, body) => {
                self.collect_metas_recursive(ty, metas);
                self.collect_metas_recursive(val, metas);
                self.collect_metas_recursive(body, metas);
            }
            _ => {}
        }
    }

    fn parse_meta_name(&self, name: &str) -> Option<MetaId> {
        if let Some(stripped) = name.strip_prefix("?m") {
            stripped.parse::<u64>().ok().map(MetaId)
        } else {
            None
        }
    }
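    // Illustrative behaviour: "?m42" parses to Some(MetaId(42)), while names
    // such as "x", "?u3", or "?mfoo" all yield None.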

    /// Main solving loop
    pub fn solve(&mut self) -> Result<Substitution, TypeError> {
        let max_iterations = 1000;
        let mut iterations = 0;

        while !self.queue.is_empty() && iterations < max_iterations {
            iterations += 1;

            // Pick the best constraint to solve
            if let Some(constraint_id) = self.pick_constraint() {
                let Some(constraint) = self.constraints.remove(&constraint_id) else {
                    continue; // Stale queue entry: constraint was already removed
                };

                match self.solve_constraint(constraint.clone()) {
                    Ok(progress) => {
                        if progress {
                            // Propagate the solution
                            self.propagate_solution()?;

                            // Wake up delayed constraints that might now be solvable
                            let woken_constraints = self.wake_delayed_constraints()?;

                            // Add woken constraints back to queue with high priority
                            for woken_id in woken_constraints {
                                self.queue.push_front(woken_id);
                            }
                        }
                    }
                    Err(e) => {
                        // Try to delay the constraint if possible
                        if self.can_delay(&constraint_id) {
                            // Determine which meta-variables to wait for
                            let waiting_on = match &constraint {
                                Constraint::Unify { left, right, .. } => {
                                    let mut metas = self.collect_metas_in_term(left);
                                    metas.extend(self.collect_metas_in_term(right));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                Constraint::HasType {
                                    term,
                                    expected_type,
                                    ..
                                } => {
                                    let mut metas = self.collect_metas_in_term(term);
                                    metas.extend(self.collect_metas_in_term(expected_type));
                                    metas
                                        .into_iter()
                                        .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                        .collect()
                                }
                                _ => HashSet::new(),
                            };

                            if !waiting_on.is_empty() {
                                // Put constraint back and delay it
                                self.constraints.insert(constraint_id, constraint);
                                self.delay_constraint(constraint_id, waiting_on)?;
                            } else {
                                // Can't delay, return error
                                return Err(e);
                            }
                        } else {
                            return Err(e);
                        }
                    }
                }
            }
        }

        if iterations >= max_iterations {
            return Err(TypeError::Internal {
                message: "Constraint solving did not converge".to_string(),
            });
        }

        // Check for unsolved required constraints
        for constraint in self.constraints.values() {
            if let Constraint::Unify {
                strength: ConstraintStrength::Required,
                ..
            } = constraint
            {
                return Err(TypeError::Internal {
                    message: "Unsolved required constraints remain".to_string(),
                });
            }
        }

        Ok(self.substitution.clone())
    }

    /// Pick the next constraint to solve based on heuristics
    fn pick_constraint(&mut self) -> Option<ConstraintId> {
        // Priority order:
        // 1. Constraints with no meta-variables
        // 2. Constraints with only solved meta-variables
        // 3. Simple pattern constraints (Miller patterns)
        // 4. Other constraints

        let mut best_constraint = None;
        let mut best_score = i32::MAX;

        for &id in &self.queue {
            if let Some(constraint) = self.constraints.get(&id) {
                let score = self.score_constraint(constraint);
                if score < best_score {
                    best_score = score;
                    best_constraint = Some(id);
                }
            }
        }

        if let Some(id) = best_constraint {
            self.queue.retain(|&x| x != id);
            Some(id)
        } else {
            self.queue.pop_front()
        }
    }

    /// Score a constraint for solving priority (lower is better)
    fn score_constraint(&self, constraint: &Constraint) -> i32 {
        let metas = self.collect_metas_in_constraint(constraint);

        if metas.is_empty() {
            return 0; // No metas, can solve immediately
        }

        let unsolved_count = metas
            .iter()
            .filter(|m| {
                self.metas
                    .get(m)
                    .and_then(|info| info.solution.as_ref())
                    .is_none()
            })
            .count();

        if unsolved_count == 0 {
            return 1; // All metas solved
        }

        // Check for Miller patterns
        if let Constraint::Unify { left, right, .. } = constraint {
            if self.is_miller_pattern_unification(left, right)
                || self.is_miller_pattern_unification(right, left)
            {
                return 10 + unsolved_count as i32;
            }
        }

        100 + unsolved_count as i32
    }
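    // Illustrative scores: a Unify constraint between two closed terms has no
    // metas and scores 0; a Miller pattern such as `?m0 x =? y` with one
    // unsolved meta scores 10 + 1 = 11; a general constraint mentioning two
    // unsolved metas scores 100 + 2 = 102.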

    /// Check if a unification is a Miller pattern (?M x1 ... xn = t)
    fn is_miller_pattern_unification(&self, left: &Term, right: &Term) -> bool {
        self.is_simple_miller_pattern(left, right)
    }

    /// Check if left is a simple Miller pattern and right is safe to solve with
    pub fn is_simple_miller_pattern(&self, left: &Term, right: &Term) -> bool {
        let (head, args) = self.decompose_application(left);

        if let Term::Meta(meta_name) = head {
            // Must have at least one argument to be interesting
            if args.is_empty() {
                return false;
            }

            // Check if all arguments are distinct variables
            let mut seen = HashSet::new();
            for arg in &args {
                if let Term::Var(x) = arg {
                    if !seen.insert(x.clone()) {
                        return false; // Not distinct
                    }
                } else {
                    return false; // Not a variable
                }
            }

            // Occurs check: the right-hand side must not contain the meta-variable itself
            if let Some(meta_id) = self.parse_meta_name(meta_name) {
                if self.collect_metas_in_term(right).contains(&meta_id) {
                    return false; // Occurs check failure
                }
            }

            // For now, only handle simple cases where right is a variable or constant
            match right {
                Term::Var(_) | Term::Const(_) | Term::Sort(_) => true,
                Term::Meta(right_name) => {
                    // Allow meta-to-meta only when the two metas are distinct
                    meta_name != right_name
                }
                _ => false, // More complex cases need careful handling
            }
        } else {
            false
        }
    }
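    // Illustrative cases (not exercised in the code itself):
    //   ?m0 x y =? z       pattern: distinct variable spine, simple RHS
    //   ?m0 x x =? z       not a pattern: repeated variable in the spine
    //   ?m0 (f x) =? z     not a pattern: argument is not a bare variable
    //   ?m0 x =? ?m0 y     not a pattern: fails the occurs check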

    /// Decompose an application into head and arguments
    fn decompose_application<'a>(&self, term: &'a Term) -> (&'a Term, Vec<&'a Term>) {
        let mut current = term;
        let mut args = Vec::new();

        while let Term::App(f, arg) = current {
            args.push(arg.as_ref());
            current = f;
        }

        args.reverse();
        (current, args)
    }
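    // Illustrative decomposition: the nested application ((f a) b), i.e.
    // App(App(f, a), b), unwinds to head `f` with argument list [a, b].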

    /// Solve a single constraint
    fn solve_constraint(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 100;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in solve_constraint",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.solve_constraint_impl(constraint);
        self.recursion_depth -= 1;
        result
    }

    fn solve_constraint_impl(&mut self, constraint: Constraint) -> Result<bool, TypeError> {
        match constraint {
            Constraint::Unify {
                left,
                right,
                strength,
                ..
            } => self.unify(&left, &right, strength),
            Constraint::HasType {
                term,
                expected_type,
                ..
            } => {
                if self.enable_has_type_solving {
                    self.solve_has_type_constraint(&term, &expected_type)
                } else {
                    // Default behavior: don't solve HasType constraints
                    Ok(false)
                }
            }
            Constraint::UnifyUniverse {
                left,
                right,
                strength,
                ..
            } => self.unify_universe(&left, &right, strength),
            Constraint::UniverseLevel { left, right, .. } => {
                // For now, treat level constraints as unification constraints
                self.unify_universe(&left, &right, ConstraintStrength::Preferred)
            }
            Constraint::Delayed { constraint, .. } => {
                // Re-queue the delayed constraint
                let inner = *constraint;
                self.add_constraint(inner);
                Ok(false)
            }
        }
    }

    /// Unify two terms
    fn unify(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        const MAX_RECURSION_DEPTH: u32 = 50;

        if self.recursion_depth >= MAX_RECURSION_DEPTH {
            return Err(TypeError::Internal {
                message: format!(
                    "Maximum recursion depth {} exceeded in unify",
                    MAX_RECURSION_DEPTH
                ),
            });
        }

        self.recursion_depth += 1;
        let result = self.unify_impl(left, right, strength);
        self.recursion_depth -= 1;
        result
    }

    fn unify_impl(
        &mut self,
        left: &Term,
        right: &Term,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply(left);
        let right = self.substitution.apply(right);

        // Check if already equal
        if self.alpha_equal(&left, &right) {
            return Ok(false);
        }

        // Miller pattern detection and solving (if enabled)
        if self.enable_miller_patterns {
            if self.is_simple_miller_pattern(&left, &right) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&left) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &right);
                }
            } else if self.is_simple_miller_pattern(&right, &left) {
                if let Some((meta_name, spine)) = self.extract_miller_spine(&right) {
                    // Try to solve the Miller pattern directly
                    return self.solve_miller_pattern(&meta_name, &spine, &left);
                }
            }
        }

        match (&left, &right) {
            // Meta-variable cases
            (Term::Meta(m), t) | (t, Term::Meta(m)) => {
                if let Some(meta_id) = self.parse_meta_name(m) {
                    self.solve_meta(meta_id, t)
                } else {
                    Err(TypeError::Internal {
                        message: format!("Invalid meta-variable: {}", m),
                    })
                }
            }

            // Structural cases
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *f1.clone(),
                    right: *f2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: *a1.clone(),
                    right: *a2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                // Rename bound variables to be consistent
                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::Unify {
                    id: id1,
                    left: *t1.clone(),
                    right: *t2.clone(),
                    strength,
                });

                let fresh_var = format!("x{}", self.next_meta_id);
                let b1_renamed = self.rename_var(x1, &fresh_var, b1);
                let b2_renamed = self.rename_var(x2, &fresh_var, b2);

                self.add_constraint(Constraint::Unify {
                    id: id2,
                    left: b1_renamed,
                    right: b2_renamed,
                    strength,
                });

                Ok(true)
            }

            // Universe (Sort) cases
            (Term::Sort(u1), Term::Sort(u2)) => self.unify_universe(u1, u2, strength),

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::TypeMismatch {
                        expected: left,
                        actual: right,
                    })
                } else {
                    Ok(false) // Can't solve, but not an error for weak constraints
                }
            }
        }
    }

    /// Unify two universes
    fn unify_universe(
        &mut self,
        left: &Universe,
        right: &Universe,
        strength: ConstraintStrength,
    ) -> Result<bool, TypeError> {
        // Apply current substitution
        let left = self.substitution.apply_universe(left);
        let right = self.substitution.apply_universe(right);

        // Normalize universes (simplify expressions like 0+1 -> 1)
        let left = self.normalize_universe(&left);
        let right = self.normalize_universe(&right);

        // Check if already equal
        if left == right {
            return Ok(false);
        }

        match (&left, &right) {
            // Same constants
            (Universe::Const(n1), Universe::Const(n2)) if n1 == n2 => Ok(false),

            // Universe variable cases
            (Universe::ScopedVar(s1, v1), Universe::ScopedVar(s2, v2)) if s1 == s2 && v1 == v2 => {
                Ok(false)
            }
            (Universe::ScopedVar(_, _), Universe::ScopedVar(_, _)) => {
                // Different universe variables cannot be unified
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!(
                            "Cannot unify distinct universe variables {} and {}",
                            left, right
                        ),
                    })
                } else {
                    Ok(false)
                }
            }

            // Meta-variable cases
            (Universe::Meta(id), u) | (u, Universe::Meta(id)) => self.solve_universe_meta(*id, u),

            // Structural cases
            (Universe::Add(base1, n1), Universe::Add(base2, n2)) if n1 == n2 => {
                let constraint_id = self.new_constraint_id();
                self.add_constraint(Constraint::UnifyUniverse {
                    id: constraint_id,
                    left: *base1.clone(),
                    right: *base2.clone(),
                    strength,
                });
                Ok(true)
            }

            // Arithmetic constraints: ?u+n = m means ?u = m-n
            (Universe::Add(base, n), Universe::Const(m))
            | (Universe::Const(m), Universe::Add(base, n)) => {
                if *m >= *n {
                    if let Universe::Meta(id) = base.as_ref() {
                        self.solve_universe_meta(*id, &Universe::Const(m - n))
                    } else {
                        // More complex case - generate constraint for base
                        let constraint_id = self.new_constraint_id();
                        self.add_constraint(Constraint::UnifyUniverse {
                            id: constraint_id,
                            left: *base.clone(),
                            right: Universe::Const(m - n),
                            strength,
                        });
                        Ok(true)
                    }
                } else {
                    // m < n would force `base` to a negative universe level
                    Err(TypeError::Internal {
                        message: format!(
                            "Cannot solve universe constraint: {} + {} cannot equal {} (would require a negative universe level)",
                            base, n, m
                        ),
                    })
                }
            }
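
            // Worked example (illustrative): `?u0 + 1 =? Const(3)` is solved
            // directly as ?u0 := Const(2), whereas `Const(0) =? ?u0 + 1` is
            // rejected because it would force a negative universe level.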

            (Universe::Max(u1, v1), Universe::Max(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            (Universe::IMax(u1, v1), Universe::IMax(u2, v2)) => {
                let id1 = self.new_constraint_id();
                let id2 = self.new_constraint_id();

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id1,
                    left: *u1.clone(),
                    right: *u2.clone(),
                    strength,
                });

                self.add_constraint(Constraint::UnifyUniverse {
                    id: id2,
                    left: *v1.clone(),
                    right: *v2.clone(),
                    strength,
                });

                Ok(true)
            }

            _ => {
                if strength == ConstraintStrength::Required {
                    Err(TypeError::Internal {
                        message: format!("Cannot unify universes {} and {}", left, right),
                    })
                } else {
                    Ok(false)
                }
            }
        }
    }

    /// Normalize a universe expression
    #[allow(clippy::only_used_in_recursion)]
    fn normalize_universe(&self, u: &Universe) -> Universe {
        match u {
            Universe::Add(base, n) => {
                let base_norm = self.normalize_universe(base);
                match base_norm {
                    Universe::Const(m) => Universe::Const(m + n),
                    _ => Universe::Add(Box::new(base_norm), *n),
                }
            }
            Universe::Max(u1, u2) => {
                let u1_norm = self.normalize_universe(u1);
                let u2_norm = self.normalize_universe(u2);
                match (&u1_norm, &u2_norm) {
                    (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                    _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
                }
            }
            _ => u.clone(),
        }
    }

    /// Solve a universe meta-variable
    fn solve_universe_meta(
        &mut self,
        meta_id: u32,
        solution: &Universe,
    ) -> Result<bool, TypeError> {
        // Occurs check for universe meta-variables
        if self.occurs_in_universe(meta_id, solution) {
            return Err(TypeError::Internal {
                message: format!("Universe occurs check failed for ?u{}", meta_id),
            });
        }

        // Record solution
        self.substitution.insert_universe(meta_id, solution.clone());
        Ok(true)
    }

    /// Check if a universe meta-variable occurs in a universe expression
    #[allow(clippy::only_used_in_recursion)]
    fn occurs_in_universe(&self, meta_id: u32, universe: &Universe) -> bool {
        match universe {
            Universe::Meta(id) => *id == meta_id,
            Universe::Add(base, _) => self.occurs_in_universe(meta_id, base),
            Universe::Max(u, v) | Universe::IMax(u, v) => {
                self.occurs_in_universe(meta_id, u) || self.occurs_in_universe(meta_id, v)
            }
            _ => false,
        }
    }

    /// Solve a meta-variable
    fn solve_meta(&mut self, meta_id: MetaId, solution: &Term) -> Result<bool, TypeError> {
        // Occurs check
        if self.collect_metas_in_term(solution).contains(&meta_id) {
            return Err(TypeError::Internal {
                message: format!("Occurs check failed for ?m{}", meta_id.0),
            });
        }

        // Check solution is well-scoped
        if let Some(meta_info) = self.metas.get(&meta_id) {
            if !self.is_well_scoped(solution, &meta_info.context) {
                return Err(TypeError::Internal {
                    message: format!("Solution for ?m{} is not well-scoped", meta_id.0),
                });
            }
        }

        // Record solution
        self.substitution.insert(meta_id, solution.clone());

        if let Some(info) = self.metas.get_mut(&meta_id) {
            info.solution = Some(solution.clone());
        }

        Ok(true)
    }

    /// Check if a term is well-scoped in a context
    fn is_well_scoped(&self, term: &Term, context: &[String]) -> bool {
        match term {
            Term::Var(x) => {
                // Check if it's a bound variable or a global constant
                context.contains(x)
                    || self.context.lookup(x).is_some()
                    || self.context.lookup_axiom(x).is_some()
                    || self.context.lookup_constructor(x).is_some()
            }
            Term::App(f, arg) => {
                self.is_well_scoped(f, context) && self.is_well_scoped(arg, context)
            }
            Term::Abs(x, ty, body) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Pi(x, ty, body, _) => {
                let mut extended_ctx = context.to_vec();
                extended_ctx.push(x.clone());
                self.is_well_scoped(ty, context) && self.is_well_scoped(body, &extended_ctx)
            }
            Term::Const(name) => {
                // Constants should be in the global context
                self.context.lookup_axiom(name).is_some()
                    || self.context.lookup_constructor(name).is_some()
            }
            _ => true, // Sorts, meta-variables, etc.
        }
    }

    /// Alpha equality check
    fn alpha_equal(&self, t1: &Term, t2: &Term) -> bool {
        self.alpha_equal_aux(t1, t2, &mut vec![])
    }

    #[allow(clippy::only_used_in_recursion)]
    fn alpha_equal_aux(&self, t1: &Term, t2: &Term, renamings: &mut Vec<(String, String)>) -> bool {
        match (t1, t2) {
            (Term::Var(x), Term::Var(y)) => {
                // Check if this is a bound variable that was renamed
                for (a, b) in renamings.iter() {
                    if a == x && b == y {
                        return true;
                    }
                }
                x == y
            }
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                self.alpha_equal_aux(f1, f2, renamings) && self.alpha_equal_aux(a1, a2, renamings)
            }
            (Term::Abs(x1, t1, b1), Term::Abs(x2, t2, b2)) => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Pi(x1, t1, b1, i1), Term::Pi(x2, t2, b2, i2)) if i1 == i2 => {
                if !self.alpha_equal_aux(t1, t2, renamings) {
                    return false;
                }
                renamings.push((x1.clone(), x2.clone()));
                let result = self.alpha_equal_aux(b1, b2, renamings);
                renamings.pop();
                result
            }
            (Term::Sort(u1), Term::Sort(u2)) => u1 == u2,
            (Term::Const(c1), Term::Const(c2)) => c1 == c2,
            (Term::Meta(m1), Term::Meta(m2)) => m1 == m2,
            _ => false,
        }
    }

    /// Rename a variable in a term
    #[allow(clippy::only_used_in_recursion)]
    fn rename_var(&self, old: &str, new: &str, term: &Term) -> Term {
        match term {
            Term::Var(x) if x == old => Term::Var(new.to_string()),
            Term::Var(x) => Term::Var(x.clone()),
            Term::App(f, arg) => Term::App(
                Box::new(self.rename_var(old, new, f)),
                Box::new(self.rename_var(old, new, arg)),
            ),
            Term::Abs(x, ty, body) if x != old => Term::Abs(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                Box::new(self.rename_var(old, new, body)),
            ),
            Term::Pi(x, ty, body, implicit) if x != old => Term::Pi(
                x.clone(),
                Box::new(self.rename_var(old, new, ty)),
                Box::new(self.rename_var(old, new, body)),
                *implicit,
            ),
            _ => term.clone(),
        }
    }

    /// Propagate solutions to dependent constraints
    fn propagate_solution(&mut self) -> Result<(), TypeError> {
        // Apply substitution to all remaining constraints
        let constraints: Vec<_> = self.constraints.drain().collect();

        for (id, constraint) in constraints {
            let updated = self.apply_subst_to_constraint(constraint);
            self.constraints.insert(id, updated);
        }

        Ok(())
    }

    /// Apply substitution to a constraint
    fn apply_subst_to_constraint(&self, constraint: Constraint) -> Constraint {
        match constraint {
            Constraint::Unify {
                id,
                left,
                right,
                strength,
            } => Constraint::Unify {
                id,
                left: self.substitution.apply(&left),
                right: self.substitution.apply(&right),
                strength,
            },
            Constraint::HasType {
                id,
                term,
                expected_type,
            } => Constraint::HasType {
                id,
                term: self.substitution.apply(&term),
                expected_type: self.substitution.apply(&expected_type),
            },
            Constraint::UnifyUniverse {
                id,
                left,
                right,
                strength,
            } => Constraint::UnifyUniverse {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
                strength,
            },
            Constraint::UniverseLevel { id, left, right } => Constraint::UniverseLevel {
                id,
                left: self.substitution.apply_universe(&left),
                right: self.substitution.apply_universe(&right),
            },
            Constraint::Delayed {
                id,
                constraint,
                waiting_on,
            } => Constraint::Delayed {
                id,
                constraint: Box::new(self.apply_subst_to_constraint(*constraint)),
                waiting_on,
            },
        }
    }

    /// Solve a HasType constraint by type inference and unification
    fn solve_has_type_constraint(
        &mut self,
        term: &Term,
        expected_type: &Term,
    ) -> Result<bool, TypeError> {
        match term {
            Term::Meta(meta_name) => {
                // If the term is a meta-variable, we can potentially solve it
                if let Some(meta_id) = self.parse_meta_name(meta_name) {
                    if let Some(meta_info) = self.metas.get(&meta_id) {
                        if meta_info.solution.is_none() {
                            // Try to construct a term that has the expected type
                            if let Some(solution) = self.synthesize_term_of_type(expected_type)? {
                                return self.solve_meta(meta_id, &solution);
                            }
                        }
                    }
                }
                // If we can't solve the meta, try unifying with expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }

            Term::App(f, arg) => {
                // For applications, we need to ensure the function type is compatible
                // Create a fresh meta for the function type: ?F : ?A -> ?B
                let arg_type_meta_id = self.fresh_meta(vec![], None);
                let result_type_meta_id = self.fresh_meta(vec![], None);
                let arg_type_meta = Term::Meta(format!("?m{}", arg_type_meta_id.0));
                let result_type_meta = Term::Meta(format!("?m{}", result_type_meta_id.0));
                let func_type = Term::Pi(
                    "_".to_string(),
                    Box::new(arg_type_meta.clone()),
                    Box::new(result_type_meta.clone()),
                    false,
                );

                // Add constraints:
                // 1. f : ?A -> ?B
                // 2. arg : ?A
                // 3. ?B ≡ expected_type
                let f_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: f.as_ref().clone(),
                    expected_type: func_type,
                };
                let arg_constraint = Constraint::HasType {
                    id: self.new_constraint_id(),
                    term: arg.as_ref().clone(),
                    expected_type: arg_type_meta,
                };
                let result_constraint = Constraint::Unify {
                    id: self.new_constraint_id(),
                    left: result_type_meta,
                    right: expected_type.clone(),
                    strength: ConstraintStrength::Required,
                };

                self.add_constraint(f_constraint);
                self.add_constraint(arg_constraint);
                self.add_constraint(result_constraint);

                Ok(true)
            }

            Term::Abs(param, param_type, body) => {
                // For lambda abstractions, expected type should be a Pi type
                match expected_type {
                    Term::Pi(_, expected_param_type, expected_body_type, _) => {
                        // Unify parameter types and check body type
                        let param_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: param_type.as_ref().clone(),
                            right: expected_param_type.as_ref().clone(),
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: expected_body_type.as_ref().clone(),
                        };

                        self.add_constraint(param_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    Term::Meta(_) => {
                        // Expected type is a meta-variable, create a Pi type for it
                        let body_type_meta_id = self.fresh_meta(vec![param.clone()], None);
                        let body_type_meta = Term::Meta(format!("?m{}", body_type_meta_id.0));
                        let pi_type = Term::Pi(
                            param.clone(),
                            param_type.clone(),
                            Box::new(body_type_meta.clone()),
                            false,
                        );

                        let type_unify = Constraint::Unify {
                            id: self.new_constraint_id(),
                            left: expected_type.clone(),
                            right: pi_type,
                            strength: ConstraintStrength::Required,
                        };

                        let body_check = Constraint::HasType {
                            id: self.new_constraint_id(),
                            term: body.as_ref().clone(),
                            expected_type: body_type_meta,
                        };

                        self.add_constraint(type_unify);
                        self.add_constraint(body_check);

                        Ok(true)
                    }
                    _ => {
                        // Type mismatch - lambda can't have non-function type
                        Err(TypeError::TypeMismatch {
                            expected: expected_type.clone(),
                            actual: Term::Pi(
                                param.clone(),
                                param_type.clone(),
                                Box::new(Term::Meta("?unknown".to_string())),
                                false,
                            ),
                        })
                    }
                }
            }

            _ => {
                // For other terms, just unify with the expected type
                self.unify(term, expected_type, ConstraintStrength::Preferred)
            }
        }
    }

    /// Attempt to synthesize a term of a given type (basic heuristics)
    fn synthesize_term_of_type(&self, expected_type: &Term) -> Result<Option<Term>, TypeError> {
        match expected_type {
            Term::Sort(_) => {
                // For Sort types, we could return a fresh meta or a simple type
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
            Term::Pi(param, param_type, _body_type, _implicit) => {
                // For Pi types, synthesize a lambda that matches the Pi structure
                // Use a fresh meta-variable for the body to avoid infinite recursion
                let body_term = Term::Meta(format!("?body{}", self.next_meta_id));

                Ok(Some(Term::Abs(
                    param.clone(),
                    param_type.clone(),
                    Box::new(body_term),
                )))
            }
            Term::Meta(_) => {
                // Can't synthesize for unknown types
                Ok(None)
            }
            _ => {
                // For concrete types, return a meta-variable
                Ok(Some(Term::Meta(format!("?synth{}", self.next_meta_id))))
            }
        }
    }

    /// Check if a term is a Miller pattern
    /// Miller patterns are terms of the form: ?M x1 x2 ... xn where xi are
    /// distinct bound variables
    fn is_miller_pattern(&self, term: &Term) -> bool {
        let (head, args) = self.decompose_application(term);

        match head {
            Term::Meta(_) => {
                // Check that all arguments are distinct variables
                let mut seen = HashSet::new();
                for arg in args {
                    if let Term::Var(x) = arg {
                        if !seen.insert(x.clone()) {
                            return false; // Not distinct
                        }
                    } else {
                        return false; // Not a variable
                    }
                }
                true
            }
            _ => false,
        }
    }

    /// Extract the spine of a Miller pattern (the meta-variable and its
    /// arguments)
    fn extract_miller_spine(&self, term: &Term) -> Option<(String, Vec<String>)> {
        // First validate it's actually a Miller pattern
        if !self.is_miller_pattern(term) {
            return None;
        }

        let (head, args) = self.decompose_application(term);

        if let Term::Meta(meta_name) = head {
            // Collect variable names (we already know they're all variables from
            // is_miller_pattern)
            let var_names: Vec<String> = args
                .iter()
                .map(|arg| {
                    if let Term::Var(name) = arg {
                        name.clone()
                    } else {
                        unreachable!()
                    }
                })
                .collect();

            Some((meta_name.clone(), var_names))
        } else {
            None
        }
    }

    /// Solve simple Miller pattern unification
    /// Only handles simple cases
    fn solve_miller_pattern(
        &mut self,
        meta_name: &str,
        spine: &[String],
        other: &Term,
    ) -> Result<bool, TypeError> {
        // Check if we can abstract over the spine variables
        if !Self::can_abstract_over_variables(other, spine) {
            // Can't abstract, fall back to regular unification
            return self.unify(
                &Term::Meta(meta_name.to_string()),
                other,
                ConstraintStrength::Required,
            );
        }

        if let Some(meta_id) = self.parse_meta_name(meta_name) {
            // Only handle simple cases to avoid complexity
            match other {
                // Case 1: ?M x ≡ y (different variable) -> ?M := λx. y
                Term::Var(y) if spine.len() == 1 && spine[0] != *y => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))), /* Type inferred later */
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                // Case 2: ?M x ≡ c (constant) -> ?M := λx. c
                Term::Const(_) | Term::Sort(_) if spine.len() == 1 => {
                    let solution = Term::Abs(
                        spine[0].clone(),
                        Box::new(Term::Meta(format!("?T{}", self.next_meta_id))),
                        Box::new(other.clone()),
                    );
                    return self.solve_meta(meta_id, &solution);
                }

                _ => {
                    // For more complex cases, we could build the full lambda
                    // abstraction but for now, fall back to
                    // regular unification
                }
            }
        }

        // Fall back to regular unification
        self.unify(
            &Term::Meta(meta_name.to_string()),
            other,
            ConstraintStrength::Required,
        )
    }

    /// Check if a term can be abstracted over the given variables
    fn can_abstract_over_variables(term: &Term, vars: &[String]) -> bool {
        // For now, implement a simple check
        // In a full implementation, this would check that the term only uses variables
        // that are either in the spine or are bound locally
        match term {
            Term::Var(v) => vars.contains(v), // Variable must be in spine
            Term::App(f, arg) => {
                Self::can_abstract_over_variables(f, vars)
                    && Self::can_abstract_over_variables(arg, vars)
            }
            Term::Abs(param, ty, body) => {
                // Extend the scope with the bound parameter
                let mut extended_vars = vars.to_vec();
                extended_vars.push(param.clone());
                Self::can_abstract_over_variables(ty, vars)
                    && Self::can_abstract_over_variables(body, &extended_vars)
            }
            Term::Meta(_) => true,  // Meta-variables are always abstractable
            Term::Const(_) => true, // Constants are always abstractable
            Term::Sort(_) => true,  // Sorts are always abstractable
            _ => false,             // Conservative for complex constructs
        }
    }

    /// Delay a constraint for later processing
    fn delay_constraint(
        &mut self,
        constraint_id: ConstraintId,
        waiting_on: HashSet<MetaId>,
    ) -> Result<(), TypeError> {
        if let Some(constraint) = self.constraints.remove(&constraint_id) {
            let delayed = Constraint::Delayed {
                id: constraint_id,
                constraint: Box::new(constraint),
                waiting_on,
            };
            self.constraints.insert(constraint_id, delayed);
            self.delayed_constraints.insert(constraint_id);
        }
        Ok(())
    }

    /// Wake up delayed constraints that are no longer blocked
    fn wake_delayed_constraints(&mut self) -> Result<Vec<ConstraintId>, TypeError> {
        let mut woken = Vec::new();
        let mut to_wake = Vec::new();

        // Check which delayed constraints can now be woken up
        for constraint_id in &self.delayed_constraints {
            if let Some(Constraint::Delayed {
                waiting_on,
                constraint,
                ..
            }) = self.constraints.get(constraint_id)
            {
                // Check if all blocking metas are now solved
                let all_solved = waiting_on.iter().all(|meta_id| {
                    self.metas
                        .get(meta_id)
                        .is_some_and(|info| info.solution.is_some())
                });

                if all_solved {
                    to_wake.push((*constraint_id, constraint.as_ref().clone()));
                }
            }
        }

        // Wake up the constraints
        for (constraint_id, original_constraint) in to_wake {
            self.constraints.insert(constraint_id, original_constraint);
            self.delayed_constraints.remove(&constraint_id);
            woken.push(constraint_id);
        }

        Ok(woken)
    }

    /// Check if we can delay a constraint
    fn can_delay(&self, constraint_id: &ConstraintId) -> bool {
        if let Some(constraint) = self.constraints.get(constraint_id) {
            match constraint {
                Constraint::Unify {
                    left,
                    right,
                    strength,
                    ..
                } => {
                    // We can delay weak constraints that involve unsolved metas
                    if *strength == ConstraintStrength::Weak {
                        let left_metas = self.collect_metas_in_term(left);
                        let right_metas = self.collect_metas_in_term(right);

                        // Check if any involved metas are unsolved
                        for meta_id in left_metas.iter().chain(right_metas.iter()) {
                            if self.is_unsolved_meta(meta_id) {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::HasType {
                    term,
                    expected_type,
                    ..
                } => {
                    // Can delay HasType constraints involving unsolved metas
                    let term_metas = self.collect_metas_in_term(term);
                    let type_metas = self.collect_metas_in_term(expected_type);

                    for meta_id in term_metas.iter().chain(type_metas.iter()) {
                        if let Some(meta_info) = self.metas.get(meta_id) {
                            if meta_info.solution.is_none() {
                                return true;
                            }
                        }
                    }
                    false
                }
                Constraint::Delayed { .. } => true, // Already delayed
                _ => false,
            }
        } else {
            false
        }
    }

    /// Check if a meta-variable is unsolved
    fn is_unsolved_meta(&self, meta_id: &MetaId) -> bool {
        self.metas
            .get(meta_id)
            .is_some_and(|info| info.solution.is_none())
    }
}

#![allow(unused)]
fn main() {
/// Substitution mapping meta-variables to terms
#[derive(Debug, Clone, Default)]
pub struct Substitution {
    mapping: HashMap<MetaId, Term>,
    universe_mapping: HashMap<u32, Universe>,
}
}

Substitutions represent partial solutions to the constraint system, mapping meta-variables to their resolved terms. Our substitution system handles both term-level and universe-level substitutions with proper normalization.

Substitution Application

#![allow(unused)]
fn main() {
/// Apply substitution to a term
pub fn apply(&self, term: &Term) -> Term {
    match term {
        Term::Meta(name) => {
            // Try to parse meta ID and look up substitution
            if let Some(meta_id) = self.parse_meta_name(name) {
                if let Some(subst) = self.mapping.get(&meta_id) {
                    // Recursively apply substitution
                    return self.apply(subst);
                }
            }
            term.clone()
        }
        Term::App(f, arg) => Term::App(Box::new(self.apply(f)), Box::new(self.apply(arg))),
        Term::Abs(x, ty, body) => Term::Abs(
            x.clone(),
            Box::new(self.apply(ty)),
            Box::new(self.apply(body)),
        ),
        Term::Pi(x, ty, body, implicit) => Term::Pi(
            x.clone(),
            Box::new(self.apply(ty)),
            Box::new(self.apply(body)),
            *implicit,
        ),
        Term::Let(x, ty, val, body) => Term::Let(
            x.clone(),
            Box::new(self.apply(ty)),
            Box::new(self.apply(val)),
            Box::new(self.apply(body)),
        ),
        Term::Sort(u) => Term::Sort(self.apply_universe(u)),
        // Other cases remain unchanged
        _ => term.clone(),
    }
}
}

The substitution application algorithm demonstrates the recursive nature of constraint solving in dependent type systems. When applying substitutions to terms like Pi, Abs, and Let, we must carefully handle variable binding and scope to avoid capture issues.

#![allow(unused)]
fn main() {
/// Apply substitution to a universe
pub fn apply_universe(&self, universe: &Universe) -> Universe {
    let result = match universe {
        Universe::Meta(id) => {
            if let Some(subst) = self.universe_mapping.get(id) {
                // Recursively apply substitution
                self.apply_universe(subst)
            } else {
                universe.clone()
            }
        }
        Universe::Add(base, n) => Universe::Add(Box::new(self.apply_universe(base)), *n),
        Universe::Max(u, v) => Universe::Max(
            Box::new(self.apply_universe(u)),
            Box::new(self.apply_universe(v)),
        ),
        Universe::IMax(u, v) => Universe::IMax(
            Box::new(self.apply_universe(u)),
            Box::new(self.apply_universe(v)),
        ),
        // Constants, variables, and scoped variables remain unchanged during substitution
        Universe::Const(_) | Universe::ScopedVar(_, _) => universe.clone(),
    };
    // Normalize the result to simplify expressions like Const(0) + 1 -> Const(1)
    self.normalize_universe_static(&result)
}
}

Universe substitutions require special handling due to their arithmetic nature. The normalization process simplifies expressions like Const(0) + 1 to Const(1), maintaining canonical forms that improve unification success rates.

#![allow(unused)]
fn main() {
/// Static normalization (doesn't require &self)
#[allow(clippy::only_used_in_recursion)]
fn normalize_universe_static(&self, u: &Universe) -> Universe {
    match u {
        Universe::Add(base, n) => {
            let base_norm = self.normalize_universe_static(base);
            match base_norm {
                Universe::Const(m) => Universe::Const(m + n),
                _ => Universe::Add(Box::new(base_norm), *n),
            }
        }
        Universe::Max(u1, u2) => {
            let u1_norm = self.normalize_universe_static(u1);
            let u2_norm = self.normalize_universe_static(u2);
            match (&u1_norm, &u2_norm) {
                (Universe::Const(n1), Universe::Const(n2)) => Universe::Const((*n1).max(*n2)),
                _ => Universe::Max(Box::new(u1_norm), Box::new(u2_norm)),
            }
        }
        _ => u.clone(),
    }
}
}

Static normalization performs universe-level arithmetic, combining constants and simplifying maximum expressions. This normalization is crucial for universe constraint solving, as it enables recognition of equivalent universe expressions that might otherwise appear different.

Main Constraint Solving Algorithm

#![allow(unused)]
fn main() {
/// Main solving loop
pub fn solve(&mut self) -> Result<Substitution, TypeError> {
    let max_iterations = 1000;
    let mut iterations = 0;

    while !self.queue.is_empty() && iterations < max_iterations {
        iterations += 1;

        // Pick the best constraint to solve
        if let Some(constraint_id) = self.pick_constraint() {
            let constraint = self.constraints.remove(&constraint_id).unwrap();

            match self.solve_constraint(constraint.clone()) {
                Ok(progress) => {
                    if progress {
                        // Propagate the solution
                        self.propagate_solution()?;

                        // Wake up delayed constraints that might now be solvable
                        let woken_constraints = self.wake_delayed_constraints()?;

                        // Add woken constraints back to queue with high priority
                        for woken_id in woken_constraints {
                            self.queue.push_front(woken_id);
                        }
                    }
                }
                Err(e) => {
                    // Try to delay the constraint if possible
                    if self.can_delay(&constraint_id) {
                        // Determine which meta-variables to wait for
                        let waiting_on = match &constraint {
                            Constraint::Unify { left, right, .. } => {
                                let mut metas = self.collect_metas_in_term(left);
                                metas.extend(self.collect_metas_in_term(right));
                                metas
                                    .into_iter()
                                    .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                    .collect()
                            }
                            Constraint::HasType {
                                term,
                                expected_type,
                                ..
                            } => {
                                let mut metas = self.collect_metas_in_term(term);
                                metas.extend(self.collect_metas_in_term(expected_type));
                                metas
                                    .into_iter()
                                    .filter(|meta_id| self.is_unsolved_meta(meta_id))
                                    .collect()
                            }
                            _ => HashSet::new(),
                        };

                        if !waiting_on.is_empty() {
                            // Put constraint back and delay it
                            self.constraints.insert(constraint_id, constraint);
                            self.delay_constraint(constraint_id, waiting_on)?;
                        } else {
                            // Can't delay, return error
                            return Err(e);
                        }
                    } else {
                        return Err(e);
                    }
                }
            }
        }
    }

    if iterations >= max_iterations {
        return Err(TypeError::Internal {
            message: "Constraint solving did not converge".to_string(),
        });
    }

    // Check for unsolved required constraints
    for constraint in self.constraints.values() {
        if let Constraint::Unify {
            strength: ConstraintStrength::Required,
            ..
        } = constraint
        {
            return Err(TypeError::Internal {
                message: "Unsolved required constraints remain".to_string(),
            });
        }
    }

    Ok(self.substitution.clone())
}
}

The main solving loop drives the solver through repeated rounds of constraint resolution and solution propagation. The algorithm maintains several invariants that ensure both correctness and termination. The progress guarantee says that each iteration either solves one or more constraints or detects an unsolvable situation that is reported as an error, so the solver never loops without making headway; the iteration cap is a backstop against non-convergence.

Dependency respect ensures that constraints are solved roughly in dependency order on their meta-variables: simpler constraints whose solutions may unblock more complex ones are prioritized. Solution propagation guarantees that once a meta-variable is solved, its solution immediately propagates through the entire constraint system, potentially unblocking previously delayed constraints and keeping the solver state consistent.

Constraint Selection Strategy

The solver uses intelligent constraint selection to maximize solving success:

#![allow(unused)]
fn main() {
let constraint_id = self.pick_constraint()?;
let constraint = self.constraints.remove(&constraint_id).unwrap();
}

The pick_constraint method prioritizes constraints on a few criteria. Constraint strength provides the primary ordering: required constraints take precedence over preferred constraints, which in turn take precedence over weak constraints. This ensures that fundamental type checking requirements are addressed before optional optimizations or guidance hints.

The number of unknown meta-variables in each constraint provides a secondary ordering criterion, with constraints containing fewer unknowns receiving higher priority since they are more likely to be solvable immediately. Constraint type also influences prioritization, as unification constraints often resolve other constraints by instantiating meta-variables that appear in multiple constraint relationships.
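The heuristic can be sketched as an ordering over pending constraints. This is a simplified model with hypothetical names (`PendingConstraint`, `unsolved_metas`), not the solver's actual data structures: strongest constraints first, ties broken by fewest unknown meta-variables.

```rust
use std::collections::HashMap;

// Ord derive gives Weak < Preferred < Required.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum ConstraintStrength {
    Weak,
    Preferred,
    Required,
}

// Hypothetical summary of a pending constraint for selection purposes.
struct PendingConstraint {
    strength: ConstraintStrength,
    unsolved_metas: usize, // meta-variables still unknown in this constraint
}

/// Pick the next constraint to attempt: strongest first, then fewest
/// unknown meta-variables (most likely to be immediately solvable).
fn pick_constraint(constraints: &HashMap<u64, PendingConstraint>) -> Option<u64> {
    constraints
        .iter()
        // Reverse on strength so Required sorts before Weak under min_by_key.
        .min_by_key(|(_, c)| (std::cmp::Reverse(c.strength), c.unsolved_metas))
        .map(|(id, _)| *id)
}
```

A required constraint with two unknowns still beats a preferred constraint with none, because strength dominates the ordering.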

Solution Propagation

When a constraint is successfully solved, the solver propagates the solution throughout the system:

#![allow(unused)]
fn main() {
if progress {
    self.propagate_solution()?;
    let woken_constraints = self.wake_delayed_constraints()?;
    // Continue with newly awakened constraints...
}
}

Solution propagation updates the entire constraint system to reflect newly discovered solutions. Substitution application applies each new solution to all remaining constraints, potentially simplifying them or enabling their resolution; this step replaces meta-variables with their concrete solutions wherever they appear.

Constraint wakeup activates delayed constraints that were waiting for the resolved meta-variables, bringing previously blocked constraints back into active consideration for solving. This mechanism ensures that the constraint resolution process can handle complex interdependencies where some constraints cannot be solved until others provide the necessary information.

Dependency updates modify the dependency graph to reflect newly resolved variables, removing solved meta-variables from dependency tracking and updating the topological ordering used for constraint selection. This maintenance ensures that the solver’s internal data structures remain consistent and efficient as solutions accumulate.
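The wakeup step can be sketched in isolation. This is a minimal model with hypothetical names (`Delayed`, `wake_delayed`), assuming a map of delayed constraints keyed by id, each recording the meta-variables blocking it:

```rust
use std::collections::{HashMap, HashSet};

type MetaId = u32;

/// A delayed constraint remembers which meta-variables block it.
struct Delayed {
    waiting_on: HashSet<MetaId>,
}

/// After new solutions land in `solved`, move every delayed constraint
/// whose blockers are all solved back onto the active queue, returning
/// their ids so the main loop can retry them.
fn wake_delayed(delayed: &mut HashMap<u64, Delayed>, solved: &HashSet<MetaId>) -> Vec<u64> {
    let ready: Vec<u64> = delayed
        .iter()
        .filter(|(_, d)| d.waiting_on.iter().all(|m| solved.contains(m)))
        .map(|(id, _)| *id)
        .collect();
    // Remove awakened constraints from the delayed pool.
    for id in &ready {
        delayed.remove(id);
    }
    ready
}
```

A constraint waiting on metas {0, 1} stays delayed after only meta 0 is solved, while one waiting on {0} alone wakes up immediately.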

Unification in Dependent Type Systems

Unification in dependent type systems presents challenges beyond those encountered in simple type systems like System F. The interdependence between terms and types means that unifying types may require solving for unknown terms, while unifying terms may generate constraints on their types.

This interdependency creates several complications:

Type-Term Dependencies: When unifying Π(x : A). B x with Π(y : A'). B' y, we must unify both the parameter types A and A' and the dependent result types B x and B' y. The unification of result types depends on the solution to parameter type unification.

Meta-Variable Scope Management: Meta-variables representing unknown terms must respect the variable binding structure of dependent types. A meta-variable created in a particular binding context cannot be instantiated with a term that references variables outside that context.

Higher-Order Meta-Variables: In dependent type systems, meta-variables can represent unknown functions, leading to higher-order unification problems where we must solve for unknown function-level terms rather than just unknown ground terms.
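The scope-management requirement boils down to a free-variable check. Here is a minimal sketch over a simplified term language (hypothetical `Term` and `in_scope`; the real checker works over the full dependent syntax): a candidate solution is admissible only if all its free variables were bound in the meta-variable's creation context.

```rust
use std::collections::HashSet;

#[derive(Clone)]
enum Term {
    Var(String),
    App(Box<Term>, Box<Term>),
    Lam(String, Box<Term>), // the binder shadows its variable in the body
}

/// Collect the free variables of a term, respecting binders.
fn free_vars(t: &Term, out: &mut HashSet<String>) {
    match t {
        Term::Var(x) => {
            out.insert(x.clone());
        }
        Term::App(f, a) => {
            free_vars(f, out);
            free_vars(a, out);
        }
        Term::Lam(x, body) => {
            let mut inner = HashSet::new();
            free_vars(body, &mut inner);
            inner.remove(x); // bound occurrences are not free
            out.extend(inner);
        }
    }
}

/// A candidate solution respects the meta-variable's scope if every free
/// variable it mentions was already in scope at the meta's binding site.
fn in_scope(candidate: &Term, context: &HashSet<String>) -> bool {
    let mut fv = HashSet::new();
    free_vars(candidate, &mut fv);
    fv.is_subset(context)
}
```

For instance, λx. x y is acceptable in a context containing y, but not in an empty context, since y would escape.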

Unification Algorithm

The core unification algorithm handles the fundamental task of making two dependent types equal:

#![allow(unused)]
fn main() {
/// Unify two terms
pub fn unify(&mut self, t1: &Term, t2: &Term) -> UnificationResult<Substitution> {
    self.unify_impl(t1, t2, &mut Substitution::new())
}
}

Our unification algorithm supports several patterns:

Structural Unification: When both terms have the same head constructor, unification proceeds by recursively unifying subcomponents.

Meta-Variable Instantiation: When one side is a meta-variable, we create a substitution that maps the variable to the other term, subject to occurs checking.

Higher-Order Patterns: Advanced patterns like Miller patterns enable limited higher-order unification that remains decidable.
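The first two patterns can be sketched as a tiny first-order unifier over a simplified term language (hypothetical `Term` and `Subst`; the occurs check and higher-order cases are omitted here for brevity): matching heads decompose structurally, and a meta-variable on either side is looked up or bound.

```rust
use std::collections::HashMap;

#[derive(Clone, PartialEq, Debug)]
enum Term {
    Const(String),
    App(Box<Term>, Box<Term>),
    Meta(u32),
}

type Subst = HashMap<u32, Term>;

/// First-order structural unification: decompose matching applications,
/// bind metas, fail on constructor clash. (Occurs check omitted.)
fn unify(t1: &Term, t2: &Term, s: &mut Subst) -> bool {
    match (t1, t2) {
        // Same head constant: trivially equal.
        (Term::Const(a), Term::Const(b)) => a == b,
        // Structural case: unify functions, then arguments.
        (Term::App(f1, a1), Term::App(f2, a2)) => unify(f1, f2, s) && unify(a1, a2, s),
        // Meta-variable: reuse an existing solution or record a new one.
        (Term::Meta(m), other) | (other, Term::Meta(m)) => {
            if let Some(sol) = s.get(m).cloned() {
                unify(&sol, other, s)
            } else {
                s.insert(*m, other.clone());
                true
            }
        }
        // Constructor clash: unification fails.
        _ => false,
    }
}
```

Unifying f ?0 with f a succeeds and records ?0 := a; unifying two distinct constants fails.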

Meta-Variable Resolution

#![allow(unused)]
fn main() {
match (&term1, &term2) {
    (Term::Meta(name), _) => {
        if let Some(meta_id) = self.parse_meta_name(name) {
            // Check if already solved
            if let Some(solution) = self.substitution.get(&meta_id) {
                return self.unify(&self.substitution.apply(solution), term2);
            }
            // Create new solution
            self.solve_meta_variable(meta_id, term2)
        }
    }
    // ... other cases
}
}

Meta-variable resolution is a multi-step process that ensures both correctness and consistency. Solution lookup first checks whether the meta-variable already has a solution from earlier constraint resolution, avoiding redundant work and ensuring that existing solutions are reused. The occurs check then verifies that the meta-variable does not occur within its own proposed solution term; otherwise the binding would create an infinite, cyclic type, violating the soundness of the type system.

Context validation verifies that the solution respects variable scoping requirements, ensuring that the solution term does not reference variables that are not in scope at the meta-variable’s binding site. This check is crucial for maintaining the lexical scoping discipline that dependent type systems require. Finally, solution recording adds the verified solution to the substitution system and propagates it throughout the constraint system, updating all constraints that reference the newly solved meta-variable.
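The occurs check at the heart of this process is a simple recursive traversal. A minimal sketch over a simplified term language (hypothetical `Term`; the real check also walks binders, Pi types, and so on):

```rust
enum Term {
    Var(String),
    Meta(u32),
    App(Box<Term>, Box<Term>),
}

/// Occurs check: does meta-variable `m` appear anywhere inside `t`?
/// If it does, binding `m := t` would create an infinite term, so the
/// solver must reject the solution.
fn occurs(m: u32, t: &Term) -> bool {
    match t {
        Term::Meta(n) => *n == m,
        Term::App(f, a) => occurs(m, f) || occurs(m, a),
        Term::Var(_) => false,
    }
}
```

For example, ?0 := x ?0 is rejected because ?0 occurs in its own solution, while ?1 := x ?0 passes.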

Dependent Type Unification

Unifying dependent types requires special handling of binding structures:

#![allow(unused)]
fn main() {
(Term::Pi(x1, ty1, body1, _), Term::Pi(x2, ty2, body2, _)) => {
    // Unify parameter types
    let param_subst = self.unify(ty1, ty2)?;

    // Unify bodies under extended context with alpha-renaming
    let renamed_body2 = self.alpha_rename(x2, x1, body2);
    let body_subst = self.unify_under_context(
        &param_subst.apply(body1),
        &param_subst.apply(&renamed_body2),
        x1
    )?;

    param_subst.compose(&body_subst)
}
}

This demonstrates the intricate complexity of dependent type unification, where multiple interdependent steps must be carefully orchestrated. Parameter types must unify first to establish the foundation for the dependent relationship, as the result type depends on the parameter type’s structure and properties. Body types are then unified under the extended context that includes the parameter binding, ensuring that dependent references within the body are properly handled.

Alpha-renaming ensures that variable names do not interfere between the two dependent types being unified, preventing accidental capture or confusion between similarly named but distinct variables. The substitutions discovered during parameter unification must propagate to the body unification process, as changes to parameter types may affect the validity and structure of the dependent result types.
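The alpha_rename helper used above can be sketched over a simplified term language (hypothetical `Term`; the real version also handles Pi, Sigma, and the other binding forms): it renames free occurrences of one variable to another, stopping at shadowing binders.

```rust
#[derive(Clone, PartialEq, Debug)]
enum Term {
    Var(String),
    App(Box<Term>, Box<Term>),
    Lam(String, Box<Term>),
}

/// Rename free occurrences of `from` to `to`, stopping at shadowing binders.
fn alpha_rename(from: &str, to: &str, t: &Term) -> Term {
    match t {
        Term::Var(x) if x == from => Term::Var(to.to_string()),
        Term::Var(_) => t.clone(),
        Term::App(f, a) => Term::App(
            Box::new(alpha_rename(from, to, f)),
            Box::new(alpha_rename(from, to, a)),
        ),
        // A binder with the same name shadows `from`; leave its body alone.
        Term::Lam(x, _) if x == from => t.clone(),
        Term::Lam(x, body) => Term::Lam(x.clone(), Box::new(alpha_rename(from, to, body))),
    }
}
```

Renaming y to x in y (λy. y) changes only the free occurrence, leaving the bound one under the λ untouched.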

Advanced Constraint Patterns

Higher-Order Unification and Miller Patterns

Higher-order unification extends first-order unification to handle function-level unknowns, where meta-variables can represent unknown functions rather than just unknown terms. While first-order unification asks “what value makes these terms equal?”, higher-order unification asks “what function makes these applications equal?”

The fundamental challenge of higher-order unification lies in its undecidability. Unlike first-order unification, which always terminates with either a solution or failure, general higher-order unification can run indefinitely without reaching a conclusion. This undecidability stems from the ability to construct arbitrarily complex function expressions that satisfy unification constraints.

Consider the higher-order unification problem ?F a b = g (h a) b. The meta-variable ?F represents an unknown function of two arguments. Potential solutions include λx y. g (h x) y, but also more complex forms like λx y. g (h (id x)) y where id is the identity function. The search space of possible solutions is infinite, making termination impossible to guarantee.

Miller Pattern Restrictions

Miller patterns resolve this undecidability by imposing syntactic restrictions on higher-order unification problems that restore decidability while preserving significant expressive power. A Miller pattern has the form ?M x₁ ... xₙ = t where several critical conditions must be satisfied. All arguments x₁ ... xₙ must be distinct bound variables, ensuring that the pattern represents a proper functional relationship without duplication or confusion between parameters.

The meta-variable must be applied only to variables rather than complex terms, maintaining the “variable spine” property that prevents the exponential explosion of potential solutions that can arise when meta-variables are applied to arbitrary expressions. The term t must not contain the meta-variable ?M itself, preventing occurs check violations that would lead to infinite types. Finally, abstraction safety requires that all free variables appearing in t must appear among the arguments x₁ ... xₙ, ensuring that the solution can be properly abstracted over the pattern variables.

These restrictions ensure that Miller pattern unification problems have unique most general unifiers when solutions exist, restoring decidability to this fragment of higher-order unification.

Miller Pattern Detection

Our implementation includes comprehensive Miller pattern detection:

#![allow(unused)]
fn main() {
/// Check if left is a simple Miller pattern and right is safe to solve with
pub fn is_simple_miller_pattern(&self, left: &Term, right: &Term) -> bool {
    let (head, args) = self.decompose_application(left);

    if let Term::Meta(meta_name) = head {
        // Must have at least one argument to be interesting
        if args.is_empty() {
            return false;
        }

        // Check if all arguments are distinct variables
        let mut seen = HashSet::new();
        for arg in &args {
            if let Term::Var(x) = arg {
                if !seen.insert(x.clone()) {
                    return false; // Not distinct
                }
            } else {
                return false; // Not a variable
            }
        }

        // Check occurs check - right must not contain the meta-variable
        if let Some(meta_id) = self.parse_meta_name(meta_name) {
            if self.collect_metas_in_term(right).contains(&meta_id) {
                return false; // Occurs check failure
            }
        }

        // For now, only handle simple cases where right is a variable or constant
        match right {
            Term::Var(_) | Term::Const(_) | Term::Sort(_) => true,
            // Allow meta-to-meta unification only between distinct metas
            Term::Meta(right_name) => meta_name != right_name,
            _ => false, // More complex cases need careful handling
        }
    } else {
        false
    }
}
}

The detection algorithm verifies that the left side forms a proper Miller pattern with distinct variable arguments and that the right side can be safely abstracted over those variables. This up-front check ensures that any attempted solution will satisfy the Miller pattern restrictions.

Miller Pattern Solving Algorithm

When Miller pattern solving is enabled, the solver handles these advanced patterns:

#![allow(unused)]
fn main() {
/// Solve simple Miller pattern unification
/// Only handles simple cases
fn solve_miller_pattern(
    &mut self,
    meta_name: &str,
    spine: &[String],
    other: &Term,
) -> Result<bool, TypeError> {
    // Check if we can abstract over the spine variables
    if !Self::can_abstract_over_variables(other, spine) {
        // Can't abstract, fall back to regular unification
        return self.unify(
            &Term::Meta(meta_name.to_string()),
            other,
            ConstraintStrength::Required,
        );
    }

    if let Some(meta_id) = self.parse_meta_name(meta_name) {
        // Only handle simple cases to avoid complexity
        match other {
            // Case 1: ?M x ≡ y (different variable) -> ?M := λx. y
            Term::Var(y) if spine.len() == 1 && spine[0] != *y => {
                let solution = Term::Abs(
                    spine[0].clone(),
                    Box::new(Term::Meta(format!("?T{}", self.next_meta_id))), /* Type inferred later */
                    Box::new(other.clone()),
                );
                return self.solve_meta(meta_id, &solution);
            }

            // Case 2: ?M x ≡ c (constant) -> ?M := λx. c
            Term::Const(_) | Term::Sort(_) if spine.len() == 1 => {
                let solution = Term::Abs(
                    spine[0].clone(),
                    Box::new(Term::Meta(format!("?T{}", self.next_meta_id))),
                    Box::new(other.clone()),
                );
                return self.solve_meta(meta_id, &solution);
            }

            _ => {
                // For more complex cases, we could build the full lambda
                // abstraction but for now, fall back to
                // regular unification
            }
        }
    }

    // Fall back to regular unification
    self.unify(
        &Term::Meta(meta_name.to_string()),
        other,
        ConstraintStrength::Required,
    )
}
}

The Miller pattern solver constructs lambda abstractions that capture the relationship between the pattern variables and the target term. For patterns like ?M x = t, the solution becomes ?M := λx. t, provided that t satisfies the abstraction conditions.

Miller patterns represent a restricted form of higher-order unification that remains decidable while supporting many practical programming patterns that arise in dependent type theory and proof assistants.

Delayed Constraint Resolution

Complex constraints that cannot be solved immediately get delayed until more information becomes available:

#![allow(unused)]
fn main() {
Constraint::Delayed { constraint, waiting_on, .. } => {
    if waiting_on.iter().all(|meta| self.is_solved(*meta)) {
        // All dependencies resolved - try solving now
        self.solve_constraint(*constraint)
    } else {
        // Still waiting - keep delayed
        Err(ConstraintError::NeedsMoreInfo)
    }
}
}

The delayed constraint system lets the solver cope with constraints whose resolution depends on information, such as meta-variable solutions, that only becomes available later during dependent type checking.
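Deciding what a delayed constraint waits on comes down to collecting the meta-variables its terms mention, in the spirit of the collect_metas_in_term helper used by the pattern detector. A minimal sketch over a simplified term language (hypothetical `Term` and `metas_in`):

```rust
use std::collections::HashSet;

enum Term {
    Const(String),
    Meta(u32),
    App(Box<Term>, Box<Term>),
}

/// Collect every meta-variable mentioned in a term. When a constraint
/// cannot be solved yet, this set becomes its `waiting_on` list, and the
/// constraint is re-examined once all of these metas acquire solutions.
fn metas_in(t: &Term, out: &mut HashSet<u32>) {
    match t {
        Term::Meta(m) => {
            out.insert(*m);
        }
        Term::App(f, a) => {
            metas_in(f, out);
            metas_in(a, out);
        }
        Term::Const(_) => {}
    }
}
```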

Error Handling and Diagnostics

The constraint solver provides comprehensive error reporting that helps users understand solving failures:

#![allow(unused)]
fn main() {
pub enum ConstraintError {
    UnificationFailure { left: Term, right: Term, reason: String },
    OccursCheck { meta_var: MetaId, term: Term },
    ScopeViolation { meta_var: MetaId, escaped_vars: Vec<String> },
    CircularDependency { cycle: Vec<MetaId> },
    UniverseInconsistency { constraint: UniverseConstraint },
}
}

Each error type provides specific diagnostic information about why constraint solving failed, enabling users to understand and address the underlying issues. Unification failures present the conflicting terms along with a detailed explanation of why they cannot be made equal, helping programmers identify type mismatches and structural incompatibilities in their code. Occurs check violations identify situations where infinite types would result from a proposed solution, catching recursive type definitions that would violate the type system’s soundness.

Scope violations detect variables that escape their intended lexical scope, typically occurring when meta-variable solutions reference variables that are not available in the solution’s binding context. Circular dependencies identify unsolvable constraint cycles where constraints depend on each other in ways that prevent any progress, indicating fundamental problems in the constraint system structure. Universe inconsistencies indicate violations of the universe hierarchy, such as attempting to place a large universe inside a smaller one, which would compromise the type system’s logical consistency.
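Turning these structured errors into readable messages is a matter of rendering each variant with its specific payload. A sketch over a trimmed-down version of the enum (hypothetical name `SolverDiagnostic` and wording; not the implementation's actual messages):

```rust
use std::fmt;

#[derive(Debug)]
enum SolverDiagnostic {
    OccursCheck { meta_var: u32 },
    ScopeViolation { meta_var: u32, escaped_vars: Vec<String> },
}

impl fmt::Display for SolverDiagnostic {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            // Each variant renders a targeted message using its payload.
            SolverDiagnostic::OccursCheck { meta_var } => write!(
                f,
                "cannot solve ?{meta_var}: the solution mentions ?{meta_var} itself (infinite type)"
            ),
            SolverDiagnostic::ScopeViolation { meta_var, escaped_vars } => write!(
                f,
                "solution for ?{meta_var} escapes its scope via: {}",
                escaped_vars.join(", ")
            ),
        }
    }
}
```

Keeping the payload structured until display time means the same error can also drive source-span highlighting or suggestions, not just text.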

And phew, that’s it. We’re done.

Examples

Our Calculus of Constructions implementation demonstrates the full expressiveness of dependent type theory through comprehensive examples that span basic inductive types, higher-order polymorphic functions, universe polymorphism, implicit arguments, and dependent data structures. These examples showcase how the theoretical power of the Calculus of Constructions translates into practical programming language features.

Each example type checks successfully under our implementation, exercising the constraint solving algorithms, the universe system, and the dependent type checker. The progression from simple types through complex dependent constructions illustrates how the Calculus of Constructions supports expressive programming patterns while maintaining logical consistency.

Basic Inductive Types and Pattern Matching

The foundation of data structures in the Calculus of Constructions rests on inductive type definitions with constructor-based pattern matching:

#![allow(unused)]
fn main() {
inductive Bool : Type with
| true : Bool
| false : Bool

inductive Nat : Type with
| zero : Nat
| succ : Nat -> Nat

def predecessor (n : Nat) : Nat :=
  match n with
  case zero => zero
  case succ(m) => m

def not_bool (b : Bool) : Bool :=
  match b with
  case true => false
  case false => true
}

These basic examples demonstrate fundamental inductive types including natural numbers and booleans with their associated elimination functions. The predecessor function illustrates how pattern matching provides safe destructuring of inductive values, while the not_bool function shows simple enumeration-style pattern matching. The type checker ensures that all pattern cases are properly handled and that result types are consistent across branches.

Higher-Order Polymorphic Functions

The Calculus of Constructions supports polymorphic programming patterns through its dependent type system:

#![allow(unused)]
fn main() {
inductive Nat : Type with
  | zero : Nat
  | succ : Nat -> Nat

inductive Bool : Type with
  | true : Bool
  | false : Bool

axiom mult : Nat -> Nat -> Nat

def compose (A : Type) (B : Type) (C : Type) (g : B -> C) (f : A -> B) (x : A) : C :=
  g (f x)

def square (x : Nat) : Nat :=
  mult x x

def doTwice (A : Type) (h : A -> A) (x : A) : A :=
  h (h x)

def doThrice (A : Type) (h : A -> A) (x : A) : A :=
  h (h (h x))
}

These examples showcase the interaction between inductive types, higher-order functions, and polymorphic composition. The compose function demonstrates parametric polymorphism over three type parameters, while doTwice and doThrice show how higher-order functions can abstract over computational patterns. The square function illustrates interaction with primitive operations, showing how user-defined types integrate with built-in arithmetic.

Universe Polymorphism and Level Abstraction

Universe polymorphism enables definitions that work across the entire universe hierarchy, providing genuine genericity over type levels:

#![allow(unused)]
fn main() {
-- Universe Polymorphism Examples
-- Demonstrates universe-polymorphic definitions in CoC

-- 1. Basic universe polymorphic identity
def id.{u} (A : Sort (u+1)) (x : A) : A := x

-- 2. Universe polymorphic constant function  
def const.{u, v} (A : Sort (u+1)) (B : Sort (v+1)) (x : A) (y : B) : A := x

-- 3. Universe polymorphic composition
def compose.{u, v, w} (A : Sort (u+1)) (B : Sort (v+1)) (C : Sort (w+1)) (g : B -> C) (f : A -> B) (x : A) : C := g (f x)

-- 4. Tests at different universe levels
-- At Type level (u = Zero)
axiom test_id_type : (A : Type) -> A -> A
def id_at_type : (A : Type) -> A -> A := id

-- At Prop level (u = -1, but we use Zero for Prop)  
axiom test_id_prop : (A : Prop) -> A -> A

-- 5. Inductive types with universe parameters
inductive List.{u} (A : Sort (u+1)) : Sort (u+1) with
  | nil : List A
  | cons : A -> List A -> List A

-- 6. Higher-order universe polymorphism
def map.{u, v} (A : Sort (u+1)) (B : Sort (v+1)) (f : A -> B) (l : List A) : List B := match l with case nil => nil B case cons(x, xs) => cons B (f x) (map A B f xs)

-- 7. Universe constraints with max
axiom pair_type.{u, v} : Sort (u+1) -> Sort (v+1) -> Sort (u+1)

-- 8. Dependent types with universe parameters
def dependent_id.{u} (A : Sort (u+1)) (P : A -> Sort (u+1)) (x : A) (px : P x) : P x := px

-- 9. Examples of universe level arithmetic
-- Type 0 = Sort 1
-- Type 1 = Sort 2  
-- Type n = Sort (n+1)
axiom level0 : Type       -- Type 0 = Sort 1
axiom level1 : Type 1     -- Type 1 = Sort 2
axiom level2 : Type 2     -- Type 2 = Sort 3

-- 10. Universe polymorphic structures
structure Pair.{u, v} (A : Sort (u+1)) (B : Sort (v+1)) : Sort (u+1) :=
  (fst : A)
  (snd : B)
}

These examples demonstrate universe-polymorphic definitions that abstract over universe levels using explicit level parameters. The id.{u} function works at any universe level, while const.{u,v} shows polymorphism over multiple universe parameters. The inductive type List.{u} demonstrates universe-polymorphic data structures that can contain elements at arbitrary universe levels.

The universe arithmetic expressions like Sort (u+1) show how the universe solver handles level arithmetic during type checking. Universe constraints ensure that polymorphic instantiations respect the universe hierarchy while enabling maximum flexibility in generic programming.

Comprehensive Implicit Arguments

Implicit arguments provide syntactic convenience while maintaining the full expressiveness of dependent types:

#![allow(unused)]
fn main() {
-- Comprehensive tests for implicit arguments with constraint solving

-- Basic datatypes
inductive Nat : Type with
  | zero : Nat
  | succ : Nat -> Nat

inductive List (A : Type) : Type with
  | nil : List A
  | cons : A -> List A -> List A

inductive Pair (A : Type) (B : Type) : Type with
  | mkPair : A -> B -> Pair A B

-- Test 1: Simple implicit argument
def id {A : Type} (x : A) : A := x

-- Test 2: Multiple implicit arguments  
def const {A : Type} {B : Type} (x : A) (y : B) : A := x

-- Test 3: Implicit with explicit type application
def apply_id : Nat -> Nat := id

-- Test 4: Multiple implicit arguments with inference
def first : Nat -> Nat -> Nat := const

-- Test 5: Nested implicit arguments
def compose {A : Type} {B : Type} {C : Type} 
           (g : B -> C) (f : A -> B) (x : A) : C :=
  g (f x)

-- Test 6: Implicit arguments in inductive type constructors
def empty_list : List Nat := nil Nat

def nat_list : List Nat := cons Nat zero (nil Nat)

-- Test 7: Implicit with dependent types
def replicate {A : Type} (n : Nat) (x : A) : List A :=
  match n with
  case zero => nil A
  case succ(m) => cons A x (replicate m x)

-- Test 8: Higher-order functions with implicits
def map {A : Type} {B : Type} (f : A -> B) (l : List A) : List B :=
  match l with
  case nil => nil B
  case cons(x, xs) => cons B (f x) (map f xs)

-- Test 9: Polymorphic pair operations
def fst {A : Type} {B : Type} (p : Pair A B) : A :=
  match p with
  case mkPair(a, b) => a

def snd {A : Type} {B : Type} (p : Pair A B) : B :=
  match p with
  case mkPair(a, b) => b

-- Test 10: Complex composition with implicits
def double_map {A : Type} {B : Type} {C : Type}
               (f : B -> C) (g : A -> B) (l : List A) : List C :=
  map f (map g l)

-- Test 11: Implicit arguments in let bindings
def test_let : Nat :=
  let f := id in
  f zero

-- Test 12: Implicit arguments with universe polymorphism
-- axiom Type_id {u : Level} : Type u -> Type u
-- def type_identity : Type -> Type := Type_id

-- Test cases that should work
def test_id_nat : Nat := id zero
def test_const_nat : Nat := const zero (succ zero)
def test_compose_nat : Nat -> Nat := compose succ succ
def test_pair : Pair Nat Nat := mkPair Nat Nat zero (succ zero)
def test_fst : Nat := fst (mkPair Nat Nat zero (succ zero))

-- Test that implicit arguments are correctly inferred
def complex_test : List Nat :=
  map succ (cons Nat zero (cons Nat (succ zero) (nil Nat)))
}

This comprehensive example demonstrates the full range of implicit argument features. Simple functions like id {A : Type} show basic type inference, while complex functions like double_map demonstrate implicit argument propagation through nested polymorphic calls. The constraint solver automatically infers type arguments based on usage context, eliminating boilerplate while maintaining type safety.

The map and replicate functions show how implicit arguments work with recursive functions and pattern matching. The constraint solver tracks type relationships across recursive calls, ensuring that implicit arguments are consistently instantiated throughout the computation.

Dependent Data Structures

Dependent types enable data structures whose types encode computational properties, providing compile-time guarantees about program behavior:

#![allow(unused)]
fn main() {
inductive Nat : Type with
  | zero : Nat
  | succ : Nat -> Nat

inductive Vec (A : Type) : Nat -> Type with  
  | nil : Vec A zero
  | cons : (n : Nat) -> A -> Vec A n -> Vec A (succ n)
}

Length-indexed vectors represent the paradigmatic example of dependent data types. The Vec A n type carries its length n at the type level, enabling operations that are guaranteed to respect vector bounds at compile time. The nil constructor produces vectors of length zero, while cons extends vectors by exactly one element.

This example demonstrates how dependent types can encode invariants that would traditionally require runtime checking. The type system ensures that vector operations respect length constraints, eliminating entire classes of array bounds errors at compile time.

Structure Types and Record Operations

Structure types provide named field access and demonstrate the Calculus of Constructions’ support for record-like data organization:

#![allow(unused)]
fn main() {
inductive Nat : Type with
| zero : Nat
| succ : Nat -> Nat

structure Point : Type := (x : Nat) (y : Nat)

axiom point_instance : Point

axiom Point_x : Point -> Nat
axiom Point_y : Point -> Nat

def get_x_coordinate : Nat := Point_x point_instance
}

The Point structure demonstrates basic record types with named fields. While full dot notation requires additional parser support, the example shows how structures integrate with the dependent type system. Structure types can participate in dependent constructions, enabling data modeling patterns.

Polymorphic Function Composition

Complex polymorphic programming demonstrates the interaction between higher-order functions, type inference, and constraint solving:

#![allow(unused)]
fn main() {
def compose (A : Type) (B : Type) (C : Type) : (B -> C) -> (A -> B) -> A -> C :=
  fun g => fun f => fun x => g (f x)

def doTwice (A : Type) : (A -> A) -> A -> A :=
  fun h => fun x => h (h x)
}

These examples show curried function definitions that demonstrate the lambda calculus foundations of the Calculus of Constructions. The compose function shows three-way type polymorphism, while doTwice demonstrates how polymorphic higher-order functions can abstract over computational patterns.

Working Examples with Type Inference

Practical examples demonstrate how the constraint solver handles complex type inference scenarios:

#![allow(unused)]
fn main() {
axiom id_type : Type -> Type -> Type

def simple_function : Type -> Type := fun A => A

inductive MyBool : Type with
  | true : MyBool
  | false : MyBool

def constFunc (A : Type) (B : Type) : A -> B -> A :=
  fun x => fun y => x
}

These working examples demonstrate the type inference engine in action. The constFunc definition shows how the constraint solver handles nested lambda abstractions with polymorphic types. The simple_function illustrates type-level computation, where functions can manipulate types as first-class values.

Advanced Dependent Programming

These final examples demonstrate the full power of dependent types in practical programming scenarios:

#![allow(unused)]
fn main() {
-- Dependent types example

def id (A : Type) (x : A) : A := x

axiom Nat : Type
def test_id_nat : Nat -> Nat := id Nat
}

These examples show how dependent types enable programs where types can depend on term values. The id function with explicit type parameters demonstrates how dependent functions can be polymorphic over both types and their computational properties.

Bibliography