From 7409fc7c2e7306ab740b5c0d2baec2cff4e68d79 Mon Sep 17 00:00:00 2001
From: Arthur Paulino
Date: Wed, 13 May 2026 10:15:16 -0700
Subject: [PATCH] Aiur kernel: complete port from Rust reference + kernel-arena coverage
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Brings the Aiur-source kernel typechecker to parity with the Rust kernel for
the full stdlib, the lean-kernel-arena tutorial fixtures, and the curated
ixvm test list. Validated via a new arena test suite plus the existing
per-name kernel checks: 281 passed / 0 failed in
`lake test -- --ignored ixvm`.

Aiur source (`Ix/IxVM/...`):

* `Kernel/{Whnf,DefEq,Levels,Infer,Subst,Inductive,Check,
  CanonicalCheck,Primitive}.lean`: the complete kernel, split out of the
  monolithic `Kernel.lean` (deleted along with `KERNEL.md`). Mirrors
  `src/ix/kernel/*.rs` module-by-module.
* `Ingress.lean`: load `Constant`s + blobs from the IOBuffer with blake3
  verification; build per-position `addrs`/`top`. Compute constructor
  content-hash addresses entirely in-Aiur via `cprj_content_addr` (no
  IOBuffer side channel), so `check_eq_type`'s `address_eq` against
  `eq_refl_addr` succeeds even when an inductive's ctors weren't
  transitively loaded as independent CPrjs. Append synthetic `Axiom Sort 0`
  entries for every hardcoded `*_addr()` primitive so internal expansions
  (`str_lit_to_ctor`, decidability dispatch, etc.) can construct
  `Const(idx, _)` references that resolve via `find_addr_idx_safe` even
  when the primitive isn't in the user's syntactic closure.
* `IxVM.lean`: `kernel_check_test(target_addr, check_deps)` entrypoint.
  `check_deps = 1` runs `check_all` over every loaded const; `check_deps = 0`
  runs `check_const` only on the target, skipping the per-dep validation
  work (significant for proving, since the recursor canonical-build
  dominates).
* `KernelTypes.lean`, `Convert.lean`, `Core.lean`: supporting types and
  conversions for the kernel proper.

Rust side (`src/aiur/execute.rs`, `src/ffi/aiur/protocol.rs`,
`src/aiur/{trace,synthesis}.rs`):

* Refactor bytecode `execute` to return `Result<_, ExecError>` instead of
  panicking on assertion failures, missing match cases, or out-of-bounds
  accesses. Plumbed through the FFI as `Except String α` so Lean callers
  can classify Aiur kernel rejections as expected outcomes rather than
  process crashes.
* `synthesis::AiurSystem::prove` keeps panic semantics by `.expect`-ing the
  inner Result (the prover assumes valid input).
* `trace.rs` updated for the new `IOBuffer` return types.

Lean-side IO buffer / harness (`Ix/IxVM/CheckHarness.lean`):

* IOBuffer convention reorganized fundamental-first with suffix tags
  (`addr.push(tag)` is O(1) amortized; prefix `[tag] ++ addr` allocates):

  | `addr`        | const bytes (or empty = blob marker) |
  | `addr ++ [0]` | raw blob bytes                       |
  | `addr ++ [1]` | single G (Defn ReducibilityHints)    |

  A toy Lean sketch of this layout appears under "IOBuffer key sketch"
  at the end of this message.
* `loadIxonEnv` (single target) + `loadSharedIxonEnv` (union of N targets;
  one compile shared across many drivers).
* `closureFrom` walks `Constant.refs` + projection `block_addr` from a
  target. `buildKernelCheckIOBufferFor` restricts every per-address entry
  to that closure for a minimal per-target IOBuffer.
* `addEntries` helper dedupes the per-tag insertion logic between the
  full-env and per-target builders.

Tests (`Tests/Ix/IxVM.lean`, `Tests/Ix/Kernel/Arena.lean`, `Tests/Main.lean`):

* `Tests.Ix.IxVM` reorganized into clear sections: `IxVMPrim`
  (primitive-reduction theorems), `IxVMInd` (mutual + nested inductives),
  and the `serdeNatAddComm` / `kernelCheck` / `kernelChecks` runners.
* `Tests.Ix.Kernel.Arena` drives the Aiur kernel against every
  lean-kernel-arena tutorial fixture (good must typecheck, bad must be
  rejected). Two `knownIncompatible` skips: `tut06_bad01` (Anon-mode
  dup-levelParams hygiene check) and
  `Tests.Ix.Kernel.TutorialDefs.AdvNat.rec` (malformed rec rule sanitized
  by aux-gen). Both are still enforced by the Rust kernel, via
  Meta-mode-only / bespoke-FFI paths the Aiur kernel can't see.
* `Tests.Main`'s `ixvm` runner builds an `AiurTestEnv` once and weaves both
  `mkAiurTests`'s output and `arenaTests`'s output into the same `LSpec`
  suite.

Misc:

* `Tests/Aiur/Common.lean`: surface `execute`'s `Except` properly.
* `Benchmarks/{CheckNatAddComm,IxVM}.lean`: pass `check_deps = 1`
  explicitly to preserve existing semantics.
* `Kernel.lean` exe: handle the `Except` return.

# Outstanding TODOs

Carried forward unchanged from the original Aiur kernel port:

* `Ix/IxVM/Kernel/Primitive.lean::u64_add_with_carry`: duplicates
  `ByteStream.lean::u64_add` to expose the final carry as `(U64, G)`.
  Should be deleted once `ByteStream.u64_add` itself is patched to return
  the carry; that patch ripples beyond the kernel (Blake3, IxonSerialize,
  ByteStream itself), so it is deferred to a follow-up.
* `Ix/IxVM/Kernel/Primitive.lean::divmod_256` (and `u64_mul`'s schoolbook
  multiplication): both currently use byte-by-byte arithmetic. Should be
  replaced with a proper `u8_mul` Aiur gadget once it lands. Tracked on a
  separate branch.

# WHNF fuel cap

Aiur deliberately omits Rust's `MAX_WHNF_FUEL = 10_000`. In a zk prover
context, divergent input simply fails to produce a proof; the caller
guarantees termination, so a soundness-preserving early abort isn't needed.
Documented inline at `Whnf.lean::whnf`.

# Provenance

Each Aiur file/function carries a `Mirror: src/ix/kernel/...` citation, so
the Rust authority for any branch is one grep away.
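# IOBuffer key sketch

The suffix-tag layout above is compact enough to sketch in standalone Lean.
This is a toy model, not the repo's `Aiur.IOBuffer`: `G`, `ToyBuffer`,
`extend`, and `addToy` here are simplified stand-ins. It only illustrates
the three entry tiers and why tags are suffixed (`Array.push` extends the
key in O(1) amortized, while a `[tag] ++ addr` prefix would reallocate all
32 address limbs per entry).

    abbrev G := UInt64

    structure ToyBuffer where
      entries : List (Array G × Array G) := []

    def ToyBuffer.extend (b : ToyBuffer) (key val : Array G) : ToyBuffer :=
      { b with entries := (key, val) :: b.entries }

    -- All three tiers for one address: the bare key carries the const
    -- bytes (an empty value marks a blob); the pushed tags carry the raw
    -- blob bytes and the Defn ReducibilityHints encoding respectively.
    def addToy (b : ToyBuffer) (addr constBytes blobBytes : Array G)
        (hint : G) : ToyBuffer :=
      let b := b.extend addr constBytes         -- `addr`        : const bytes
      let b := b.extend (addr.push 0) blobBytes -- `addr ++ [0]` : raw blob bytes
      b.extend (addr.push 1) #[hint]            -- `addr ++ [1]` : one-G hint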
--- Benchmarks/CheckNatAddComm.lean | 33 +- Benchmarks/IxVM.lean | 10 +- Ix/Aiur/Interpret.lean | 28 +- Ix/Aiur/Semantics/BytecodeFfi.lean | 17 +- Ix/IxVM.lean | 40 +- Ix/IxVM/CheckHarness.lean | 134 ++ Ix/IxVM/Convert.lean | 31 +- Ix/IxVM/Core.lean | 14 + Ix/IxVM/Ingress.lean | 1118 +++++++++++-- Ix/IxVM/KERNEL.md | 1149 ------------- Ix/IxVM/Kernel.lean | 1852 --------------------- Ix/IxVM/Kernel/CanonicalCheck.lean | 997 ++++++++++++ Ix/IxVM/Kernel/Check.lean | 362 +++++ Ix/IxVM/Kernel/DefEq.lean | 801 +++++++++ Ix/IxVM/Kernel/Inductive.lean | 2405 ++++++++++++++++++++++++++++ Ix/IxVM/Kernel/Infer.lean | 348 ++++ Ix/IxVM/Kernel/Levels.lean | 370 +++++ Ix/IxVM/Kernel/Primitive.lean | 2405 ++++++++++++++++++++++++++++ Ix/IxVM/Kernel/Subst.lean | 195 +++ Ix/IxVM/Kernel/Whnf.lean | 745 +++++++++ Ix/IxVM/KernelTypes.lean | 30 +- Kernel.lean | 23 +- Tests/Aiur/Common.lean | 45 +- Tests/Ix/IxVM.lean | 176 +- Tests/Ix/Kernel/Arena.lean | 118 ++ Tests/Main.lean | 23 +- src/aiur/execute.rs | 173 +- src/aiur/synthesis.rs | 9 +- src/aiur/trace.rs | 5 +- src/ffi/aiur/protocol.rs | 19 +- 30 files changed, 10340 insertions(+), 3335 deletions(-) create mode 100644 Ix/IxVM/CheckHarness.lean delete mode 100644 Ix/IxVM/KERNEL.md delete mode 100644 Ix/IxVM/Kernel.lean create mode 100644 Ix/IxVM/Kernel/CanonicalCheck.lean create mode 100644 Ix/IxVM/Kernel/Check.lean create mode 100644 Ix/IxVM/Kernel/DefEq.lean create mode 100644 Ix/IxVM/Kernel/Inductive.lean create mode 100644 Ix/IxVM/Kernel/Infer.lean create mode 100644 Ix/IxVM/Kernel/Levels.lean create mode 100644 Ix/IxVM/Kernel/Primitive.lean create mode 100644 Ix/IxVM/Kernel/Subst.lean create mode 100644 Ix/IxVM/Kernel/Whnf.lean create mode 100644 Tests/Ix/Kernel/Arena.lean diff --git a/Benchmarks/CheckNatAddComm.lean b/Benchmarks/CheckNatAddComm.lean index 160e696c..954078f4 100644 --- a/Benchmarks/CheckNatAddComm.lean +++ b/Benchmarks/CheckNatAddComm.lean @@ -29,36 +29,15 @@ def main : IO Unit := do let aiurSystem := Aiur.AiurSystem.build compiled.bytecode commitmentParameters let env ← get_env! - let constList := Lean.collectDependencies ``Nat.add_comm env.constants - let rawEnv ← Ix.CompileM.rsCompileEnvFFI constList - let ixonEnv := rawEnv.toEnv - - let mut ioBuffer : Aiur.IOBuffer := default - - -- Store ALL constants (including muts blocks) by Blake3 hash - for (addr, c) in ixonEnv.consts do - let (_, bytes) := Ixon.Serialize.put c |>.run default - let key : Array Aiur.G := addr.hash.data.map .ofUInt8 - ioBuffer := ioBuffer.extend key (bytes.data.map .ofUInt8) - - -- Store each blob: - -- 1. Raw bytes under prefixed key [1] ++ blake3_hash (for on-demand verified loading) - -- 2. Empty sentinel under plain blake3_hash (so io_get_info returns len=0, marking as blob) - for (addr, rawBytes) in ixonEnv.blobs do - let hashKey : Array Aiur.G := addr.hash.data.map .ofUInt8 - let prefixedKey : Array Aiur.G := #[1] ++ hashKey - ioBuffer := ioBuffer.extend prefixedKey (rawBytes.data.map fun b => .ofNat b.toNat) - ioBuffer := ioBuffer.extend hashKey #[] - - -- Get the blake3 address of Nat.add_comm as the target - let targetAddr := match ixonEnv.getAddr? (Ix.Name.fromLeanName ``Nat.add_comm) with - | some addr => addr - | none => panic! 
"Nat.add_comm not found in Ixon environment" - let targetAddrBytes : Array Aiur.G := targetAddr.hash.data.map .ofUInt8 + let ixonEnv ← IxVM.CheckHarness.loadIxonEnv ``Nat.add_comm env + let ioBuffer := IxVM.CheckHarness.buildKernelCheckIOBuffer ixonEnv + let targetAddrBytes := IxVM.CheckHarness.kernelCheckTarget ``Nat.add_comm ixonEnv + -- check_deps=1 here: bench full transitive checking. + let input := targetAddrBytes.push 1 let _ ← bgroup "Kernel typechecking" { oneShot := true } do throughput (.Elements ixonEnv.consts.size.toUInt64 "consts") bench "check Nat.add_comm" - (aiurSystem.prove friParameters funIdx targetAddrBytes) + (aiurSystem.prove friParameters funIdx input) ioBuffer return diff --git a/Benchmarks/IxVM.lean b/Benchmarks/IxVM.lean index fec4d099..e65379a7 100644 --- a/Benchmarks/IxVM.lean +++ b/Benchmarks/IxVM.lean @@ -29,14 +29,8 @@ def main : IO Unit := do let aiurSystem := Aiur.AiurSystem.build compiled.bytecode commitmentParameters let env ← get_env! - let natAddCommName := ``Nat.add_comm - let constList := Lean.collectDependencies natAddCommName env.constants - let rawEnv ← Ix.CompileM.rsCompileEnvFFI constList - let ixonEnv := rawEnv.toEnv - let ixonConsts := ixonEnv.consts.valuesIter - let (ioBuffer, n) := ixonConsts.fold (init := (default, 0)) fun (ioBuffer, i) c => - let (_, bytes) := Ixon.Serialize.put c |>.run default - (ioBuffer.extend #[.ofNat i] (bytes.data.map .ofUInt8), i + 1) + let ixonEnv ← IxVM.CheckHarness.loadIxonEnv ``Nat.add_comm env + let (ioBuffer, n) := IxVM.CheckHarness.buildSerdeIOBuffer ixonEnv let _ ← bgroup "IxVM benchmarks" { oneShot := true } do throughput (.Elements n.toUInt64 "consts") diff --git a/Ix/Aiur/Interpret.lean b/Ix/Aiur/Interpret.lean index f60f54df..45b788f5 100644 --- a/Ix/Aiur/Interpret.lean +++ b/Ix/Aiur/Interpret.lean @@ -49,6 +49,31 @@ partial def ppValue : Value → String instance : ToString Value := ⟨ppValue⟩ +/-- Pretty-print a `Value` while auto-dereferencing pointers up to `depth` +levels. Used by the `dbg!` interpreter helper so users see structured +content like `App(Const(3, []), BVar(0))` instead of opaque `&0x123`. -/ +partial def ppValueDeref (store : Store) (depth : Nat) : Value → String + | .unit => "()" + | .field g => toString g.val.toNat + | .tuple vs => "(" ++ String.intercalate ", " (vs.toList.map (ppValueDeref store depth)) ++ ")" + | .array vs => "[" ++ String.intercalate ", " (vs.toList.map (ppValueDeref store depth)) ++ "]" + | .ctor g args => + let name := g.toName.toString + if args.isEmpty then name + else name ++ "(" ++ String.intercalate ", " (args.toList.map (ppValueDeref store depth)) ++ ")" + | .fn g => "fn(" ++ g.toName.toString ++ ")" + | .pointer _ n => + if depth == 0 then "&0x" ++ natToHex n + else + match store.getByIdx n with + | some (vs, _) => + -- Stored value is `Array Value`; for tagged enums it's + -- typically `[ctor]` or `[tag, fields...]`. Recurse on each. 
+ match vs.toList with + | [v] => ppValueDeref store (depth - 1) v + | _ => "[" ++ String.intercalate ", " (vs.toList.map (ppValueDeref store (depth - 1))) ++ "]" + | none => "&0x" ++ natToHex n ++ "(unbound)" + -- --------------------------------------------------------------------------- -- Pattern matching -- --------------------------------------------------------------------------- @@ -330,7 +355,8 @@ partial def interp (decls : Decls) (bindings : Bindings) : Term → InterpM Valu | none => dbg_trace s!"{label}" | some t => let v ← interp decls bindings t - dbg_trace s!"{label}: {v}" + let store ← getStore + dbg_trace s!"{label}: {ppValueDeref store 16 v}" interp decls bindings cont | .ioGetInfo key => do let keyGs ← expectFieldArray (← interp decls bindings key) diff --git a/Ix/Aiur/Semantics/BytecodeFfi.lean b/Ix/Aiur/Semantics/BytecodeFfi.lean index e9861dab..ecfcd555 100644 --- a/Ix/Aiur/Semantics/BytecodeFfi.lean +++ b/Ix/Aiur/Semantics/BytecodeFfi.lean @@ -56,20 +56,23 @@ namespace Bytecode.Toplevel private opaque execute' : @& Bytecode.Toplevel → @& Bytecode.FunIdx → @& Array G → (ioData : @& Array G) → (ioMap : @& Array (Array G × IOKeyInfo)) → - Array G × (Array G × Array (Array G × IOKeyInfo)) × Array Nat + Except String (Array G × (Array G × Array (Array G × IOKeyInfo)) × Array Nat) /-- Executes the bytecode function `funIdx` with the given `args` and `ioBuffer`, returning the raw output of the function, the updated `IOBuffer`, and an array -of query counts (one per function circuit, then one per memory size). -/ +of query counts (one per function circuit, then one per memory size). Returns +`Except.error msg` when execution fails (e.g. `assert_eq!` mismatch from a +typechecker rejecting a constant), so callers can recover instead of crashing. -/ def execute (toplevel : @& Bytecode.Toplevel) (funIdx : @& Bytecode.FunIdx) (args : @& Array G) (ioBuffer : IOBuffer) : - Array G × IOBuffer × Array Nat := + Except String (Array G × IOBuffer × Array Nat) := let ioData := ioBuffer.data let ioMap := ioBuffer.map - let (output, (ioData, ioMap), queryCounts) := execute' toplevel funIdx args - ioData ioMap.toArray - let ioMap := ioMap.foldl (fun acc (k, v) => acc.insert k v) ∅ - (output, ⟨ioData, ioMap⟩, queryCounts) + match execute' toplevel funIdx args ioData ioMap.toArray with + | .error e => .error e + | .ok (output, (ioData, ioMap), queryCounts) => + let ioMap := ioMap.foldl (fun acc (k, v) => acc.insert k v) ∅ + .ok (output, ⟨ioData, ioMap⟩, queryCounts) end Bytecode.Toplevel diff --git a/Ix/IxVM.lean b/Ix/IxVM.lean index 3407228a..52de9cd2 100644 --- a/Ix/IxVM.lean +++ b/Ix/IxVM.lean @@ -9,7 +9,16 @@ public import Ix.IxVM.IxonSerialize public import Ix.IxVM.IxonDeserialize public import Ix.IxVM.Convert public import Ix.IxVM.KernelTypes -public import Ix.IxVM.Kernel +public import Ix.IxVM.Kernel.Levels +public import Ix.IxVM.Kernel.Primitive +public import Ix.IxVM.Kernel.Subst +public import Ix.IxVM.Kernel.Whnf +public import Ix.IxVM.Kernel.Infer +public import Ix.IxVM.Kernel.DefEq +public import Ix.IxVM.Kernel.Inductive +public import Ix.IxVM.Kernel.CanonicalCheck +public import Ix.IxVM.Kernel.Check +public import Ix.IxVM.CheckHarness public section @@ -33,9 +42,22 @@ def entrypoints := ⟦ } } - pub fn kernel_check_test(target_addr: [G; 32]) { - let (k_consts, nat_idx, str_idx) = ingress_with_primitives(target_addr); - k_check_all_go(k_consts, k_consts, nat_idx, str_idx, 0) + -- `check_deps` controls whether transitive dependencies are + -- typechecked along with the target. 
When 1, runs `check_all` + -- (current behavior). When 0, runs `check_const` only on the target + -- — saving the per-dep `validate_const_well_scoped`, `k_check`, + -- recursor canonical-build, positivity, etc. Deps still need to be + -- in `k_consts`/`addrs` so the target's own `whnf`/`infer` can + -- resolve `Const` refs; the IOBuffer payload doesn't shrink. + pub fn kernel_check_test(target_addr: [G; 32], check_deps: G) { + let (k_consts, addrs) = ingress_with_primitives(target_addr); + match check_deps { + 0 => + let target_pos = find_addr_idx(target_addr, addrs, 0); + let ci = load(list_lookup(k_consts, target_pos)); + check_const(ci, target_pos, k_consts, addrs), + _ => check_all(k_consts, k_consts, addrs), + } } fn level_cmp_tests() { @@ -155,7 +177,15 @@ def ixVM : Except Aiur.Global Aiur.Source.Toplevel := do let vm ← vm.merge convert let vm ← vm.merge ingress let vm ← vm.merge kernelTypes - let vm ← vm.merge kernel + let vm ← vm.merge levels + let vm ← vm.merge primitive + let vm ← vm.merge subst + let vm ← vm.merge whnf + let vm ← vm.merge infer + let vm ← vm.merge defEq + let vm ← vm.merge inductive_check + let vm ← vm.merge canonicalCheck + let vm ← vm.merge check vm.merge entrypoints end IxVM diff --git a/Ix/IxVM/CheckHarness.lean b/Ix/IxVM/CheckHarness.lean new file mode 100644 index 00000000..a40f0b0d --- /dev/null +++ b/Ix/IxVM/CheckHarness.lean @@ -0,0 +1,134 @@ +module +public import Ix.Aiur.Protocol +public import Ix.Ixon +public import Ix.CompileM +public import Ix.Common + +public section + +namespace IxVM.CheckHarness + +/-- Run the Lean → Ixon FFI pipeline for `name`'s transitive closure. -/ +def loadIxonEnv (name : Lean.Name) (leanEnv : Lean.Environment) : IO Ixon.Env := do + let constList := Lean.collectDependencies name leanEnv.constants + let rawEnv ← Ix.CompileM.rsCompileEnvFFI constList + pure rawEnv.toEnv + +/-- Compile the union of `names`'s transitive closures into one shared + Ixon env. Drivers like `KernelArena.lean` use this to pay the + Lean→Ixon compile cost once and then build per-target IOBuffers via + `buildKernelCheckIOBufferFor`. -/ +def loadSharedIxonEnv (names : Array Lean.Name) (leanEnv : Lean.Environment) : + IO Ixon.Env := do + let constList := names.foldl (init := []) fun acc n => + Lean.collectDependencies n leanEnv.constants ++ acc + let mut seen : Lean.NameSet := {} + let mut deduped : List (Lean.Name × Lean.ConstantInfo) := [] + for entry in constList do + if !seen.contains entry.fst then + seen := seen.insert entry.fst + deduped := entry :: deduped + let rawEnv ← Ix.CompileM.rsCompileEnvFFI deduped + pure rawEnv.toEnv + +/-- Walk the Constant ref-graph from `target` to compute the set of + addresses needed to type-check it. Mirrors Aiur's `load_with_deps`: + follow `Constant.refs` plus the projection's `block_addr` (the parent + Muts wrapper) for IPrj/CPrj/RPrj/DPrj. -/ +partial def closureFrom (env : Ixon.Env) (target : Address) : Std.HashSet Address := Id.run do + let mut visited : Std.HashSet Address := {} + let mut worklist : Array Address := #[target] + while !worklist.isEmpty do + let addr := worklist.back! + worklist := worklist.pop + if visited.contains addr then continue + visited := visited.insert addr + match env.consts.get? addr with + | none => pure () + | some c => + for r in c.refs do + if !visited.contains r then worklist := worklist.push r + let blockAddr? 
: Option Address := match c.info with + | .iPrj p => some p.block + | .cPrj p => some p.block + | .rPrj p => some p.block + | .dPrj p => some p.block + | _ => none + match blockAddr? with + | some ba => if !visited.contains ba then worklist := worklist.push ba + | none => pure () + return visited + +/-- Build the `ixon_serde_test` / `ixon_serde_blake3_bench` IOBuffer: + one entry per const, keyed by its index. Returns the buffer and the + count `n` (which the Aiur entrypoint receives as input). -/ +def buildSerdeIOBuffer (ixonEnv : Ixon.Env) : Aiur.IOBuffer × Nat := + ixonEnv.consts.valuesIter.fold (init := (default, 0)) fun (ioBuffer, i) c => + let (_, bytes) := Ixon.Serialize.put c |>.run default + (ioBuffer.extend #[.ofNat i] (bytes.data.map .ofUInt8), i + 1) + +/-- Encode a `Lean.ReducibilityHints` as a single `G` per the convention + Aiur's `load_constant_hint` decodes (opaque → 0, abbrev → 0xFFFFFFFF, + regular n → clamp(1 + n, 0xFFFFFFFE)). -/ +private def hintToG : Lean.ReducibilityHints → Aiur.G + | .opaque => .ofNat 0 + | .abbrev => .ofNat 0xFFFFFFFF + | .regular n => + let v := 1 + n.toNat + .ofNat (if v > 0xFFFFFFFE then 0xFFFFFFFE else v) + +/-- Insert all per-address entries for `addr`s satisfying `keep` into + `ioBuffer`, following the IOBuffer convention: + + | key | value | meaning | + |------------------------|----------------|---------| + | `addr` (32 G) | const bytes | primary data; empty value = `addr` is a blob | + | `addr ++ [0]` (33 G) | raw blob bytes | referenced data (verified by Aiur via blake3) | + | `addr ++ [1]` (33 G) | single G | Defn `ReducibilityHints` encoding | + + Suffix tags use `Array.push` (O(1) amortized) rather than prefix + `++ Array` (O(n) allocation). -/ +private def addEntries (ixonEnv : Ixon.Env) (keep : Address → Bool) + (ioBuffer : Aiur.IOBuffer) : Aiur.IOBuffer := Id.run do + let mut ioBuffer := ioBuffer + for (addr, c) in ixonEnv.consts do + if !keep addr then continue + let (_, bytes) := Ixon.Serialize.put c |>.run default + let key : Array Aiur.G := addr.hash.data.map .ofUInt8 + ioBuffer := ioBuffer.extend key (bytes.data.map .ofUInt8) + for (addr, rawBytes) in ixonEnv.blobs do + if !keep addr then continue + let hashKey : Array Aiur.G := addr.hash.data.map .ofUInt8 + ioBuffer := ioBuffer.extend (hashKey.push 0) + (rawBytes.data.map fun b => .ofNat b.toNat) + ioBuffer := ioBuffer.extend hashKey #[] + for (_, named) in ixonEnv.named do + if !keep named.addr then continue + match named.constMeta with + | .defn _ _ hints _ _ _ _ _ => + let hashKey : Array Aiur.G := named.addr.hash.data.map .ofUInt8 + ioBuffer := ioBuffer.extend (hashKey.push 1) #[hintToG hints] + | _ => pure () + return ioBuffer + +/-- Build the full `kernel_check_test` IOBuffer for the entire `ixonEnv`. -/ +def buildKernelCheckIOBuffer (ixonEnv : Ixon.Env) : Aiur.IOBuffer := + addEntries ixonEnv (fun _ => true) default + +/-- Blake3 address bytes of `name` (the target input to `kernel_check_test`). -/ +def kernelCheckTarget (name : Lean.Name) (ixonEnv : Ixon.Env) : Array Aiur.G := + match ixonEnv.getAddr? (Ix.Name.fromLeanName name) with + | some addr => addr.hash.data.map .ofUInt8 + | none => panic! s!"{name} not found in Ixon environment" + +/-- Build a minimal `kernel_check_test` IOBuffer reachable from `target` + in `ixonEnv`. Used by drivers (e.g. `KernelArena.lean`) that compile + a single shared env once and then check many targets against it. 
-/ +def buildKernelCheckIOBufferFor (ixonEnv : Ixon.Env) (target : Address) : + Aiur.IOBuffer := + let closure := closureFrom ixonEnv target + addEntries ixonEnv closure.contains default + +end IxVM.CheckHarness + +end diff --git a/Ix/IxVM/Convert.lean b/Ix/IxVM/Convert.lean index bea5f378..cd93522c 100644 --- a/Ix/IxVM/Convert.lean +++ b/Ix/IxVM/Convert.lean @@ -33,11 +33,13 @@ def convert := ⟦ -- What to convert, with kind-specific auxiliary data enum ConvertKind { - CKDefn(Definition), + -- CKDefn carries Definition + reducibility hint G (packed: + -- 0=Opaque, 1+h=Regular(h), 0xFFFFFFFF=Abbrev). See KernelTypes. + CKDefn(Definition, G), CKAxio(Axiom), CKQuot(Quotient), - CKRecr(Recursor, List‹G›), - CKIndc(Inductive, List‹G›), + CKRecr(Recursor, List‹G›, [G; 32]), + CKIndc(Inductive, List‹G›, [G; 32]), CKCtor(Constructor, G) } @@ -241,14 +243,14 @@ def convert := ⟦ -- Per-kind conversion -- ============================================================================ - fn convert_definition(d: Definition, ctx: ConvertCtx) -> KConstantInfo { + fn convert_definition(d: Definition, ctx: ConvertCtx, hint: G) -> KConstantInfo { match d { Definition.Mk(kind, safety, lvls, &typ, &value) => let ktyp = ctx_convert_expr(typ, ctx); let kval = ctx_convert_expr(value, ctx); match kind { DefKind.Definition => - KConstantInfo.Defn(flatten_u64(lvls), ktyp, kval, safety), + KConstantInfo.Defn(flatten_u64(lvls), ktyp, kval, safety, hint), DefKind.Opaque => match safety { DefinitionSafety.Unsafe => @@ -278,7 +280,8 @@ def convert := ⟦ } } - fn convert_recursor(r: Recursor, ctx: ConvertCtx, rule_ctor_idxs: List‹G›) -> KConstantInfo { + fn convert_recursor(r: Recursor, ctx: ConvertCtx, rule_ctor_idxs: List‹G›, + block_addr: [G; 32]) -> KConstantInfo { match r { Recursor.Mk(k, is_unsafe, lvls, params, indices, motives, minors, &typ, rules) => let ktyp = ctx_convert_expr(typ, ctx); @@ -286,17 +289,18 @@ def convert := ⟦ KConstantInfo.Rec( flatten_u64(lvls), ktyp, flatten_u64(params), flatten_u64(indices), flatten_u64(motives), flatten_u64(minors), - krules, k, is_unsafe), + krules, k, is_unsafe, block_addr), } } - fn convert_inductive(ind: Inductive, ctx: ConvertCtx, ctor_idxs: List‹G›) -> KConstantInfo { + fn convert_inductive(ind: Inductive, ctx: ConvertCtx, ctor_idxs: List‹G›, + block_addr: [G; 32]) -> KConstantInfo { match ind { - Inductive.Mk(is_rec, is_refl, is_unsafe, lvls, params, indices, _, &typ, _) => + Inductive.Mk(is_rec, is_refl, is_unsafe, lvls, params, indices, nested, &typ, _) => let ktyp = ctx_convert_expr(typ, ctx); KConstantInfo.Induct( flatten_u64(lvls), ktyp, flatten_u64(params), flatten_u64(indices), - ctor_idxs, is_rec, is_refl, is_unsafe), + ctor_idxs, is_rec, is_refl, is_unsafe, flatten_u64(nested), block_addr), } } @@ -319,11 +323,12 @@ def convert := ⟦ match input { ConvertInput.Mk(ctx, kind) => match kind { - ConvertKind.CKDefn(d) => convert_definition(d, ctx), + ConvertKind.CKDefn(d, hint) => convert_definition(d, ctx, hint), ConvertKind.CKAxio(a) => convert_axiom(a, ctx), ConvertKind.CKQuot(q) => convert_quotient(q, ctx), - ConvertKind.CKRecr(r, rule_ctor_idxs) => convert_recursor(r, ctx, rule_ctor_idxs), - ConvertKind.CKIndc(ind, ctor_idxs) => convert_inductive(ind, ctx, ctor_idxs), + ConvertKind.CKRecr(r, rule_ctor_idxs, block_addr) => + convert_recursor(r, ctx, rule_ctor_idxs, block_addr), + ConvertKind.CKIndc(ind, ctor_idxs, block_addr) => convert_inductive(ind, ctx, ctor_idxs, block_addr), ConvertKind.CKCtor(c, induct_idx) => convert_constructor(c, ctx, induct_idx), }, } 
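The one-G reducibility-hint packing carried by CKDefn above round-trips as
follows. A minimal standalone sketch: `Hint`, `encodeHint`, and `decodeHint`
are illustrative stand-ins, not definitions from this patch; the encoder
mirrors `hintToG` in CheckHarness.lean, the decoder the convention
documented at `load_constant_hint` in Ingress.lean.

    inductive Hint
      | opaq                -- stand-in for Lean.ReducibilityHints.opaque
      | abbrv               -- stand-in for Lean.ReducibilityHints.abbrev
      | regular (h : Nat)   -- stand-in for Lean.ReducibilityHints.regular
      deriving Repr

    def encodeHint : Hint → Nat
      | .opaq      => 0
      | .abbrv     => 0xFFFFFFFF
      | .regular h => min (1 + h) 0xFFFFFFFE  -- clamp, as in hintToG

    def decodeHint : Nat → Hint
      | 0          => .opaq
      | 0xFFFFFFFF => .abbrv
      | v          => .regular (v - 1)

    #eval decodeHint (encodeHint (.regular 7))  -- Hint.regular 7

The clamp keeps large Regular heights from colliding with the 0xFFFFFFFF
Abbrev sentinel; a missing `addr ++ [1]` entry separately defaults to 1
(Regular 0) in `load_constant_hint`.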
diff --git a/Ix/IxVM/Core.lean b/Ix/IxVM/Core.lean index ab656396..00bdc0d2 100644 --- a/Ix/IxVM/Core.lean +++ b/Ix/IxVM/Core.lean @@ -87,6 +87,20 @@ def core := ⟦ ListNode.Cons(head, rest) => store(ListNode.Cons(head, list_snoc(rest, v))), } } + + -- O(N) reverse via accumulator. Used by hot-path builders that + -- accumulate via cons (O(1)) then reverse once instead of O(N²) snoc. + fn list_reverse‹T›(list: List‹T›) -> List‹T› { + list_reverse_acc(list, store(ListNode.Nil)) + } + + fn list_reverse_acc‹T›(list: List‹T›, acc: List‹T›) -> List‹T› { + match load(list) { + ListNode.Nil => acc, + ListNode.Cons(head, rest) => + list_reverse_acc(rest, store(ListNode.Cons(head, acc))), + } + } ⟧ end IxVM diff --git a/Ix/IxVM/Ingress.lean b/Ix/IxVM/Ingress.lean index 975382bd..220e11a0 100644 --- a/Ix/IxVM/Ingress.lean +++ b/Ix/IxVM/Ingress.lean @@ -54,17 +54,46 @@ def ingress := ⟦ } } + -- Load reducibility hint G for a Defn at `addr`. Stored under suffixed + -- key `addr ++ [1]` (suffix tag 1 = metadata-tier). Encoding (mirror + -- Lean.ReducibilityHints): + -- 0 = Opaque + -- 1 + h = Regular(h) + -- 0xFFFFFFFF = Abbrev + -- If absent (no entry under suffixed key), defaults to 1 (Regular(0)). + fn load_constant_hint(addr: [G; 32]) -> G { + let [a0, a1, a2, a3, a4, a5, a6, a7, + a8, a9, a10, a11, a12, a13, a14, a15, + a16, a17, a18, a19, a20, a21, a22, a23, + a24, a25, a26, a27, a28, a29, a30, a31] = addr; + let key = [a0, a1, a2, a3, a4, a5, a6, a7, + a8, a9, a10, a11, a12, a13, a14, a15, + a16, a17, a18, a19, a20, a21, a22, a23, + a24, a25, a26, a27, a28, a29, a30, a31, 1]; + let (idx, len) = io_get_info(key); + match len { + 0 => 1, + _ => + let bytes = #read_byte_stream(idx, len); + match load(bytes) { + ListNode.Cons(b, _) => b, + ListNode.Nil => 1, + }, + } + } + -- Load a blob from IOBuffer by address, verify blake3, return raw bytes. - -- Blobs are stored under key [1] ++ addr to distinguish from constants. + -- Blobs are stored under key `addr ++ [0]` (suffix tag 0 = referenced + -- data) so they don't collide with constants stored at bare `addr`. fn load_verified_blob(addr: [G; 32]) -> ByteStream { let [a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20, a21, a22, a23, a24, a25, a26, a27, a28, a29, a30, a31] = addr; - let blob_key = [1, a0, a1, a2, a3, a4, a5, a6, a7, + let blob_key = [a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20, a21, a22, a23, - a24, a25, a26, a27, a28, a29, a30, a31]; + a24, a25, a26, a27, a28, a29, a30, a31, 0]; let (idx, len) = io_get_info(blob_key); let bytes = #read_byte_stream(idx, len); let h = blake3(bytes); @@ -312,10 +341,97 @@ def ingress := ⟦ -- Layout pass: compute block start positions and total kernel size -- ============================================================================ + -- Returns 1 if `mptr` is in the seen list. + fn is_mptr_seen(mptr: G, seen: List‹G›) -> G { + match load(seen) { + ListNode.Nil => 0, + ListNode.Cons(s, rest) => + match s - mptr { + 0 => 1, + _ => is_mptr_seen(mptr, rest), + }, + } + } + + -- Singleton-Indc Muts deduplication: structurally-identical singleton-Indc + -- Muts wrappers (per-Lean-constant content) collapse to one canonical + -- block to avoid emitting duplicate Ctor/Indc entries in `top` whose + -- cross-references (induct_idx ↔ ctor_indices) wouldn't be consistent + -- across the duplicates. Mirror Rust kernel's content-addressed dedup + -- where the same Indc content has one shared kernel position. 
+ -- + -- Multi-member Muts (true mutual blocks) and non-singleton variants are + -- not deduped (extract_muts_members_ptr returns 0 for them). + fn extract_dedup_mptr(c: Constant) -> G { + -- Dedup singleton-Indc Muts wrappers (Lean's per-constant canonical + -- encoding). Key = full Constant.Mk ptr — Aiur store dedupes + -- structurally identical Constants, so two wrapper aliases for the + -- same logical Indc share this ptr. Inner-Indc-only dedup is too + -- coarse: IXON canonicalization makes UInt8.Indc and UInt32.Indc + -- collide at the Inductive.Mk level (literal width is a blob-ref + -- index, not inline), but their wrapper Constants differ via refs. + -- + -- Conservative: still gated on singleton-Indc Muts. Defn/Recr/multi- + -- member Muts skip dedup (caller layout handles them positionally). + match c { + Constant.Mk(info, _, _, _) => + match info { + ConstantInfo.Muts(members) => + match is_singleton_indc(members) { + 0 => 0, + 1 => ptr_val(store(c)), + }, + _ => 0, + }, + } + } + + fn is_singleton_indc(members: List‹MutConst›) -> G { + match load(members) { + ListNode.Cons(m, rest) => + match load(rest) { + ListNode.Nil => + match m { + MutConst.Indc(_) => 1, + _ => 0, + }, + _ => 0, + }, + _ => 0, + } + } + + -- Lookup canonical (first-occurrence) pos for an mptr in parallel + -- (seen_mptrs, seen_poses) lists. Returns 0 if not found (caller + -- guards via is_mptr_seen first). + fn first_pos_for_mptr(mptr: G, seen_mptrs: List‹G›, seen_poses: List‹G›) -> G { + match load(seen_mptrs) { + ListNode.Nil => 0, + ListNode.Cons(s, rest_m) => + match load(seen_poses) { + ListNode.Cons(q, rest_p) => + match s - mptr { + 0 => q, + _ => first_pos_for_mptr(mptr, rest_m, rest_p), + }, + }, + } + } + fn compute_layout( consts: List‹&Constant›, addrs: List‹[G; 32]›, pos: G + ) -> (List‹[G; 32]›, List‹G›, G) { + compute_layout_walk(consts, addrs, pos, store(ListNode.Nil), store(ListNode.Nil)) + } + + fn compute_layout_walk( + consts: List‹&Constant›, + addrs: List‹[G; 32]›, + pos: G, + seen_mptrs: List‹G›, + seen_poses: List‹G› ) -> (List‹[G; 32]›, List‹G›, G) { match load(consts) { ListNode.Nil => (store(ListNode.Nil), store(ListNode.Nil), pos), @@ -326,27 +442,54 @@ def ingress := ⟦ Constant.Mk(info, _, _, _) => match info { ConstantInfo.Muts(members) => - let size = block_kernel_size(members); - let (ba, bs, next) = compute_layout(rest_consts, rest_addrs, pos + size); - (store(ListNode.Cons(addr, ba)), - store(ListNode.Cons(pos, bs)), - next), + let mptr = extract_dedup_mptr(c); + let dup = match mptr { + 0 => 0, + _ => is_mptr_seen(mptr, seen_mptrs), + }; + match dup { + 1 => + -- Duplicate Muts (same content, different wrapper addr). + -- Don't advance pos; record this wrapper's addr → first + -- occurrence's pos so refs via Expr.Ref(this_wrapper_addr) + -- and IPrj/CPrj/RPrj/DPrj.block_addr=this_wrapper_addr + -- resolve to canonical pos. 
+ let first_pos = first_pos_for_mptr(mptr, seen_mptrs, seen_poses); + let (ba, bs, next) = compute_layout_walk(rest_consts, rest_addrs, pos, seen_mptrs, seen_poses); + (store(ListNode.Cons(addr, ba)), + store(ListNode.Cons(first_pos, bs)), + next), + 0 => + let size = block_kernel_size(members); + let new_seen_m = match mptr { + 0 => seen_mptrs, + _ => store(ListNode.Cons(mptr, seen_mptrs)), + }; + let new_seen_p = match mptr { + 0 => seen_poses, + _ => store(ListNode.Cons(pos, seen_poses)), + }; + let (ba, bs, next) = compute_layout_walk(rest_consts, rest_addrs, pos + size, new_seen_m, new_seen_p); + (store(ListNode.Cons(addr, ba)), + store(ListNode.Cons(pos, bs)), + next), + }, ConstantInfo.IPrj(_) => - compute_layout(rest_consts, rest_addrs, pos), + compute_layout_walk(rest_consts, rest_addrs, pos, seen_mptrs, seen_poses), ConstantInfo.CPrj(_) => - compute_layout(rest_consts, rest_addrs, pos), + compute_layout_walk(rest_consts, rest_addrs, pos, seen_mptrs, seen_poses), ConstantInfo.RPrj(_) => - compute_layout(rest_consts, rest_addrs, pos), + compute_layout_walk(rest_consts, rest_addrs, pos, seen_mptrs, seen_poses), ConstantInfo.DPrj(_) => - compute_layout(rest_consts, rest_addrs, pos), + compute_layout_walk(rest_consts, rest_addrs, pos, seen_mptrs, seen_poses), ConstantInfo.Defn(_) => - compute_layout(rest_consts, rest_addrs, pos + 1), + compute_layout_walk(rest_consts, rest_addrs, pos + 1, seen_mptrs, seen_poses), ConstantInfo.Axio(_) => - compute_layout(rest_consts, rest_addrs, pos + 1), + compute_layout_walk(rest_consts, rest_addrs, pos + 1, seen_mptrs, seen_poses), ConstantInfo.Quot(_) => - compute_layout(rest_consts, rest_addrs, pos + 1), + compute_layout_walk(rest_consts, rest_addrs, pos + 1, seen_mptrs, seen_poses), ConstantInfo.Recr(_) => - compute_layout(rest_consts, rest_addrs, pos + 1), + compute_layout_walk(rest_consts, rest_addrs, pos + 1, seen_mptrs, seen_poses), }, }, }, @@ -363,6 +506,19 @@ def ingress := ⟦ block_addrs: List‹[G; 32]›, block_starts: List‹G›, pos: G + ) -> List‹G› { + build_pos_map_walk(consts, addrs, block_addrs, block_starts, pos, + store(ListNode.Nil), store(ListNode.Nil)) + } + + fn build_pos_map_walk( + consts: List‹&Constant›, + addrs: List‹[G; 32]›, + block_addrs: List‹[G; 32]›, + block_starts: List‹G›, + pos: G, + seen_mptrs: List‹G›, + seen_poses: List‹G› ) -> List‹G› { match load(consts) { ListNode.Nil => store(ListNode.Nil), @@ -373,8 +529,29 @@ def ingress := ⟦ Constant.Mk(info, _, _, _) => match info { ConstantInfo.Muts(members) => - let size = block_kernel_size(members); - store(ListNode.Cons(0, build_pos_map(rest_consts, rest_addrs, block_addrs, block_starts, pos + size))), + let mptr = extract_dedup_mptr(c); + let dup = match mptr { + 0 => 0, + _ => is_mptr_seen(mptr, seen_mptrs), + }; + match dup { + 1 => + let first_pos = first_pos_for_mptr(mptr, seen_mptrs, seen_poses); + store(ListNode.Cons(first_pos, + build_pos_map_walk(rest_consts, rest_addrs, block_addrs, block_starts, pos, seen_mptrs, seen_poses))), + 0 => + let size = block_kernel_size(members); + let new_seen_m = match mptr { + 0 => seen_mptrs, + _ => store(ListNode.Cons(mptr, seen_mptrs)), + }; + let new_seen_p = match mptr { + 0 => seen_poses, + _ => store(ListNode.Cons(pos, seen_poses)), + }; + store(ListNode.Cons(pos, + build_pos_map_walk(rest_consts, rest_addrs, block_addrs, block_starts, pos + size, new_seen_m, new_seen_p))), + }, ConstantInfo.IPrj(prj) => match prj { InductiveProj.Mk(idx, block_addr) => @@ -386,7 +563,7 @@ def ingress := ⟦ ConstantInfo.Muts(members) => let 
off = member_offset(members, flatten_u64(idx)); store(ListNode.Cons(block_start + off, - build_pos_map(rest_consts, rest_addrs, block_addrs, block_starts, pos))), + build_pos_map_walk(rest_consts, rest_addrs, block_addrs, block_starts, pos, seen_mptrs, seen_poses))), }, }, }, @@ -401,7 +578,7 @@ ConstantInfo.Muts(members) => let mem_off = member_offset(members, flatten_u64(idx)); store(ListNode.Cons(block_start + mem_off + 1 + flatten_u64(cidx), - build_pos_map(rest_consts, rest_addrs, block_addrs, block_starts, pos))), + build_pos_map_walk(rest_consts, rest_addrs, block_addrs, block_starts, pos, seen_mptrs, seen_poses))), }, }, }, @@ -416,7 +593,7 @@ ConstantInfo.Muts(members) => let off = member_offset(members, flatten_u64(idx)); store(ListNode.Cons(block_start + off, - build_pos_map(rest_consts, rest_addrs, block_addrs, block_starts, pos))), + build_pos_map_walk(rest_consts, rest_addrs, block_addrs, block_starts, pos, seen_mptrs, seen_poses))), }, }, }, @@ -431,13 +608,13 @@ ConstantInfo.Muts(members) => let off = member_offset(members, flatten_u64(idx)); store(ListNode.Cons(block_start + off, - build_pos_map(rest_consts, rest_addrs, block_addrs, block_starts, pos))), + build_pos_map_walk(rest_consts, rest_addrs, block_addrs, block_starts, pos, seen_mptrs, seen_poses))), }, }, }, _ => store(ListNode.Cons(pos, - build_pos_map(rest_consts, rest_addrs, block_addrs, block_starts, pos + 1))), + build_pos_map_walk(rest_consts, rest_addrs, block_addrs, block_starts, pos + 1, seen_mptrs, seen_poses))), }, }, }, @@ -457,13 +634,161 @@ } } - fn build_recur_idxs(members: List‹MutConst›, block_start: G, member_idx: G) -> List‹G› { + -- Mirror of Rust kernel canonicalization (`ingress_compiled_names` / + -- `resolve_all`): IXON can encode the same logical inductive via multiple + -- wrapper addresses (singleton-Muts vs IPrj-rewrapping vs bundle-Muts). + -- Aiur expands each wrapper into its own positional slots, so the same + -- inductive lands at distinct kernel positions; refs traveling different + -- IXON paths land on different positions and break the `Proj` / `Const` + -- identity invariants assumed by infer / def_eq. + -- + -- Dedup key: the full `Constant.Mk` Ptr (not merely the inner `members` + -- list, which IXON canonicalization can make collide across distinct + -- inductives). Aiur `store` content-dedupes structurally identical + -- Constants, so two wrapper aliases for the same logical Indc share this Ptr. + fn extract_muts_members_ptr(c: &Constant) -> G { + -- Same dedup semantic as extract_dedup_mptr: singleton-Indc Muts only, + -- keyed on full Constant ptr (so wrappers around different logical + -- Indcs that happen to share IXON-canonical Indc.Mk content stay + -- distinct via differing refs.
+ match load(c) { + Constant.Mk(info, _, _, _) => + match info { + ConstantInfo.Muts(members) => + match is_singleton_indc(members) { + 0 => 0, + 1 => ptr_val(c), + }, + _ => 0, + }, + } + } + + fn find_canon_pos_for_mptr(mptr: G, seen_mptrs: List‹G›, + seen_poses: List‹G›, default_pos: G) -> G { + match load(seen_mptrs) { + ListNode.Nil => default_pos, + ListNode.Cons(s, rest_m) => + match load(seen_poses) { + ListNode.Cons(q, rest_q) => + match s - mptr { + 0 => q, + _ => find_canon_pos_for_mptr(mptr, rest_m, rest_q, default_pos), + }, + }, + } + } + + fn canonicalize_pos_map_walk(consts: List‹&Constant›, pos_map: List‹G›, + seen_mptrs: List‹G›, seen_poses: List‹G›) -> List‹G› { + match load(consts) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(c, rest_c) => + match load(pos_map) { + ListNode.Cons(p, rest_p) => + let mptr = extract_muts_members_ptr(c); + let canon_pos = match mptr { + 0 => p, + _ => find_canon_pos_for_mptr(mptr, seen_mptrs, seen_poses, p), + }; + let new_seen_mptrs = match mptr { + 0 => seen_mptrs, + _ => store(ListNode.Cons(mptr, seen_mptrs)), + }; + let new_seen_poses = match mptr { + 0 => seen_poses, + _ => store(ListNode.Cons(canon_pos, seen_poses)), + }; + store(ListNode.Cons(canon_pos, + canonicalize_pos_map_walk(rest_c, rest_p, new_seen_mptrs, new_seen_poses))), + }, + } + } + + fn canonicalize_pos_map(consts: List‹&Constant›, pos_map: List‹G›) -> List‹G› { + canonicalize_pos_map_walk(consts, pos_map, store(ListNode.Nil), store(ListNode.Nil)) + } + + -- Companion to `canonicalize_pos_map`: produces a `canonical_addr_map` + -- parallel to `all_addrs`. Each entry records the FIRST address that + -- yielded this Muts content. Used to canonicalize `block_addr` fields + -- baked into Inductives by Aiur's positional convert step (without + -- this, two distinct wrapper addrs produce Inductives with structurally + -- different 10th fields — defeating store-Ptr equality). 
+ fn find_canon_addr_for_mptr(mptr: G, seen_mptrs: List‹G›, + seen_addrs: List‹[G; 32]›, + default_addr: [G; 32]) -> [G; 32] { + match load(seen_mptrs) { + ListNode.Nil => default_addr, + ListNode.Cons(s, rest_m) => + match load(seen_addrs) { + ListNode.Cons(a, rest_a) => + match s - mptr { + 0 => a, + _ => find_canon_addr_for_mptr(mptr, rest_m, rest_a, default_addr), + }, + }, + } + } + + fn canonicalize_addr_map_walk(addrs: List‹[G; 32]›, consts: List‹&Constant›, + seen_mptrs: List‹G›, + seen_addrs: List‹[G; 32]›) -> List‹[G; 32]› { + match load(addrs) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(addr, rest_a) => + match load(consts) { + ListNode.Cons(c, rest_c) => + let mptr = extract_muts_members_ptr(c); + let canon_addr = match mptr { + 0 => addr, + _ => find_canon_addr_for_mptr(mptr, seen_mptrs, seen_addrs, addr), + }; + let new_seen_mptrs = match mptr { + 0 => seen_mptrs, + _ => store(ListNode.Cons(mptr, seen_mptrs)), + }; + let new_seen_addrs = match mptr { + 0 => seen_addrs, + _ => store(ListNode.Cons(canon_addr, seen_addrs)), + }; + store(ListNode.Cons(canon_addr, + canonicalize_addr_map_walk(rest_a, rest_c, new_seen_mptrs, new_seen_addrs))), + }, + } + } + + fn canonicalize_addr_map(addrs: List‹[G; 32]›, consts: List‹&Constant›) -> List‹[G; 32]› { + canonicalize_addr_map_walk(addrs, consts, store(ListNode.Nil), store(ListNode.Nil)) + } + + fn lookup_canon_addr(target: [G; 32], all_addrs: List‹[G; 32]›, + canon_addrs: List‹[G; 32]›) -> [G; 32] { + match load(all_addrs) { + ListNode.Nil => target, + ListNode.Cons(addr, rest_a) => + match load(canon_addrs) { + ListNode.Cons(canon, rest_c) => + match address_eq(target, addr) { + 1 => canon, + 0 => lookup_canon_addr(target, rest_a, rest_c), + }, + }, + } + } + + -- Build kernel positions of each member's Indc within the block expansion. + -- pos[i] = block_start + sum(member_kernel_size(members[0..i])). 
+ fn build_recur_idxs(members: List‹MutConst›, block_start: G, _member_idx: G) -> List‹G› { + build_recur_idxs_walk(members, block_start) + } + + fn build_recur_idxs_walk(members: List‹MutConst›, cur_pos: G) -> List‹G› { match load(members) { ListNode.Nil => store(ListNode.Nil), - ListNode.Cons(_, rest) => - let off = member_offset(members, member_idx); - store(ListNode.Cons(block_start + off, - build_recur_idxs(rest, block_start, member_idx + 1))), + ListNode.Cons(mc, rest) => + store(ListNode.Cons(cur_pos, + build_recur_idxs_walk(rest, cur_pos + member_kernel_size(mc)))), } } @@ -476,7 +801,11 @@ def ingress := ⟦ } } - fn build_rule_ctor_idxs(members: List‹MutConst›, block_start: G, member_idx: G) -> List‹G› { + fn build_rule_ctor_idxs(members: List‹MutConst›, block_start: G, _member_idx: G) -> List‹G› { + build_rule_ctor_idxs_walk(members, block_start) + } + + fn build_rule_ctor_idxs_walk(members: List‹MutConst›, cur_pos: G) -> List‹G› { match load(members) { ListNode.Nil => store(ListNode.Nil), ListNode.Cons(mc, rest) => @@ -485,15 +814,15 @@ def ingress := ⟦ match ind { Inductive.Mk(_, _, _, _, _, _, _, _, ctors) => let num_ctors = list_length(ctors); - let induct_pos = block_start + member_offset(members, member_idx); - let this_ctors = build_ctor_idxs(num_ctors, induct_pos, 0); - let rest_ctors = build_rule_ctor_idxs(rest, block_start, member_idx + 1); + let this_ctors = build_ctor_idxs(num_ctors, cur_pos, 0); + let rest_ctors = build_rule_ctor_idxs_walk(rest, + cur_pos + 1 + num_ctors); list_concat(this_ctors, rest_ctors), }, MutConst.Defn(_) => - build_rule_ctor_idxs(rest, block_start, member_idx + 1), + build_rule_ctor_idxs_walk(rest, cur_pos + 1), MutConst.Recr(_) => - build_rule_ctor_idxs(rest, block_start, member_idx + 1), + build_rule_ctor_idxs_walk(rest, cur_pos + 1), }, } } @@ -502,6 +831,139 @@ def ingress := ⟦ -- ConvertInput construction: expand Muts blocks into kernel constants -- ============================================================================ + -- Returns 1 if `members` contains at least one MutConst.Indc, else 0. + fn members_have_indc(members: List‹MutConst›) -> G { + match load(members) { + ListNode.Nil => 0, + ListNode.Cons(mc, rest) => + match mc { + MutConst.Indc(_) => 1, + _ => members_have_indc(rest), + }, + } + } + + -- Deref Expr.Share via the constant's sharing list. + fn deref_share(e: Expr, sharing: List‹&Expr›) -> Expr { + match e { + Expr.Share(idx) => deref_share(load(list_lookup_u64(sharing, idx)), sharing), + _ => e, + } + } + + -- Walk a recursor's typ skipping `n` leading Alls; return body Expr. + fn peel_n_alls_expr(e: Expr, n: G, sharing: List‹&Expr›) -> Expr { + match n { + 0 => deref_share(e, sharing), + _ => + match deref_share(e, sharing) { + Expr.All(_, body_ref) => peel_n_alls_expr(load(body_ref), n - 1, sharing), + _ => e, + }, + } + } + + -- Take an App-spine expression's head. + fn collect_app_spine_expr_head(e: Expr, sharing: List‹&Expr›) -> Expr { + match deref_share(e, sharing) { + Expr.App(f_ref, _) => collect_app_spine_expr_head(load(f_ref), sharing), + other => other, + } + } + + -- Extract the inductive's address from a recursor's typ. + -- Walks past `n_skip = params + motives + minors + indices` foralls, + -- then takes the next forall's domain (major's type), peels the App-spine, + -- and reads the head Ref's address from the recursor's `refs` list. + -- Returns `[0;32]` if the head isn't a Ref (e.g. mutual self-rec via Rec). 
+ fn rec_typ_to_inductive_addr(typ: Expr, n_skip: G, refs: List‹[G; 32]›, + sharing: List‹&Expr›) -> [G; 32] { + let after_skip = peel_n_alls_expr(typ, n_skip, sharing); + match after_skip { + Expr.All(major_ty_ref, _) => + let head = collect_app_spine_expr_head(load(major_ty_ref), sharing); + match head { + Expr.Ref(ref_idx_bytes, _) => list_lookup(refs, flatten_u64(ref_idx_bytes)), + _ => [0; 32], + }, + _ => [0; 32], + } + } + + -- For aux-only Recr blocks (Muts containing only Recrs/Defns, e.g. produced + -- by `compile_aux_block` in src/ix/compile/mutual.rs), the rule_ctor_idxs + -- must come from the *original* inductive block referenced by the enclosing + -- Constant's refs, not from `members` (which has no Indc). Resolve the + -- block by extracting the inductive's address from the recursor's typ + -- (rather than heuristically matching ctor counts among refs, which fails + -- when multiple in-scope inductives share the same number of ctors). + fn build_aux_recr_ctor_idxs( + recr: Recursor, + refs: List‹[G; 32]›, + sharing: List‹&Expr›, + all_addrs: List‹[G; 32]›, + block_addrs: List‹[G; 32]›, + block_starts: List‹G› + ) -> List‹G› { + match recr { + Recursor.Mk(_, _, _, params, indices, motives, minors, &typ, _) => + let n_skip = ((flatten_u64(params) + flatten_u64(motives)) + + flatten_u64(minors)) + flatten_u64(indices); + let ind_addr = rec_typ_to_inductive_addr(typ, n_skip, refs, sharing); + let ind_const = load_verified_constant(ind_addr); + match ind_const { + Constant.Mk(info, _, _, _) => + match info { + ConstantInfo.IPrj(prj) => + match prj { + InductiveProj.Mk(member_idx, block_addr) => + let block_const = load_verified_constant(block_addr); + match block_const { + Constant.Mk(bi, _, _, _) => + match bi { + ConstantInfo.Muts(other_members) => + let bs = lookup_block_start(block_addr, block_addrs, block_starts); + -- Mutual block: each recursor's rules cover only + -- its OWN inductive's ctors. Slice the global + -- rule_ctor_idxs to just this member's ctors. + extract_member_ctor_idxs(other_members, bs, + flatten_u64(member_idx)), + }, + }, + }, + }, + }, + } + } + + -- Extract kernel ctor positions for member at `target_idx` in `members`. + fn extract_member_ctor_idxs(members: List‹MutConst›, block_start: G, + target_idx: G) -> List‹G› { + extract_member_ctor_idxs_walk(members, block_start, target_idx, 0) + } + + fn extract_member_ctor_idxs_walk(members: List‹MutConst›, cur_pos: G, + target_idx: G, idx: G) -> List‹G› { + match load(members) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(mc, rest) => + let mc_size = member_kernel_size(mc); + match eq_zero(idx - target_idx) { + 1 => + match mc { + MutConst.Indc(ind) => + match ind { + Inductive.Mk(_, _, _, _, _, _, _, _, ctors) => + build_ctor_idxs(list_length(ctors), cur_pos, 0), + }, + _ => store(ListNode.Nil), + }, + 0 => extract_member_ctor_idxs_walk(rest, cur_pos + mc_size, + target_idx, idx + 1), + }, + } + } + -- Expand a single MutConst member into ConvertInputs. -- For Indc: emits 1 Induct + N Ctors. -- For Recr: emits 1 Rec. 
@@ -511,7 +973,12 @@ def ingress := ⟦ ctx: ConvertCtx, members: List‹MutConst›, block_start: G, - member_idx: G + member_idx: G, + refs: List‹[G; 32]›, + all_addrs: List‹[G; 32]›, + block_addrs: List‹[G; 32]›, + block_starts: List‹G›, + block_addr: [G; 32] ) -> List‹&ConvertInput› { match mc { MutConst.Indc(ind) => @@ -520,16 +987,28 @@ def ingress := ⟦ let num_ctors = list_length(ctors); let induct_pos = block_start + member_offset(members, member_idx); let ctor_idxs = build_ctor_idxs(num_ctors, induct_pos, 0); - let indc_input = ConvertInput.Mk(ctx, ConvertKind.CKIndc(ind, ctor_idxs)); + let indc_input = ConvertInput.Mk(ctx, ConvertKind.CKIndc(ind, ctor_idxs, block_addr)); let ctor_inputs = expand_ctors(ctors, ctx, induct_pos); store(ListNode.Cons(store(indc_input), ctor_inputs)), }, MutConst.Recr(recr) => - let rule_ctor_idxs = build_rule_ctor_idxs(members, block_start, 0); - let input = ConvertInput.Mk(ctx, ConvertKind.CKRecr(recr, rule_ctor_idxs)); - store(ListNode.Cons(store(input), store(ListNode.Nil))), + match members_have_indc(members) { + 1 => + let rule_ctor_idxs = build_rule_ctor_idxs(members, block_start, 0); + let input = ConvertInput.Mk(ctx, ConvertKind.CKRecr(recr, rule_ctor_idxs, block_addr)); + store(ListNode.Cons(store(input), store(ListNode.Nil))), + 0 => + let sharing = match ctx { ConvertCtx.Mk(s, _, _, _, _) => s, }; + let rule_ctor_idxs = + build_aux_recr_ctor_idxs(recr, refs, sharing, all_addrs, block_addrs, block_starts); + let input = ConvertInput.Mk(ctx, ConvertKind.CKRecr(recr, rule_ctor_idxs, block_addr)); + store(ListNode.Cons(store(input), store(ListNode.Nil))), + }, MutConst.Defn(defn) => - let input = ConvertInput.Mk(ctx, ConvertKind.CKDefn(defn)); + -- Muts-block defs default to Regular(0) (hint=1). Per-member hints + -- aren't currently plumbed; standalone Defns get their actual hint + -- via load_constant_hint in build_convert_inputs. + let input = ConvertInput.Mk(ctx, ConvertKind.CKDefn(defn, 1)); store(ListNode.Cons(store(input), store(ListNode.Nil))), } } @@ -548,13 +1027,20 @@ def ingress := ⟦ ctx: ConvertCtx, all_members: List‹MutConst›, block_start: G, - member_idx: G + member_idx: G, + refs: List‹[G; 32]›, + all_addrs: List‹[G; 32]›, + block_addrs: List‹[G; 32]›, + block_starts: List‹G›, + block_addr: [G; 32] ) -> List‹&ConvertInput› { match load(members) { ListNode.Nil => store(ListNode.Nil), ListNode.Cons(mc, rest) => - let this = expand_member(mc, ctx, all_members, block_start, member_idx); - let more = expand_members(rest, ctx, all_members, block_start, member_idx + 1); + let this = expand_member(mc, ctx, all_members, block_start, member_idx, + refs, all_addrs, block_addrs, block_starts, block_addr); + let more = expand_members(rest, ctx, all_members, block_start, member_idx + 1, + refs, all_addrs, block_addrs, block_starts, block_addr); list_concat(this, more), } } @@ -563,76 +1049,127 @@ def ingress := ⟦ -- Muts blocks are expanded into their members. -- Projections are skipped (handled via block expansion). -- Standalone constants are converted directly. + -- Unpack head + tail of an addrs list (parallel walker for build_convert_inputs). 
+ fn unpack_head_addr(addrs: List‹[G; 32]›) -> ([G; 32], List‹[G; 32]›) { + match load(addrs) { + ListNode.Cons(a, r) => (a, r), + } + } + fn build_convert_inputs( consts: List‹&Constant›, + cur_addrs: List‹[G; 32]›, all_addrs: List‹[G; 32]›, pos_map: List‹G›, + canon_addrs: List‹[G; 32]›, block_addrs: List‹[G; 32]›, block_starts: List‹G›, pos: G + ) -> List‹&ConvertInput› { + build_convert_inputs_walk(consts, cur_addrs, all_addrs, pos_map, + canon_addrs, block_addrs, block_starts, pos, + store(ListNode.Nil)) + } + + fn build_convert_inputs_walk( + consts: List‹&Constant›, + cur_addrs: List‹[G; 32]›, + all_addrs: List‹[G; 32]›, + pos_map: List‹G›, + canon_addrs: List‹[G; 32]›, + block_addrs: List‹[G; 32]›, + block_starts: List‹G›, + pos: G, + seen_mptrs: List‹G› ) -> List‹&ConvertInput› { match load(consts) { ListNode.Nil => store(ListNode.Nil), ListNode.Cons(&c, rest) => - match c { - Constant.Mk(info, sharing, refs, univs) => - match info { - ConstantInfo.Muts(members) => - let size = block_kernel_size(members); - let ref_idxs = build_ref_idxs_mapped(refs, all_addrs, pos_map); - let lit_blobs = build_lit_blobs(refs, all_addrs); - let recur_idxs = build_recur_idxs(members, pos, 0); - let ctx = ConvertCtx.Mk(sharing, ref_idxs, recur_idxs, lit_blobs, univs); - let expanded = expand_members(members, ctx, members, pos, 0); - let more = build_convert_inputs(rest, all_addrs, pos_map, block_addrs, block_starts, pos + size); - list_concat(expanded, more), - ConstantInfo.IPrj(_) => - build_convert_inputs(rest, all_addrs, pos_map, block_addrs, block_starts, pos), - ConstantInfo.CPrj(_) => - build_convert_inputs(rest, all_addrs, pos_map, block_addrs, block_starts, pos), - ConstantInfo.RPrj(_) => - build_convert_inputs(rest, all_addrs, pos_map, block_addrs, block_starts, pos), - ConstantInfo.DPrj(_) => - build_convert_inputs(rest, all_addrs, pos_map, block_addrs, block_starts, pos), - ConstantInfo.Defn(defn) => - let ref_idxs = build_ref_idxs_mapped(refs, all_addrs, pos_map); - let lit_blobs = build_lit_blobs(refs, all_addrs); - let recur_idxs = store(ListNode.Cons(pos, store(ListNode.Nil))); - let ctx = ConvertCtx.Mk(sharing, ref_idxs, recur_idxs, lit_blobs, univs); - let input = ConvertInput.Mk(ctx, ConvertKind.CKDefn(defn)); - store(ListNode.Cons(store(input), - build_convert_inputs(rest, all_addrs, pos_map, block_addrs, block_starts, pos + 1))), - ConstantInfo.Axio(axio) => - let ref_idxs = build_ref_idxs_mapped(refs, all_addrs, pos_map); - let lit_blobs = build_lit_blobs(refs, all_addrs); - let ctx = ConvertCtx.Mk(sharing, ref_idxs, store(ListNode.Nil), lit_blobs, univs); - let input = ConvertInput.Mk(ctx, ConvertKind.CKAxio(axio)); - store(ListNode.Cons(store(input), - build_convert_inputs(rest, all_addrs, pos_map, block_addrs, block_starts, pos + 1))), - ConstantInfo.Quot(quot) => - let ref_idxs = build_ref_idxs_mapped(refs, all_addrs, pos_map); - let lit_blobs = build_lit_blobs(refs, all_addrs); - let ctx = ConvertCtx.Mk(sharing, ref_idxs, store(ListNode.Nil), lit_blobs, univs); - let input = ConvertInput.Mk(ctx, ConvertKind.CKQuot(quot)); - store(ListNode.Cons(store(input), - build_convert_inputs(rest, all_addrs, pos_map, block_addrs, block_starts, pos + 1))), - ConstantInfo.Recr(recr) => - let ref_idxs = build_ref_idxs_mapped(refs, all_addrs, pos_map); - let lit_blobs = build_lit_blobs(refs, all_addrs); - let nrules = recr_rule_count(recr); - let block_addr = find_matching_block_addr(refs, all_addrs, nrules); - let block_const = load_verified_constant(block_addr); - match block_const { - 
Constant.Mk(block_info, _, _, _) => - match block_info { - ConstantInfo.Muts(members) => - let recur_idxs = store(ListNode.Cons(pos, store(ListNode.Nil))); - let bs = lookup_block_start(block_addr, block_addrs, block_starts); - let rule_ctor_idxs = build_rule_ctor_idxs(members, bs, 0); + match unpack_head_addr(cur_addrs) { + (head_addr, rest_addrs) => + match c { + Constant.Mk(info, sharing, refs, univs) => + match info { + ConstantInfo.Muts(members) => + let mptr = extract_dedup_mptr(c); + let dup = match mptr { + 0 => 0, + _ => is_mptr_seen(mptr, seen_mptrs), + }; + match dup { + 1 => + -- Duplicate Muts: skip emission (canonical Muts already + -- emitted). Don't advance pos. Refs to this wrapper + -- resolve to canonical pos via pos_map dedup. + build_convert_inputs_walk(rest, rest_addrs, all_addrs, pos_map, + canon_addrs, block_addrs, block_starts, pos, seen_mptrs), + 0 => + let new_seen = match mptr { + 0 => seen_mptrs, + _ => store(ListNode.Cons(mptr, seen_mptrs)), + }; + let size = block_kernel_size(members); + let canon_block_start = lookup_addr_pos(head_addr, all_addrs, pos_map); + let canon_addr = lookup_canon_addr(head_addr, all_addrs, canon_addrs); + let ref_idxs = build_ref_idxs_mapped(refs, all_addrs, pos_map); + let lit_blobs = build_lit_blobs(refs, all_addrs); + let recur_idxs = build_recur_idxs(members, canon_block_start, 0); let ctx = ConvertCtx.Mk(sharing, ref_idxs, recur_idxs, lit_blobs, univs); - let input = ConvertInput.Mk(ctx, ConvertKind.CKRecr(recr, rule_ctor_idxs)); - store(ListNode.Cons(store(input), - build_convert_inputs(rest, all_addrs, pos_map, block_addrs, block_starts, pos + 1))), + let expanded = expand_members(members, ctx, members, canon_block_start, 0, + refs, all_addrs, block_addrs, block_starts, canon_addr); + let more = build_convert_inputs_walk(rest, rest_addrs, all_addrs, pos_map, + canon_addrs, block_addrs, block_starts, pos + size, new_seen); + list_concat(expanded, more), + }, + ConstantInfo.IPrj(_) => + build_convert_inputs_walk(rest, rest_addrs, all_addrs, pos_map, canon_addrs, block_addrs, block_starts, pos, seen_mptrs), + ConstantInfo.CPrj(_) => + build_convert_inputs_walk(rest, rest_addrs, all_addrs, pos_map, canon_addrs, block_addrs, block_starts, pos, seen_mptrs), + ConstantInfo.RPrj(_) => + build_convert_inputs_walk(rest, rest_addrs, all_addrs, pos_map, canon_addrs, block_addrs, block_starts, pos, seen_mptrs), + ConstantInfo.DPrj(_) => + build_convert_inputs_walk(rest, rest_addrs, all_addrs, pos_map, canon_addrs, block_addrs, block_starts, pos, seen_mptrs), + ConstantInfo.Defn(defn) => + let ref_idxs = build_ref_idxs_mapped(refs, all_addrs, pos_map); + let lit_blobs = build_lit_blobs(refs, all_addrs); + let recur_idxs = store(ListNode.Cons(pos, store(ListNode.Nil))); + let ctx = ConvertCtx.Mk(sharing, ref_idxs, recur_idxs, lit_blobs, univs); + let hint = load_constant_hint(head_addr); + let input = ConvertInput.Mk(ctx, ConvertKind.CKDefn(defn, hint)); + store(ListNode.Cons(store(input), + build_convert_inputs_walk(rest, rest_addrs, all_addrs, pos_map, canon_addrs, block_addrs, block_starts, pos + 1, seen_mptrs))), + ConstantInfo.Axio(axio) => + let ref_idxs = build_ref_idxs_mapped(refs, all_addrs, pos_map); + let lit_blobs = build_lit_blobs(refs, all_addrs); + let ctx = ConvertCtx.Mk(sharing, ref_idxs, store(ListNode.Nil), lit_blobs, univs); + let input = ConvertInput.Mk(ctx, ConvertKind.CKAxio(axio)); + store(ListNode.Cons(store(input), + build_convert_inputs_walk(rest, rest_addrs, all_addrs, pos_map, canon_addrs, block_addrs, 
block_starts, pos + 1, seen_mptrs))), + ConstantInfo.Quot(quot) => + let ref_idxs = build_ref_idxs_mapped(refs, all_addrs, pos_map); + let lit_blobs = build_lit_blobs(refs, all_addrs); + let ctx = ConvertCtx.Mk(sharing, ref_idxs, store(ListNode.Nil), lit_blobs, univs); + let input = ConvertInput.Mk(ctx, ConvertKind.CKQuot(quot)); + store(ListNode.Cons(store(input), + build_convert_inputs_walk(rest, rest_addrs, all_addrs, pos_map, canon_addrs, block_addrs, block_starts, pos + 1, seen_mptrs))), + ConstantInfo.Recr(recr) => + let ref_idxs = build_ref_idxs_mapped(refs, all_addrs, pos_map); + let lit_blobs = build_lit_blobs(refs, all_addrs); + let nrules = recr_rule_count(recr); + let block_addr = find_matching_block_addr(refs, all_addrs, nrules); + let block_const = load_verified_constant(block_addr); + match block_const { + Constant.Mk(block_info, _, _, _) => + match block_info { + ConstantInfo.Muts(members) => + let recur_idxs = store(ListNode.Cons(pos, store(ListNode.Nil))); + let bs = lookup_block_start(block_addr, block_addrs, block_starts); + let rule_ctor_idxs = build_rule_ctor_idxs(members, bs, 0); + let ctx = ConvertCtx.Mk(sharing, ref_idxs, recur_idxs, lit_blobs, univs); + let input = ConvertInput.Mk(ctx, ConvertKind.CKRecr(recr, rule_ctor_idxs, block_addr)); + store(ListNode.Cons(store(input), + build_convert_inputs_walk(rest, rest_addrs, all_addrs, pos_map, canon_addrs, block_addrs, block_starts, pos + 1, seen_mptrs))), + }, }, }, }, @@ -729,45 +1266,376 @@ def ingress := ⟦ let (all_addrs, all_consts) = load_with_deps( target_addr, store(ListNode.Nil), store(ListNode.Nil), store(ListNode.Nil)); let (block_addrs, block_starts, _total) = compute_layout(all_consts, all_addrs, 0); - let pos_map = build_pos_map(all_consts, all_addrs, block_addrs, block_starts, 0); - let inputs = build_convert_inputs(all_consts, all_addrs, pos_map, block_addrs, block_starts, 0); + let pos_map_naive = build_pos_map(all_consts, all_addrs, block_addrs, block_starts, 0); + let pos_map = canonicalize_pos_map(all_consts, pos_map_naive); + let canon_addrs = canonicalize_addr_map(all_addrs, all_consts); + let inputs = build_convert_inputs(all_consts, all_addrs, all_addrs, pos_map, canon_addrs, block_addrs, block_starts, 0); convert_all(inputs) } - -- Look up a constant's position by its blake3 address. - -- Returns 0 - 1 (sentinel) if the address is not found. - fn find_addr_pos(target: [G; 32], all_addrs: List‹[G; 32]›, pos_map: List‹G›) -> G { + -- Build a List‹[G; 32]› parallel to k_consts: addrs[i] = blake3 address + -- of the kernel const at position i. Walks (all_addrs, pos_map) and for + -- each kernel position emits the addr that resolves to it. + -- Address-keyed dispatch: primitives compared by address, not by + -- precomputed positional index. + fn build_addrs_aligned(i: G, total: G, + all_addrs: List‹[G; 32]›, + all_consts: List‹&Constant›, + pos_map: List‹G›) -> List‹[G; 32]› { + match i - total { + 0 => store(ListNode.Nil), + _ => + let addr = find_best_addr_at_pos(i, all_addrs, all_consts, pos_map); + store(ListNode.Cons(addr, build_addrs_aligned(i + 1, total, all_addrs, all_consts, pos_map))), + } + } + + -- Returns 1 if `c` is a projection constant (IPrj/CPrj/RPrj/DPrj). + -- Used to prioritize per-member primitive addresses over the parent + -- Muts block's content-hash when both share the same kernel position. 
+ fn is_proj_const(c: &Constant) -> G { + match load(c) { + Constant.Mk(info, _, _, _) => + match info { + ConstantInfo.IPrj(_) => 1, + ConstantInfo.CPrj(_) => 1, + ConstantInfo.RPrj(_) => 1, + ConstantInfo.DPrj(_) => 1, + _ => 0, + }, + } + } + + -- First pass: find a projection-constant entry whose pos_map = `target`. + -- Per-member primitive addrs (e.g. `nat_addr`) live on IPrj entries; + -- the parent Muts block has the BLOCK content-hash, not the member's. + -- So we prefer the IPrj-derived addr at a shared pos. + fn find_prj_addr_at_pos(target: G, all_addrs: List‹[G; 32]›, + all_consts: List‹&Constant›, + pos_map: List‹G›) -> (G, [G; 32]) { + match load(all_addrs) { + ListNode.Nil => (0, [0; 32]), + ListNode.Cons(addr, rest_a) => + match load(all_consts) { + ListNode.Cons(c, rest_c) => + match load(pos_map) { + ListNode.Cons(pos, rest_p) => + let pos_match = eq_zero(pos - target); + let prj = is_proj_const(c); + match pos_match * prj { + 1 => (1, addr), + _ => find_prj_addr_at_pos(target, rest_a, rest_c, rest_p), + }, + }, + }, + } + } + + -- Find the address in all_addrs whose pos_map entry equals `target`. + -- Returns all-zero `[G; 32]` if not found — happens for kernel + -- positions that are only reached via within-block peer refs + -- (Expr.Rec) and never loaded as a standalone ref. Primitive + -- dispatch via `address_eq` against hardcoded non-zero addresses + -- treats zero-addr as "no primitive here", falling through. + fn find_addr_at_pos(target: G, all_addrs: List‹[G; 32]›, pos_map: List‹G›) -> [G; 32] { match load(all_addrs) { - ListNode.Nil => 0 - 1, + ListNode.Nil => [0; 32], ListNode.Cons(addr, rest_addrs) => match load(pos_map) { ListNode.Cons(pos, rest_pos) => - let eq = address_eq(target, addr); - match eq { - 1 => pos, - 0 => find_addr_pos(target, rest_addrs, rest_pos), + match pos - target { + 0 => addr, + _ => find_addr_at_pos(target, rest_addrs, rest_pos), + }, + }, + } + } + + -- Wrapper: prefer Prj-derived addr at shared pos, fall back to any. + fn find_best_addr_at_pos(target: G, all_addrs: List‹[G; 32]›, + all_consts: List‹&Constant›, + pos_map: List‹G›) -> [G; 32] { + match find_prj_addr_at_pos(target, all_addrs, all_consts, pos_map) { + (1, addr) => addr, + (0, _) => find_addr_at_pos(target, all_addrs, pos_map), + } + } + + -- Returns `(k_consts, addrs)` where `addrs[i]` is the blake3 address of + -- the kernel const at position `i`. Primitive detection downstream + -- compares addresses via `address_eq` against hardcoded constants + -- in `Primitive.lean`. + -- Build override list (ctor_pos → ctor_addr) by walking every loaded + -- IPrj. For each, find the inductive's ctor count from the parent + -- block and synthesize each ctor's CPrj content-hash via in-Aiur + -- `put_constant` + `blake3`. No IO buffer side channel needed: every + -- input (idx, block_addr, cidx) is either taken from a `load_verified_*` + -- result or a deterministic loop counter, so the resulting addresses + -- are derived from already-trusted data. 
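+ -- Illustrative shape (hypothetical layout): for a lone two-ctor
+ -- inductive whose Muts block starts at kernel position `b` with
+ -- member offset 0, the resulting overrides are
+ --   [(b + 1, cprj_content_addr(idx, 0, block)),
+ --    (b + 2, cprj_content_addr(idx, 1, block))].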
+ fn build_ctor_overrides(all_consts: List‹&Constant›, all_addrs: List‹[G; 32]›, + block_addrs: List‹[G; 32]›, block_starts: List‹G›) + -> List‹(G, [G; 32])› { + match load(all_consts) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(&c, rest_c) => + match load(all_addrs) { + ListNode.Cons(_, rest_a) => + match c { + Constant.Mk(info, _, _, _) => + match info { + ConstantInfo.IPrj(prj) => + match prj { + InductiveProj.Mk(idx, block_addr) => + let block_start = lookup_block_start(block_addr, block_addrs, block_starts); + let block_const = load_verified_constant(block_addr); + match block_const { + Constant.Mk(bi, _, _, _) => + match bi { + ConstantInfo.Muts(members) => + let mem_off = member_offset(members, flatten_u64(idx)); + let base_pos = block_start + mem_off + 1; + let n_ctors = inductive_ctor_count_at(members, flatten_u64(idx)); + let new_pairs = build_ctor_pairs_computed(idx, block_addr, base_pos, n_ctors, 0); + list_concat(new_pairs, + build_ctor_overrides(rest_c, rest_a, block_addrs, block_starts)), + _ => + build_ctor_overrides(rest_c, rest_a, block_addrs, block_starts), + }, + }, + }, + _ => + build_ctor_overrides(rest_c, rest_a, block_addrs, block_starts), + }, + }, + }, + } + } + + -- Number of constructors of the Inductive at `target_idx` within the + -- given Muts members. Returns 0 if the indexed member isn't an Inductive. + fn inductive_ctor_count_at(members: List‹MutConst›, target_idx: G) -> G { + inductive_ctor_count_walk(members, target_idx, 0) + } + + fn inductive_ctor_count_walk(members: List‹MutConst›, target_idx: G, i: G) -> G { + match load(members) { + ListNode.Nil => 0, + ListNode.Cons(mc, rest) => + match i - target_idx { + 0 => + match mc { + MutConst.Indc(ind) => + match ind { + Inductive.Mk(_, _, _, _, _, _, _, _, ctors) => + list_length(ctors), + }, + _ => 0, }, + _ => inductive_ctor_count_walk(rest, target_idx, i + 1), }, } } - -- Transitively loads all dependencies, converts to kernel types, and - -- resolves primitive type indices (Nat, String) by hardcoded blake3 address. - -- Returns (constants, nat_idx, str_idx). - fn ingress_with_primitives(target_addr: [G; 32]) -> (List‹&KConstantInfo›, G, G) { + fn build_ctor_pairs_computed(idx: [G; 8], block: [G; 32], + base_pos: G, n_ctors: G, cidx: G) + -> List‹(G, [G; 32])› { + match n_ctors - cidx { + 0 => store(ListNode.Nil), + _ => + let addr = cprj_content_addr(idx, cidx, block); + store(ListNode.Cons((base_pos + cidx, addr), + build_ctor_pairs_computed(idx, block, base_pos, n_ctors, cidx + 1))), + } + } + + -- Compute the CPrj's blake3 content-hash from `(idx, cidx, block)` by + -- constructing the same `Constant{ info := CPrj{...}, ... }` shape Lean + -- compile uses, serializing it in-Aiur, and hashing. No external trust + -- needed — every input is derived from a `load_verified_*` result or a + -- loop counter. 
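+ -- (The `blake3` result comes back as 8 groups of 4 G limbs; the
+ -- array literal below flattens it into the `[G; 32]` address form
+ -- that `address_eq` compares.)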
+ fn cprj_content_addr(idx: [G; 8], cidx: G, block: [G; 32]) -> [G; 32] {
+   let prj = ConstructorProj.Mk(idx, [cidx, 0, 0, 0, 0, 0, 0, 0], block);
+   let info = ConstantInfo.CPrj(prj);
+   let cnst = Constant.Mk(info, store(ListNode.Nil),
+     store(ListNode.Nil),
+     store(ListNode.Nil));
+   let bytes = put_constant(cnst, store(ListNode.Nil));
+   let h = blake3(bytes);
+   [h[0][0], h[0][1], h[0][2], h[0][3],
+    h[1][0], h[1][1], h[1][2], h[1][3],
+    h[2][0], h[2][1], h[2][2], h[2][3],
+    h[3][0], h[3][1], h[3][2], h[3][3],
+    h[4][0], h[4][1], h[4][2], h[4][3],
+    h[5][0], h[5][1], h[5][2], h[5][3],
+    h[6][0], h[6][1], h[6][2], h[6][3],
+    h[7][0], h[7][1], h[7][2], h[7][3]]
+ }
+
+
+ -- Walk addrs at increasing positions; if an override exists for the
+ -- current position, replace the entry. Lets us inject ctor addresses
+ -- into the per-position address list without restructuring the rest.
+ fn apply_ctor_overrides(addrs: List‹[G; 32]›,
+   overrides: List‹(G, [G; 32])›, pos: G)
+   -> List‹[G; 32]› {
+   match load(addrs) {
+     ListNode.Nil => store(ListNode.Nil),
+     ListNode.Cons(addr, rest) =>
+       let new_addr = lookup_override(overrides, pos, addr);
+       store(ListNode.Cons(new_addr,
+         apply_ctor_overrides(rest, overrides, pos + 1))),
+   }
+ }
+
+ fn lookup_override(overrides: List‹(G, [G; 32])›, pos: G,
+   default: [G; 32]) -> [G; 32] {
+   match load(overrides) {
+     ListNode.Nil => default,
+     ListNode.Cons(p, rest) =>
+       match p {
+         (op, oaddr) =>
+           match op - pos {
+             0 => oaddr,
+             _ => lookup_override(rest, pos, default),
+           },
+       },
+   }
+ }
+
+ fn ingress_with_primitives(target_addr: [G; 32]) -> (List‹&KConstantInfo›, List‹[G; 32]›) {
   let (all_addrs, all_consts) = load_with_deps(
     target_addr, store(ListNode.Nil), store(ListNode.Nil), store(ListNode.Nil));
- let (block_addrs, block_starts, _total) = compute_layout(all_consts, all_addrs, 0);
- let pos_map = build_pos_map(all_consts, all_addrs, block_addrs, block_starts, 0);
- let inputs = build_convert_inputs(all_consts, all_addrs, pos_map, block_addrs, block_starts, 0);
+ let (block_addrs, block_starts, total) = compute_layout(all_consts, all_addrs, 0);
+ let pos_map_naive = build_pos_map(all_consts, all_addrs, block_addrs, block_starts, 0);
+ -- Canonicalize duplicate Muts wrappers (same members-Ptr) so refs
+ -- converge AND emitted KConstantInfos share content via store dedup.
+ let pos_map = canonicalize_pos_map(all_consts, pos_map_naive);
+ let canon_addrs = canonicalize_addr_map(all_addrs, all_consts);
+ let inputs = build_convert_inputs(all_consts, all_addrs, all_addrs, pos_map, canon_addrs, block_addrs, block_starts, 0);
   let k_consts = convert_all(inputs);
- let nat_idx = find_addr_pos(
- [252, 14, 30, 145, 47, 45, 127, 18, 4, 154, 91, 49, 93, 118, 238, 194, 149, 98, 227, 77, 195, 158, 188, 162, 82, 135, 174, 88, 128, 125, 177, 55],
- all_addrs, pos_map);
- let str_idx = find_addr_pos(
- [30, 88, 121, 25, 226, 100, 26, 91, 42, 106, 111, 127, 245, 153, 122, 104, 84, 186, 86, 190, 136, 220, 9, 88, 195, 67, 193, 38, 238, 117, 253, 155],
- all_addrs, pos_map);
- (k_consts, nat_idx, str_idx)
+ let addrs = build_addrs_aligned(0, total, all_addrs, all_consts, pos_map);
+ -- Patch ctor positions: parent Inductives don't surface their ctors'
+ -- CPrj addresses via Lean's compile (non-aux ctors aren't stored in
+ -- env.consts). We compute them in-Aiur via `cprj_content_addr` (see
+ -- `build_ctor_overrides` above) and inject them into addrs at the
+ -- right positions; no IO-buffer side channel is consulted.
+ let overrides = build_ctor_overrides(all_consts, all_addrs, block_addrs, block_starts); + let addrs = apply_ctor_overrides(addrs, overrides, 0); + -- Append synthetic primitive entries with their hardcoded addresses. + -- The Aiur kernel's index-based `KExprNode.Const` references need a + -- top position for every primitive that internal expansions + -- (e.g. `str_lit_to_ctor`) construct. When the target's transitive + -- closure doesn't load a primitive, the synthetic stub at the end + -- provides a discoverable position. Each stub is an + -- `Axiom Sort 0` that type-checks trivially. Real loaded primitives + -- still appear earlier in `addrs` so `find_addr_idx_safe` returns + -- their true position; the stub is only consulted when the real + -- one is absent. + let (prim_consts, prim_addrs_list) = synthetic_primitive_entries(); + let k_consts = list_concat(k_consts, prim_consts); + let addrs = list_concat(addrs, prim_addrs_list); + (k_consts, addrs) + } + + -- Synthetic primitive entries: every hardcoded `*_addr()` from + -- `Ix.IxVM.Kernel.Primitive`, paired with a stub `Axiom Sort 0`. + -- Order doesn't matter since lookup is by address. Mirrors the full + -- `Primitives` set in `src/ix/kernel/primitive.rs`. When the + -- target's transitive closure already loads a real primitive, that + -- entry appears earlier in `addrs` and `find_addr_idx_safe` returns + -- its true position; the stub is only consulted otherwise. + fn synthetic_primitive_entries() -> (List‹&KConstantInfo›, List‹[G; 32]›) { + let addrs = synthetic_primitive_addrs(); + let stub_ty = store(KExprNode.Srt(store(KLevel.Zero))); + let stub = store(KConstantInfo.Axiom(0, stub_ty, 0)); + let consts = list_repeat_stub(stub, list_addr_length(addrs)); + (consts, addrs) + } + + fn synthetic_primitive_addrs() -> List‹[G; 32]› { + store(ListNode.Cons(quot_type_addr(), + store(ListNode.Cons(quot_ctor_addr(), + store(ListNode.Cons(quot_lift_addr(), + store(ListNode.Cons(quot_ind_addr(), + store(ListNode.Cons(bit_vec_addr(), + store(ListNode.Cons(bit_vec_to_nat_addr(), + store(ListNode.Cons(bit_vec_of_nat_addr(), + store(ListNode.Cons(bit_vec_ult_addr(), + store(ListNode.Cons(decidable_decide_addr(), + store(ListNode.Cons(lt_lt_addr(), + store(ListNode.Cons(bool_type_addr(), + store(ListNode.Cons(eq_addr(), + store(ListNode.Cons(eq_refl_addr(), + store(ListNode.Cons(nat_dec_le_addr(), + store(ListNode.Cons(nat_dec_eq_addr(), + store(ListNode.Cons(nat_dec_lt_addr(), + store(ListNode.Cons(int_dec_eq_addr(), + store(ListNode.Cons(int_dec_le_addr(), + store(ListNode.Cons(int_dec_lt_addr(), + store(ListNode.Cons(int_of_nat_addr(), + store(ListNode.Cons(int_neg_succ_addr(), + store(ListNode.Cons(fin_addr(), + store(ListNode.Cons(decidable_rec_addr(), + store(ListNode.Cons(decidable_is_true_addr(), + store(ListNode.Cons(decidable_is_false_addr(), + store(ListNode.Cons(nat_le_of_ble_eq_true_addr(), + store(ListNode.Cons(nat_not_le_of_not_ble_eq_true_addr(), + store(ListNode.Cons(nat_eq_of_beq_eq_true_addr(), + store(ListNode.Cons(nat_ne_of_beq_eq_false_addr(), + store(ListNode.Cons(reduce_bool_addr(), + store(ListNode.Cons(reduce_nat_addr(), + store(ListNode.Cons(system_platform_num_bits_addr(), + store(ListNode.Cons(system_platform_get_num_bits_addr(), + store(ListNode.Cons(subtype_val_addr(), + store(ListNode.Cons(punit_size_of_1_addr(), + store(ListNode.Cons(size_of_size_of_addr(), + store(ListNode.Cons(punit_addr(), + store(ListNode.Cons(unit_addr(), + store(ListNode.Cons(nat_addr(), + store(ListNode.Cons(nat_zero_addr(), + 
store(ListNode.Cons(nat_succ_addr(), + store(ListNode.Cons(nat_pred_addr(), + store(ListNode.Cons(nat_add_addr(), + store(ListNode.Cons(nat_sub_addr(), + store(ListNode.Cons(nat_mul_addr(), + store(ListNode.Cons(nat_pow_addr(), + store(ListNode.Cons(nat_gcd_addr(), + store(ListNode.Cons(nat_mod_addr(), + store(ListNode.Cons(nat_div_addr(), + store(ListNode.Cons(nat_land_addr(), + store(ListNode.Cons(nat_lor_addr(), + store(ListNode.Cons(nat_xor_addr(), + store(ListNode.Cons(nat_shift_left_addr(), + store(ListNode.Cons(nat_shift_right_addr(), + store(ListNode.Cons(nat_beq_addr(), + store(ListNode.Cons(nat_ble_addr(), + store(ListNode.Cons(str_addr(), + store(ListNode.Cons(string_utf8_byte_size_addr(), + store(ListNode.Cons(string_back_addr(), + store(ListNode.Cons(string_legacy_back_addr(), + store(ListNode.Cons(string_to_byte_array_addr(), + store(ListNode.Cons(byte_array_empty_addr(), + store(ListNode.Cons(char_of_nat_addr(), + store(ListNode.Cons(char_type_addr(), + store(ListNode.Cons(string_of_list_addr(), + store(ListNode.Cons(list_nil_addr(), + store(ListNode.Cons(list_cons_addr(), + store(ListNode.Cons(bool_true_addr(), + store(ListNode.Cons(bool_false_addr(), + store(ListNode.Nil))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))) + } + + fn list_addr_length(xs: List‹[G; 32]›) -> G { + match load(xs) { + ListNode.Nil => 0, + ListNode.Cons(_, rest) => list_addr_length(rest) + 1, + } + } + + fn list_repeat_stub(stub: &KConstantInfo, n: G) -> List‹&KConstantInfo› { + match n { + 0 => store(ListNode.Nil), + _ => store(ListNode.Cons(stub, list_repeat_stub(stub, n - 1))), + } } ⟧ diff --git a/Ix/IxVM/KERNEL.md b/Ix/IxVM/KERNEL.md deleted file mode 100644 index 814abd4c..00000000 --- a/Ix/IxVM/KERNEL.md +++ /dev/null @@ -1,1149 +0,0 @@ -# How a Lean 4 Kernel Typechecker Works - -This document explains, from first principles, how to typecheck Lean 4 expressions. -It focuses on the **environment-based** (NbE / Krivine machine) approach used by our -three kernel implementations, contrasting it with the **substitution-based** approach -used by `lean4lean` and `nanoda_lib`. - -### The Three Implementations - -| Implementation | Language | File(s) | Notes | -|---|---|---|---| -| **Lean reference** | Lean 4 | `Ix/Kernel/*.lean` | Full-featured, ST-based thunks, pointer caching | -| **Rust reference** | Rust | `src/ix/kernel/*.rs` | Full-featured, RefCell thunks, extensive caching | -| **Aiur circuit** | Aiur (zkDSL) | `Ix/IxVM/Kernel.lean` | Minimal, no mutation, function-call caching | - -The Lean and Rust implementations are feature-complete reference kernels. The Aiur -implementation is a work-in-progress circuit targeting zero-knowledge proof generation, -with some constraints (no mutation and no dynamic indexing) but with aggressive -function-call caching that provides call-by-need semantics for free. - ---- - -## Table of Contents - -1. [The Big Picture](#1-the-big-picture) -2. [Expressions](#2-expressions) -3. [Universe Levels](#3-universe-levels) -4. [The Typing Judgments](#4-the-typing-judgments) -5. [Type Inference](#5-type-inference) -6. [Evaluation: From Expressions to Values](#6-evaluation-from-expressions-to-values) -7. [Weak Head Normal Form (WHNF)](#7-weak-head-normal-form-whnf) -8. [Definitional Equality](#8-definitional-equality) -9. [Declarations and the Environment](#9-declarations-and-the-environment) -10. [Inductive Types and Recursors](#10-inductive-types-and-recursors) -11. 
[Special Reductions](#11-special-reductions) -12. [Substitution-Based vs Environment-Based](#12-substitution-based-vs-environment-based) -13. [Caching and Performance](#13-caching-and-performance) -14. [Implementation Comparison](#14-implementation-comparison) -15. [Aiur TODOs and Disparities](#15-aiur-todos-and-disparities) - ---- - -## 1. The Big Picture - -A **kernel** (or **type checker**) is the trusted core of a proof assistant. Its job is -to verify that every definition and theorem in a Lean file is well-typed. If the kernel -accepts it, you can trust it — no matter how complicated the tactics or elaboration that -produced it. - -The kernel does NOT: -- Parse syntax or run tactics -- Do elaboration, unification, or typeclass resolution -- Handle user-facing error messages - -The kernel DOES: -- **Infer types** of expressions -- **Check definitional equality** of two expressions (are they "the same" by computation?) -- **Validate declarations** (definitions, theorems, inductives) one at a time - -The core loop is: for each new declaration, check that its type is well-formed and (for -definitions/theorems) that its value has the claimed type. "Has the claimed type" means -the inferred type and the declared type are **definitionally equal**. - -### The Three Core Operations - -Everything reduces to three mutually recursive operations: - -``` -infer(e) : Expr → Type -- what is the type of e? -whnf(e) : Expr → Expr -- reduce e to weak head normal form -isDefEq(a, b) : Expr × Expr → Bool -- are a and b definitionally equal? -``` - -These call each other. `infer` calls `isDefEq` to check that an argument's type matches -a function's domain. `isDefEq` calls `whnf` to peel away computation. `whnf` may call -`infer` to determine if something is a proposition (for proof irrelevance). - -**Where to find these:** - -| Operation | Lean | Rust | Aiur | -|-----------|------|------|------| -| `eval` | `Infer.lean` `eval` | `eval.rs` `eval` | `Kernel.lean` `k_eval` | -| `whnf` | `Infer.lean` `whnfVal` | `whnf.rs` `whnf_val` | `Kernel.lean` `k_whnf` | -| `isDefEq` | `Infer.lean` `isDefEq` | `def_eq.rs` `is_def_eq` | `Kernel.lean` `k_is_def_eq` | -| `infer` | `Infer.lean` `infer` | `infer.rs` `infer` | `Kernel.lean` `k_infer` | -| `check` | `Infer.lean` `check` | `infer.rs` `check` | `Kernel.lean` `k_check` | - ---- - -## 2. Expressions - -Lean's expression language is a dependently-typed lambda calculus. Here are the -constructors: - -| Constructor | Notation | Meaning | -|-------------|----------|---------| -| `bvar i` | `#i` | Bound variable (de Bruijn index `i`) | -| `sort u` | `Sort u` | A universe / type of types | -| `const n [u₁..uₖ]` | `@List.map.{u,v}` | Named constant with universe arguments | -| `app f a` | `f a` | Function application | -| `lam x : A => b` | `fun x : A => b` | Lambda abstraction | -| `forallE x : A => B` | `(x : A) → B` | Dependent function type (Pi type) | -| `letE x : A := v; b` | `let x : A := v; b` | Let binding | -| `lit l` | `42`, `"hi"` | Nat or String literal | -| `proj T i s` | `s.i` | Structure field projection | - -In the Aiur kernel, expressions are the `KExpr` enum (defined in `KernelTypes.lean`). -Binder info and names are stripped since the kernel operates on anonymous -(content-addressed) constants. - -### De Bruijn Indices - -Bound variables are nameless — instead of `fun x => fun y => x`, we write -`fun _ => fun _ => #1`. The index counts how many binders you cross to reach the -binding site: `#0` is the innermost binder, `#1` the next one out, etc. 
- -``` -fun (A : Type) => fun (x : A) => x -= lam (Sort 1) (lam #0 #0) - ^^^^^^^^ ^^ ^^ - binds #1 | refers to x (0 binders crossed) - refers to A (1 binder crossed) -``` - ---- - -## 3. Universe Levels - -Every type lives in a universe. `Nat : Type 0`, `Type 0 : Type 1`, `Type 1 : Type 2`, -and so on. Lean represents these symbolically: - -| Level | Meaning | -|-------|---------| -| `zero` | 0 (also called `Prop` when used as `Sort 0`) | -| `succ u` | u + 1 | -| `max u v` | max(u, v) | -| `imax u v` | if v = 0 then 0 else max(u, v) | -| `param i` | Universe parameter (polymorphism) | - -### Why `imax`? - -`imax` is the key to **impredicativity** of `Prop`. Consider the type of a function -`(x : A) → B`. Its universe is `imax uA uB` where `A : Sort uA` and `B : Sort uB`. - -If `B : Prop` (i.e., `uB = 0`), then `imax uA 0 = 0`, so the function type is also -in `Prop` — regardless of how large `A` is. This is what makes propositions -"impredicative": you can quantify over arbitrarily large types and still land in `Prop`. - -If `B` is NOT in `Prop`, then `imax uA uB = max uA uB`, the standard predicative rule. - -### Level Comparison - -Two levels are equal if they evaluate to the same natural number under every assignment -of parameters. The Aiur kernel uses `level_equal(a, b) = level_leq(a, b) ∧ level_leq(b, a)`, -where `level_leq` is sound and complete. - -#### Semantics - -A *level assignment* σ maps parameter indices to natural numbers. Every level `l` -evaluates to a natural number ⟦l⟧σ: - -- ⟦Zero⟧σ = 0 -- ⟦Param(i)⟧σ = σ(i) -- ⟦Succ(l)⟧σ = 1 + ⟦l⟧σ -- ⟦Max(a, b)⟧σ = max(⟦a⟧σ, ⟦b⟧σ) -- ⟦IMax(a, b)⟧σ = if ⟦b⟧σ = 0 then 0 else max(⟦a⟧σ, ⟦b⟧σ) - -We write `a ≤ b` to mean ⟦a⟧σ ≤ ⟦b⟧σ for all σ, and `a = b` to mean `a ≤ b ∧ b ≤ a`. -Let σ₀ denote the assignment mapping every parameter to 0. - -#### Reduced levels - -The functions `level_max` and `level_imax` produce levels in *reduced form*. -The key invariant: - -> **(R)** If `IMax(a, b)` appears in a reduced level, then `level_is_not_zero(b) = 0`. - -This holds because `level_imax(a, b)` returns `Zero` when `b = Zero`, returns -`level_max(a, b)` when `b = Succ(_)` or `level_is_not_zero(b) = 1`, and only -produces an `IMax` node otherwise. - -All levels entering `level_leq` are reduced: the initial levels come from -`level_inst_params` / `level_reduce` (which build via `level_max`/`level_imax`), -and the case-split substitutions use `level_subst_reduce` (which also builds via -`level_max`/`level_imax`). - -#### Monotonicity - -**Lemma (Monotonicity).** All level expressions are monotone in their parameters: -if σ₁(i) ≤ σ₂(i) for all i, then ⟦l⟧σ₁ ≤ ⟦l⟧σ₂. - -*Proof.* By induction on `l`. The Zero, Param, Succ, and Max cases are immediate. -For IMax(a, b): if ⟦b⟧σ₁ = 0, then ⟦IMax(a,b)⟧σ₁ = 0 ≤ ⟦IMax(a,b)⟧σ₂. -If ⟦b⟧σ₁ > 0, then by IH ⟦b⟧σ₂ ≥ ⟦b⟧σ₁ > 0, so -⟦IMax(a,b)⟧σ₁ = max(⟦a⟧σ₁, ⟦b⟧σ₁) ≤ max(⟦a⟧σ₂, ⟦b⟧σ₂) = ⟦IMax(a,b)⟧σ₂. ∎ - -#### Zero witness - -**Lemma (Zero Witness).** `level_is_not_zero(l) = 0` if and only if ⟦l⟧₀ = 0 -(where σ₀(i) = 0 for all i). Equivalently, `level_is_not_zero(l) = 1` if and only -if ⟦l⟧σ ≥ 1 for all σ. - -*Proof.* (⇒) By induction: Zero and Param evaluate to 0 under σ₀. Succ returns 1, -so this case is excluded. Max(a,b) with both children returning 0 gives -max(⟦a⟧₀, ⟦b⟧₀) = max(0, 0) = 0. IMax(a,b) with `level_is_not_zero(b) = 0` -gives ⟦b⟧₀ = 0 by IH, so ⟦IMax(a,b)⟧₀ = 0. - -(⇐) If `level_is_not_zero(l) = 1`, by induction: Succ(x) ≥ 1 always. 
-Max(a,b) with at least one child having `level_is_not_zero = 1`: that child is ≥ 1 -for all σ (IH), so max ≥ 1. IMax(a,b) with `level_is_not_zero(b) = 1`: -⟦b⟧σ ≥ 1 for all σ (IH), so IMax = max(a,b) ≥ b ≥ 1. ∎ - -**Corollary.** For a reduced `IMax(a, b)` (invariant R), ⟦IMax(a, b)⟧₀ = 0. - -#### Case-split soundness - -The case-split technique substitutes a parameter `p` with `Zero` and `Succ(Param(p))`. -This is sound and complete for universal quantification over σ(p): - -- Every σ has σ(p) = 0 or σ(p) ≥ 1. -- When σ(p) = 0: captured by the `p → Zero` substitution. -- When σ(p) ≥ 1: write σ(p) = 1 + σ'(p). Then ⟦l[p ↦ Succ(Param(p))]⟧σ' = ⟦l⟧σ, - so the `p → Succ(Param(p))` substitution captures all σ(p) ≥ 1. - -Hence `∀σ. ⟦a⟧σ ≤ ⟦b⟧σ` iff both `∀σ. ⟦a[p↦0]⟧σ ≤ ⟦b[p↦0]⟧σ` and -`∀σ. ⟦a[p↦S(p)]⟧σ ≤ ⟦b[p↦S(p)]⟧σ`. - -#### Soundness and completeness of `level_leq` - -**Theorem.** For reduced levels `a` and `b`, `level_leq(a, b) = 1` if and only if -`a ≤ b` (i.e., ⟦a⟧σ ≤ ⟦b⟧σ for all σ). - -*Proof.* By case analysis on `level_leq`. In each case we show the return value is 1 -iff the inequality holds universally. - -**Case `a = Zero`:** Returns 1. Correct: 0 ≤ ⟦b⟧σ for all σ. - -**Case `a = Max(a1, a2)`:** Returns `level_leq(a1, b) * level_leq(a2, b)`. -max(⟦a1⟧σ, ⟦a2⟧σ) ≤ ⟦b⟧σ iff ⟦a1⟧σ ≤ ⟦b⟧σ and ⟦a2⟧σ ≤ ⟦b⟧σ. ∎ - -**Case `a = Succ(Max(x, y))`:** Distributes to `level_leq(Succ(x), b) * level_leq(Succ(y), b)`. -Correct: 1 + max(⟦x⟧σ, ⟦y⟧σ) = max(1 + ⟦x⟧σ, 1 + ⟦y⟧σ), reducing to the Max case. ∎ - -**Case `a = Succ(a1)`, `a1` not Max, `b = Succ(b1)`:** Returns `level_leq(a1, b1)`. -1 + ⟦a1⟧σ ≤ 1 + ⟦b1⟧σ iff ⟦a1⟧σ ≤ ⟦b1⟧σ. ∎ - -**Case `a = Succ(a1)`, `a1` not Max, `b = Zero` or `Param(j)` or `IMax(_, _)`:** -Returns 0. Correct by Zero Witness: ⟦b⟧₀ = 0 (b = Zero is immediate; Param(j) gives -σ₀(j) = 0; IMax is 0 under σ₀ by the Corollary), but ⟦Succ(a1)⟧₀ ≥ 1. ∎ - -**Case `a = Succ(a1)`, `a1` not Max, `b = Max(b1, b2)`:** First tries -`level_leq(a, b1)` and `level_leq(a, b2)`. If either returns 1, the result is sound -(a ≤ bi implies a ≤ max(b1, b2)). If both return 0 and `b` has no params, returns 0 -(b is a concrete number; the try-each-branch is complete for concrete values). If both -return 0 and `b` has params, case-splits on a param `p` from `b`. This is sound and -complete by Case-Split Soundness above — after substitution, `level_subst_reduce` -re-reduces the result, resolving any IMax whose conditioning variable was `p`. -The recursion terminates because each case-split strictly reduces the number of free -parameters. After all IMax nodes are resolved (finitely many case-splits), the levels -are tropical polynomials (max-plus over Succ chains and Params), for which the -try-each-branch heuristic IS complete: - -> *Tropical completeness:* For tropical polynomials (no IMax), `Succ(a1) ≤ Max(b1, b2)` -> for all σ implies `Succ(a1) ≤ b1` for all σ or `Succ(a1) ≤ b2` for all σ. -> -> *Proof sketch:* Since ⟦Succ(a1)⟧₀ ≥ 1, we have max(⟦b1⟧₀, ⟦b2⟧₀) ≥ 1, so -> WLOG ⟦b1⟧₀ ≥ 1, hence `level_is_not_zero(b1) = 1` (Zero Witness). Since b1 is a -> tropical polynomial with `level_is_not_zero = 1`, it is Succ or Max (not IMax, not -> Param, not Zero). Being a tropical polynomial, b1 is of the form max(σ(pᵢ) + cᵢ) -> with all cᵢ ≥ 0. For any Param(q) appearing in a with Succ-offset k: ⟦a⟧σ grows as -> σ(q) + k, and for a ≤ Max(b1, b2) to hold universally as σ(q) → ∞, some branch must -> contain a term σ(q) + c with c ≥ k. 
Since b1 is tropical and always ≥ 1 (no IMax -> zeroing), all its terms are unconditional — any term σ(q) + c in b1 contributes -> b1 ≥ σ(q) + c for ALL σ. So if b1 contains the dominating terms for a, then a ≤ b1. ∎ - -**Case `a = Param(i)`, `b = Param(j)`:** Returns `eq_zero(i - j)`, i.e., 1 iff i = j. -Correct: σ(i) ≤ σ(j) for all σ iff i = j (otherwise set σ(i) > σ(j)). ∎ - -**Case `a = Param(i)`, `b = Succ(b1)`:** Returns `level_leq(Param(i), b1)`, -i.e., reduces `σ(i) ≤ 1 + ⟦b1⟧σ` to `σ(i) ≤ ⟦b1⟧σ`. - -*Soundness:* If σ(i) ≤ ⟦b1⟧σ then σ(i) ≤ 1 + ⟦b1⟧σ. ∎ - -*Completeness:* Suppose σ(i) ≤ 1 + ⟦b1⟧σ for all σ. Fix all parameters except i and -define f(n) = ⟦b1⟧(σ[i ↦ n]). By monotonicity, f is non-decreasing. The premise gives -n ≤ 1 + f(n) for all n ≥ 0. We need n ≤ f(n) for all n. - -For n > 0: every IMax in b1 whose second argument depends on Param(i) is resolved -(since n > 0 makes any monotone expression involving Param(i) positive, and -`level_subst_reduce` normalizes away such IMax nodes). So for n > 0, f(n) includes -an unconditional term n + c for some c ≥ 0 (from each Param(i) path in b1). -If such a term exists, f(n) ≥ n + c ≥ n. If no such term exists, f is eventually -constant, and n ≤ 1 + C fails for large n — contradicting the premise. - -For n = 0: f(0) ≥ 0 trivially. ∎ - -**Case `a = Param(i)`, `b = Max(b1, b2)`:** Tries each branch. Sound (a ≤ bi -implies a ≤ max(b1, b2)). Complete: at σ₀, σ₀(i) = 0 ≤ max(⟦b1⟧₀, ⟦b2⟧₀), -which holds trivially — so the argument is subtler. Since σ(i) is unbounded, at least -one bk must contain Param(i) + c (c ≥ 0) unconditionally (not zeroed by IMax). That -branch satisfies bk ≥ σ(i), so Param(i) ≤ bk. If the only Param(i) terms are inside -IMax nodes, then at σ₀ (where IMax zeroes them), b = max(const₁, const₂), and -σ(i) ≤ max(const₁, const₂) fails for large σ(i) — contradicting the premise. ∎ - -**Case `a = Param(i)`, `b = IMax(b1, b2)`:** Case-splits on a param from b2. -Sound and complete by Case-Split Soundness. ∎ - -**Case `a = Param(i)`, `b = Zero`:** Returns 0. Correct: σ(i) ≤ 0 fails for σ(i) = 1. ∎ - -**Case `a = IMax(a1, a2)`, `level_is_not_zero(a2) = 1`:** Treats as Max(a1, a2): -returns `level_leq(a1, b) * level_leq(a2, b)`. Correct: a2 ≥ 1 for all σ (Zero Witness), -so IMax(a1, a2) = max(a1, a2). ∎ - -**Case `a = IMax(a1, a2)`, `level_is_not_zero(a2) = 0`:** Case-splits on a param -from a2. Sound and complete by Case-Split Soundness. Note: `level_is_not_zero(a2) = 0` -and `a` is reduced, so a2 has at least one param (otherwise a2 is Zero and -`level_imax` would have reduced the IMax away). ∎ - -#### Termination - -Each case either reduces the structural size of the levels (Succ peeling, Max -distribution) or reduces the number of free parameters (case-split substitution). -Since both measures are bounded, the recursion terminates. - -### Level Instantiation - -When a polymorphic constant `C.{u, v}` is used at specific levels `[l₁, l₂]`, all -`Param(i)` in the type are replaced with the corresponding level. This happens in all -three implementations via an `inst_levels` / `expr_inst_levels` function that walks the -expression tree. The Aiur kernel simultaneously reduces the substituted levels to normal -form (`level_max`, `level_imax`). - ---- - -## 4. The Typing Judgments - -The kernel implements two core judgments: - -**Type inference**: Γ ⊢ e : T -> "In context Γ, expression e has type T." - -**Definitional equality**: Γ ⊢ a ≡ b -> "In context Γ, expressions a and b are computationally equal." 
- -These are NOT the same as propositional equality (`a = b` as a Lean `Prop`). -Definitional equality is a judgment the kernel makes silently — it's the equality -that lets you write `2 + 2` where `4` is expected without an explicit proof. - -### What Generates Definitional Equality? - -| Rule | Example | -|------|---------| -| **β-reduction** | `(fun x => x + 1) 3 ≡ 3 + 1` | -| **δ-reduction** | `Nat.succ 0 ≡ 1` (unfolding a definition) | -| **ζ-reduction** | `let x := 5; x + 1 ≡ 5 + 1` | -| **ι-reduction** | Pattern matching on a constructor | -| **η-expansion** | `f ≡ fun x => f x` (for functions) | -| **Proof irrelevance** | `p ≡ q` whenever `p q : P` and `P : Prop` | -| **Structure η** | `s ≡ ⟨s.1, s.2, ...⟩` for structures | -| **Nat/String literals** | `2 ≡ Nat.succ (Nat.succ Nat.zero)` | - ---- - -## 5. Type Inference - -Given an expression, compute its type. Each constructor has a straightforward rule: - -### Sort -``` -infer(Sort u) = Sort (succ u) -``` -The type of `Type u` is `Type (u+1)`. - -### Bound Variable -``` -infer(#i) = Γ[i] -- look up the i-th binding in context -``` -In the Lean/Rust kernels, the context is an array indexed by de Bruijn level -(`depth - 1 - i`). In Aiur, the `types` list is indexed directly by de Bruijn index -(front = most recent binder). - -### Constant -``` -infer(const c [u₁..uₖ]) = instantiate(env[c].type, [u₁..uₖ]) -``` -Look up the constant in the global environment, substitute its universe parameters. -The Lean/Rust kernels also validate universe arity and safety (unsafe/partial) here. -The Aiur kernel asserts level count matches but doesn't check safety. - -### Application -``` -infer(f a) where: - infer(f) = (x : A) → B -- must be a Pi type (after WHNF) - infer(a) = A' - isDefEq(A, A') -- argument type must match domain - result = B[x := a] -- substitute a into the codomain -``` -This is the critical step where `isDefEq` gets called during inference. In the -environment-based approach, "B[x := a]" is implemented as `eval(body, env ++ [a_val])` -— an O(1) environment push rather than a tree walk. - -### Lambda -``` -infer(fun (x : A) => b) where: - infer(A) must be a Sort -- domain must be a type - extend context with (x : A) - infer(b) = B -- infer body type in extended context - result = (x : A) → B -- Pi type -``` -The Lean/Rust kernels extend the context via `with_binder(dom, |tc| tc.infer(body))`. -The Aiur kernel passes extended `types` and `env` lists explicitly, introducing an -`FVar(depth)` for the new variable. - -### Pi / ForallE -``` -infer((x : A) → B) where: - infer(A) = Sort u -- domain must be a type - extend context with (x : A) - infer(B) = Sort v -- codomain must be a type - result = Sort (imax u v) -- note: imax, not max -``` - -### Let -``` -infer(let x : A := v; b) where: - infer(v) = A' - isDefEq(A, A') -- check value has declared type - result = infer(b[x := v]) -- substitute and infer body -``` -In the Lean/Rust kernels, let bindings use `with_let_binder` which stores the value so -it can be used during delta-like reduction. In the Aiur kernel, the value is eagerly -pushed into the environment. - -### Projection -``` -infer(proj T i s) where: - infer(s) = T args... 
-- s must have the structure type - result = the type of the i-th field, with params instantiated -``` -All three implementations share the same strategy: look up the inductive's single -constructor, instantiate its type with the universe levels, walk past the parameters -(substituting from the inductive's spine), then walk past preceding fields -(substituting `Proj(T, j, s)` for field `j < i`), and extract the domain of the -resulting Pi type. - -### Literal -``` -infer(42) = Nat -infer("hi") = String -``` - -### Bidirectional Checking - -All three implementations use **bidirectional type checking** (`check`): when checking -a lambda against an expected Pi type, the expected codomain is pushed through the -lambda body, avoiding an expensive `infer` + `isDefEq`. This is implemented as -`k_check` in Aiur, `check` in Lean/Rust. - ---- - -## 6. Evaluation: From Expressions to Values - -This is where the environment-based and substitution-based approaches **diverge -fundamentally**. - -### The Substitution-Based Approach (lean4lean, nanoda_lib) - -In a substitution-based kernel, beta reduction physically rewrites the expression: - -``` -(fun x => body) arg → body[x := arg] -``` - -The `body[x := arg]` operation walks the entire body AST, replacing every occurrence of -`x` (i.e., `#0`) with `arg`, and adjusting de Bruijn indices as it goes. This is -O(|body|) per beta step. - -### The Environment-Based Approach (Our Kernels) - -Instead of rewriting expressions, we **evaluate** them into a **semantic domain** of -**values** (`Val`). The key idea is **closures**: a lambda doesn't substitute — it -captures its environment and waits. - -#### The Value Type - -Values are the "evaluated form" of expressions: - -``` -Val = - | lam(dom, body_expr, env) -- closure: body is still an Expr - | pi(dom, body_expr, env) -- pi-closure - | sort(level) -- universe - | neutral(head, spine) -- stuck term (fvar or const) - | ctor(name, levels, spine) -- constructor with args - | lit(literal) -- nat/string literal - | proj(type, idx, struct, spine) -- stuck projection - | thunk(expr, env) -- [Aiur only] unevaluated closure -``` - -The Aiur kernel adds a `Thunk` variant to `KVal` since it cannot use mutable references -for call-by-need. Instead, Aiur's function-call caching ensures that -`k_eval(expr, env, top)` called with the same arguments returns the cached result. - -A **closure** is `(body_expr, env)` where `env` is an array/list of `Val`. To apply a -closure to an argument `v`, you evaluate `body_expr` in `env ++ [v]`. No substitution -walk — just a push. **O(1) beta reduction.** - -A **neutral** is a term stuck on something that can't be reduced further — either a -free variable (`fvar`) or an unresolved constant. Neutrals accumulate a **spine** of -arguments that couldn't be applied. - -#### The `eval` Function - -`eval` takes an `Expr` and an environment (array of `Val`) and produces a `Val`: - -``` -eval(#i, env) = - if i < |env| then env[|env| - 1 - i] -- look up in environment - else mkFVar(...) 
-- free variable
-
-eval(Sort u, env) = sort(u)
-
-eval(const c [us], env) =
-  if c is a constructor then ctor(c, us, [])
-  else neutral(const(c, us), [])   -- NO eager unfolding
-
-eval(app f a, env) =
-  let vf = eval(f, env)
-  let arg = suspend(a, env)   -- suspend: immediate or thunk
-  apply_val(vf, arg)          -- force only if needed (in apply_val)
-
-eval(lam A b, env) =
-  let dom = eval(A, env)
-  lam(dom, b, env)            -- capture closure
-
-eval(forallE A B, env) =
-  let dom = eval(A, env)
-  pi(dom, B, env)             -- capture closure
-
-eval(let A v b, env) =
-  let val = eval(v, env)
-  eval(b, env ++ [val])       -- eager zeta-reduction
-
-eval(proj T i s, env) =
-  let sv = eval(s, env)
-  if sv is ctor then extract field i (and force it)
-  else create stuck proj
-```
-
-**Key design choice**: `eval` does NOT unfold definitions. A `const` always evaluates
-to either a `ctor` (for constructors) or a `neutral(const(...), [])`. Definition
-unfolding is deferred to WHNF. This is the "lazy" approach — constants are only
-unfolded when the kernel actually needs to look inside them.
-
-
-#### De Bruijn Levels vs Indices
-
-In the environment-based approach, **free variables** use de Bruijn **levels** (counting
-from the bottom of the context), not indices (counting from the top). This is crucial:
-
-- **Index** `#i` = "the variable `i` binders above me" — changes when you go under a binder
-- **Level** `fvar(k)` = "the k-th variable ever introduced" — stable, never changes
-
-When we push a new binding into the context at depth `d`, we create `fvar(d)`. Since
-levels count up from the bottom, they never need adjustment when we enter new binders.
-This eliminates the shifting/lifting operations that plague substitution-based approaches.
-
-#### Thunk Representation
-
-| Impl | Thunk type | Memoization |
-|------|-----------|-------------|
-| Lean | `ThunkId` (Nat index into `ST.Ref` table) | Explicit: mutates ref on force |
-| Rust | `Rc<RefCell<ThunkEntry>>` | Explicit: mutates cell on force |
-| Aiur | `KVal.Thunk(&KExpr, &KValEnv)` | Implicit: Aiur caches `k_eval(e, env, top)` calls |
-
-In the Lean kernel, thunks are managed via `TypecheckM` which maintains an array of
-`ST.Ref (Option (Val m))`. In the Rust kernel, `ThunkEntry` is either
-`Unevaluated { expr, env }` or `Evaluated(Val)`. In Aiur, there is no mutation, but
-the runtime's aggressive function-call caching means `k_force(thunk, top)` =
-`k_eval(expr, env, top)` is automatically memoized.
-
----
-
-## 7. Weak Head Normal Form (WHNF)
-
-**Weak head normal form** means: reduce enough to see the outermost constructor.
-We don't reduce under binders or inside arguments — just enough to know what shape
-the expression has.
-
-Examples:
-```
-whnf(Nat.add 2 3) = 5                        -- δ + primitive
-whnf(fun x => x + 1) = fun x => x + 1        -- already WHNF (lambda)
-whnf(let x := 5; x) = 5                      -- ζ-reduction (done in eval)
-whnf(Nat.rec m z s (Nat.succ n)) = s n (...) -- ι-reduction
-```
-
-### The WHNF Loop
-
-All three implementations structure WHNF as a loop:
-
-1. **Force thunks** (Aiur: `KVal.Thunk` → `k_eval` → continue)
-2. **Projection reduction**: if `proj(T, i, struct, spine)` and `struct` WHNFs to a
-   constructor, extract field `i`, apply spine, and continue
-3. **Iota reduction**: if `const(rec, spine)` and the major premise WHNFs to a
-   constructor, fire the matching recursor rule
-4. **Delta reduction**: if `const(defn, spine)` and the definition is unfoldable,
-   evaluate its body, apply the spine, and continue
-5. **Quotient reduction**: `Quot.lift f h (Quot.mk r a) → f a`
-6.
**Nat primitives** (Lean/Rust only): `Nat.add (lit 3) (lit 4) → lit 7` -7. Otherwise: return (already in WHNF) - -The Lean/Rust kernels separate structural WHNF (`whnfCoreVal`/`whnf_core`) from delta -unfolding (`deltaStepVal`/`delta_step`), with the outer loop (`whnfVal`/`whnf_val`) -alternating between them. The Aiur kernel combines everything into a single `k_whnf` -function. The Lean/Rust kernels also cache WHNF results by pointer identity. - -### Delta Reduction (Unfolding Definitions) - -When we see a constant like `Nat.add`, we can **unfold** it to its definition. The -definition body is an `Expr`; we `eval` it (substituting universe parameters) and then -apply the accumulated spine of arguments. - -Not all constants should be eagerly unfolded. Lean assigns **reducibility hints**: -- **Abbreviation**: Always unfold (e.g., type aliases) -- **Regular(n)**: Unfold with priority `n` (higher = unfold later) -- **Opaque**: Never unfold (axioms, opaque defs) - -The Lean/Rust kernels use hints to decide **which side to unfold first** during lazy -delta in `isDefEq` (see §8). During WHNF, they unfold any non-opaque definition. -The Aiur kernel does the same in `k_whnf`. - -### Iota Reduction (Recursor on Constructor) - -When a recursor meets a constructor it can pattern-match: - -``` -Nat.rec motive zero_case succ_case (Nat.succ n) - → succ_case n (Nat.rec motive zero_case succ_case n) -``` - -The kernel detects that the **major premise** (the thing being matched on) is a -constructor, picks the corresponding **minor premise** (branch), and applies it to the -constructor's fields and (for recursive args) the recursive result. - -The recursor's spine is laid out as: -``` -[params..., motives..., minors..., indices..., major] -``` - -The major premise index is `nparams + nmotives + nminors + nindices`. After matching, -the result is `rhs_val` applied to `[params, motives, minors, ctor_fields, remaining]`. - -All three implementations share this structure in `try_iota` / `tryIotaReduction`. -The Aiur kernel additionally handles: -- **Nat literals**: `Lit(0)` matches the zero rule; `Lit(n+1)` matches the succ rule - with `Lit(n)` as the field -- **K-reduction**: for proof-irrelevant inductives (k_flag set), the minor premise is - returned directly without inspecting the major premise's constructor - -### Zeta Reduction (Let Bindings) - -Let bindings are reduced eagerly during `eval` — the value is evaluated and pushed into -the environment. There is no `Val.let` constructor. This is simpler and avoids the need -to handle let-bindings in WHNF or definitional equality. - ---- - -## 8. Definitional Equality - -The most complex part of the kernel. Given two values, determine if they are -definitionally equal. 
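-
-For instance, all of the following are accepted silently because the two
-sides are definitionally equal (standard Lean 4 surface syntax, shown
-here purely as an illustration, not tied to any one of the three kernels):
-
-```lean
--- β: applying a lambda reduces definitionally.
-example : (fun x => x + 1) 3 = 3 + 1 := rfl
--- Nat literals: a literal is def-eq to its constructor form.
-example : (2 : Nat) = Nat.succ (Nat.succ Nat.zero) := rfl
--- ζ: let-bindings unfold definitionally.
-example : (let x := 5; x + 1) = 6 := rfl
-```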
- -### The Algorithm (Layered) - -Definitional equality uses a layered approach, trying cheap checks first: - -``` -isDefEq(a, b) = - -- Layer 0: Trivial - if a and b are pointer-equal → true [Lean/Rust only] - if cached as equal → true [Lean/Rust only] - - -- Layer 1: Quick syntactic - if both sorts, compare levels - if both literals, compare values - - -- Layer 2: Reduce to WHNF - a' = whnf(a) - b' = whnf(b) - - -- Layer 3: Proof irrelevance - if both are proofs of Props, compare their types - - -- Layer 4: Structural comparison (isDefEqCore) - if same head constant and levels, compare spines pairwise - if both fvar at same level, compare spines - if both ctor at same index, compare spines - if both pi, compare domains and codomains (under binder) - if both lambda, compare bodies (under binder) - if both proj with same type/idx, compare structs and spines - if one is lambda (eta): compare body with (other applied to fvar) - if one/both are consts: try lazy delta - - -- Layer 5: Fallback rules - try structure eta: s ≡ ⟨s.1, s.2, ...⟩ - try unit-like: if type has exactly one nullary ctor [Lean/Rust only] - - -- Failed - return false -``` - -### Lazy Delta Unfolding - -A key design choice: don't unfold everything to normal form. Instead, unfold -**one step at a time**, alternating between the two sides based on reducibility hints. - -If both sides have the same head constant `f`, first try comparing their arguments -directly (congruence). Only if that fails, unfold `f` on both sides. - -If the sides have different head constants, unfold the one with the **smaller** -reducibility hint (i.e., the one that's "more reducible"). This heuristic tends to -converge quickly. - -**Implementation differences:** -- **Lean/Rust** (`lazyDelta`/`lazy_delta`): Alternates unfolding based on - `ReducibilityHints` comparison. Has a `MAX_LAZY_DELTA_ITERS` limit (10,002 in Rust). - Unfolds one side at a time. -- **Aiur** (`k_lazy_delta`): Unfolds **both** sides via `try_delta_unfold`, checks if - either changed, and retries. No hint-based alternation, no iteration limit. Simpler - but less directed. - -### Proof Irrelevance - -If `a : P` and `b : P` and `P : Prop`, then `a ≡ b` — all proofs of the same -proposition are definitionally equal. The kernel checks this by inferring the type of -the value and asking if that type lives in `Sort 0`. - -**Implementation differences:** -- **Lean/Rust**: `infer_type_of_val` handles all Val forms: FVar (has its type stored), - Const/Ctor (looks up typed constant and walks Pi spine), Lam/Pi/Proj (quotes back to - Expr and infers). Catches errors and returns `false` gracefully. -- **Aiur**: `k_infer_val_type` handles Srt, Lit, Const, Ctor, Proj. Returns `Sort 1` - as a sentinel for FVar/Lam/Pi (which are never Prop). This means proof irrelevance - cannot trigger for FVar-headed values in the Aiur kernel, which is conservative but - safe. - -### Eta Expansion - -For functions: `f ≡ fun x => f x` always holds. If one side is a lambda and the other -isn't, eta-expand the other side and compare the bodies. - -For structures (single-constructor inductives): if `S` has constructor `mk`, then -`s ≡ S.mk s.1 s.2 ...` always holds. If comparing two values of a structure type -and they don't match, try eta-expanding both to constructor form. - -All three implementations handle both function eta and structure eta. The Aiur kernel's -`try_eta_struct_one` inlines the struct-like check (single constructor) and field -comparison into one function, avoiding redundant constant lookups. 
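-
-Both forms of eta are definitional in Lean 4 itself, so as a standard
-illustration (independent of any particular kernel implementation),
-`rfl` closes both of these goals:
-
-```lean
--- Function eta: f is def-eq to its own eta-expansion.
-example (f : Nat → Nat) : f = fun x => f x := rfl
--- Structure eta: a pair is def-eq to the tuple of its projections.
-example (p : Nat × Nat) : p = (p.1, p.2) := rfl
-```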
- -### Nat Literal vs Constructor Equality - -A special case: `Lit(0)` must be definitionally equal to `Nat.zero`, and `Lit(n+1)` -must equal `Nat.succ (Lit(n))`. All three implementations handle this in `isDefEqCore` -via a `nat_lit_eq_ctor` helper that checks the constructor's inductive is Nat, then -compares field counts and recursively checks the predecessor. - -### Equiv Manager (Union-Find Cache) - -The Lean and Rust kernels cache definitional equality results using a **union-find** -data structure keyed on pointer identity. When `isDefEq(a, b) = true`, they merge `a` -and `b` into the same equivalence class. Future comparisons involving either can -short-circuit by checking if they share a root. The Aiur kernel has no equiv manager -(Aiur's function-call caching provides some equivalent benefit). - ---- - -## 9. Declarations and the Environment - -The kernel processes declarations one at a time. Each declaration is added to the -**environment** — a map from names (or addresses) to their definitions and types. - -### Declaration Kinds - -| Kind | Has value? | Unfoldable? | Example | -|------|-----------|-------------|---------| -| **Axiom** | No | No | `Classical.choice` | -| **Definition** | Yes | Yes (with hints) | `Nat.add` | -| **Theorem** | Yes | Yes (needed for proof checking) | `Nat.add_comm` | -| **Opaque** | Yes | No | `native_decide` impl | -| **Inductive** | Generated | N/A | `Nat`, `List` | -| **Constructor** | Generated | N/A | `Nat.zero`, `Nat.succ` | -| **Recursor** | Generated | Has ι-rules | `Nat.rec` | -| **Quotient** | Special | Special rules | `Quot`, `Quot.mk`, `Quot.lift`, `Quot.ind` | - -### Checking a Definition - -``` -checkDefinition(name, type, value, univParams) = - 1. Check that `type` is well-typed: infer(type) must be a Sort - 2. Check that `value` has the declared type: check(value, type) - 3. (For safe defs) Ensure no unsafe constants are referenced - 4. Add to environment -``` - -The Lean/Rust kernels use `check` (bidirectional) for step 2 rather than -`infer` + `isDefEq`. The Aiur kernel also uses `k_check` for Defn/Thm/Opaque. - -### Checking a Theorem - -Same as a definition. Theorem values (proofs) are unfoldable during WHNF and delta -reduction — this is necessary because proof terms may need to reduce during type -checking (e.g., when checking that two proof terms are definitionally equal, or when -a proof is used as an argument whose type must match a Pi domain). - ---- - -## 10. Inductive Types and Recursors - -Inductive types are the most complex declarations. When you write: - -```lean -inductive Nat where - | zero : Nat - | succ : Nat → Nat -``` - -The kernel generates and validates: -1. The **inductive type** itself (`Nat`) -2. Each **constructor** (`Nat.zero`, `Nat.succ`) -3. The **recursor** (`Nat.rec`) — the elimination principle - -### The Recursor - -`Nat.rec` has type: -``` -Nat.rec : {motive : Nat → Sort u} → - motive Nat.zero → - ((n : Nat) → motive n → motive (Nat.succ n)) → - (n : Nat) → motive n -``` - -It takes: -- A **motive**: what type you're producing, as a function of the Nat -- A case for **zero** -- A case for **succ** (which receives the predecessor AND the recursive result) -- The **major premise**: the Nat being matched on - -### Iota Rules - -Each constructor gets a reduction rule: -``` -Nat.rec motive hz hs Nat.zero → hz -Nat.rec motive hz hs (Nat.succ n) → hs n (Nat.rec motive hz hs n) -``` - -The kernel validates that these rules are well-typed. 
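-
-As a quick illustration of ι-reduction (standard Lean 4, not specific to
-any of the three kernels), a recursor applied to a constructor reduces
-definitionally, so `rfl` suffices:
-
-```lean
--- Fires the succ rule: the minor premise receives the predecessor
--- and the recursive result (ignored here).
-example : Nat.rec (motive := fun _ => Nat) 0 (fun n _ih => n) (Nat.succ 5) = 5 := rfl
-```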
- -### Mutual and Nested Inductives - -Mutual inductives (several types defined simultaneously) share a single recursor. -Nested inductives (an inductive that references itself inside another type constructor, -like `Expr` containing `List Expr`) require specialization — the kernel creates -temporary specialized versions for validation. - -The Lean/Rust kernels handle mutual/nested inductives via `check_ind_block`. The Aiur -kernel does not validate inductive blocks — it trusts that the inductives, constructors, -and recursors provided in the environment are well-formed. - ---- - -## 11. Special Reductions - -### Nat Primitives - -Instead of unfolding `Nat.add 1000000 1000000` by a million successor steps, the kernel -has built-in support for Nat arithmetic. When both arguments are **literals**, it -computes the result directly: - -``` -Nat.add (lit 3) (lit 4) → lit 7 -- O(1), not O(n) unfolding -``` - -Supported: `add`, `sub`, `mul`, `div`, `mod`, `pow`, `gcd`, `beq`, `ble`, `land`, -`lor`, `xor`, `shiftLeft`, `shiftRight`. - -The Lean and Rust kernels implement these in their WHNF loops. The Aiur kernel does -**not** implement nat primitives — it relies entirely on iota reduction (Nat.rec) for -Nat computation. This works but is exponentially slower for large numbers. - -### String Primitives - -String literals can be compared and manipulated. When needed for definitional equality, -a string literal is expanded to its `List Char` constructor form. - -The Lean/Rust kernels handle this. The Aiur kernel does not support string operations -(the `str_idx` is set to a sentinel value since String is not in the dependency -closure of the currently tested constants). - -### Quotient Types - -Lean has built-in support for quotient types: -- `Quot`: quotient type former -- `Quot.mk`: inject into quotient -- `Quot.lift`: eliminate from quotient (must respect the equivalence) -- `Quot.ind`: induction principle - -These have special reduction rules: -``` -Quot.lift f h (Quot.mk r a) → f a -``` - -All three implementations handle quotient reduction in WHNF. The Aiur kernel implements -`k_try_quot_reduction` which handles both `Quot.lift` (reduce_size=6, f_pos=3) and -`Quot.ind` (reduce_size=5, f_pos=3). - ---- - -## 12. Substitution-Based vs Environment-Based - -Here's a concrete comparison: - -### Beta Reduction - -**Substitution** (lean4lean, nanoda_lib): -``` -(fun x => body) arg - → body[#0 := arg] - → walk entire body tree, replace #0 with arg, shift other indices - → O(|body|) work -``` - -**Environment** (our kernels): -``` -eval(app (lam A body) arg, env) - → let va = eval(arg, env) - → eval(body, env ++ [va]) - → O(1) work (just an array push) -``` - -### Going Under Binders (in isDefEq) - -**Substitution**: To compare `fun x => a` with `fun x => b`, substitute a fresh -variable for `x` in both, then compare. Creating the fresh variable and substituting -it is O(|body|). - -**Environment**: To compare `lam(domA, bodyA, envA)` with `lam(domB, bodyB, envB)`, -create a fresh `fvar(depth)`, push it onto both environments, eval both bodies, and -compare the resulting values. The "substitution" is just `env_push(env, fvar)`. 
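-
-Schematically, in the same pseudocode as above (a sketch of this
-paragraph, not lifted from any one kernel):
-
-```
-isDefEq(lam(domA, bodyA, envA), lam(domB, bodyB, envB)) at depth d:
-  x := fvar(d)                              -- fresh free variable
-  isDefEq(eval(bodyA, env_push(envA, x)),   -- "substitution" = env push
-          eval(bodyB, env_push(envB, x)))   -- compared at depth d + 1
-```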
- -### Trade-offs - -| Aspect | Substitution | Environment | -|--------|-------------|-------------| -| Beta reduction | O(\|body\|) | O(1) | -| Representing values | Expressions (familiar) | Values (new domain) | -| Sharing/caching | Expression-level | Pointer-identity on Vals | -| Implementation complexity | Simpler | More complex (thunks, closures) | -| Memory | May duplicate work | Thunks add overhead, but memoize | -| Readback | Not needed | Needed for some operations | - -### Readback (Quote) - -Sometimes the environment-based kernel needs to convert a `Val` back to an `Expr` — -this is called **readback** or **quotation**. It's needed when, e.g., we want to -instantiate universe parameters in an expression stored in the environment (which is -still an `Expr`), or when building the Pi type for a lambda's inferred type. - -The readback converts de Bruijn levels back to indices: -``` -quote(fvar(level), depth) = bvar(depth - level - 1) -quote(lam(dom, body, env), depth) = - let x = fvar(depth) - let body_val = eval(body, env ++ [x]) - lam(quote(dom, depth), quote(body_val, depth + 1)) -``` - -In Aiur, `k_quote` also handles `KVal.Thunk` by forcing it first. - ---- - -## 13. Caching and Performance - -### Lean/Rust Kernels - -The Lean and Rust kernels use aggressive caching at multiple levels: - -| Cache | Key | Value | Purpose | -|-------|-----|-------|---------| -| **Inference** | `(Expr, context_ptrs)` | `(TypedExpr, Val)` | Avoid re-inferring shared subexpressions | -| **WHNF** | `Val` (pointer id) | `Val` | Skip re-reducing already-WHNF values | -| **DefEq success** | `(ptr_a, ptr_b)` | `bool` | Skip re-checking known equalities | -| **DefEq failure** | `(ptr_a, ptr_b)` | `bool` | Skip re-checking known non-equalities | -| **Equiv manager** | union-find on `ptr` | equivalence class | Transitive equality: a≡b ∧ b≡c ⇒ a≡c | -| **Typed constants** | `MetaId` | `TypedConst` | Never re-check a constant | -| **Thunk memoization** | thunk identity | `Val` | Never re-evaluate a forced thunk | - -### Aiur Kernel - -The Aiur kernel has **no explicit caches**. Instead, it relies on the Aiur runtime's -function-call caching (see `src/aiur/execute.rs`): every function call with the same -arguments returns the cached result. This means: - -- `k_eval(expr, env, top)` with the same `expr`, `env`, `top` is automatically memoized -- `k_force(thunk, top)` = `k_eval(e, env, top)` benefits from the same caching -- `k_whnf(v, top)` on the same `v` is cached -- Even `k_is_def_eq(a, b, ...)` is cached when called with pointer-equal arguments - -This makes the most naive Fibonacci implementation efficient in Aiur, and similarly -makes thunk re-evaluation free. The trade-off is that cache keys are structural -(not pointer-based), which may be slower for large values but is always correct. - -### Heartbeats - -The Lean/Rust kernels use a monotonic counter incremented on each major operation. -If it exceeds a limit, the kernel aborts. The Aiur kernel has no heartbeat mechanism — -termination relies on the well-foundedness of the input declarations. - ---- - -## 14. Implementation Comparison - -### Feature Matrix - -| Feature | Lean | Rust | Aiur | -|---------|------|------|------| -| Lazy eval (thunks in spines) | ✅ ST.Ref | ✅ RefCell | ✅ KVal.Thunk | -| Delta unfolding (WHNF) | ✅ | ✅ | ✅ | -| Iota reduction (recursor) | ✅ | ✅ | ✅ | -| K-reduction (Prop recursors) | ✅ | ✅ | ✅ | -| Nat literal iota | ✅ | ✅ | ✅ | -| Quotient reduction | ✅ | ✅ | ✅ | -| Nat primitives (add, mul, ...) 
 | ✅ | ✅ | ❌ |
-| String primitives | ✅ | ✅ | ❌ |
-| Proof irrelevance | ✅ full | ✅ full | ⚠️ partial (Sort 1 sentinel) |
-| Function eta | ✅ | ✅ | ✅ |
-| Struct eta | ✅ | ✅ | ✅ |
-| Unit-like types | ✅ | ✅ | ❌ |
-| Lazy delta (hint-based) | ✅ | ✅ | ⚠️ unfolds both sides |
-| Equiv manager (union-find) | ✅ | ✅ | ❌ (Aiur caching instead) |
-| WHNF cache | ✅ | ✅ | ❌ (Aiur caching instead) |
-| Inference cache | ✅ | ✅ | ❌ (Aiur caching instead) |
-| Inductive block validation | ✅ | ✅ | ❌ (trusts input) |
-| Safety checking (unsafe/partial) | ✅ | ✅ | ❌ |
-| Error diagnostics | ✅ | ✅ | ❌ (assert only) |
-| Delta step limit | ✅ | ✅ | ❌ |
-| Bidirectional checking | ✅ | ✅ | ✅ |
-
-### Context Management
-
-| Aspect | Lean/Rust | Aiur |
-|--------|-----------|------|
-| Type context | `types: Vec<Val>` indexed by level | `types: KValList` indexed by de Bruijn index |
-| Value environment | `Env = Rc<Vec<Val>>` (COW) | `KValEnv` (cons-list) |
-| Let-bound values | Separate `let_values: Vec<Option<Val>>` | Pushed directly into env |
-| Depth tracking | `depth()` method on TypeChecker | Explicit `depth: G` parameter |
-| Binder entry | `with_binder(dom, \|tc\| ...)` | Explicit `KValList.Cons(dom, types)` + `KValEnv.Cons(fvar, env)` |
-
----
-
-## 15. Aiur TODOs and Disparities
-
-### Missing Features (by priority)
-
-1. **Nat primitives**: Built-in computation for `Nat.add`, `Nat.sub`, `Nat.mul`, etc.
-   on literals. Without these, any theorem involving concrete arithmetic must unfold
-   through `Nat.rec`, which takes time linear in the value (exponential in the
-   literal's bit-width). The Lean/Rust kernels detect these by matching the
-   constant's address against a `Primitives` table discovered during environment
-   setup.
-
-2. **Unit-like types**: In `isDefEq`, if both values have a type with exactly one
-   nullary constructor, they are definitionally equal. The Lean/Rust kernels check this
-   as a fallback after struct eta fails.
-
-3. **Lazy delta with hint comparison**: The current `k_lazy_delta` unfolds both sides
-   simultaneously. The Lean/Rust kernels unfold one side at a time, choosing based on
-   `ReducibilityHints` comparison. This is more efficient when one side is "more
-   reducible" than the other (see the sketch at the end of this section).
-
-4. **String support**: String literals and primitives. Needed for any theorem that
-   involves `String`.
-
-### Potential Issues
-
-1. **No delta step limit**: If two definitions are mutually recursive (shouldn't happen
-   in well-typed code, but the kernel should be robust), `k_lazy_delta` →
-   `try_delta_unfold` → `k_is_def_eq` → `k_lazy_delta` could diverge. The Lean/Rust
-   kernels have `MAX_LAZY_DELTA_ITERS` / `MAX_DELTA_STEPS` limits.
-
-2. **Incomplete proof irrelevance**: `k_infer_val_type` is best-effort: it returns
-   `Sort 1` (non-Prop) for heads it cannot type (lambda values, and projections whose
-   struct type does not resolve to an inductive), so proof irrelevance won't trigger
-   for such proofs. This is conservative (never unsound) but incomplete. Free
-   variables are handled: as in the Lean/Rust kernels, they carry their type in the
-   value head.
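-
-To make the missing hint strategy concrete, here is a small standalone Rust sketch
-of hint-based unfolding order. The names `Hint` and `lazy_delta_step` are
-illustrative, not taken from the Rust kernel.
-
-```rust
-// Derived Ord gives Opaque < Regular(_) < Abbrev, and Regular definitions
-// compare by their unfolding height.
-#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
-enum Hint {
-    Opaque,       // never unfold eagerly
-    Regular(u32), // unfold the taller definition first
-    Abbrev,       // unfold as early as possible
-}
-
-/// Decide which side(s) of a def-eq check to delta-unfold next.
-fn lazy_delta_step(lhs: Option<Hint>, rhs: Option<Hint>) -> (bool, bool) {
-    match (lhs, rhs) {
-        (None, None) => (false, false), // neither head is an unfoldable constant
-        (Some(_), None) => (true, false),
-        (None, Some(_)) => (false, true),
-        (Some(a), Some(b)) if a > b => (true, false),
-        (Some(a), Some(b)) if a < b => (false, true),
-        _ => (true, true), // equal hints: unfold both, as k_lazy_delta always does
-    }
-}
-
-fn main() {
-    // An abbreviation unfolds before a regular definition of any height.
-    assert_eq!(lazy_delta_step(Some(Hint::Abbrev), Some(Hint::Regular(7))), (true, false));
-}
-```
-
-The payoff is that a cheap `Abbrev` wrapper on one side is peeled before a deep
-`Regular` definition on the other side is ever touched.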
-
----
-
-## Appendix: Reading the Code
-
-### Aiur implementation (`Ix/IxVM/`)
-
-| File | What it does |
-|------|-------------|
-| `KernelTypes.lean` | `KExpr`, `KVal` (with `Thunk`), `KLevel`, `KConstantInfo`, all enums |
-| `Kernel.lean` | All kernel logic: `k_eval`, `k_force`, `k_is_def_eq`, `k_infer`, `k_check` |
-| `Convert.lean` | Ixon format → `KConstantInfo` conversion |
-| `Ingress.lean` | Content-addressed constant loading from IO |
-
-### Lean implementation (`Ix/Kernel/`)
-
-| File | What it does |
-|------|-------------|
-| `Types.lean` | Core AST: `Expr`, `Level`, `ConstantInfo`, `Env`, `MetaMode` |
-| `Value.lean` | `Val`, `Head`, closure/thunk types |
-| `Infer.lean` | The big mutual block: `eval`, `whnf`, `isDefEq`, `infer` |
-| `TypecheckM.lean` | Monad stack, thunk table management, runner |
-| `Level.lean` | Universe level normalization and comparison |
-| `Primitive.lean` | Validation of built-in Nat/Bool/Quot primitives |
-| `Helpers.lean` | Nat extraction, projection reduction |
-| `EquivManager.lean` | Union-find for def-eq caching |
-| `Quote.lean` | Val → Expr readback |
-| `ExprUtils.lean` | Level substitution, bvar shifting |
-| `Datatypes.lean` | `TypedConst`, `TypeInfo` wrappers |
-| `Convert.lean` | Ixon format → kernel types |
-
-### Rust implementation (`src/ix/kernel/`)
-
-| File | What it does |
-|------|-------------|
-| `types.rs` | `KExpr`, `KLevel`, `KConstantInfo`, `KEnv`, `Primitives` |
-| `value.rs` | `Val`, `Head`, `Thunk`, `Env` (COW) |
-| `eval.rs` | Krivine machine: `eval`, `apply_val_thunk`, `force_thunk` |
-| `whnf.rs` | WHNF with delta/iota/quot/nat reductions |
-| `infer.rs` | Type inference and checking |
-| `def_eq.rs` | Definitional equality |
-| `check.rs` | Per-declaration validation |
-| `tc.rs` | `TypeChecker` state, context, caches |
-| `level.rs` | Universe level operations |
-| `helpers.rs` | Nat/projection helpers |
-| `equiv.rs` | Union-find |
-| `quote.rs` | Readback |
-| `primitive.rs` | Primitive validation |
-| `error.rs` | Error types |
-
-### Reference implementations
-
-| Codebase | Language | Approach | Notes |
-|----------|----------|----------|-------|
-| `lean4lean/` | Lean 4 | Substitution | Verified (has correctness proofs) |
-| `nanoda_lib/` | Rust | Substitution | Clean, well-documented |
diff --git a/Ix/IxVM/Kernel.lean b/Ix/IxVM/Kernel.lean
deleted file mode 100644
index fbe1f65c..00000000
--- a/Ix/IxVM/Kernel.lean
+++ /dev/null
@@ -1,1852 +0,0 @@
-module
-public import Ix.Aiur.Meta
-
-/-!
-# Aiur Kernel — Lean 4 Type Checker Circuit
-
-A complete Lean 4 kernel type checker written in Aiur, a DSL for zero-knowledge
-proof circuits. Verifies that every definition and theorem in a Lean environment
-is well-typed.
-
-## Architecture
-
-The kernel uses **Normalization by Evaluation (NbE)**: expressions (`KExpr`) are
-evaluated into semantic values (`KVal`) using closures and environments, giving
-O(1) beta reduction instead of O(|body|) substitution walks. Free variables use
-**de Bruijn levels** (stable under binder entry) rather than indices.
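-
-For example, in `fun x => fun y => x` the body is `BVar 1` as an index, while the
-evaluator binds `x` to level 0 and `y` to level 1; entering further binders leaves
-both levels unchanged, where indices would need shifting.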
- -The core operations: -- `k_eval`: evaluate an expression to a WHNF value (eager delta on Defn/Thm, - iota and quotient reduction fire from `k_apply` when a Rec/Quot value's - spine reaches the exact required arg count) -- `k_force`: force a `Thunk` value, returning a WHNF value (identity otherwise) -- `k_infer` / `k_check`: infer types and bidirectionally check against expected types -- `k_is_def_eq`: check definitional equality (proof irrelevance, eta, structural) - -A value is either WHNF or a `Thunk(expr, env)` suspension. `k_eval` always -returns WHNF. `k_force` drives a Thunk to WHNF. There is no separate WHNF -function: definitions unfold during evaluation; Rec/Quot reductions fire as -soon as their spine has the exact required arg count (over-application is -assumed not to occur, so a stuck Rec/Quot is always under-applied). - -## Aiur Constraints - -Aiur circuits have no mutation, no dynamic indexing, and no non-tail matches. -The Aiur runtime's function-call caching provides call-by-need semantics: -calling `k_eval(expr, env, top)` with the same arguments returns the cached -result. - -## Implemented Features - -| Feature | Status | -|----------------------------------|--------| -| Lazy eval (thunks in spines) | ✅ | -| Eager delta unfolding (Defn/Thm) | ✅ | -| Iota reduction (recursor) | ✅ | -| K-reduction (Prop recursors) | ✅ | -| Nat literal iota | ✅ | -| Quotient reduction | ✅ | -| Function eta | ✅ | -| Struct eta | ✅ | -| Bidirectional checking | ✅ | -| Level comparison (sound+complete)| ✅ | -| Unsafe opaque skip | ✅ | - -## Known Limitations - -| Feature | Status | -|----------------------------------|--------------------------------------| -| Nat primitives (add, mul, ...) | ❌ uses iota (exponential for large) | -| String primitives | ❌ | -| Inductive block validation | ❌ trusts input | -| Delta step limit | ❌ | - -## File Organization - -Types are in `KernelTypes.lean` (`KExpr`, `KVal`, `KLevel`, `KConstantInfo`). -Ixon ↔ kernel conversion is in `Convert.lean`. Content-addressed constant -loading is in `Ingress.lean`. This file contains all kernel logic. 
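-
-For example, inferring `(fun x : Nat => x) zero` evaluates the lambda to a closure,
-checks `zero` against the Pi domain with `k_check`, and instantiates the codomain by
-pushing the argument's thunk onto the Pi closure's environment (see the `App` case
-of `k_infer`).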
--/ - -public section - -namespace IxVM - -def kernel := ⟦ - -- ============================================================================ - -- List operations - -- ============================================================================ - - -- Look up a value in a value environment by de Bruijn index - -- Find recursor rule by constructor index - fn rec_rule_try_find(rules: List‹KRecRule›, ctor_idx: G) -> Option‹KRecRule› { - match load(rules) { - ListNode.Nil => Option.None, - ListNode.Cons(rule, rest) => - match rule { - KRecRule.Mk(idx, nf, rhs) => - match idx - ctor_idx { - 0 => Option.Some(KRecRule.Mk(idx, nf, rhs)), - _ => rec_rule_try_find(rest, ctor_idx), - }, - }, - } - } - - fn rec_rule_find(rules: List‹KRecRule›, ctor_idx: G) -> KRecRule { - match load(rules) { - ListNode.Cons(rule, rest) => - match rule { - KRecRule.Mk(idx, nf, rhs) => - match idx - ctor_idx { - 0 => KRecRule.Mk(idx, nf, rhs), - _ => rec_rule_find(rest, ctor_idx), - }, - }, - } - } - - -- Extract the ctor_idx from the first rule in a List‹KRecRule› - fn rec_rule_first_ctor(rules: List‹KRecRule›) -> G { - match load(rules) { - ListNode.Cons(rule, _) => - match rule { - KRecRule.Mk(ctor_idx, _, _) => ctor_idx, - }, - } - } - - -- ============================================================================ - -- Constant info accessors - -- ============================================================================ - - -- Extract the type expression from any constant info variant - fn const_type(ci: KConstantInfo) -> KExpr { - match ci { - KConstantInfo.Axiom(_, ty, _) => ty, - KConstantInfo.Defn(_, ty, _, _) => ty, - KConstantInfo.Thm(_, ty, _) => ty, - KConstantInfo.Opaque(_, ty, _, _) => ty, - KConstantInfo.Quot(_, ty, _) => ty, - KConstantInfo.Induct(_, ty, _, _, _, _, _, _) => ty, - KConstantInfo.Ctor(_, ty, _, _, _, _, _) => ty, - KConstantInfo.Rec(_, ty, _, _, _, _, _, _, _) => ty, - } - } - - -- Extract the number of universe level parameters from any constant info variant - fn const_num_levels(ci: KConstantInfo) -> G { - match ci { - KConstantInfo.Axiom(n, _, _) => n, - KConstantInfo.Defn(n, _, _, _) => n, - KConstantInfo.Thm(n, _, _) => n, - KConstantInfo.Opaque(n, _, _, _) => n, - KConstantInfo.Quot(n, _, _) => n, - KConstantInfo.Induct(n, _, _, _, _, _, _, _) => n, - KConstantInfo.Ctor(n, _, _, _, _, _, _) => n, - KConstantInfo.Rec(n, _, _, _, _, _, _, _, _) => n, - } - } - - -- ============================================================================ - -- Level operations - -- - -- Universe levels are symbolic expressions (Zero, Succ, Max, IMax, Param) - -- evaluated under assignments σ : Param → ℕ. Two levels are equal iff they - -- agree under all assignments. IMax(a, b) = 0 when b = 0, else max(a, b); - -- this gives impredicativity of Prop. - -- - -- Levels are maintained in "reduced form" by level_max and level_imax: - -- an IMax(a, b) node only survives when b could be zero (level_is_not_zero = 0). - -- This invariant is key to the completeness of level_leq. 
- -- ============================================================================ - - -- Check if a level is definitely not zero (sound approximation) - fn level_is_not_zero(l: KLevel) -> G { - match l { - KLevel.Zero => 0, - KLevel.Param(_) => 0, - KLevel.Succ(_) => 1, - KLevel.Max(&a, &b) => match (level_is_not_zero(a), level_is_not_zero(b)) { - (0, 0) => 0, - _ => 1, - }, - KLevel.IMax(_, &b) => level_is_not_zero(b), - } - } - - -- Structural equality of levels (after reduction) - fn level_eq(a: KLevel, b: KLevel) -> G { - match a { - KLevel.Zero => - match b { - KLevel.Zero => 1, - _ => 0, - }, - KLevel.Succ(&a1) => - match b { - KLevel.Succ(&b1) => level_eq(a1, b1), - _ => 0, - }, - KLevel.Max(&a1, &a2) => - match b { - KLevel.Max(&b1, &b2) => level_eq(a1, b1) * level_eq(a2, b2), - _ => 0, - }, - KLevel.IMax(&a1, &a2) => - match b { - KLevel.IMax(&b1, &b2) => level_eq(a1, b1) * level_eq(a2, b2), - _ => 0, - }, - KLevel.Param(i) => - match b { - KLevel.Param(j) => eq_zero(i - j), - _ => 0, - }, - } - } - - -- Check if a level contains any Param - fn level_has_param(l: KLevel) -> G { - match l { - KLevel.Zero => 0, - KLevel.Param(_) => 1, - KLevel.Succ(&a) => level_has_param(a), - KLevel.Max(&a, &b) => - let ha = level_has_param(a); - match ha { - 1 => 1, - 0 => level_has_param(b), - }, - KLevel.IMax(&a, &b) => - let hb = level_has_param(b); - match hb { - 1 => 1, - 0 => level_has_param(a), - }, - } - } - - -- Find any Param index in a level. Precondition: level contains at least one Param. - fn level_any_param(l: KLevel) -> G { - match l { - KLevel.Param(i) => i, - KLevel.Succ(&a) => level_any_param(a), - KLevel.Max(&a, &b) => - let ha = level_has_param(a); - match ha { - 1 => level_any_param(a), - 0 => level_any_param(b), - }, - KLevel.IMax(&a, &b) => - let hb = level_has_param(b); - match hb { - 1 => level_any_param(b), - 0 => level_any_param(a), - }, - KLevel.Zero => 0, - } - } - - -- Substitute Param(p) with repl in a level, normalizing as we go - fn level_subst_reduce(l: KLevel, p: G, repl: KLevel) -> KLevel { - match l { - KLevel.Zero => KLevel.Zero, - KLevel.Param(i) => - match i - p { - 0 => repl, - _ => KLevel.Param(i), - }, - KLevel.Succ(&a) => - KLevel.Succ(store(level_subst_reduce(a, p, repl))), - KLevel.Max(&a, &b) => - level_max(level_subst_reduce(a, p, repl), level_subst_reduce(b, p, repl)), - KLevel.IMax(&a, &b) => - level_imax(level_subst_reduce(a, p, repl), level_subst_reduce(b, p, repl)), - } - } - - -- Check ⟦a⟧σ ≤ ⟦b⟧σ for all level assignments σ : Param → ℕ. - -- Returns 1 iff the inequality holds universally; 0 otherwise. - -- - -- Sound and complete for reduced levels. Proof sketch by case: - -- Zero ≤ b: trivially true (0 ≤ anything) - -- Max(a1,a2) ≤ b: iff a1 ≤ b ∧ a2 ≤ b - -- Succ(Max(x,y)) ≤ b: distribute: succ(max) = max(succ,succ) - -- Succ(a1) ≤ Succ(b1): peel both succs - -- Succ(a1) ≤ Zero/Param/IMax: false (Zero Witness: reduced IMax evaluates to 0 at σ₀) - -- Succ(a1) ≤ Max(b1,b2): try each branch; if both fail and b has params, case-split - -- to resolve IMax children (tropical completeness after resolution) - -- Param(i) ≤ Param(j): iff i = j - -- Param(i) ≤ Succ(b1): reduces to Param(i) ≤ b1 (complete by monotonicity argument) - -- Param(i) ≤ Max(b1,b2): try each branch (complete: Param tracks through some branch) - -- Param(i) ≤ IMax(b1,b2): case-split on a param in b2 - -- IMax(a1,a2) ≤ b: if a2 definitely nonzero, treat as Max; else case-split on a2 - -- - -- Case-splitting substitutes p → 0 and p → Succ(Param(p)), partitioning all assignments. 
- -- Each split strictly reduces free params, ensuring termination. - -- See KERNEL.md §3 "Level Comparison" for the full formal argument. - fn level_leq(a: KLevel, b: KLevel) -> G { - match a { - KLevel.Zero => 1, - -- max(a1, a2) <= b iff a1 <= b and a2 <= b - KLevel.Max(&a1, &a2) => - level_leq(a1, b) * level_leq(a2, b), - KLevel.Succ(&a1) => - match a1 { - -- Distribute Succ over Max: succ(max(x,y)) = max(succ(x), succ(y)) - KLevel.Max(&x, &y) => - level_leq(KLevel.Succ(store(x)), b) * level_leq(KLevel.Succ(store(y)), b), - _ => - match b { - KLevel.Succ(&b1) => level_leq(a1, b1), - KLevel.Max(&b1, &b2) => - let r1 = level_leq(a, b1); - match r1 { - 1 => 1, - 0 => - let r2 = level_leq(a, b2); - match r2 { - 1 => 1, - -- Neither branch alone dominates; case-split on a param in b - -- to resolve any IMax children (see INCOMPLETE.md) - 0 => - let bfull = KLevel.Max(store(b1), store(b2)); - let hp = level_has_param(bfull); - match hp { - 0 => 0, - _ => - let p = level_any_param(bfull); - let sp = KLevel.Succ(store(KLevel.Param(p))); - let a0 = level_subst_reduce(a, p, KLevel.Zero); - let b0 = level_subst_reduce(bfull, p, KLevel.Zero); - let a1s = level_subst_reduce(a, p, sp); - let b1s = level_subst_reduce(bfull, p, sp); - level_leq(a0, b0) * level_leq(a1s, b1s), - }, - }, - }, - _ => 0, - }, - }, - KLevel.Param(i) => - match b { - KLevel.Param(j) => eq_zero(i - j), - -- Param(i) <= Succ(X) iff Param(i) <= X (levels are integers, so tight) - KLevel.Succ(&b1) => level_leq(a, b1), - KLevel.Max(&b1, &b2) => - let r1 = level_leq(a, b1); - match r1 { - 1 => 1, - 0 => level_leq(a, b2), - }, - -- Param(i) <= IMax(b1, b2): case-split on a param in b2 - KLevel.IMax(&b1, &b2) => - let p = level_any_param(b2); - let sp = KLevel.Succ(store(KLevel.Param(p))); - let a0 = level_subst_reduce(a, p, KLevel.Zero); - let bfull = KLevel.IMax(store(b1), store(b2)); - let b0 = level_subst_reduce(bfull, p, KLevel.Zero); - let a1s = level_subst_reduce(a, p, sp); - let b1s = level_subst_reduce(bfull, p, sp); - level_leq(a0, b0) * level_leq(a1s, b1s), - KLevel.Zero => 0, - }, - KLevel.IMax(&a1, &a2) => - let not_zero = level_is_not_zero(a2); - match not_zero { - -- imax(a1, a2) where a2 is definitely not zero behaves as max(a1, a2) - 1 => level_leq(a1, b) * level_leq(a2, b), - -- Case-split: substitute a param from a2 with Zero and Succ(Param) - 0 => - let p = level_any_param(a2); - let sp = KLevel.Succ(store(KLevel.Param(p))); - let afull = KLevel.IMax(store(a1), store(a2)); - let a0 = level_subst_reduce(afull, p, KLevel.Zero); - let b0 = level_subst_reduce(b, p, KLevel.Zero); - let a1s = level_subst_reduce(afull, p, sp); - let b1s = level_subst_reduce(b, p, sp); - level_leq(a0, b0) * level_leq(a1s, b1s), - }, - } - } - - -- Semantic level equality: a <= b AND b <= a - fn level_equal(a: KLevel, b: KLevel) -> G { - level_leq(a, b) * level_leq(b, a) - } - - -- Reduce max(a, b) assuming a and b are already reduced - fn level_max(a: KLevel, b: KLevel) -> KLevel { - match a { - KLevel.Zero => b, - _ => - match b { - KLevel.Zero => a, - _ => - let eq = level_eq(a, b); - match eq { - 1 => a, - 0 => - match a { - KLevel.Succ(&a1) => - match b { - KLevel.Succ(&b1) => KLevel.Succ(store(level_max(a1, b1))), - _ => KLevel.Max(store(a), store(b)), - }, - _ => KLevel.Max(store(a), store(b)), - }, - }, - }, - } - } - - -- Reduce imax(a, b) assuming a and b are already reduced - fn level_imax(a: KLevel, b: KLevel) -> KLevel { - match b { - KLevel.Zero => KLevel.Zero, - KLevel.Succ(_) => level_max(a, b), - _ => - let not_zero = 
level_is_not_zero(b); - match not_zero { - 1 => level_max(a, b), - 0 => - match a { - KLevel.Zero => b, - _ => - let eq = level_eq(a, b); - match eq { - 1 => a, - 0 => KLevel.IMax(store(a), store(b)), - }, - }, - }, - } - } - - -- Reduce a level to normal form - fn level_reduce(l: KLevel) -> KLevel { - match l { - KLevel.Zero => KLevel.Zero, - KLevel.Param(i) => KLevel.Param(i), - KLevel.Succ(&u) => KLevel.Succ(store(level_reduce(u))), - KLevel.Max(&a, &b) => level_max(level_reduce(a), level_reduce(b)), - KLevel.IMax(&a, &b) => level_imax(level_reduce(a), level_reduce(b)), - } - } - - -- Substitute all Level.Param(i) -> params[i] in a level - fn level_inst_params(l: KLevel, params: List‹&KLevel›) -> KLevel { - match l { - KLevel.Zero => KLevel.Zero, - KLevel.Succ(&u) => KLevel.Succ(store(level_inst_params(u, params))), - KLevel.Max(&a, &b) => - level_max(level_inst_params(a, params), level_inst_params(b, params)), - KLevel.IMax(&a, &b) => - level_imax(level_inst_params(a, params), level_inst_params(b, params)), - KLevel.Param(i) => load(list_lookup(params, i)), - } - } - - -- ============================================================================ - -- Expression-level level substitution - -- ============================================================================ - - -- Substitute all Level.Param(i) -> params[i] in all levels within an expression - fn expr_inst_levels(e: KExpr, params: List‹&KLevel›) -> KExpr { - match load(e) { - KExprNode.BVar(i) => store(KExprNode.BVar(i)), - KExprNode.Srt(&l) => - store(KExprNode.Srt(store(level_inst_params(l, params)))), - KExprNode.Const(idx, lvls) => - store(KExprNode.Const(idx, level_list_inst(lvls, params))), - KExprNode.App(f, a) => - store(KExprNode.App(expr_inst_levels(f, params), expr_inst_levels(a, params))), - KExprNode.Lam(ty, body) => - store(KExprNode.Lam(expr_inst_levels(ty, params), expr_inst_levels(body, params))), - KExprNode.Forall(ty, body) => - store(KExprNode.Forall(expr_inst_levels(ty, params), expr_inst_levels(body, params))), - KExprNode.Let(ty, val, body) => - store(KExprNode.Let( - expr_inst_levels(ty, params), - expr_inst_levels(val, params), - expr_inst_levels(body, params))), - KExprNode.Lit(lit) => store(KExprNode.Lit(lit)), - KExprNode.Proj(tidx, fidx, e1) => - store(KExprNode.Proj(tidx, fidx, expr_inst_levels(e1, params))), - } - } - - -- Substitute level params in a level list - fn level_list_inst(lvls: List‹&KLevel›, params: List‹&KLevel›) -> List‹&KLevel› { - match load(lvls) { - ListNode.Nil => store(ListNode.Nil), - ListNode.Cons(&l, rest) => - store(ListNode.Cons( - store(level_inst_params(l, params)), - level_list_inst(rest, params))), - } - } - - -- ============================================================================ - -- Evaluation (NbE) - -- - -- Normalization by Evaluation: expressions (KExpr) are evaluated into semantic - -- values (KVal) using closures. A lambda captures its environment; applying it - -- pushes the argument, giving O(1) beta reduction. Defn/Thm constants unfold - -- eagerly during eval; other constants form neutrals whose spines accumulate - -- args via k_apply, which fires iota / quotient reduction when a Rec / Quot - -- spine reaches its exact required arg count. k_eval always returns WHNF. - -- Free variables use de Bruijn levels (stable under binder entry). 
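-  -- Example: k_eval(App(Lam(T, BVar 0), a), env, top) evaluates the lambda to
-  -- a closure, suspends `a`, conses it onto the closure env, and forces it via
-  -- the BVar lookup: one allocation, no substitution walk over the body.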
- -- ============================================================================ - - -- Force a thunk: if it's a Thunk, evaluate it; otherwise return as-is - fn k_force(v: KVal, top: List‹&KConstantInfo›) -> KVal { - match load(v) { - KValNode.Thunk(e, env) => k_eval(e, env, top), - _ => v, - } - } - - -- Evaluate an expression to a value using Normalization by Evaluation (NbE) - fn k_eval(e: KExpr, env: KValEnv, top: List‹&KConstantInfo›) -> KVal { - match load(e) { - KExprNode.BVar(idx) => - k_force(list_lookup(env, idx), top), - - KExprNode.Srt(&l) => - store(KValNode.Srt(store(level_reduce(l)))), - - -- Eager delta: unfold Defn/Thm during eval. Other constants stay neutral - -- with empty spine; args accumulate via k_apply. - KExprNode.Const(idx, lvls) => - let ci = load(list_lookup(top, idx)); - match ci { - KConstantInfo.Defn(_, _, value, _) => - let body = expr_inst_levels(value, lvls); - k_eval(body, store(ListNode.Nil), top), - KConstantInfo.Thm(_, _, value) => - let body = expr_inst_levels(value, lvls); - k_eval(body, store(ListNode.Nil), top), - KConstantInfo.Ctor(_, _, _, _, nparams, _, _) => - store(KValNode.Ctor(idx, lvls, nparams, store(ListNode.Nil))), - KConstantInfo.Axiom(_, _, _) => - store(KValNode.Axiom(idx, lvls, store(ListNode.Nil))), - KConstantInfo.Opaque(_, _, _, _) => - store(KValNode.Opaque(idx, lvls, store(ListNode.Nil))), - KConstantInfo.Quot(_, _, _) => - store(KValNode.Quot(idx, lvls, store(ListNode.Nil))), - KConstantInfo.Induct(_, _, _, _, _, _, _, _) => - store(KValNode.Induct(idx, lvls, store(ListNode.Nil))), - KConstantInfo.Rec(_, _, _, _, _, _, _, _, _) => - store(KValNode.Rec(idx, lvls, store(ListNode.Nil))), - }, - - KExprNode.App(f, a) => - let vf = k_eval(f, env, top); - let arg = suspend(a, env); - k_apply(vf, arg, top), - - KExprNode.Lam(ty, body) => - let ty_val = suspend(ty, env); - store(KValNode.Lam(ty_val, body, env)), - - KExprNode.Forall(ty, body) => - let ty_val = suspend(ty, env); - store(KValNode.Pi(ty_val, body, env)), - - KExprNode.Let(_, val, body) => - let v = suspend(val, env); - let env2 = store(ListNode.Cons(v, env)); - k_eval(body, env2, top), - - KExprNode.Lit(lit) => - store(KValNode.Lit(lit)), - - KExprNode.Proj(tidx, fidx, e1) => - let v = k_eval(e1, env, top); - match load(v) { - KValNode.Ctor(_, _, nparams, spine) => - let field_idx = nparams + fidx; - let field = list_lookup(spine, field_idx); - k_force(field, top), - _ => - store(KValNode.Proj(tidx, fidx, v, store(ListNode.Nil))), - }, - } - } - - -- Suspend an expression: evaluate immediately for cheap/structural forms - -- (BVar lookup, Srt, Lit, Lam closure, Pi closure); otherwise defer to a thunk. 
- fn suspend(e: KExpr, env: KValEnv) -> KVal { - match load(e) { - KExprNode.BVar(idx) => - list_lookup(env, idx), - KExprNode.Srt(&l) => - store(KValNode.Srt(store(level_reduce(l)))), - KExprNode.Lit(lit) => - store(KValNode.Lit(lit)), - KExprNode.Lam(ty, body) => - let ty_val = suspend(ty, env); - store(KValNode.Lam(ty_val, body, env)), - KExprNode.Forall(ty, body) => - let ty_val = suspend(ty, env); - store(KValNode.Pi(ty_val, body, env)), - _ => - store(KValNode.Thunk(e, env)), - } - } - - -- Apply a value to an argument (lazy: arg may be a thunk) - fn k_apply(f: KVal, arg: KVal, top: List‹&KConstantInfo›) -> KVal { - match load(f) { - KValNode.Lam(_, body, env) => - let env2 = store(ListNode.Cons(arg, env)); - k_eval(body, env2, top), - - KValNode.Ctor(idx, lvls, nparams, spine) => - let spine2 = list_snoc(spine, arg); - store(KValNode.Ctor(idx, lvls, nparams, spine2)), - - KValNode.FVar(lvl, fvar_ty, spine) => - let spine2 = list_snoc(spine, arg); - store(KValNode.FVar(lvl, fvar_ty, spine2)), - - KValNode.Axiom(idx, lvls, spine) => - let spine2 = list_snoc(spine, arg); - store(KValNode.Axiom(idx, lvls, spine2)), - - KValNode.Defn(idx, lvls, spine) => - let spine2 = list_snoc(spine, arg); - store(KValNode.Defn(idx, lvls, spine2)), - - KValNode.Thm(idx, lvls, spine) => - let spine2 = list_snoc(spine, arg); - store(KValNode.Thm(idx, lvls, spine2)), - - KValNode.Opaque(idx, lvls, spine) => - let spine2 = list_snoc(spine, arg); - store(KValNode.Opaque(idx, lvls, spine2)), - - KValNode.Quot(idx, lvls, spine) => - let spine2 = list_snoc(spine, arg); - let ci = load(list_lookup(top, idx)); - match ci { - KConstantInfo.Quot(_, _, kind) => - match kind { - QuotKind.Lift => - k_try_quot_fire(idx, lvls, spine2, 6, 3, top), - QuotKind.Ind => - k_try_quot_fire(idx, lvls, spine2, 5, 3, top), - _ => - store(KValNode.Quot(idx, lvls, spine2)), - }, - }, - - KValNode.Induct(idx, lvls, spine) => - let spine2 = list_snoc(spine, arg); - store(KValNode.Induct(idx, lvls, spine2)), - - KValNode.Rec(idx, lvls, spine) => - let spine2 = list_snoc(spine, arg); - k_try_iota_fire(idx, lvls, spine2, top), - - KValNode.Proj(tidx, fidx, sv, spine) => - let spine2 = list_snoc(spine, arg); - store(KValNode.Proj(tidx, fidx, sv, spine2)), - - KValNode.Thunk(e, env) => - let v = k_eval(e, env, top); - k_apply(v, arg, top), - - } - } - - -- Apply a value to a list of arguments - fn k_apply_spine(f: KVal, spine: List‹KVal›, top: List‹&KConstantInfo›) -> KVal { - match load(spine) { - ListNode.Nil => f, - ListNode.Cons(v, rest) => - let f2 = k_apply(f, v, top); - k_apply_spine(f2, rest, top), - } - } - - -- ============================================================================ - -- Iota reduction (recursor on constructor) - -- - -- When a recursor meets the constructor it can pattern-match on, reduce: - -- Nat.rec motive hz hs (Nat.succ n) → hs n (Nat.rec motive hz hs n) - -- Also handles Nat literal iota: Lit(0) matches the zero constructor, - -- Lit(n+1) matches succ with Lit(n) as predecessor. - -- ============================================================================ - - -- Get induct_idx from a constructor's constant info - fn ctor_induct_idx(ctor_idx: G, top: List‹&KConstantInfo›) -> G { - let ctor_ci = load(list_lookup(top, ctor_idx)); - match ctor_ci { - KConstantInfo.Ctor(_, _, induct_idx, _, _, _, _) => induct_idx, - } - } - - -- Fire iota reduction at exact arg count (called from k_apply on a Rec value - -- whose spine just grew by one). 
Spine is exactly nparams+nmotives+nminors+ - -- nindices+1 elements when fired; under-application leaves the Rec stuck. - -- Assumes no over-application. - fn k_try_iota_fire(idx: G, lvls: List‹&KLevel›, spine: List‹KVal›, top: List‹&KConstantInfo›) -> KVal { - let ci = load(list_lookup(top, idx)); - match ci { - KConstantInfo.Rec(_, _, nparams, nindices, nmotives, nminors, rules, k_flag, _) => - let needed = nparams + nmotives + nminors + nindices + 1; - let spine_len = list_length(spine); - match spine_len - needed { - 0 => - let maj_idx = needed - 1; - let major_raw = list_lookup(spine, maj_idx); - let major = k_force(major_raw, top); - match load(major) { - KValNode.Ctor(ctor_idx, _, ctor_nparams, ctor_spine) => - let rule_found = rec_rule_try_find(rules, ctor_idx); - match rule_found { - Option.None => - store(KValNode.Rec(idx, lvls, spine)), - Option.Some(rule) => - match rule { - KRecRule.Mk(_, nfields, rhs) => - let rhs_inst = expr_inst_levels(rhs, lvls); - let rhs_val = k_eval(rhs_inst, store(ListNode.Nil), top); - let params_motives_minors = list_take(spine, nparams + nmotives + nminors); - let result = k_apply_spine(rhs_val, params_motives_minors, top); - let fields = list_drop(ctor_spine, ctor_nparams); - k_apply_spine(result, fields, top), - }, - }, - KValNode.Lit(lit) => - match lit { - KLiteral.Nat(n) => - -- Nat literal iota: Lit(0) → zero rule, Lit(n+1) → succ rule with Lit(n) - let first_ctor_idx = rec_rule_first_ctor(rules); - let induct_idx = ctor_induct_idx(first_ctor_idx, top); - let ind_ci = load(list_lookup(top, induct_idx)); - match ind_ci { - KConstantInfo.Induct(_, _, _, _, ctor_indices, _, _, _) => - let pmm_end = nparams + nmotives + nminors; - let is_zero = klimbs_is_zero(n); - match is_zero { - 1 => - let zero_ctor_idx = list_lookup(ctor_indices, 0); - let rule = rec_rule_find(rules, zero_ctor_idx); - match rule { - KRecRule.Mk(_, _, rhs) => - let rhs_inst = expr_inst_levels(rhs, lvls); - let rhs_val = k_eval(rhs_inst, store(ListNode.Nil), top); - let pmm = list_take(spine, pmm_end); - k_apply_spine(rhs_val, pmm, top), - }, - 0 => - let succ_ctor_idx = list_lookup(ctor_indices, 1); - let rule = rec_rule_find(rules, succ_ctor_idx); - match rule { - KRecRule.Mk(_, _, rhs) => - let rhs_inst = expr_inst_levels(rhs, lvls); - let rhs_val = k_eval(rhs_inst, store(ListNode.Nil), top); - let pmm = list_take(spine, pmm_end); - let result = k_apply_spine(rhs_val, pmm, top); - let pred = store(KValNode.Lit(KLiteral.Nat(klimbs_pred(n)))); - let ctor_fields = store(ListNode.Cons(pred, store(ListNode.Nil))); - k_apply_spine(result, ctor_fields, top), - }, - }, - }, - KLiteral.Str(_) => - store(KValNode.Rec(idx, lvls, spine)), - }, - _ => - -- K-reduction: for proof-irrelevant (Prop) inductives with k_flag set, - -- the minor premise alone is the result (motive, minor at nparams+nmotives). - match k_flag { - 0 => - store(KValNode.Rec(idx, lvls, spine)), - _ => - let minor_idx = nparams + nmotives; - list_lookup(spine, minor_idx), - }, - }, - _ => - store(KValNode.Rec(idx, lvls, spine)), - }, - } - } - - -- ============================================================================ - -- Quotient reduction - -- ============================================================================ - - -- Fire quotient reduction at exact arg count (called from k_apply on a Quot - -- value of kind Lift/Ind whose spine just grew by one). 
For Quot.lift the
-  -- spine is [α, r, β, f, h, ⟨Quot.mk r a⟩] (size 6, f_pos 3); for Quot.ind
-  -- the spine is [α, r, motive, f, ⟨Quot.mk r a⟩] (size 5, f_pos 3).
-  -- Reduces to f a. Assumes no over-application.
-  fn k_try_quot_fire(idx: G, lvls: List‹&KLevel›, spine: List‹KVal›,
-                     reduce_size: G, f_pos: G, top: List‹&KConstantInfo›) -> KVal {
-    let spine_len = list_length(spine);
-    match spine_len - reduce_size {
-      0 =>
-        -- Exact arg count: try fire
-        let major_idx = reduce_size - 1;
-        let major_raw = list_lookup(spine, major_idx);
-        let major = k_force(major_raw, top);
-        match load(major) {
-          KValNode.Quot(mk_idx, _, mk_spine) =>
-            let mk_ci = load(list_lookup(top, mk_idx));
-            match mk_ci {
-              KConstantInfo.Quot(_, _, mk_kind) =>
-                match mk_kind {
-                  QuotKind.Ctor =>
-                    -- Fire only when Quot.mk is fully applied: mk_spine = [α, r, a]
-                    let mk_len = list_length(mk_spine);
-                    match mk_len - 3 {
-                      0 =>
-                        let quot_val_idx = mk_len - 1;
-                        let quot_val = list_lookup(mk_spine, quot_val_idx);
-                        let f_val = k_force(list_lookup(spine, f_pos), top);
-                        k_apply(f_val, quot_val, top),
-                      _ => store(KValNode.Quot(idx, lvls, spine)),
-                    },
-                  _ => store(KValNode.Quot(idx, lvls, spine)),
-                },
-              _ => store(KValNode.Quot(idx, lvls, spine)),
-            },
-          _ => store(KValNode.Quot(idx, lvls, spine)),
-        },
-      _ =>
-        store(KValNode.Quot(idx, lvls, spine)),
-    }
-  }
-
-  -- ============================================================================
-  -- Quotation (values back to expressions)
-  --
-  -- Readback from the semantic domain: converts KVal back to KExpr.
-  -- Needed when instantiating universe parameters or building the Pi type
-  -- for a lambda's inferred type. Converts de Bruijn levels back to indices.
-  -- ============================================================================
-
-  -- Quote a value back into an expression (readback), converting free variables
-  -- to de Bruijn indices relative to the current depth
-  fn k_quote(v: KVal, depth: G, top: List‹&KConstantInfo›) -> KExpr {
-    match load(v) {
-      KValNode.Thunk(e, env) =>
-        let val = k_eval(e, env, top);
-        k_quote(val, depth, top),
-
-      KValNode.Srt(&l) => store(KExprNode.Srt(store(l))),
-
-      KValNode.Lit(lit) => store(KExprNode.Lit(lit)),
-
-      KValNode.Lam(dom, body, env) =>
-        let dom_expr = k_quote(dom, depth, top);
-        let fvar = store(KValNode.FVar(depth, dom, store(ListNode.Nil)));
-        let env2 = store(ListNode.Cons(fvar, env));
-        let body_val = k_eval(body, env2, top);
-        let body_expr = k_quote(body_val, depth + 1, top);
-        store(KExprNode.Lam(dom_expr, body_expr)),
-
-      KValNode.Pi(dom, body, env) =>
-        let dom_expr = k_quote(dom, depth, top);
-        let fvar = store(KValNode.FVar(depth, dom, store(ListNode.Nil)));
-        let env2 = store(ListNode.Cons(fvar, env));
-        let body_val = k_eval(body, env2, top);
-        let body_expr = k_quote(body_val, depth + 1, top);
-        store(KExprNode.Forall(dom_expr, body_expr)),
-
-      KValNode.Ctor(idx, lvls, _, spine) =>
-        let base = store(KExprNode.Const(idx, lvls));
-        quote_spine(base, spine, depth, top),
-
-      KValNode.FVar(lvl, _, spine) =>
-        let idx = (depth - 1) - lvl;
-        let base = store(KExprNode.BVar(idx));
-        quote_spine(base, spine, depth, top),
-
-      KValNode.Axiom(idx, lvls, spine) =>
-        let base = store(KExprNode.Const(idx, lvls));
-        quote_spine(base, spine, depth, top),
-
-      KValNode.Defn(idx, lvls, spine) =>
-        let base = store(KExprNode.Const(idx, lvls));
-        quote_spine(base, spine, depth, top),
-
-      KValNode.Thm(idx, lvls, spine) =>
-        let base = store(KExprNode.Const(idx, lvls));
-        quote_spine(base, spine, depth, top),
-
-      KValNode.Opaque(idx, lvls,
spine) => - let base = store(KExprNode.Const(idx, lvls)); - quote_spine(base, spine, depth, top), - - KValNode.Quot(idx, lvls, spine) => - let base = store(KExprNode.Const(idx, lvls)); - quote_spine(base, spine, depth, top), - - KValNode.Induct(idx, lvls, spine) => - let base = store(KExprNode.Const(idx, lvls)); - quote_spine(base, spine, depth, top), - - KValNode.Rec(idx, lvls, spine) => - let base = store(KExprNode.Const(idx, lvls)); - quote_spine(base, spine, depth, top), - - KValNode.Proj(tidx, fidx, sv, spine) => - let sv_expr = k_quote(sv, depth, top); - let base = store(KExprNode.Proj(tidx, fidx, sv_expr)); - quote_spine(base, spine, depth, top), - } - } - - -- Quote a spine of arguments, wrapping each in an EApp around the base expression - fn quote_spine(base: KExpr, spine: List‹KVal›, depth: G, top: List‹&KConstantInfo›) -> KExpr { - match load(spine) { - ListNode.Nil => base, - ListNode.Cons(v, rest) => - let arg_expr = k_quote(v, depth, top); - let app = store(KExprNode.App(base, arg_expr)); - quote_spine(app, rest, depth, top), - } - } - - -- ============================================================================ - -- Type inference - -- - -- Infer the type of an expression (k_infer) or check it against an expected - -- type (k_check). Bidirectional: when checking a lambda against a Pi type, - -- the expected codomain is pushed through the body, avoiding an expensive - -- infer + isDefEq. - -- ============================================================================ - - -- Infer the type of an expression under the given type and value environments. - -- nat_idx/str_idx are the constant indices for the Nat/String types (for literal typing). - fn k_infer(e: KExpr, types: List‹KVal›, env: KValEnv, depth: G, top: List‹&KConstantInfo›, nat_idx: G, str_idx: G) -> KVal { - match load(e) { - KExprNode.BVar(idx) => - list_lookup(types, idx), - - KExprNode.Srt(&l) => - store(KValNode.Srt(store(KLevel.Succ(store(l))))), - - KExprNode.Lit(lit) => - match lit { - KLiteral.Nat(_) => - store(KValNode.Induct(nat_idx, store(ListNode.Nil), store(ListNode.Nil))), - KLiteral.Str(_) => - store(KValNode.Induct(str_idx, store(ListNode.Nil), store(ListNode.Nil))), - }, - - KExprNode.Const(idx, lvls) => - let ci = load(list_lookup(top, idx)); - let expected = const_num_levels(ci); - let given = list_length(lvls); - let lvl_eq = eq_zero(expected - given); - assert_eq!(lvl_eq, 1); - let ty = const_type(ci); - let ty_inst = expr_inst_levels(ty, lvls); - k_eval(ty_inst, store(ListNode.Nil), top), - - KExprNode.App(f, a) => - let fn_type = k_infer(f, types, env, depth, top, nat_idx, str_idx); - let fn_type_whnf = k_force(fn_type, top); - - match load(fn_type_whnf) { - KValNode.Pi(dom, body, pi_env) => - let _ = k_check(a, dom, types, env, depth, top, nat_idx, str_idx); - let arg_val = suspend(a, env); - let pi_env2 = store(ListNode.Cons(arg_val, pi_env)); - k_eval(body, pi_env2, top), - }, - - KExprNode.Lam(ty, body) => - let _ = k_ensure_sort(ty, types, env, depth, top, nat_idx, str_idx); - let dom_val = k_eval(ty, env, top); - let fvar = store(KValNode.FVar(depth, dom_val, store(ListNode.Nil))); - let types2 = store(ListNode.Cons(dom_val, types)); - let env2 = store(ListNode.Cons(fvar, env)); - let body_type = k_infer(body, types2, env2, depth + 1, top, nat_idx, str_idx); - let body_type_expr = k_quote(body_type, depth + 1, top); - store(KValNode.Pi(dom_val, body_type_expr, env)), - - KExprNode.Forall(ty, body) => - let dom_level = k_ensure_sort(ty, types, env, depth, top, nat_idx, str_idx); - 
let dom_val = k_eval(ty, env, top); - let fvar = store(KValNode.FVar(depth, dom_val, store(ListNode.Nil))); - let types2 = store(ListNode.Cons(dom_val, types)); - let env2 = store(ListNode.Cons(fvar, env)); - let body_level = k_ensure_sort(body, types2, env2, depth + 1, top, nat_idx, str_idx); - let result_level = level_imax(dom_level, body_level); - store(KValNode.Srt(store(result_level))), - - KExprNode.Let(ty, val, body) => - let _ = k_ensure_sort(ty, types, env, depth, top, nat_idx, str_idx); - let ty_val = k_eval(ty, env, top); - let _ = k_check(val, ty_val, types, env, depth, top, nat_idx, str_idx); - let val_val = suspend(val, env); - let types2 = store(ListNode.Cons(ty_val, types)); - let env2 = store(ListNode.Cons(val_val, env)); - k_infer(body, types2, env2, depth + 1, top, nat_idx, str_idx), - - KExprNode.Proj(tidx, fidx, e1) => - -- Infer struct type and force to expose inductive head - let struct_type = k_infer(e1, types, env, depth, top, nat_idx, str_idx); - let struct_type_whnf = k_force(struct_type, top); - match load(struct_type_whnf) { - KValNode.Induct(induct_idx, levels, params_spine) => - -- Look up inductive to get its single constructor index - let ind_ci = load(list_lookup(top, induct_idx)); - match ind_ci { - KConstantInfo.Induct(_, _, _, _, ctor_indices, _, _, _) => - let ctor_idx = list_lookup(ctor_indices, 0); - -- Get the constructor type, instantiate levels, and eval - let ctor_ci = load(list_lookup(top, ctor_idx)); - let ctor_type_expr = const_type(ctor_ci); - let ctor_type_inst = expr_inst_levels(ctor_type_expr, levels); - let ctor_type_val = k_eval(ctor_type_inst, store(ListNode.Nil), top); - -- Walk past params using values from the inductive's spine - let after_params = walk_params(ctor_type_val, params_spine, top); - -- Walk past preceding fields using Proj values - let struct_val = suspend(e1, env); - let after_fields = walk_fields(after_params, tidx, 0, fidx, struct_val, top); - -- Extract the domain type at field fidx - let result_whnf = k_force(after_fields, top); - match load(result_whnf) { - KValNode.Pi(dom, _, _) => dom, - }, - }, - }, - } - } - - -- Bidirectional type checking: check term against expected type. - -- For Lambda against Pi, pushes the codomain through instead of independently inferring. 
-  fn k_check(e: KExpr, expected: KVal, types: List‹KVal›, env: KValEnv, depth: G, top: List‹&KConstantInfo›, nat_idx: G, str_idx: G) {
-    match load(e) {
-      KExprNode.Lam(ty, body) =>
-        let expected_whnf = k_force(expected, top);
-        match load(expected_whnf) {
-          KValNode.Pi(pi_dom, pi_body, pi_env) =>
-            -- Check domain matches
-            let dom_val = k_eval(ty, env, top);
-            let dom_eq = k_is_def_eq(dom_val, pi_dom, depth, top, nat_idx, str_idx);
-            assert_eq!(dom_eq, 1);
-            -- Push Pi codomain through Lambda body
-            let fvar = store(KValNode.FVar(depth, pi_dom, store(ListNode.Nil)));
-            let types2 = store(ListNode.Cons(pi_dom, types));
-            let env2 = store(ListNode.Cons(fvar, env));
-            let pi_env2 = store(ListNode.Cons(fvar, pi_env));
-            let expected_body = k_eval(pi_body, pi_env2, top);
-            k_check(body, expected_body, types2, env2, depth + 1, top, nat_idx, str_idx),
-        },
-      _ =>
-        -- Non-lambda: infer + isDefEq
-        let inferred = k_infer(e, types, env, depth, top, nat_idx, str_idx);
-        let eq = k_is_def_eq(inferred, expected, depth, top, nat_idx, str_idx);
-        assert_eq!(eq, 1);,
-    }
-  }
-
-  -- Ensure a type expression evaluates to a Sort, returning the level
-  fn k_ensure_sort(e: KExpr, types: List‹KVal›, env: KValEnv, depth: G, top: List‹&KConstantInfo›, nat_idx: G, str_idx: G) -> KLevel {
-    let ty = k_infer(e, types, env, depth, top, nat_idx, str_idx);
-    let ty_whnf = k_force(ty, top);
-    match load(ty_whnf) {
-      KValNode.Srt(&l) => l,
-    }
-  }
-
-  -- Walk past n Pi binders, substituting param values from the spine
-  fn walk_params(ct: KVal, params: List‹KVal›, top: List‹&KConstantInfo›) -> KVal {
-    match load(params) {
-      ListNode.Nil => ct,
-      ListNode.Cons(param_val, rest_params) =>
-        let ct_whnf = k_force(ct, top);
-        match load(ct_whnf) {
-          KValNode.Pi(_, body, pi_env) =>
-            let env2 = store(ListNode.Cons(param_val, pi_env));
-            let next = k_eval(body, env2, top);
-            walk_params(next, rest_params, top),
-        },
-    }
-  }
-
-  -- Walk past n fields in a constructor type, substituting Proj values
-  fn walk_fields(ct: KVal, tidx: G, current_field: G, remaining: G, struct_val: KVal, top: List‹&KConstantInfo›) -> KVal {
-    match remaining {
-      0 => ct,
-      _ =>
-        let ct_whnf = k_force(ct, top);
-        match load(ct_whnf) {
-          KValNode.Pi(_, body, pi_env) =>
-            let proj_val = store(KValNode.Proj(tidx, current_field, struct_val, store(ListNode.Nil)));
-            let env2 = store(ListNode.Cons(proj_val, pi_env));
-            let next = k_eval(body, env2, top);
-            walk_fields(next, tidx, current_field + 1, remaining - 1, struct_val, top),
-        },
-    }
-  }
-
-  -- ============================================================================
-  -- Proof irrelevance helpers
-  --
-  -- If a : P and b : P where P : Prop (Sort 0), then a ≡ b.
-  -- k_infer_val_type is best-effort: it returns a Sort 1 sentinel for heads it
-  -- cannot type (Lam/Pi, and Proj whose struct type does not resolve), so proof
-  -- irrelevance won't trigger for proofs with those heads.
-  -- Conservative (never unsound) but incomplete.
-  -- ============================================================================
-
-  -- Apply a spine of argument values to a type by walking through Pi-bindings
-  fn apply_spine_to_type(ty: KVal, spine: List‹KVal›, top: List‹&KConstantInfo›) -> KVal {
-    match load(spine) {
-      ListNode.Nil => ty,
-      ListNode.Cons(arg, rest) =>
-        let ty_whnf = k_force(ty, top);
-        match load(ty_whnf) {
-          KValNode.Pi(_, body, pi_env) =>
-            let env2 = store(ListNode.Cons(arg, pi_env));
-            let next = k_eval(body, env2, top);
-            apply_spine_to_type(next, rest, top),
-        },
-    }
-  }
-
-  -- Infer the type of a value (best-effort, no error handling).
-  -- Returns Sort 1 as a sentinel for cases we can't handle (Lam, Pi, and
-  -- projections whose struct type doesn't resolve to an inductive).
-  fn k_infer_val_type(v: KVal, top: List‹&KConstantInfo›, nat_idx: G, str_idx: G) -> KVal {
-    match load(v) {
-      KValNode.Thunk(e, env) =>
-        let val = k_eval(e, env, top);
-        k_infer_val_type(val, top, nat_idx, str_idx),
-      KValNode.Srt(&l) => store(KValNode.Srt(store(KLevel.Succ(store(l))))),
-      KValNode.Lit(lit) =>
-        match lit {
-          KLiteral.Nat(_) => store(KValNode.Induct(nat_idx, store(ListNode.Nil), store(ListNode.Nil))),
-          KLiteral.Str(_) => store(KValNode.Induct(str_idx, store(ListNode.Nil), store(ListNode.Nil))),
-        },
-      KValNode.Axiom(idx, lvls, spine) =>
-        let ci = load(list_lookup(top, idx));
-        let ty = const_type(ci);
-        let ty_inst = expr_inst_levels(ty, lvls);
-        let ty_val = k_eval(ty_inst, store(ListNode.Nil), top);
-        apply_spine_to_type(ty_val, spine, top),
-      KValNode.Defn(idx, lvls, spine) =>
-        let ci = load(list_lookup(top, idx));
-        let ty = const_type(ci);
-        let ty_inst = expr_inst_levels(ty, lvls);
-        let ty_val = k_eval(ty_inst, store(ListNode.Nil), top);
-        apply_spine_to_type(ty_val, spine, top),
-      KValNode.Thm(idx, lvls, spine) =>
-        let ci = load(list_lookup(top, idx));
-        let ty = const_type(ci);
-        let ty_inst = expr_inst_levels(ty, lvls);
-        let ty_val = k_eval(ty_inst, store(ListNode.Nil), top);
-        apply_spine_to_type(ty_val, spine, top),
-      KValNode.Opaque(idx, lvls, spine) =>
-        let ci = load(list_lookup(top, idx));
-        let ty = const_type(ci);
-        let ty_inst = expr_inst_levels(ty, lvls);
-        let ty_val = k_eval(ty_inst, store(ListNode.Nil), top);
-        apply_spine_to_type(ty_val, spine, top),
-      KValNode.Quot(idx, lvls, spine) =>
-        let ci = load(list_lookup(top, idx));
-        let ty = const_type(ci);
-        let ty_inst = expr_inst_levels(ty, lvls);
-        let ty_val = k_eval(ty_inst, store(ListNode.Nil), top);
-        apply_spine_to_type(ty_val, spine, top),
-      KValNode.Induct(idx, lvls, spine) =>
-        let ci = load(list_lookup(top, idx));
-        let ty = const_type(ci);
-        let ty_inst = expr_inst_levels(ty, lvls);
-        let ty_val = k_eval(ty_inst, store(ListNode.Nil), top);
-        apply_spine_to_type(ty_val, spine, top),
-      KValNode.Rec(idx, lvls, spine) =>
-        let ci = load(list_lookup(top, idx));
-        let ty = const_type(ci);
-        let ty_inst = expr_inst_levels(ty, lvls);
-        let ty_val = k_eval(ty_inst, store(ListNode.Nil), top);
-        apply_spine_to_type(ty_val, spine, top),
-      KValNode.Ctor(idx, lvls, _, spine) =>
-        let ci = load(list_lookup(top, idx));
-        let ty = const_type(ci);
-        let ty_inst = expr_inst_levels(ty, lvls);
-        let ty_val = k_eval(ty_inst, store(ListNode.Nil), top);
-        apply_spine_to_type(ty_val, spine, top),
-      KValNode.Proj(tidx, fidx, sv, spine) =>
-        let struct_type = k_infer_val_type(sv, top, nat_idx, str_idx);
-        let struct_type_whnf = k_force(struct_type, top);
-        match load(struct_type_whnf) {
-          KValNode.Induct(induct_idx, levels, params_spine) =>
-            let ind_ci = load(list_lookup(top, induct_idx));
-            match ind_ci {
-              KConstantInfo.Induct(_, _, _, _, ctor_indices, _, _, _) =>
-                let ctor_idx = list_lookup(ctor_indices, 0);
-                let ctor_ci = load(list_lookup(top, ctor_idx));
-                let ctor_type_expr = const_type(ctor_ci);
-                let ctor_type_inst = expr_inst_levels(ctor_type_expr, levels);
-                let ctor_type_val = k_eval(ctor_type_inst, store(ListNode.Nil), top);
-                let after_params = walk_params(ctor_type_val, params_spine, top);
-                let after_fields = walk_fields(after_params, tidx, 0, fidx, sv, top);
-                let result_whnf = k_force(after_fields, top);
-                match load(result_whnf) {
-                  KValNode.Pi(dom, _, _) => apply_spine_to_type(dom, spine, top),
-                  -- If not a Pi,
return the type itself (could be the final result type) - _ => apply_spine_to_type(result_whnf, spine, top), - }, - -- Not an inductive, fall back to sentinel - _ => store(KValNode.Srt(store(KLevel.Succ(store(KLevel.Zero))))), - }, - -- If struct type can't be determined, fall back to sentinel - _ => store(KValNode.Srt(store(KLevel.Succ(store(KLevel.Zero))))), - }, - KValNode.FVar(_, fvar_type, spine) => - apply_spine_to_type(fvar_type, spine, top), - -- For Lam, Pi: return Sort 1 as sentinel (never Prop) - _ => store(KValNode.Srt(store(KLevel.Succ(store(KLevel.Zero))))), - } - } - - -- Check if a value is a proposition (its type is Sort 0 / Prop) - fn k_is_prop_val(v: KVal, top: List‹&KConstantInfo›, nat_idx: G, str_idx: G) -> G { - let ty = k_infer_val_type(v, top, nat_idx, str_idx); - let ty_whnf = k_force(ty, top); - match load(ty_whnf) { - KValNode.Srt(&l) => - match l { - KLevel.Zero => 1, - _ => 0, - }, - _ => 0, - } - } - - -- ============================================================================ - -- Struct eta helpers - -- - -- Structure eta: s ≡ ⟨s.1, s.2, ...⟩ for single-constructor types. - -- If one side is a Ctor of a struct-like inductive (1 constructor, no indices), - -- compare each field against Proj(i, other_side). - -- ============================================================================ - - -- Get num_fields from a constructor's constant info - fn ctor_num_fields(ctor_idx: G, top: List‹&KConstantInfo›) -> G { - let ctor_ci = load(list_lookup(top, ctor_idx)); - match ctor_ci { - KConstantInfo.Ctor(_, _, _, _, _, nfields, _) => nfields, - } - } - - -- Compare each field: Proj(tidx, i, t) vs spine[nparams + i] - fn eta_struct_fields(t: KVal, spine: List‹KVal›, nparams: G, tidx: G, current: G, remaining: G, depth: G, top: List‹&KConstantInfo›, nat_idx: G, str_idx: G) -> G { - match remaining { - 0 => 1, - _ => - let field_idx = nparams + current; - let field_val = list_lookup(spine, field_idx); - let proj_val = store(KValNode.Proj(tidx, current, t, store(ListNode.Nil))); - let eq = k_is_def_eq(proj_val, field_val, depth, top, nat_idx, str_idx); - match eq { - 0 => 0, - 1 => eta_struct_fields(t, spine, nparams, tidx, current + 1, remaining - 1, depth, top, nat_idx, str_idx), - }, - } - } - - -- Try struct eta: if s is a Ctor of a struct-like type, compare fields. - -- Inlines is_struct_like, ctor_induct_idx, ctor_num_fields to avoid redundant lookups. - fn try_eta_struct_one(t: KVal, s: KVal, depth: G, top: List‹&KConstantInfo›, nat_idx: G, str_idx: G) -> G { - match load(s) { - KValNode.Ctor(ctor_idx, _, nparams, spine) => - let ctor_ci = load(list_lookup(top, ctor_idx)); - match ctor_ci { - KConstantInfo.Ctor(_, _, induct_idx, _, _, num_fields, _) => - let ind_ci = load(list_lookup(top, induct_idx)); - match ind_ci { - KConstantInfo.Induct(_, _, _, _, ctor_indices, _, _, _) => - let num_ctors = list_length(ctor_indices); - match num_ctors - 1 { - 0 => - eta_struct_fields(t, spine, nparams, induct_idx, 0, num_fields, depth, top, nat_idx, str_idx), - _ => 0, - }, - _ => 0, - }, - _ => 0, - }, - _ => 0, - } - } - - -- Try struct eta in both directions - fn try_eta_struct(a: KVal, b: KVal, depth: G, top: List‹&KConstantInfo›, nat_idx: G, str_idx: G) -> G { - let r1 = try_eta_struct_one(a, b, depth, top, nat_idx, str_idx); - match r1 { - 1 => 1, - 0 => try_eta_struct_one(b, a, depth, top, nat_idx, str_idx), - } - } - - -- Unit-like type equality: if both values have a type with exactly one - -- nullary constructor and no indices, they are definitionally equal. 
- -- Examples: True, PUnit, PLift.up for propositions. - fn is_unit_like_type(ty: KVal, top: List‹&KConstantInfo›) -> G { - let ty_whnf = k_force(ty, top); - match load(ty_whnf) { - KValNode.Induct(induct_idx, _, _) => - let ci = load(list_lookup(top, induct_idx)); - match ci { - KConstantInfo.Induct(_, _, _, nindices, ctor_indices, _, _, _) => - let zero_indices = eq_zero(nindices); - let one_ctor = eq_zero(list_length(ctor_indices) - 1); - match zero_indices * one_ctor { - 0 => 0, - _ => - let ctor_idx = list_lookup(ctor_indices, 0); - let ctor_ci = load(list_lookup(top, ctor_idx)); - match ctor_ci { - KConstantInfo.Ctor(_, _, _, _, _, nfields, _) => - eq_zero(nfields), - _ => 0, - }, - }, - _ => 0, - }, - _ => 0, - } - } - - fn try_unit_like(a: KVal, b: KVal, top: List‹&KConstantInfo›, nat_idx: G, str_idx: G) -> G { - let a_type = k_infer_val_type(a, top, nat_idx, str_idx); - is_unit_like_type(a_type, top) - } - - -- ============================================================================ - -- Definitional equality - -- - -- The most complex part of the kernel. Uses a layered approach: - -- 1. Quick syntactic check (sorts, literals) - -- 2. Force both sides (drives any Thunk to its WHNF body) - -- 3. Proof irrelevance (type is Prop ⟹ equal by irrelevance) - -- 4. Structural comparison (k_is_def_eq_core) - -- 5. Struct eta (s ≡ ⟨s.1, s.2, ...⟩) - -- 6. Unit-like types (one nullary constructor ⟹ all values equal) - -- Delta unfolding happens eagerly in k_eval, so def-eq sees no unfoldable - -- constants and never needs to try a lazy-delta step. - -- ============================================================================ - - -- Check definitional equality of two values: first try a quick syntactic check, - -- then force both sides and compare structurally - fn k_is_def_eq(a: KVal, b: KVal, depth: G, top: List‹&KConstantInfo›, nat_idx: G, str_idx: G) -> G { - let not_eq_ptr = ptr_val(a) - ptr_val(b); - match (not_eq_ptr, a, b) { - (0, _, _) => 1, - (_, &KValNode.Srt(&la), &KValNode.Srt(&lb)) => level_equal(la, lb), - (_, &KValNode.Lit(la), &KValNode.Lit(lb)) => literal_eq(la, lb), - _ => - let a_whnf = k_force(a, top); - let b_whnf = k_force(b, top); - let not_eq_ptr = ptr_val(a_whnf) - ptr_val(b_whnf); - match (not_eq_ptr, a_whnf, b_whnf) { - (0, _, _) => 1, - (_, &KValNode.Srt(&la), &KValNode.Srt(&lb)) => level_equal(la, lb), - (_, &KValNode.Lit(la), &KValNode.Lit(lb)) => literal_eq(la, lb), - _ => - -- Proof irrelevance: a and b share the same type, so if that type is - -- Prop then both are proofs of the same proposition and are equal. 
- let a_type = k_infer_val_type(a_whnf, top, nat_idx, str_idx); - let a_is_prop = k_is_prop_val(a_type, top, nat_idx, str_idx); - match a_is_prop { - 1 => 1, - 0 => - let core_res = k_is_def_eq_core(a_whnf, b_whnf, depth, top, nat_idx, str_idx); - match core_res { - 0 => - let eta_res = try_eta_struct(a_whnf, b_whnf, depth, top, nat_idx, str_idx); - match eta_res { - 1 => 1, - 0 => try_unit_like(a_whnf, b_whnf, top, nat_idx, str_idx), - }, - 1 => 1, - }, - }, - }, - } - } - - -- ============================================================================ - -- KLimbs operations (bignum arithmetic on little-endian u64 limbs) - -- ============================================================================ - - -- Check if a KLimbs value is zero (Nil = zero) - fn klimbs_is_zero(limbs: KLimbs) -> G { - match load(limbs) { - ListNode.Nil => 1, - ListNode.Cons(_, _) => 0, - } - } - - -- Compare two KLimbs for equality (limb-by-limb) - fn klimbs_eq(a: KLimbs, b: KLimbs) -> G { - match load(a) { - ListNode.Nil => - match load(b) { - ListNode.Nil => 1, - _ => 0, - }, - ListNode.Cons(la, ra) => - match load(b) { - ListNode.Nil => 0, - ListNode.Cons(lb, rb) => - let eq = u64_eq(la, lb); - match eq { - 0 => 0, - 1 => klimbs_eq(ra, rb), - }, - }, - } - } - - -- Subtract 1 from a KLimbs bignum. Assumes non-zero input. - -- Works limb-by-limb: if limb is non-zero, decrement it; else borrow. - fn klimbs_pred(limbs: KLimbs) -> KLimbs { - match load(limbs) { - ListNode.Nil => store(ListNode.Nil), - ListNode.Cons(limb, rest) => - let is_zero = u64_is_zero(limb); - match is_zero { - 0 => - -- Non-zero limb: decrement it - let new_limb = relaxed_u64_pred(limb); - -- If this was the only limb and it became zero, return Nil - match load(rest) { - ListNode.Nil => - let new_zero = u64_is_zero(new_limb); - match new_zero { - 1 => store(ListNode.Nil), - 0 => store(ListNode.Cons(new_limb, store(ListNode.Nil))), - }, - _ => store(ListNode.Cons(new_limb, rest)), - }, - 1 => - -- Zero limb: borrow from next, this limb becomes 0xFF..FF - let new_rest = klimbs_pred(rest); - -- 0xFFFFFFFFFFFFFFFF = [255, 255, 255, 255, 255, 255, 255, 255] - let max_u64 = [255, 255, 255, 255, 255, 255, 255, 255]; - store(ListNode.Cons(max_u64, new_rest)), - }, - } - } - - -- Compare two ByteStreams for equality - fn bytestream_eq(a: ByteStream, b: ByteStream) -> G { - match load(a) { - ListNode.Nil => - match load(b) { - ListNode.Nil => 1, - _ => 0, - }, - ListNode.Cons(ba, ra) => - match load(b) { - ListNode.Nil => 0, - ListNode.Cons(bb, rb) => - match ba - bb { - 0 => bytestream_eq(ra, rb), - _ => 0, - }, - }, - } - } - - -- Check equality of two literals - fn literal_eq(a: KLiteral, b: KLiteral) -> G { - match a { - KLiteral.Nat(na) => - match b { - KLiteral.Nat(nb) => klimbs_eq(na, nb), - _ => 0, - }, - KLiteral.Str(sa) => - match b { - KLiteral.Str(sb) => bytestream_eq(sa, sb), - _ => 0, - }, - } - } - - -- Compare a Nat literal with a Nat constructor value - fn nat_lit_eq_ctor( - lit: KLiteral, ctor_idx: G, nparams: G, ctor_spine: List‹KVal›, - depth: G, top: List‹&KConstantInfo›, nat_idx: G, str_idx: G - ) -> G { - match lit { - KLiteral.Nat(n) => - let induct_idx = ctor_induct_idx(ctor_idx, top); - match induct_idx - nat_idx { - 0 => - let nfields = ctor_num_fields(ctor_idx, top); - let is_zero = klimbs_is_zero(n); - match is_zero { - 1 => - -- Lit(0) == Ctor if ctor has 0 fields - eq_zero(nfields), - 0 => - -- Lit(n+1) == Ctor if ctor has 1 field and that field == Lit(n) - match nfields - 1 { - 0 => - let pred_val = 
list_lookup(ctor_spine, nparams); - let pred_lit = store(KValNode.Lit(KLiteral.Nat(klimbs_pred(n)))); - k_is_def_eq(pred_lit, pred_val, depth, top, nat_idx, str_idx), - _ => 0, - }, - }, - _ => 0, - }, - KLiteral.Str(_) => 0, - } - } - - -- Structural definitional equality after force - fn k_is_def_eq_core(a: KVal, b: KVal, depth: G, top: List‹&KConstantInfo›, nat_idx: G, str_idx: G) -> G { - match ptr_val(a) - ptr_val(b) { - 0 => 1, - _ => - match load(a) { - KValNode.Srt(&la) => - match load(b) { - KValNode.Srt(&lb) => level_equal(la, lb), - _ => 0, - }, - - KValNode.Lit(la) => - match load(b) { - KValNode.Lit(lb) => literal_eq(la, lb), - KValNode.Ctor(ctor_idx, _, nparams, ctor_spine) => - nat_lit_eq_ctor(la, ctor_idx, nparams, ctor_spine, depth, top, nat_idx, str_idx), - _ => 0, - }, - - KValNode.FVar(lvl_a, _, sp_a) => - match load(b) { - KValNode.FVar(lvl_b, _, sp_b) => - match lvl_a - lvl_b { - 0 => k_is_def_eq_spine(sp_a, sp_b, depth, top, nat_idx, str_idx), - _ => 0, - }, - _ => 0, - }, - - KValNode.Axiom(idx_a, lvls_a, sp_a) => - match load(b) { - KValNode.Axiom(idx_b, lvls_b, sp_b) => - match idx_a - idx_b { - 0 => - let lvls_eq = k_is_def_eq_levels(lvls_a, lvls_b); - match lvls_eq { - 1 => k_is_def_eq_spine(sp_a, sp_b, depth, top, nat_idx, str_idx), - 0 => 0, - }, - _ => 0, - }, - _ => 0, - }, - - KValNode.Defn(idx_a, lvls_a, sp_a) => - match load(b) { - KValNode.Defn(idx_b, lvls_b, sp_b) => - match idx_a - idx_b { - 0 => - let lvls_eq = k_is_def_eq_levels(lvls_a, lvls_b); - match lvls_eq { - 1 => k_is_def_eq_spine(sp_a, sp_b, depth, top, nat_idx, str_idx), - 0 => 0, - }, - _ => 0, - }, - _ => 0, - }, - - KValNode.Thm(idx_a, lvls_a, sp_a) => - match load(b) { - KValNode.Thm(idx_b, lvls_b, sp_b) => - match idx_a - idx_b { - 0 => - let lvls_eq = k_is_def_eq_levels(lvls_a, lvls_b); - match lvls_eq { - 1 => k_is_def_eq_spine(sp_a, sp_b, depth, top, nat_idx, str_idx), - 0 => 0, - }, - _ => 0, - }, - _ => 0, - }, - - KValNode.Opaque(idx_a, lvls_a, sp_a) => - match load(b) { - KValNode.Opaque(idx_b, lvls_b, sp_b) => - match idx_a - idx_b { - 0 => - let lvls_eq = k_is_def_eq_levels(lvls_a, lvls_b); - match lvls_eq { - 1 => k_is_def_eq_spine(sp_a, sp_b, depth, top, nat_idx, str_idx), - 0 => 0, - }, - _ => 0, - }, - _ => 0, - }, - - KValNode.Quot(idx_a, lvls_a, sp_a) => - match load(b) { - KValNode.Quot(idx_b, lvls_b, sp_b) => - match idx_a - idx_b { - 0 => - let lvls_eq = k_is_def_eq_levels(lvls_a, lvls_b); - match lvls_eq { - 1 => k_is_def_eq_spine(sp_a, sp_b, depth, top, nat_idx, str_idx), - 0 => 0, - }, - _ => 0, - }, - _ => 0, - }, - - KValNode.Induct(idx_a, lvls_a, sp_a) => - match load(b) { - KValNode.Induct(idx_b, lvls_b, sp_b) => - match idx_a - idx_b { - 0 => - let lvls_eq = k_is_def_eq_levels(lvls_a, lvls_b); - match lvls_eq { - 1 => k_is_def_eq_spine(sp_a, sp_b, depth, top, nat_idx, str_idx), - 0 => 0, - }, - _ => 0, - }, - _ => 0, - }, - - KValNode.Rec(idx_a, lvls_a, sp_a) => - match load(b) { - KValNode.Rec(idx_b, lvls_b, sp_b) => - match idx_a - idx_b { - 0 => - let lvls_eq = k_is_def_eq_levels(lvls_a, lvls_b); - match lvls_eq { - 1 => k_is_def_eq_spine(sp_a, sp_b, depth, top, nat_idx, str_idx), - 0 => 0, - }, - _ => 0, - }, - _ => 0, - }, - - KValNode.Ctor(idx_a, lvls_a, nparams_a, sp_a) => - match load(b) { - KValNode.Ctor(idx_b, lvls_b, _, sp_b) => - match idx_a - idx_b { - 0 => - let lvls_eq = k_is_def_eq_levels(lvls_a, lvls_b); - match lvls_eq { - 1 => k_is_def_eq_spine(sp_a, sp_b, depth, top, nat_idx, str_idx), - 0 => 0, - }, - _ => 0, - }, - KValNode.Lit(lb) => - 
nat_lit_eq_ctor(lb, idx_a, nparams_a, sp_a, depth, top, nat_idx, str_idx), - _ => 0, - }, - - KValNode.Lam(dom_a, body_a, env_a) => - match load(b) { - KValNode.Lam(dom_b, body_b, env_b) => - let dom_eq = k_is_def_eq(dom_a, dom_b, depth, top, nat_idx, str_idx); - match dom_eq { - 0 => 0, - 1 => - let fvar = store(KValNode.FVar(depth, dom_a, store(ListNode.Nil))); - let env_a2 = store(ListNode.Cons(fvar, env_a)); - let env_b2 = store(ListNode.Cons(fvar, env_b)); - let va = k_eval(body_a, env_a2, top); - let vb = k_eval(body_b, env_b2, top); - k_is_def_eq(va, vb, depth + 1, top, nat_idx, str_idx), - }, - _ => - -- Eta: lam vs non-lam - let fvar = store(KValNode.FVar(depth, dom_a, store(ListNode.Nil))); - let env_a2 = store(ListNode.Cons(fvar, env_a)); - let va = k_eval(body_a, env_a2, top); - let vb = k_apply(b, fvar, top); - k_is_def_eq(va, vb, depth + 1, top, nat_idx, str_idx), - }, - - KValNode.Pi(dom_a, body_a, env_a) => - match load(b) { - KValNode.Pi(dom_b, body_b, env_b) => - let dom_eq = k_is_def_eq(dom_a, dom_b, depth, top, nat_idx, str_idx); - match dom_eq { - 0 => 0, - 1 => - let fvar = store(KValNode.FVar(depth, dom_a, store(ListNode.Nil))); - let env_a2 = store(ListNode.Cons(fvar, env_a)); - let env_b2 = store(ListNode.Cons(fvar, env_b)); - let va = k_eval(body_a, env_a2, top); - let vb = k_eval(body_b, env_b2, top); - k_is_def_eq(va, vb, depth + 1, top, nat_idx, str_idx), - }, - _ => 0, - }, - - KValNode.Proj(tidx_a, fidx_a, sv_a, sp_a) => - match load(b) { - KValNode.Proj(tidx_b, fidx_b, sv_b, sp_b) => - let same_tf = eq_zero(tidx_a - tidx_b) * eq_zero(fidx_a - fidx_b); - match same_tf { - 1 => - let sv_eq = k_is_def_eq(sv_a, sv_b, depth, top, nat_idx, str_idx); - match sv_eq { - 1 => k_is_def_eq_spine(sp_a, sp_b, depth, top, nat_idx, str_idx), - 0 => 0, - }, - 0 => 0, - }, - _ => 0, - }, - - -- Eta: non-lam vs lam (symmetric case) - _ => - match load(b) { - KValNode.Lam(dom_b, body_b, env_b) => - let fvar = store(KValNode.FVar(depth, dom_b, store(ListNode.Nil))); - let va = k_apply(a, fvar, top); - let env_b2 = store(ListNode.Cons(fvar, env_b)); - let vb = k_eval(body_b, env_b2, top); - k_is_def_eq(va, vb, depth + 1, top, nat_idx, str_idx), - KValNode.Axiom(_, _, _) => - 0, - KValNode.Defn(_, _, _) => - 0, - KValNode.Thm(_, _, _) => - 0, - KValNode.Opaque(_, _, _) => - 0, - KValNode.Quot(_, _, _) => - 0, - KValNode.Induct(_, _, _) => - 0, - KValNode.Rec(_, _, _) => - 0, - _ => 0, - }, - }, - } - } - - -- Pointwise definitional equality of two value spines - fn k_is_def_eq_spine(a: List‹KVal›, b: List‹KVal›, depth: G, top: List‹&KConstantInfo›, nat_idx: G, str_idx: G) -> G { - match load(a) { - ListNode.Nil => - match load(b) { - ListNode.Nil => 1, - _ => 0, - }, - ListNode.Cons(va, ra) => - match load(b) { - ListNode.Nil => 0, - ListNode.Cons(vb, rb) => - let eq = k_is_def_eq(va, vb, depth, top, nat_idx, str_idx); - match eq { - 0 => 0, - 1 => k_is_def_eq_spine(ra, rb, depth, top, nat_idx, str_idx), - }, - }, - } - } - - -- Pointwise semantic equality of two level lists - fn k_is_def_eq_levels(a: List‹&KLevel›, b: List‹&KLevel›) -> G { - match load(a) { - ListNode.Nil => - match load(b) { - ListNode.Nil => 1, - _ => 0, - }, - ListNode.Cons(&la, ra) => - match load(b) { - ListNode.Nil => 0, - ListNode.Cons(&lb, rb) => - let eq = level_equal(la, lb); - match eq { - 0 => 0, - 1 => k_is_def_eq_levels(ra, rb), - }, - }, - } - } - - -- ============================================================================ - -- Declaration checking - -- - -- Verify each constant in the 
environment: its type must be a Sort, and its - -- value (if any) must have the declared type. Processes axioms, definitions, - -- theorems, opaques, quotients, inductives, constructors, and recursors. - -- ============================================================================ - - -- Type-check a single constant declaration against the environment. - -- nat_idx/str_idx are the constant indices for the Nat/String types. - fn k_check_const(ci: KConstantInfo, top: List‹&KConstantInfo›, nat_idx: G, str_idx: G) { - match ci { - KConstantInfo.Axiom(_, ty, _) => - let _ = k_ensure_sort(ty, store(ListNode.Nil), store(ListNode.Nil), 0, top, nat_idx, str_idx), - - KConstantInfo.Defn(_, ty, value, _) => - let _ = k_ensure_sort(ty, store(ListNode.Nil), store(ListNode.Nil), 0, top, nat_idx, str_idx); - let ty_val = k_eval(ty, store(ListNode.Nil), top); - k_check(value, ty_val, store(ListNode.Nil), store(ListNode.Nil), 0, top, nat_idx, str_idx), - - KConstantInfo.Thm(_, ty, value) => - let _ = k_ensure_sort(ty, store(ListNode.Nil), store(ListNode.Nil), 0, top, nat_idx, str_idx); - let ty_val = k_eval(ty, store(ListNode.Nil), top); - k_check(value, ty_val, store(ListNode.Nil), store(ListNode.Nil), 0, top, nat_idx, str_idx), - - KConstantInfo.Opaque(_, ty, value, is_unsafe) => - let _ = k_ensure_sort(ty, store(ListNode.Nil), store(ListNode.Nil), 0, top, nat_idx, str_idx); - match is_unsafe { - 1 => (), - 0 => - let ty_val = k_eval(ty, store(ListNode.Nil), top); - k_check(value, ty_val, store(ListNode.Nil), store(ListNode.Nil), 0, top, nat_idx, str_idx), - }, - - KConstantInfo.Quot(_, ty, _) => - let _ = k_ensure_sort(ty, store(ListNode.Nil), store(ListNode.Nil), 0, top, nat_idx, str_idx), - - KConstantInfo.Induct(_, ty, _, _, _, _, _, _) => - let _ = k_ensure_sort(ty, store(ListNode.Nil), store(ListNode.Nil), 0, top, nat_idx, str_idx), - - KConstantInfo.Ctor(_, ty, _, _, _, _, _) => - let _ = k_ensure_sort(ty, store(ListNode.Nil), store(ListNode.Nil), 0, top, nat_idx, str_idx), - - KConstantInfo.Rec(_, ty, _, _, _, _, _, _, _) => - let _ = k_ensure_sort(ty, store(ListNode.Nil), store(ListNode.Nil), 0, top, nat_idx, str_idx), - } - } - - fn k_check_all_go(consts: List‹&KConstantInfo›, top: List‹&KConstantInfo›, nat_idx: G, str_idx: G, idx: G) { - match load(consts) { - ListNode.Nil => (), - ListNode.Cons(&ci, rest) => - let _ = k_check_const(ci, top, nat_idx, str_idx); - k_check_all_go(rest, top, nat_idx, str_idx, idx + 1), - } - } -⟧ - -end IxVM - -end diff --git a/Ix/IxVM/Kernel/CanonicalCheck.lean b/Ix/IxVM/Kernel/CanonicalCheck.lean new file mode 100644 index 00000000..9c867849 --- /dev/null +++ b/Ix/IxVM/Kernel/CanonicalCheck.lean @@ -0,0 +1,997 @@ +module +public import Ix.Aiur.Meta +public import Ix.IxVM.KernelTypes + +public section + +namespace IxVM + +/-! ## Canonical structural ordering of kernel constants + +Mirror: src/ix/kernel/canonical_check.rs. + +Provides total ordering helpers for `KLevel`, `KExpr`, `KRecRule`, and +`KConstantInfo` so the kernel can verify that mutual blocks ship their +members in canonical (alpha-collapsed, structurally sorted) order. Used +by `validate_canonical_block` adjacent-pair check and by nested-aux +ordering. + +Comparator returns G ∈ {0, 1, 2}: 0 = lt, 1 = eq, 2 = gt. Total order +on closed terms (FVars unsupported — Aiur kernel never produces them). +-/ + +def canonicalCheck := ⟦ + -- ============================================================================ + -- Tri-valued ordering combinators. + -- 0 = lt, 1 = eq, 2 = gt. 
+ -- ============================================================================ + fn ord_cmp_g(a: G, b: G) -> G { + match a - b { + 0 => 1, + _ => + match u32_less_than(a, b) { + 1 => 0, + 0 => 2, + }, + } + } + + -- Lexicographic chain: first non-eq wins. + fn ord_then(a: G, b: G) -> G { + match a { + 1 => b, + _ => a, + } + } + + -- ============================================================================ + -- SOrd: (ordering, strong) tuple. Mirror of Rust's SOrd. ordering ∈ {0=lt, + -- 1=eq, 2=gt}; strong ∈ {0=weak, 1=strong}. compare_kexpr_ctx returns SOrd. + -- Block-local Const refs (resolved via KMutCtx) yield WEAK SOrd; everything + -- else is STRONG. validate_canonical_block_single_pass accepts strong-Less + -- directly, falls back to validate_by_refinement on weak-Less. + -- ============================================================================ + fn sord_lt_strong() -> (G, G) { (0, 1) } + fn sord_lt_weak() -> (G, G) { (0, 0) } + fn sord_eq_strong() -> (G, G) { (1, 1) } + fn sord_eq_weak() -> (G, G) { (1, 0) } + fn sord_gt_strong() -> (G, G) { (2, 1) } + + -- Lex chain on SOrd: if a is Equal, take b's ordering and strong=a.strong*b.strong. + fn sord_then(a: (G, G), b: (G, G)) -> (G, G) { + match a { + (1, sa) => + match b { + (bo, sb) => (bo, sa * sb), + }, + _ => a, + } + } + + -- Wrap a tri-valued G ord as strong SOrd. + fn sord_of_g(o: G) -> (G, G) { (o, 1) } + + -- ctx_class_idx: returns 1+pos if `idx` ∈ ctx (positional list), else 0. + -- Mirror: KMutCtx::get(addr) → Option. We use kernel positions + -- as both addresses and class indices (block members are consecutive in + -- the kernel `top` layout for our supported block shapes). + fn ctx_class_idx(idx: G, ctx: List‹G›, i: G) -> G { + match load(ctx) { + ListNode.Nil => 0, + ListNode.Cons(m, rest) => + match m - idx { + 0 => 1 + i, + _ => ctx_class_idx(idx, rest, i + 1), + }, + } + } + + -- Compare two Const idxs under a KMutCtx. Strong if both external or + -- one block-local + one external; weak if both block-local. + -- Mirror: canonical_check.rs:216-222. + fn ctx_cmp_idx(xid: G, yid: G, ctx: List‹G›) -> (G, G) { + let mx = ctx_class_idx(xid, ctx, 0); + let my = ctx_class_idx(yid, ctx, 0); + match mx { + 0 => + match my { + 0 => sord_of_g(ord_cmp_g(xid, yid)), + _ => sord_gt_strong(), + }, + _ => + match my { + 0 => sord_lt_strong(), + _ => (ord_cmp_g(mx, my), 0), + }, + } + } + + -- ============================================================================ + -- compare_kuniv + -- + -- Mirror: canonical_check.rs:130-154. Variant order Zero < Succ < Max < + -- IMax < Param. 
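+  -- Illustrative (follows from the variant order above, not from the
+  -- Rust source):
+  --   compare_kuniv(Succ Zero, Zero)          = 2  (Succ > Zero)
+  --   compare_kuniv(Max(Zero, Zero), Param 0) = 0  (Max < Param)
+  --   compare_kuniv(Param 1, Param 1)         = 1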
+ -- ============================================================================ + fn compare_kuniv(x: KLevel, y: KLevel) -> G { + match x { + KLevel.Zero => + match y { + KLevel.Zero => 1, + _ => 0, + }, + KLevel.Succ(&xa) => + match y { + KLevel.Zero => 2, + KLevel.Succ(&ya) => compare_kuniv(xa, ya), + _ => 0, + }, + KLevel.Max(&xl, &xr) => + match y { + KLevel.Zero => 2, + KLevel.Succ(_) => 2, + KLevel.Max(&yl, &yr) => + ord_then(compare_kuniv(xl, yl), compare_kuniv(xr, yr)), + _ => 0, + }, + KLevel.IMax(&xl, &xr) => + match y { + KLevel.Zero => 2, + KLevel.Succ(_) => 2, + KLevel.Max(_, _) => 2, + KLevel.IMax(&yl, &yr) => + ord_then(compare_kuniv(xl, yl), compare_kuniv(xr, yr)), + KLevel.Param(_) => 0, + }, + KLevel.Param(xi) => + match y { + KLevel.Param(yi) => ord_cmp_g(xi, yi), + _ => 2, + }, + } + } + + fn compare_kuniv_list(xs: List‹&KLevel›, ys: List‹&KLevel›) -> G { + match load(xs) { + ListNode.Nil => + match load(ys) { + ListNode.Nil => 1, + _ => 0, + }, + ListNode.Cons(&xh, xt) => + match load(ys) { + ListNode.Nil => 2, + ListNode.Cons(&yh, yt) => + ord_then(compare_kuniv(xh, yh), compare_kuniv_list(xt, yt)), + }, + } + } + + -- SOrd-returning variant; universes are structural so always strong. + fn compare_kuniv_list_sord(xs: List‹&KLevel›, ys: List‹&KLevel›) -> (G, G) { + sord_of_g(compare_kuniv_list(xs, ys)) + } + + -- ============================================================================ + -- compare_kexpr + -- + -- Mirror: canonical_check.rs:167-275. Variant order BVar < Srt < Const < + -- App < Lam < Forall < Let < Lit < Proj. Alpha-blind: binders' types + -- compared, then bodies. (No fvars; Aiur kernel uses de Bruijn indices.) + -- ============================================================================ + fn compare_kexpr(x: KExpr, y: KExpr) -> G { + match ptr_val(x) - ptr_val(y) { + 0 => 1, + _ => compare_kexpr_node(load(x), load(y)), + } + } + + fn compare_kexpr_node(x: KExprNode, y: KExprNode) -> G { + match x { + KExprNode.BVar(xi) => + match y { + KExprNode.BVar(yi) => ord_cmp_g(xi, yi), + _ => 0, + }, + KExprNode.Srt(&xu) => + match y { + KExprNode.BVar(_) => 2, + KExprNode.Srt(&yu) => compare_kuniv(xu, yu), + _ => 0, + }, + KExprNode.Const(xid, xls) => + match y { + KExprNode.BVar(_) => 2, + KExprNode.Srt(_) => 2, + KExprNode.Const(yid, yls) => + ord_then(ord_cmp_g(xid, yid), compare_kuniv_list(xls, yls)), + _ => 0, + }, + KExprNode.App(xf, xa) => + match y { + KExprNode.BVar(_) => 2, + KExprNode.Srt(_) => 2, + KExprNode.Const(_, _) => 2, + KExprNode.App(yf, ya) => + ord_then(compare_kexpr(xf, yf), compare_kexpr(xa, ya)), + _ => 0, + }, + KExprNode.Lam(xt, xb) => + match y { + KExprNode.BVar(_) => 2, + KExprNode.Srt(_) => 2, + KExprNode.Const(_, _) => 2, + KExprNode.App(_, _) => 2, + KExprNode.Lam(yt, yb) => + ord_then(compare_kexpr(xt, yt), compare_kexpr(xb, yb)), + _ => 0, + }, + KExprNode.Forall(xt, xb) => + match y { + KExprNode.BVar(_) => 2, + KExprNode.Srt(_) => 2, + KExprNode.Const(_, _) => 2, + KExprNode.App(_, _) => 2, + KExprNode.Lam(_, _) => 2, + KExprNode.Forall(yt, yb) => + ord_then(compare_kexpr(xt, yt), compare_kexpr(xb, yb)), + _ => 0, + }, + KExprNode.Let(xt, xv, xb) => + match y { + KExprNode.BVar(_) => 2, + KExprNode.Srt(_) => 2, + KExprNode.Const(_, _) => 2, + KExprNode.App(_, _) => 2, + KExprNode.Lam(_, _) => 2, + KExprNode.Forall(_, _) => 2, + KExprNode.Let(yt, yv, yb) => + ord_then(compare_kexpr(xt, yt), + ord_then(compare_kexpr(xv, yv), compare_kexpr(xb, yb))), + _ => 0, + }, + KExprNode.Lit(xl) => + match y { + 
KExprNode.Proj(_, _, _) => 0, + KExprNode.Lit(yl) => compare_kliteral(xl, yl), + _ => 2, + }, + KExprNode.Proj(xt, xf, xe) => + match y { + KExprNode.Proj(yt, yf, ye) => + ord_then(ord_cmp_g(xt, yt), + ord_then(ord_cmp_g(xf, yf), compare_kexpr(xe, ye))), + _ => 2, + }, + } + } + + -- ============================================================================ + -- compare_kexpr_ctx: SOrd-returning, KMutCtx-aware variant. + -- Mirror: canonical_check.rs:167-280 compare_kexpr. + -- Const + Proj head ref use ctx_cmp_idx to resolve block-local refs. + -- Other arms thread ctx into recursive calls. + -- ============================================================================ + fn compare_kexpr_ctx(x: KExpr, y: KExpr, ctx: List‹G›) -> (G, G) { + match ptr_val(x) - ptr_val(y) { + 0 => sord_eq_strong(), + _ => compare_kexpr_node_ctx(load(x), load(y), ctx), + } + } + + fn compare_kexpr_node_ctx(x: KExprNode, y: KExprNode, ctx: List‹G›) -> (G, G) { + match x { + KExprNode.BVar(xi) => + match y { + KExprNode.BVar(yi) => sord_of_g(ord_cmp_g(xi, yi)), + _ => sord_lt_strong(), + }, + KExprNode.Srt(&xu) => + match y { + KExprNode.BVar(_) => sord_gt_strong(), + KExprNode.Srt(&yu) => sord_of_g(compare_kuniv(xu, yu)), + _ => sord_lt_strong(), + }, + KExprNode.Const(xid, xls) => + match y { + KExprNode.BVar(_) => sord_gt_strong(), + KExprNode.Srt(_) => sord_gt_strong(), + KExprNode.Const(yid, yls) => + sord_then(compare_kuniv_list_sord(xls, yls), + ctx_cmp_idx(xid, yid, ctx)), + _ => sord_lt_strong(), + }, + KExprNode.App(xf, xa) => + match y { + KExprNode.BVar(_) => sord_gt_strong(), + KExprNode.Srt(_) => sord_gt_strong(), + KExprNode.Const(_, _) => sord_gt_strong(), + KExprNode.App(yf, ya) => + sord_then(compare_kexpr_ctx(xf, yf, ctx), + compare_kexpr_ctx(xa, ya, ctx)), + _ => sord_lt_strong(), + }, + KExprNode.Lam(xt, xb) => + match y { + KExprNode.BVar(_) => sord_gt_strong(), + KExprNode.Srt(_) => sord_gt_strong(), + KExprNode.Const(_, _) => sord_gt_strong(), + KExprNode.App(_, _) => sord_gt_strong(), + KExprNode.Lam(yt, yb) => + sord_then(compare_kexpr_ctx(xt, yt, ctx), + compare_kexpr_ctx(xb, yb, ctx)), + _ => sord_lt_strong(), + }, + KExprNode.Forall(xt, xb) => + match y { + KExprNode.BVar(_) => sord_gt_strong(), + KExprNode.Srt(_) => sord_gt_strong(), + KExprNode.Const(_, _) => sord_gt_strong(), + KExprNode.App(_, _) => sord_gt_strong(), + KExprNode.Lam(_, _) => sord_gt_strong(), + KExprNode.Forall(yt, yb) => + sord_then(compare_kexpr_ctx(xt, yt, ctx), + compare_kexpr_ctx(xb, yb, ctx)), + _ => sord_lt_strong(), + }, + KExprNode.Let(xt, xv, xb) => + match y { + KExprNode.BVar(_) => sord_gt_strong(), + KExprNode.Srt(_) => sord_gt_strong(), + KExprNode.Const(_, _) => sord_gt_strong(), + KExprNode.App(_, _) => sord_gt_strong(), + KExprNode.Lam(_, _) => sord_gt_strong(), + KExprNode.Forall(_, _) => sord_gt_strong(), + KExprNode.Let(yt, yv, yb) => + sord_then(compare_kexpr_ctx(xt, yt, ctx), + sord_then(compare_kexpr_ctx(xv, yv, ctx), + compare_kexpr_ctx(xb, yb, ctx))), + _ => sord_lt_strong(), + }, + KExprNode.Lit(xl) => + match y { + KExprNode.Proj(_, _, _) => sord_lt_strong(), + KExprNode.Lit(yl) => sord_of_g(compare_kliteral(xl, yl)), + _ => sord_gt_strong(), + }, + KExprNode.Proj(xt, xf, xe) => + match y { + KExprNode.Proj(yt, yf, ye) => + sord_then(ctx_cmp_idx(xt, yt, ctx), + sord_then(sord_of_g(ord_cmp_g(xf, yf)), + compare_kexpr_ctx(xe, ye, ctx))), + _ => sord_gt_strong(), + }, + } + } + + -- KLiteral order: Nat < Str. Within each, lex on contents. 
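+  -- Illustrative: compare_kliteral(Nat 3, Str "") = 0 (Nat < Str);
+  -- compare_kliteral(Str "a", Nat 3) = 2; within Nat the order is by
+  -- magnitude via compare_klimbs, so Nat 9 vs Nat 10 yields 0 (lt).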
+ fn compare_kliteral(x: KLiteral, y: KLiteral) -> G { + match x { + KLiteral.Nat(xn) => + match y { + KLiteral.Nat(yn) => compare_klimbs(xn, yn), + KLiteral.Str(_) => 0, + }, + KLiteral.Str(xb) => + match y { + KLiteral.Nat(_) => 2, + KLiteral.Str(yb) => compare_byte_stream(xb, yb), + }, + } + } + + -- Compare KLimbs little-endian. Higher limbs more significant; longer + -- list = larger value. Walk from tail (most significant); with same + -- length, lex by limb. Simpler: post-normalize, compare lengths first, + -- then lex the equal-length tails with most-significant first. + fn compare_klimbs(x: KLimbs, y: KLimbs) -> G { + let xn = klimbs_normalize(x); + let yn = klimbs_normalize(y); + let lx = list_length(xn); + let ly = list_length(yn); + let len_ord = ord_cmp_g(lx, ly); + match len_ord { + 1 => compare_klimbs_tail(xn, yn), + _ => len_ord, + } + } + + -- Compare equal-length KLimbs lex from MOST significant. We have them + -- LE, so reverse-walk: recurse first, compare current after. + fn compare_klimbs_tail(x: KLimbs, y: KLimbs) -> G { + match load(x) { + ListNode.Nil => 1, + ListNode.Cons(xh, xt) => + match load(y) { + ListNode.Cons(yh, yt) => + let tail_ord = compare_klimbs_tail(xt, yt); + ord_then(tail_ord, compare_u64_lex(xh, yh)), + }, + } + } + + -- Compare two u64 limbs as integers (little-endian byte order). + fn compare_u64_lex(a: U64, b: U64) -> G { + let [a0, a1, a2, a3, a4, a5, a6, a7] = a; + let [b0, b1, b2, b3, b4, b5, b6, b7] = b; + -- Most significant byte first. + ord_then(ord_cmp_g(a7, b7), + ord_then(ord_cmp_g(a6, b6), + ord_then(ord_cmp_g(a5, b5), + ord_then(ord_cmp_g(a4, b4), + ord_then(ord_cmp_g(a3, b3), + ord_then(ord_cmp_g(a2, b2), + ord_then(ord_cmp_g(a1, b1), + ord_cmp_g(a0, b0)))))))) + } + + fn compare_byte_stream(x: ByteStream, y: ByteStream) -> G { + match load(x) { + ListNode.Nil => + match load(y) { + ListNode.Nil => 1, + _ => 0, + }, + ListNode.Cons(xh, xt) => + match load(y) { + ListNode.Nil => 2, + ListNode.Cons(yh, yt) => + ord_then(ord_cmp_g(xh, yh), compare_byte_stream(xt, yt)), + }, + } + } + + -- ============================================================================ + -- compare_krec_rule + -- + -- Mirror: canonical_check.rs:280-298. Triple lex. + -- ============================================================================ + fn compare_krec_rule(x: KRecRule, y: KRecRule) -> G { + match x { + KRecRule.Mk(xc, xn, xr) => + match y { + KRecRule.Mk(yc, yn, yr) => + ord_then(ord_cmp_g(xc, yc), + ord_then(ord_cmp_g(xn, yn), compare_kexpr(xr, yr))), + }, + } + } + + fn compare_krec_rule_list(xs: List‹KRecRule›, ys: List‹KRecRule›) -> G { + match load(xs) { + ListNode.Nil => + match load(ys) { + ListNode.Nil => 1, + _ => 0, + }, + ListNode.Cons(xh, xt) => + match load(ys) { + ListNode.Nil => 2, + ListNode.Cons(yh, yt) => + ord_then(compare_krec_rule(xh, yh), compare_krec_rule_list(xt, yt)), + }, + } + } + + -- ctx-aware variant: SOrd-returning, threads ctx into compare_kexpr. 
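+  -- Illustrative: sord_then(sord_eq_strong(), (0, 0)) = (0, 0). An Equal
+  -- prefix defers to the next key, and a weak (block-local) step anywhere
+  -- in the chain makes the whole verdict weak; per the SOrd header above,
+  -- only strong-Less is accepted without falling back to refinement.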
+  fn compare_krec_rule_ctx(x: KRecRule, y: KRecRule, ctx: List‹G›) -> (G, G) {
+    match x {
+      KRecRule.Mk(xc, xn, xr) =>
+        match y {
+          KRecRule.Mk(yc, yn, yr) =>
+            sord_then(sord_of_g(ord_cmp_g(xc, yc)),
+              sord_then(sord_of_g(ord_cmp_g(xn, yn)),
+                compare_kexpr_ctx(xr, yr, ctx))),
+        },
+    }
+  }
+
+  fn compare_krec_rule_list_ctx(xs: List‹KRecRule›, ys: List‹KRecRule›,
+      ctx: List‹G›) -> (G, G) {
+    match load(xs) {
+      ListNode.Nil =>
+        match load(ys) {
+          ListNode.Nil => sord_eq_strong(),
+          _ => sord_lt_strong(),
+        },
+      ListNode.Cons(xh, xt) =>
+        match load(ys) {
+          ListNode.Nil => sord_gt_strong(),
+          ListNode.Cons(yh, yt) =>
+            sord_then(compare_krec_rule_ctx(xh, yh, ctx),
+              compare_krec_rule_list_ctx(xt, yt, ctx)),
+        },
+    }
+  }
+
+  fn compare_g_list(xs: List‹G›, ys: List‹G›) -> G {
+    match load(xs) {
+      ListNode.Nil =>
+        match load(ys) {
+          ListNode.Nil => 1,
+          _ => 0,
+        },
+      ListNode.Cons(xh, xt) =>
+        match load(ys) {
+          ListNode.Nil => 2,
+          ListNode.Cons(yh, yt) =>
+            ord_then(ord_cmp_g(xh, yh), compare_g_list(xt, yt)),
+        },
+    }
+  }
+
+  -- ============================================================================
+  -- compare_kconst
+  --
+  -- Mirror: canonical_check.rs:440-543 + kind ordinals. Variant order is
+  -- given by kconst_kind_ord below: Defn/Thm/Opaque=0, Induct=1, Rec=2,
+  -- Ctor=3, Axiom=4, Quot=5. Tiebreak: per-variant payload comparison.
+  -- ============================================================================
+  -- Mirror: src/ix/kernel/canonical_check.rs:440-449 kconst_kind_ord.
+  -- Defn=0, Indc=1, Recr=2, Ctor=3, Axio=4, Quot=5. (Thm/Opaque are
+  -- Defn-flavored; reuse Defn's slot ordering — only Defn/Indc/Recr are
+  -- block-eligible per Rust comment.)
+  fn kconst_kind_ord(c: KConstantInfo) -> G {
+    match c {
+      KConstantInfo.Defn(_, _, _, _, _) => 0,
+      KConstantInfo.Thm(_, _, _) => 0,
+      KConstantInfo.Opaque(_, _, _, _) => 0,
+      KConstantInfo.Induct(_, _, _, _, _, _, _, _, _, _) => 1,
+      KConstantInfo.Rec(_, _, _, _, _, _, _, _, _, _) => 2,
+      KConstantInfo.Ctor(_, _, _, _, _, _, _) => 3,
+      KConstantInfo.Axiom(_, _, _) => 4,
+      KConstantInfo.Quot(_, _, _) => 5,
+    }
+  }
+
+  fn compare_kconst(x: KConstantInfo, y: KConstantInfo) -> G {
+    let kx = kconst_kind_ord(x);
+    let ky = kconst_kind_ord(y);
+    let kord = ord_cmp_g(kx, ky);
+    match kord {
+      1 => compare_kconst_same_kind(x, y),
+      _ => kord,
+    }
+  }
+
+  fn compare_kconst_same_kind(x: KConstantInfo, y: KConstantInfo) -> G {
+    match x {
+      KConstantInfo.Defn(xn, xt, xv, _xs, _xh) =>
+        match y {
+          KConstantInfo.Defn(yn, yt, yv, _ys, _yh) =>
+            ord_then(ord_cmp_g(xn, yn),
+              ord_then(compare_kexpr(xt, yt), compare_kexpr(xv, yv))),
+        },
+      KConstantInfo.Thm(xn, xt, xv) =>
+        match y {
+          KConstantInfo.Thm(yn, yt, yv) =>
+            ord_then(ord_cmp_g(xn, yn),
+              ord_then(compare_kexpr(xt, yt), compare_kexpr(xv, yv))),
+        },
+      KConstantInfo.Opaque(xn, xt, xv, xu) =>
+        match y {
+          KConstantInfo.Opaque(yn, yt, yv, yu) =>
+            ord_then(ord_cmp_g(xn, yn),
+              ord_then(compare_kexpr(xt, yt),
+                ord_then(compare_kexpr(xv, yv), ord_cmp_g(xu, yu)))),
+        },
+      KConstantInfo.Quot(xn, xt, _xk) =>
+        match y {
+          KConstantInfo.Quot(yn, yt, _yk) =>
+            ord_then(ord_cmp_g(xn, yn), compare_kexpr(xt, yt)),
+        },
+      KConstantInfo.Axiom(xn, xt, xu) =>
+        match y {
+          KConstantInfo.Axiom(yn, yt, yu) =>
+            ord_then(ord_cmp_g(xn, yn),
+              ord_then(compare_kexpr(xt, yt), ord_cmp_g(xu, yu))),
+        },
+      -- Mirror: src/ix/kernel/canonical_check.rs:299-340 compare_kindc
+      -- order: (is_rec, is_unsafe, lvls, params, indices, ctors_len, ty, ctors).
+      -- The trailing ctors compare is not replicated here; comparison stops
+      -- at ty.
+ KConstantInfo.Induct(xn, xt, xp, xi, xc, xr, _xrf, xu, _xne, _xa) => + match y { + KConstantInfo.Induct(yn, yt, yp, yi, yc, yr, _yrf, yu, _yne, _ya) => + ord_then(ord_cmp_g(xr, yr), + ord_then(ord_cmp_g(xu, yu), + ord_then(ord_cmp_g(xn, yn), + ord_then(ord_cmp_g(xp, yp), + ord_then(ord_cmp_g(xi, yi), + ord_then(ord_cmp_g(list_length(xc), list_length(yc)), + compare_kexpr(xt, yt))))))), + }, + -- Mirror: src/ix/kernel/canonical_check.rs:346-368 compare_kctor + -- order: (lvls, cidx, params, fields, ty). induct_idx + unsafe excluded + -- from comparator key (see Rust source). + KConstantInfo.Ctor(xn, xt, _xi, xc, xp, xf, _xu) => + match y { + KConstantInfo.Ctor(yn, yt, _yi, yc, yp, yf, _yu) => + ord_then(ord_cmp_g(xn, yn), + ord_then(ord_cmp_g(xc, yc), + ord_then(ord_cmp_g(xp, yp), + ord_then(ord_cmp_g(xf, yf), compare_kexpr(xt, yt))))), + }, + -- Mirror: src/ix/kernel/canonical_check.rs:374-407 compare_krecr + -- order: (lvls, params, indices, motives, minors, k, ty, rules). + KConstantInfo.Rec(xn, xt, xp, xi, xm, xmi, xrules, xk, _xu, _xba) => + match y { + KConstantInfo.Rec(yn, yt, yp, yi, ym, ymi, yrules, yk, _yu, _yba) => + ord_then(ord_cmp_g(xn, yn), + ord_then(ord_cmp_g(xp, yp), + ord_then(ord_cmp_g(xi, yi), + ord_then(ord_cmp_g(xm, ym), + ord_then(ord_cmp_g(xmi, ymi), + ord_then(ord_cmp_g(xk, yk), + ord_then(compare_kexpr(xt, yt), + compare_krec_rule_list(xrules, yrules)))))))), + }, + } + } + + + -- ctx-aware variant: SOrd-returning, threads ctx into compare_kexpr_ctx. + fn compare_kconst_ctx(x: KConstantInfo, y: KConstantInfo, + ctx: List‹G›) -> (G, G) { + let kx = kconst_kind_ord(x); + let ky = kconst_kind_ord(y); + let kord = ord_cmp_g(kx, ky); + match kord { + 1 => compare_kconst_same_kind_ctx(x, y, ctx), + _ => sord_of_g(kord), + } + } + + fn compare_kconst_same_kind_ctx(x: KConstantInfo, y: KConstantInfo, + ctx: List‹G›) -> (G, G) { + match x { + KConstantInfo.Defn(xn, xt, xv, _xs, _xh) => + match y { + KConstantInfo.Defn(yn, yt, yv, _ys, _yh) => + sord_then(sord_of_g(ord_cmp_g(xn, yn)), + sord_then(compare_kexpr_ctx(xt, yt, ctx), + compare_kexpr_ctx(xv, yv, ctx))), + }, + KConstantInfo.Thm(xn, xt, xv) => + match y { + KConstantInfo.Thm(yn, yt, yv) => + sord_then(sord_of_g(ord_cmp_g(xn, yn)), + sord_then(compare_kexpr_ctx(xt, yt, ctx), + compare_kexpr_ctx(xv, yv, ctx))), + }, + KConstantInfo.Opaque(xn, xt, xv, xu) => + match y { + KConstantInfo.Opaque(yn, yt, yv, yu) => + sord_then(sord_of_g(ord_cmp_g(xn, yn)), + sord_then(compare_kexpr_ctx(xt, yt, ctx), + sord_then(compare_kexpr_ctx(xv, yv, ctx), + sord_of_g(ord_cmp_g(xu, yu))))), + }, + KConstantInfo.Quot(xn, xt, _xk) => + match y { + KConstantInfo.Quot(yn, yt, _yk) => + sord_then(sord_of_g(ord_cmp_g(xn, yn)), compare_kexpr_ctx(xt, yt, ctx)), + }, + KConstantInfo.Axiom(xn, xt, xu) => + match y { + KConstantInfo.Axiom(yn, yt, yu) => + sord_then(sord_of_g(ord_cmp_g(xn, yn)), + sord_then(compare_kexpr_ctx(xt, yt, ctx), + sord_of_g(ord_cmp_g(xu, yu)))), + }, + KConstantInfo.Induct(xn, xt, xp, xi, xc, xr, _xrf, xu, _xne, _xa) => + match y { + KConstantInfo.Induct(yn, yt, yp, yi, yc, yr, _yrf, yu, _yne, _ya) => + sord_then(sord_of_g(ord_cmp_g(xr, yr)), + sord_then(sord_of_g(ord_cmp_g(xu, yu)), + sord_then(sord_of_g(ord_cmp_g(xn, yn)), + sord_then(sord_of_g(ord_cmp_g(xp, yp)), + sord_then(sord_of_g(ord_cmp_g(xi, yi)), + sord_then(sord_of_g(ord_cmp_g(list_length(xc), list_length(yc))), + compare_kexpr_ctx(xt, yt, ctx))))))), + }, + KConstantInfo.Ctor(xn, xt, _xi, xc, xp, xf, _xu) => + match y { + KConstantInfo.Ctor(yn, yt, _yi, 
yc, yp, yf, _yu) => + sord_then(sord_of_g(ord_cmp_g(xn, yn)), + sord_then(sord_of_g(ord_cmp_g(xc, yc)), + sord_then(sord_of_g(ord_cmp_g(xp, yp)), + sord_then(sord_of_g(ord_cmp_g(xf, yf)), + compare_kexpr_ctx(xt, yt, ctx))))), + }, + KConstantInfo.Rec(xn, xt, xp, xi, xm, xmi, xrules, xk, _xu, _xba) => + match y { + KConstantInfo.Rec(yn, yt, yp, yi, ym, ymi, yrules, yk, _yu, _yba) => + sord_then(sord_of_g(ord_cmp_g(xn, yn)), + sord_then(sord_of_g(ord_cmp_g(xp, yp)), + sord_then(sord_of_g(ord_cmp_g(xi, yi)), + sord_then(sord_of_g(ord_cmp_g(xm, ym)), + sord_then(sord_of_g(ord_cmp_g(xmi, ymi)), + sord_then(sord_of_g(ord_cmp_g(xk, yk)), + sord_then(compare_kexpr_ctx(xt, yt, ctx), + compare_krec_rule_list_ctx(xrules, yrules, ctx)))))))), + }, + } + } + + -- Build ctx (list of kernel positions) for all consts in `top` whose + -- block_addr equals `target`. Walks `top` once with a position counter. + fn block_members_of(target: [G; 32], top: List‹&KConstantInfo›, + all_top: List‹&KConstantInfo›, pos: G) -> List‹G› { + match load(top) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(&ci, rest) => + let ba = kconst_block_addr(ci, all_top); + let rest_members = block_members_of(target, rest, all_top, pos + 1); + match address_eq(ba, target) { + 1 => store(ListNode.Cons(pos, rest_members)), + 0 => rest_members, + }, + } + } + + -- Walks `top` finding adjacent KConstantInfo pairs that share a derived + -- block_addr. Asserts strict-lt for each such pair. Block_addr derivation: + -- Induct → its own 10th-field block_addr. + -- Ctor → parent Induct's block_addr (via induct_idx lookup). + -- Rec → block_addr of the parent of its first rule's ctor. + -- Other → [0;32] (not part of a Muts block). + fn check_canonical_block_sort(top: List‹&KConstantInfo›) { + check_canonical_block_sort_walk(top, [0; 32], store(ListNode.Nil), 0, top) + } + + fn kconst_block_addr(ci: KConstantInfo, top: List‹&KConstantInfo›) -> [G; 32] { + match ci { + KConstantInfo.Induct(_, _, _, _, _, _, _, _, _, ba) => ba, + KConstantInfo.Ctor(_, _, induct_idx, _, _, _, _) => + let ind_ci = load(list_lookup(top, induct_idx)); + match ind_ci { + KConstantInfo.Induct(_, _, _, _, _, _, _, _, _, ba) => ba, + _ => [0; 32], + }, + KConstantInfo.Rec(_, _, _, _, _, _, _, _, _, ba) => ba, + _ => [0; 32], + } + } + + -- Walk top, group consecutive consts sharing a non-zero block_addr, + -- and validate each block via iterative refinement. 
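+  -- Illustrative (hypothetical names): a mutual block stored as
+  -- [Indc A, Ctor A.mk, Indc B, Ctor B.mk, Rec A.rec, Rec B.rec] derives
+  -- the same block_addr for all six entries, so the walk forms one group
+  -- {0..5}; validation then restricts to the Indc subset {0, 2} and checks
+  -- it against the refinement sort.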
+ fn check_canonical_block_sort_walk(consts: List‹&KConstantInfo›, + cur_ba: [G; 32], + cur_members: List‹G›, + pos: G, + top: List‹&KConstantInfo›) { + match load(consts) { + ListNode.Nil => validate_block_if_nonzero(cur_ba, cur_members, top), + ListNode.Cons(&ci, rest) => + let ba = kconst_block_addr(ci, top); + match address_eq(ba, cur_ba) { + 1 => + check_canonical_block_sort_walk(rest, ba, list_snoc(cur_members, pos), + pos + 1, top), + 0 => + let _ = validate_block_if_nonzero(cur_ba, cur_members, top); + let new_members = init_block_members(ba, pos); + check_canonical_block_sort_walk(rest, ba, new_members, pos + 1, top), + }, + } + } + + fn init_block_members(ba: [G; 32], pos: G) -> List‹G› { + match address_eq(ba, [0; 32]) { + 1 => store(ListNode.Nil), + 0 => store(ListNode.Cons(pos, store(ListNode.Nil))), + } + } + + fn validate_block_if_nonzero(ba: [G; 32], members: List‹G›, + top: List‹&KConstantInfo›) { + match address_eq(ba, [0; 32]) { + 1 => (), + 0 => + match list_length(members) { + 0 => (), + 1 => (), + _ => validate_block_canonical(members, top), + }, + } + } + + -- Mirror: src/ix/kernel/canonical_check.rs:732-762 validate_by_full_refinement. + -- Run sort_kconsts, assert all classes are singletons, and stored order + -- matches the canonical sort. Equal classes (size > 1) signal an alpha- + -- collision in the block — Lean's compiler should have collapsed those. + -- Mirror Rust ingress.rs:2025-2048: validate canonical order for + -- Indc subset only. Ctors share parent's block_addr in our walk but + -- aren't actual block members per Rust's KConst::Indc.block field. + fn validate_block_canonical(stored: List‹G›, top: List‹&KConstantInfo›) { + let stored_indcs = filter_indc_positions(stored, top); + match list_length(stored_indcs) { + 0 => (), + 1 => (), + _ => + let classes = sort_kconsts(stored_indcs, top); + let sing = all_singleton_classes(classes); + let sorted = flatten_classes(classes); + let eq = g_list_eq(sorted, stored_indcs); + assert_eq!(sing, 1); + assert_eq!(eq, 1); + (), + } + } + + fn filter_indc_positions(positions: List‹G›, + top: List‹&KConstantInfo›) -> List‹G› { + match load(positions) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(p, rest) => + let ci = load(list_lookup(top, p)); + let rest_filtered = filter_indc_positions(rest, top); + match kconst_kind_ord(ci) { + 1 => store(ListNode.Cons(p, rest_filtered)), + _ => rest_filtered, + }, + } + } + + fn all_singleton_classes(classes: List‹List‹G››) -> G { + match load(classes) { + ListNode.Nil => 1, + ListNode.Cons(c, rest) => + match list_length(c) { + 1 => all_singleton_classes(rest), + _ => 0, + }, + } + } + + fn flatten_classes(classes: List‹List‹G››) -> List‹G› { + match load(classes) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(c, rest) => + list_concat(c, flatten_classes(rest)), + } + } + + fn g_list_eq(xs: List‹G›, ys: List‹G›) -> G { + match load(xs) { + ListNode.Nil => + match load(ys) { + ListNode.Nil => 1, + _ => 0, + }, + ListNode.Cons(x, xt) => + match load(ys) { + ListNode.Nil => 0, + ListNode.Cons(y, yt) => + match x - y { + 0 => g_list_eq(xt, yt), + _ => 0, + }, + }, + } + } + + -- ============================================================================ + -- Iterative refinement: sort_kconsts (mirror canonical_check.rs:657-703). + -- + -- Seed: members sorted by kernel position (stable seed key in our layout). + -- Loop: + -- 1. ctx = flatten of current classes. + -- 2. 
For each multi-element class: sort by compare_kconst_ctx, then + -- group consecutive equals. + -- 3. If new partition equals previous: done. + -- ============================================================================ + fn sort_kconsts(members: List‹G›, top: List‹&KConstantInfo›) -> List‹List‹G›› { + -- Seed: single class with members in given order (kernel positions are + -- already a stable seed key). + let seed = store(ListNode.Cons(members, store(ListNode.Nil))); + sort_kconsts_loop(seed, top, 32) + } + + fn sort_kconsts_loop(classes: List‹List‹G››, top: List‹&KConstantInfo›, + fuel: G) -> List‹List‹G›› { + match fuel { + 0 => classes, + _ => + let ctx = flatten_classes(classes); + let new_classes = refine_classes(classes, ctx, top); + match classes_eq(classes, new_classes) { + 1 => classes, + _ => sort_kconsts_loop(new_classes, top, fuel - 1), + }, + } + } + + fn refine_classes(classes: List‹List‹G››, ctx: List‹G›, + top: List‹&KConstantInfo›) -> List‹List‹G›› { + match load(classes) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(c, rest) => + let refined = refine_one_class(c, ctx, top); + list_concat(refined, refine_classes(rest, ctx, top)), + } + } + + fn refine_one_class(c: List‹G›, ctx: List‹G›, + top: List‹&KConstantInfo›) -> List‹List‹G›› { + match list_length(c) { + 0 => store(ListNode.Nil), + 1 => store(ListNode.Cons(c, store(ListNode.Nil))), + _ => + let sorted = insertion_sort_class(c, ctx, top); + group_consecutive_class(sorted, ctx, top), + } + } + + fn insertion_sort_class(xs: List‹G›, ctx: List‹G›, + top: List‹&KConstantInfo›) -> List‹G› { + match load(xs) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(x, rest) => + insert_sorted(x, insertion_sort_class(rest, ctx, top), ctx, top), + } + } + + fn insert_sorted(x: G, sorted: List‹G›, ctx: List‹G›, + top: List‹&KConstantInfo›) -> List‹G› { + match load(sorted) { + ListNode.Nil => store(ListNode.Cons(x, store(ListNode.Nil))), + ListNode.Cons(h, rest) => + match compare_kconst_ctx(load(list_lookup(top, x)), + load(list_lookup(top, h)), ctx) { + (0, _) => store(ListNode.Cons(x, sorted)), + _ => store(ListNode.Cons(h, insert_sorted(x, rest, ctx, top))), + }, + } + } + + fn group_consecutive_class(sorted: List‹G›, ctx: List‹G›, + top: List‹&KConstantInfo›) -> List‹List‹G›› { + match load(sorted) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(h, rest) => + group_consecutive_walk(rest, ctx, top, h, + store(ListNode.Cons(h, store(ListNode.Nil)))), + } + } + + fn group_consecutive_walk(remaining: List‹G›, ctx: List‹G›, + top: List‹&KConstantInfo›, + last: G, current_group: List‹G›) -> List‹List‹G›› { + match load(remaining) { + ListNode.Nil => store(ListNode.Cons(current_group, store(ListNode.Nil))), + ListNode.Cons(h, rest) => + match compare_kconst_ctx(load(list_lookup(top, last)), + load(list_lookup(top, h)), ctx) { + (1, _) => + group_consecutive_walk(rest, ctx, top, h, list_snoc(current_group, h)), + _ => + store(ListNode.Cons(current_group, + group_consecutive_walk(rest, ctx, top, h, + store(ListNode.Cons(h, store(ListNode.Nil)))))), + }, + } + } + + fn classes_eq(a: List‹List‹G››, b: List‹List‹G››) -> G { + match load(a) { + ListNode.Nil => + match load(b) { + ListNode.Nil => 1, + _ => 0, + }, + ListNode.Cons(ah, arest) => + match load(b) { + ListNode.Nil => 0, + ListNode.Cons(bh, brest) => + match g_list_eq(ah, bh) { + 0 => 0, + _ => classes_eq(arest, brest), + }, + }, + } + } +⟧ + +end IxVM + +end diff --git a/Ix/IxVM/Kernel/Check.lean b/Ix/IxVM/Kernel/Check.lean new file mode 100644 
index 00000000..648b2a47
--- /dev/null
+++ b/Ix/IxVM/Kernel/Check.lean
@@ -0,0 +1,362 @@
+module
+public import Ix.Aiur.Meta
+public import Ix.IxVM.KernelTypes
+public import Ix.IxVM.Kernel.Subst
+public import Ix.IxVM.Kernel.Whnf
+public import Ix.IxVM.Kernel.Infer
+public import Ix.IxVM.Kernel.DefEq
+
+public section
+
+namespace IxVM
+
+/-! ## Per-constant type checking
+
+Mirrors `src/ix/kernel/check.rs::check_const_member` (line 95+).
+
+Per `KConstantInfo` variant:
+
+* `Axio { ty }` → infer ty, ensure_sort.
+* `Defn { ty, val }` → infer ty (ensure_sort), infer val, is_def_eq val_ty ty.
+* `Thm { ty, val }` → as Defn, plus assert ty's level is Prop (Sort 0).
+* `Opaque { ty, val, is_unsafe }` → ensure_sort ty; if !is_unsafe, also
+  infer val and is_def_eq.
+* `Quot { ty }` → infer ty, ensure_sort.
+* `Indc { ty }` → infer ty, ensure_sort.
+* `Ctor { ty }` → infer ty, ensure_sort.
+* `Recr { ty }` → infer ty, ensure_sort.
+
+Top-level entry: `check_all` walks the kernel const list and calls
+`check_const` per element. Failures trip `assert_eq!(0, 1)`-style asserts,
+which abort execution and fail the proof.
+-/
+
+def check := ⟦
+  -- Mirror: each KConstantInfo's unsafe flag (Defn = DefinitionSafety,
+  -- others = G). Returns 1 if unsafe, 0 if safe. Thm and Quot are always safe.
+  fn is_unsafe_ci(ci: KConstantInfo) -> G {
+    match ci {
+      KConstantInfo.Axiom(_, _, u) => u,
+      KConstantInfo.Defn(_, _, _, s, _) =>
+        match s {
+          DefinitionSafety.Unsafe => 1,
+          _ => 0,
+        },
+      KConstantInfo.Thm(_, _, _) => 0,
+      KConstantInfo.Opaque(_, _, _, u) => u,
+      KConstantInfo.Quot(_, _, _) => 0,
+      KConstantInfo.Induct(_, _, _, _, _, _, _, u, _, _) => u,
+      KConstantInfo.Ctor(_, _, _, _, _, _, u) => u,
+      KConstantInfo.Rec(_, _, _, _, _, _, _, _, u, _) => u,
+    }
+  }
+
+  -- Mirror: src/ix/kernel/check.rs Safe→Unsafe transitive rejection.
+  -- Walks every Const(idx, _) in `e`; returns 0 if any target const is
+  -- unsafe, 1 otherwise. Used only when the calling const is itself safe.
+  fn safe_refs_only(e: KExpr, top: List‹&KConstantInfo›) -> G {
+    match load(e) {
+      KExprNode.BVar(_) => 1,
+      KExprNode.Srt(_) => 1,
+      KExprNode.Const(idx, _) =>
+        let ci = load(list_lookup(top, idx));
+        let u = is_unsafe_ci(ci);
+        match u {
+          0 => 1,
+          1 => 0,
+        },
+      KExprNode.App(f, a) =>
+        safe_refs_only(f, top) * safe_refs_only(a, top),
+      KExprNode.Lam(t, b) =>
+        safe_refs_only(t, top) * safe_refs_only(b, top),
+      KExprNode.Forall(t, b) =>
+        safe_refs_only(t, top) * safe_refs_only(b, top),
+      KExprNode.Let(t, v, b) =>
+        safe_refs_only(t, top) * safe_refs_only(v, top) * safe_refs_only(b, top),
+      KExprNode.Lit(_) => 1,
+      KExprNode.Proj(_, _, e1) => safe_refs_only(e1, top),
+    }
+  }
+
+  -- Assert that a Safe-classified const has no unsafe refs in `e`.
+  -- For unsafe-classified consts, this is a no-op.
+  fn assert_safety(self_unsafe: G, e: KExpr, top: List‹&KConstantInfo›) {
+    match self_unsafe {
+      1 => (),
+      0 =>
+        let ok = safe_refs_only(e, top);
+        assert_eq!(ok, 1);
+        (),
+    }
+  }
+
+  -- Mirror: src/ix/kernel/check.rs:572-598 fn validate_univ_params_seen.
+  -- Walks a KLevel asserting `Param(i)` has `i < bound`. Aiur's `store`/
+  -- `load` deduplication subsumes Rust's seen-set.
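+  -- E.g. with bound = 2 (two level params), Max(Param 0, Succ(Param 1))
+  -- passes, while any occurrence of Param 2 trips the assert.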
+ fn validate_univ_params_seen(u: KLevel, bound: G) { + match u { + KLevel.Zero => (), + KLevel.Succ(&inner) => validate_univ_params_seen(inner, bound), + KLevel.Max(&a, &b) => + let _ = validate_univ_params_seen(a, bound); + validate_univ_params_seen(b, bound), + KLevel.IMax(&a, &b) => + let _ = validate_univ_params_seen(a, bound); + validate_univ_params_seen(b, bound), + KLevel.Param(i) => + assert_eq!(u32_less_than(i, bound), 1); + (), + } + } + + fn validate_univ_params_list(lvls: List‹&KLevel›, bound: G) { + match load(lvls) { + ListNode.Nil => (), + ListNode.Cons(&u, rest) => + let _ = validate_univ_params_seen(u, bound); + validate_univ_params_list(rest, bound), + } + } + + -- Mirror: src/ix/kernel/check.rs:494-570 fn validate_expr_well_scoped. + -- Walks `e` checking `BVar(i) < depth`, Const universe-arity match, and + -- recurses into universes via `validate_univ_params_seen`. + fn validate_expr_well_scoped(e: KExpr, depth: G, bound: G, + top: List‹&KConstantInfo›) { + match load(e) { + KExprNode.BVar(i) => + assert_eq!(u32_less_than(i, depth), 1); + (), + KExprNode.Srt(&l) => validate_univ_params_seen(l, bound), + KExprNode.Const(idx, lvls) => + let ci = load(list_lookup(top, idx)); + let expected = const_num_lvls(ci); + assert_eq!(list_length(lvls), expected); + validate_univ_params_list(lvls, bound), + KExprNode.App(f, a) => + let _ = validate_expr_well_scoped(f, depth, bound, top); + validate_expr_well_scoped(a, depth, bound, top), + KExprNode.Lam(t, b) => + let _ = validate_expr_well_scoped(t, depth, bound, top); + validate_expr_well_scoped(b, depth + 1, bound, top), + KExprNode.Forall(t, b) => + let _ = validate_expr_well_scoped(t, depth, bound, top); + validate_expr_well_scoped(b, depth + 1, bound, top), + KExprNode.Let(t, v, b) => + let _ = validate_expr_well_scoped(t, depth, bound, top); + let _ = validate_expr_well_scoped(v, depth, bound, top); + validate_expr_well_scoped(b, depth + 1, bound, top), + KExprNode.Lit(_) => (), + KExprNode.Proj(_, _, val) => + validate_expr_well_scoped(val, depth, bound, top), + } + } + + -- Mirror: src/ix/kernel/check.rs:422-478 fn validate_const_well_scoped. + -- Validates type + variant-specific value/rules. Rec rules carry rhs each. + fn validate_const_well_scoped(ci: KConstantInfo, top: List‹&KConstantInfo›) { + let bound = const_num_lvls(ci); + let ty = const_type_of(ci); + let _ = validate_expr_well_scoped(ty, 0, bound, top); + match ci { + KConstantInfo.Defn(_, _, val, _, _) => + validate_expr_well_scoped(val, 0, bound, top), + KConstantInfo.Thm(_, _, val) => + validate_expr_well_scoped(val, 0, bound, top), + KConstantInfo.Opaque(_, _, val, _) => + validate_expr_well_scoped(val, 0, bound, top), + KConstantInfo.Rec(_, _, _, _, _, _, rules, _, _, _) => + validate_recr_rules(rules, bound, top), + _ => (), + } + } + + fn validate_recr_rules(rules: List‹KRecRule›, bound: G, + top: List‹&KConstantInfo›) { + match load(rules) { + ListNode.Nil => (), + ListNode.Cons(rule, rest) => + match rule { + KRecRule.Mk(_, _, rhs) => + let _ = validate_expr_well_scoped(rhs, 0, bound, top); + validate_recr_rules(rest, bound, top), + }, + } + } + + -- Mirror: src/ix/kernel/check.rs:678-720 fn check_eq_type. + -- Asserts the Eq inductive in `top` has 1 universe param, 2 params, and + -- exactly one ctor whose address matches `eq_refl_addr()`. 
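+  -- Concretely this pins the standard `Eq` shape: one universe param `u`,
+  -- two params (`α : Sort u` and the LHS value), and the single ctor
+  -- `Eq.refl`, identified by content address rather than by name.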
+ fn check_eq_type(top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) { + let eq_idx = find_addr_idx(eq_addr(), addrs, 0); + let eq_ci = load(list_lookup(top, eq_idx)); + match eq_ci { + KConstantInfo.Induct(num_lvls, _, n_params, _, ctor_indices, _, _, _, _, _) => + assert_eq!(num_lvls, 1); + assert_eq!(n_params, 2); + assert_eq!(list_length(ctor_indices), 1); + let ctor_pos = list_lookup(ctor_indices, 0); + let ctor_addr = list_lookup(addrs, ctor_pos); + assert_eq!(address_eq(ctor_addr, eq_refl_addr()), 1); + (), + } + } + + -- ============================================================================ + -- check_const: dispatch per KConstantInfo variant. + -- ============================================================================ + fn check_const(ci: KConstantInfo, pos: G, top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) { + let _ = validate_const_well_scoped(ci, top); + let u = is_unsafe_ci(ci); + match ci { + KConstantInfo.Axiom(_, ty, _) => + let _ = k_ensure_sort(ty, store(ListNode.Nil), top, addrs); + let _ = assert_safety(u, ty, top); + (), + + KConstantInfo.Defn(_, ty, val, _, _) => + let _ = k_ensure_sort(ty, store(ListNode.Nil), top, addrs); + let _ = assert_safety(u, ty, top); + let _ = assert_safety(u, val, top); + let _ = k_check(val, ty, store(ListNode.Nil), top, addrs); + (), + + KConstantInfo.Thm(_, ty, val) => + -- Mirror: src/ix/kernel/check.rs:135. Theorem type must be Sort 0. + let lvl = k_ensure_sort(ty, store(ListNode.Nil), top, addrs); + assert_eq!(level_equal(load(lvl), KLevel.Zero), 1); + let _ = assert_safety(u, ty, top); + let _ = assert_safety(u, val, top); + let _ = k_check(val, ty, store(ListNode.Nil), top, addrs); + (), + + KConstantInfo.Opaque(_, ty, val, is_unsafe) => + let _ = k_ensure_sort(ty, store(ListNode.Nil), top, addrs); + let _ = assert_safety(u, ty, top); + let _ = assert_safety(u, val, top); + match is_unsafe { + 1 => (), + 0 => + let _ = k_check(val, ty, store(ListNode.Nil), top, addrs); + (), + }, + + KConstantInfo.Quot(num_lvls, ty, kind) => + let _ = k_ensure_sort(ty, store(ListNode.Nil), top, addrs); + let _ = assert_safety(u, ty, top); + -- Mirror: src/ix/kernel/check.rs:606-675 fn check_quot. + -- Validate kind ↔ address consistency, universe-param count per + -- variant, and forall-binder count. + let self_addr = list_lookup(addrs, pos); + let _ = check_quot(self_addr, kind, num_lvls, ty, top, addrs); + (), + + KConstantInfo.Induct(_, ty, n_params, n_indices, ctor_indices, + is_rec, _, _, _, block_addr) => + let _ = k_ensure_sort(ty, store(ListNode.Nil), top, addrs); + let _ = assert_safety(u, ty, top); + let _ = check_block_peer_param_agreement(pos, ty, n_params, n_indices, + block_addr, top, addrs); + let block_idxs = derive_block_member_idxs(pos, top); + let _ = validate_block_auxes(block_idxs, top); + -- H1: constructively recompute is_rec by scanning ctor field doms + -- for block-member references. Mirror src/ix/kernel/inductive.rs:309-315. + -- Without this, an adversary could set is_rec=0 on a recursive + -- 1-ctor inductive to enable struct-eta on a recursive structure. + let computed_is_rec = compute_is_rec(ctor_indices, n_params, block_idxs, top); + assert_eq!(is_rec, computed_is_rec); + (), + + -- Ctor cross-ref + return-type + field-universe + strict-positivity + -- (positivity walks mutual + nested via derive_block_member_idxs). 
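+      -- Illustrative rejection: `inductive Bad | mk : (Bad → Nat) → Bad`
+      -- places Bad in negative position in mk's field type, so
+      -- check_positivity must fail it; a field of type `Nat → Bad`
+      -- (strictly positive occurrence) is accepted.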
+ KConstantInfo.Ctor(_, ty, induct_idx, _, num_params, num_fields, _) => + let _ = k_ensure_sort(ty, store(ListNode.Nil), top, addrs); + let _ = assert_safety(u, ty, top); + let _ = check_ctor_against_inductive_member(pos, ci, top); + let ind_ci = load(list_lookup(top, induct_idx)); + match ind_ci { + KConstantInfo.Induct(ind_num_lvls, ind_ty, ind_n_params, ind_n_indices, _, _, _, _, _, _) => + assert_eq!(num_params, ind_n_params); + -- A1 defense-in-depth: ctor's leading param domains must match + -- parent inductive's. Mirror src/ix/kernel/inductive.rs:283,393. + let _ = check_param_agreement(ind_ty, ty, ind_n_params, top, addrs); + let _ = check_ctor_return_type(ty, num_params, ind_n_indices, num_fields, + induct_idx, ind_num_lvls); + let ind_level = get_result_sort_level(ind_ty, ind_n_params + ind_n_indices); + let _ = check_field_universes(ty, num_params, ind_level, + store(ListNode.Nil), top, addrs); + let _ = check_positivity(ty, num_params, induct_idx, store(ListNode.Nil), top, addrs); + (), + }, + + KConstantInfo.Rec(_, ty, _, _, _, _, _, _, _, _) => + let _ = k_ensure_sort(ty, store(ListNode.Nil), top, addrs); + let _ = assert_safety(u, ty, top); + let _ = check_recursor_member(pos, ci, top, addrs); + (), + } + } + + -- Mirror: src/ix/kernel/check.rs:606-675 fn check_quot. + -- Validates quot variant consistency: address ↔ kind match, universe + -- param count, and at-least-N forall binders for the type. + fn check_quot(self_addr: [G; 32], kind: QuotKind, num_lvls: G, ty: KExpr, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) { + -- Address ↔ kind consistency + per-variant (lvls, foralls) expectations. + -- Type/Ctor/Ind = 1 lvl; Lift = 2 lvls. + -- Foralls: Type=2, Ctor=3, Lift=6, Ind=5. + let pair = match kind { + QuotKind.Typ => + assert_eq!(address_eq(self_addr, quot_type_addr()), 1); + (1, 2), + QuotKind.Ctor => + assert_eq!(address_eq(self_addr, quot_ctor_addr()), 1); + (1, 3), + QuotKind.Lift => + assert_eq!(address_eq(self_addr, quot_lift_addr()), 1); + -- Mirror: src/ix/kernel/check.rs:651-653. Lift requires Eq type + -- to be properly formed (Quot.lift uses Eq in its reduction rule). + let _ = check_eq_type(top, addrs); + (2, 6), + QuotKind.Ind => + assert_eq!(address_eq(self_addr, quot_ind_addr()), 1); + (1, 5), + }; + match pair { + (expected_lvls, expected_foralls) => + assert_eq!(num_lvls, expected_lvls); + assert_eq!(count_foralls_at_least(ty, expected_foralls, 0), 1); + (), + } + } + + -- Returns 1 iff `ty` has at least `n` leading Foralls. 
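+  -- E.g. for ty = ∀ a b c, P: n = 2 or 3 returns 1, n = 4 returns 0 (P is
+  -- not a Forall); n = 0 returns 1 for any ty.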
+ fn count_foralls_at_least(ty: KExpr, n: G, seen: G) -> G { + match n - seen { + 0 => 1, + _ => + match load(ty) { + KExprNode.Forall(_, body) => count_foralls_at_least(body, n, seen + 1), + _ => 0, + }, + } + } + + fn check_all(consts: List‹&KConstantInfo›, top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) { + let _ = check_canonical_block_sort(top); + check_all_iter(consts, top, addrs, 0) + } + + fn check_all_iter(consts: List‹&KConstantInfo›, top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›, pos: G) { + match load(consts) { + ListNode.Nil => (), + ListNode.Cons(&ci, rest) => + let _ = check_const(ci, pos, top, addrs); + check_all_iter(rest, top, addrs, pos + 1), + } + } +⟧ + +end IxVM + +end diff --git a/Ix/IxVM/Kernel/DefEq.lean b/Ix/IxVM/Kernel/DefEq.lean new file mode 100644 index 00000000..1d1780bc --- /dev/null +++ b/Ix/IxVM/Kernel/DefEq.lean @@ -0,0 +1,801 @@ +module +public import Ix.Aiur.Meta +public import Ix.IxVM.KernelTypes +public import Ix.IxVM.Kernel.Subst +public import Ix.IxVM.Kernel.Whnf + +public section + +namespace IxVM + +/-! ## Definitional equality over `KExpr` + +Mirrors `src/ix/kernel/def_eq.rs`. + +Tiered strategy: + +1. **Structural alpha-equivalence**: same expression shapes with recursive + def_eq on sub-expressions. Pointer equality short-circuit. +2. **WHNF**: reduce both sides; retry structural. +3. **Proof irrelevance**: both sides accepted when their inferred type is + a Prop. Implemented via `is_prop_type_of` over the shared types ctx. +4. **Lazy delta**: simultaneous unfold of both sides when both heads are + Const(idx) of a Defn/Thm; falls through to Const-Proj / Proj-Const / + Const-App congruence when applicable. +5. **Lambda eta**: when one side is a `Lam` and the other isn't, wrap the + non-Lam side as `λ(dom). s #0` (via `expr_lift`) and recurse via the + structural Lam-Lam arm. +6. **Struct / unit-like eta**: subsingleton Prop ctors and recursive + single-ctor structures fold via `is_unit_like_type` and the iota + step in `Whnf.lean::try_struct_eta_iota`. +-/ + +def defEq := ⟦ + -- ============================================================================ + -- k_is_def_eq + -- + -- Mirror of `src/ix/kernel/def_eq.rs::is_def_eq`. Returns G: + -- 1 = def-eq, 0 = not. + -- ============================================================================ + fn k_is_def_eq(a: KExpr, b: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> G { + -- Tier 1: pointer equality short-circuit. + match ptr_val(a) - ptr_val(b) { + 0 => 1, + _ => + -- Tier 1.5: lazy-delta app-congruence pre-WHNF. Mirror: + -- src/ix/kernel/def_eq.rs:1262-1287 try_def_eq_app. When both + -- sides share Const(idx, lvls) head with same arg count, recurse + -- on args directly — skips delta+beta of the def's body. + -- Sound: only accepts when args recursively def-eq; bails to + -- WHNF path otherwise. + match try_lazy_delta_app(a, b, types, top, addrs) { + 1 => 1, + 0 => + -- Tier 1c: string literal expansion (must run before WHNF). Mirror: + -- src/ix/kernel/def_eq.rs:295-304. If exactly one side is Lit(Str), + -- expand to `String.ofList [Char.ofNat c, ...]` ctor form so both + -- sides reduce in lockstep through delta + iota. + match try_string_lit_pair(a, b, types, top, addrs) { + 1 => 1, + 0 => + -- Tier 2: WHNF both sides. + let aw = whnf(a, types, top, addrs); + let bw = whnf(b, types, top, addrs); + match ptr_val(aw) - ptr_val(bw) { + 0 => 1, + _ => + -- Tier 3: proof irrelevance. 
+ match try_proof_irrel(aw, bw, types, top, addrs) { + 1 => 1, + 0 => + -- Tier 3b: unit-like-type symmetry. + match try_unit_like(aw, bw, types, top, addrs) { + 1 => 1, + 0 => + -- Tier 3c: struct eta (mirror def_eq.rs:778-784). + match try_eta_struct(aw, bw, types, top, addrs) { + 1 => 1, + 0 => + match try_eta_struct(bw, aw, types, top, addrs) { + 1 => 1, + 0 => + -- Tier 3d: Nat offset (mirror def_eq.rs:751). + match try_def_eq_nat(aw, bw, types, top, addrs) { + (1, eq) => eq, + (0, _) => + -- Tier 4: lazy-delta unfold loop (mirror + -- def_eq.rs:1418-1483 lazy_delta_reduction_step). + -- Both sides may be Const-headed Defn/Thm + -- left stuck by whnf (Opaque hint or Theorem). + -- Unfold one side per rank; recurse. + match lazy_delta_loop(aw, bw, 16, types, top, addrs) { + (1, eq) => eq, + (0, _) => k_is_def_eq_struct(aw, bw, types, top, addrs), + }, + }, + }, + }, + }, + }, + }, + }, + }, + } + } + + -- Mirror: src/ix/kernel/def_eq.rs:801-818 fn try_proof_irrel. + fn try_proof_irrel(a: KExpr, b: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> G { + let a_ty = k_infer(a, types, top, addrs); + match is_prop_type(a_ty, types, top, addrs) { + 0 => 0, + 1 => + let b_ty = k_infer(b, types, top, addrs); + k_is_def_eq(a_ty, b_ty, types, top, addrs), + } + } + + -- Returns 1 iff `whnf(infer(ty))` is `Sort 0`. + fn is_prop_type(ty: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> G { + let sort = k_infer(ty, types, top, addrs); + let sort_w = whnf(sort, types, top, addrs); + match load(sort_w) { + KExprNode.Srt(l) => + match load(l) { + KLevel.Zero => 1, + _ => 0, + }, + _ => 0, + } + } + + -- Mirror: src/ix/kernel/def_eq.rs:858-905 fn try_unit_like_eq. + fn try_unit_like(a: KExpr, b: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> G { + let ta = k_infer(a, types, top, addrs); + let ta_w = whnf(ta, types, top, addrs); + match is_unit_like_type(ta_w, top) { + 0 => 0, + 1 => + let tb = k_infer(b, types, top, addrs); + k_is_def_eq(ta, tb, types, top, addrs), + } + } + + -- 1 iff ty is `Const(I, _) args` for non-rec 1-ctor 0-field inductive. + fn is_unit_like_type(ty: KExpr, top: List‹&KConstantInfo›) -> G { + match collect_spine_simple(ty) { + (head, _) => + match load(head) { + KExprNode.Const(idx, _) => + let ci = load(list_lookup(top, idx)); + match ci { + KConstantInfo.Induct(_, _, _, _, ctor_indices, is_rec, _, _, _, _) => + match is_rec { + 1 => 0, + 0 => + match list_length(ctor_indices) { + 1 => + let ctor_idx = list_lookup(ctor_indices, 0); + let ctor_ci = load(list_lookup(top, ctor_idx)); + match ctor_ci { + KConstantInfo.Ctor(_, _, _, _, _, n_fields, _) => + match n_fields { + 0 => 1, + _ => 0, + }, + _ => 0, + }, + _ => 0, + }, + }, + _ => 0, + }, + _ => 0, + }, + } + } + + -- Mirror: src/ix/kernel/def_eq.rs:1007-1018 try_string_lit_expansion, + -- attempted in both directions. 
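+  -- E.g. Lit(Str "ab") vs a hand-assembled string ctor spine: the literal
+  -- side is expanded via str_lit_to_ctor (Tier 1c note above) so both
+  -- sides can then reduce in lockstep through delta + iota.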
+ fn try_string_lit_pair(a: KExpr, b: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> G { + match try_string_lit_one(a, b, types, top, addrs) { + 1 => 1, + 0 => try_string_lit_one(b, a, types, top, addrs), + } + } + + fn try_string_lit_one(t: KExpr, s: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> G { + match load(t) { + KExprNode.Lit(lit) => + match lit { + KLiteral.Str(bs) => + match str_lit_to_ctor(bs, addrs) { + (1, expanded) => k_is_def_eq(expanded, s, types, top, addrs), + (0, _) => 0, + }, + _ => 0, + }, + _ => 0, + } + } + + -- Mirror: src/ix/kernel/def_eq.rs:920-926 fn is_nat_zero. + fn is_nat_zero(e: KExpr, addrs: List‹[G; 32]›) -> G { + match load(e) { + KExprNode.Lit(lit) => + match lit { + KLiteral.Nat(limbs) => klimbs_is_zero(limbs), + _ => 0, + }, + KExprNode.Const(idx, _) => + address_eq(list_lookup(addrs, idx), nat_zero_addr()), + _ => 0, + } + } + + -- Mirror: src/ix/kernel/def_eq.rs:930-948 fn nat_succ_of. + -- `Lit(n)` n>0 → (1, Lit(n-1)). `App(Const(Nat.succ), arg)` → (1, arg). + -- Else (0, _). + fn nat_succ_of(e: KExpr, addrs: List‹[G; 32]›) -> (G, KExpr) { + match load(e) { + KExprNode.Lit(lit) => + match lit { + KLiteral.Nat(limbs) => + match klimbs_is_zero(limbs) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => (1, mk_nat_lit(klimbs_dec(limbs))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + KExprNode.App(f, a) => + match load(f) { + KExprNode.Const(idx, _) => + match address_eq(list_lookup(addrs, idx), nat_succ_addr()) { + 1 => (1, a), + 0 => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + } + } + + -- Mirror: src/ix/kernel/def_eq.rs:953-995 is_def_eq_nat / try_def_eq_offset. + -- Returns (matched, eq). `matched=1` iff both sides are nat-shaped (both + -- zero, both succ-headed, or both literals); `eq` is the verdict. + fn try_def_eq_nat(a: KExpr, b: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, G) { + let za = is_nat_zero(a, addrs); + let zb = is_nat_zero(b, addrs); + match za * zb { + 1 => (1, 1), + 0 => + match nat_succ_of(a, addrs) { + (1, ap) => + match nat_succ_of(b, addrs) { + (1, bp) => (1, k_is_def_eq(ap, bp, types, top, addrs)), + _ => (0, 0), + }, + _ => (0, 0), + }, + } + } + + -- Mirror: src/ix/kernel/def_eq.rs:1105-1231 fn try_eta_struct. + -- Direction (t, s): s is the candidate ctor-headed side. Asserts s is + -- App-spine of `Const(ctor)` fully applied, induct is struct-like + -- (non-rec, 0 indices, 1 ctor), and field-by-field `Proj(induct, i, t) + -- ≡ s_args[num_params + i]`. 
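+  -- Illustrative: for t = `p : A × B` against s = `Prod.mk A B x y`
+  -- (num_params = 2, num_fields = 2, s_args = [A, B, x, y]), the check is
+  -- `Proj(Prod, 0, p) ≡ x` and `Proj(Prod, 1, p) ≡ y`.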
+ fn try_eta_struct(t: KExpr, s: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> G { + match collect_spine(s) { + (s_head, s_args) => + match load(s_head) { + KExprNode.Const(cidx, _) => + match load(list_lookup(top, cidx)) { + KConstantInfo.Ctor(_, _, induct_idx, _, num_params, num_fields, _) => + let arity_diff = list_length(s_args) - (num_params + num_fields); + match arity_diff { + 0 => + match load(list_lookup(top, induct_idx)) { + KConstantInfo.Induct(_, _, _, n_indices, ctor_indices, is_rec, _, _, _, _) => + let struct_like = eq_zero(is_rec) * eq_zero(n_indices) * + eq_zero(list_length(ctor_indices) - 1); + match struct_like { + 0 => 0, + 1 => + compare_struct_fields(induct_idx, num_params, + num_fields, t, s_args, 0, + types, top, addrs), + }, + _ => 0, + }, + _ => 0, + }, + _ => 0, + }, + _ => 0, + }, + } + } + + fn compare_struct_fields(induct_idx: G, num_params: G, num_fields: G, + t: KExpr, s_args: List‹KExpr›, i: G, + types: List‹KExpr›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> G { + match num_fields - i { + 0 => 1, + _ => + let proj_expr = store(KExprNode.Proj(induct_idx, i, t)); + let s_field = list_lookup(s_args, num_params + i); + match k_is_def_eq(proj_expr, s_field, types, top, addrs) { + 0 => 0, + 1 => compare_struct_fields(induct_idx, num_params, num_fields, t, + s_args, i + 1, types, top, addrs), + }, + } + } + + -- ============================================================================ + -- Structural comparison + Lambda-eta after WHNF. + -- ============================================================================ + -- Mirror: src/ix/kernel/def_eq.rs lambda-eta tier (both directions). + -- Every non-Lam `a` paired with Lam `b` falls through to symmetric eta + -- expansion (try_eta_expand swap), to accept `λx. axiom x ≡ axiom`. 
+ fn k_is_def_eq_struct(a: KExpr, b: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> G { + match load(a) { + KExprNode.Srt(la) => + match load(b) { + KExprNode.Srt(lb) => level_equal(load(la), load(lb)), + KExprNode.Lam(ty_b, body_b) => try_eta_expand(ty_b, body_b, a, types, top, addrs), + _ => 0, + }, + + KExprNode.BVar(ia) => + match load(b) { + KExprNode.BVar(ib) => + match ia - ib { + 0 => 1, + _ => 0, + }, + KExprNode.Lam(ty_b, body_b) => try_eta_expand(ty_b, body_b, a, types, top, addrs), + _ => 0, + }, + + KExprNode.Const(ia, lvls_a) => + match load(b) { + KExprNode.Const(ib, lvls_b) => + match ia - ib { + 0 => k_is_def_eq_levels(lvls_a, lvls_b), + _ => 0, + }, + KExprNode.Lam(ty_b, body_b) => try_eta_expand(ty_b, body_b, a, types, top, addrs), + _ => 0, + }, + + KExprNode.App(fa, xa) => + match load(b) { + KExprNode.App(fb, xb) => + let f_eq = k_is_def_eq(fa, fb, types, top, addrs); + match f_eq { + 1 => k_is_def_eq(xa, xb, types, top, addrs), + 0 => 0, + }, + KExprNode.Lam(ty_b, body_b) => try_eta_expand(ty_b, body_b, a, types, top, addrs), + _ => 0, + }, + + KExprNode.Lam(ty_a, body_a) => + match load(b) { + KExprNode.Lam(ty_b, body_b) => + let ty_eq = k_is_def_eq(ty_a, ty_b, types, top, addrs); + match ty_eq { + 1 => + let inner = store(ListNode.Cons(ty_a, types)); + k_is_def_eq(body_a, body_b, inner, top, addrs), + 0 => 0, + }, + _ => try_eta_expand(ty_a, body_a, b, types, top, addrs), + }, + + KExprNode.Forall(ty_a, body_a) => + match load(b) { + KExprNode.Forall(ty_b, body_b) => + let ty_eq = k_is_def_eq(ty_a, ty_b, types, top, addrs); + match ty_eq { + 1 => + let inner = store(ListNode.Cons(ty_a, types)); + k_is_def_eq(body_a, body_b, inner, top, addrs), + 0 => 0, + }, + KExprNode.Lam(ty_b, body_b) => try_eta_expand(ty_b, body_b, a, types, top, addrs), + _ => 0, + }, + + KExprNode.Let(ty_a, val_a, body_a) => + match load(b) { + KExprNode.Let(ty_b, val_b, body_b) => + let ty_eq = k_is_def_eq(ty_a, ty_b, types, top, addrs); + match ty_eq { + 1 => + let v_eq = k_is_def_eq(val_a, val_b, types, top, addrs); + match v_eq { + 1 => + let inner = store(ListNode.Cons(ty_a, types)); + k_is_def_eq(body_a, body_b, inner, top, addrs), + 0 => 0, + }, + 0 => 0, + }, + KExprNode.Lam(ty_b, body_b) => try_eta_expand(ty_b, body_b, a, types, top, addrs), + _ => 0, + }, + + KExprNode.Lit(la) => + match load(b) { + KExprNode.Lit(lb) => literal_eq(la, lb), + KExprNode.Lam(ty_b, body_b) => try_eta_expand(ty_b, body_b, a, types, top, addrs), + _ => 0, + }, + + KExprNode.Proj(tidx_a, fidx_a, ea) => + match load(b) { + KExprNode.Proj(tidx_b, fidx_b, eb) => + let same = eq_zero(tidx_a - tidx_b) * eq_zero(fidx_a - fidx_b); + match same { + 1 => k_is_def_eq(ea, eb, types, top, addrs), + 0 => 0, + }, + KExprNode.Lam(ty_b, body_b) => try_eta_expand(ty_b, body_b, a, types, top, addrs), + _ => 0, + }, + } + } + + -- ============================================================================ + -- Lambda eta expansion (Rust def_eq.rs:1068-1100). + -- + -- We have `λ(ty_a). body_a` on one side and a non-Lam `b` on the other. + -- Build the wrap `λ(ty_a). (lift(b, 1, 0)) #0` and compare its body + -- against body_a. + -- + -- Equivalently: compare `body_a` vs `App(lift(b, 1, 0), BVar(0))`. 
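+  --
+  -- Illustrative: comparing `λ(A). f #0` (body_a = `f #0`) against a closed
+  -- constant `f`: lift leaves `f` unchanged, so the recursive call compares
+  -- `f #0` with `App(f, BVar(0))` under `types` extended by ty_a, and the
+  -- App congruence arm closes it.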
+ -- ============================================================================ + fn try_eta_expand(ty_a: KExpr, body_a: KExpr, b: KExpr, + types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> G { + let b_lifted = expr_lift(b, 1, 0); + let bvar0 = store(KExprNode.BVar(0)); + let b_app = store(KExprNode.App(b_lifted, bvar0)); + let inner = store(ListNode.Cons(ty_a, types)); + k_is_def_eq(body_a, b_app, inner, top, addrs) + } + + -- ============================================================================ + -- Level list equality. + -- ============================================================================ + fn k_is_def_eq_levels(a: List‹&KLevel›, b: List‹&KLevel›) -> G { + match load(a) { + ListNode.Nil => + match load(b) { + ListNode.Nil => 1, + _ => 0, + }, + ListNode.Cons(la, ra) => + match load(b) { + ListNode.Nil => 0, + ListNode.Cons(lb, rb) => + let l_eq = level_equal(load(la), load(lb)); + match l_eq { + 1 => k_is_def_eq_levels(ra, rb), + 0 => 0, + }, + }, + } + } + + -- ============================================================================ + -- Lazy-delta app-congruence (Tier 1.5 of k_is_def_eq). + -- Mirror: src/ix/kernel/def_eq.rs:1262-1287 try_def_eq_app. + -- + -- When both sides reduce syntactically to `Const(idx, lvls) ◦ args` with + -- matching idx, level list, and arg count, recurse on args directly via + -- k_is_def_eq. Bypasses delta+beta of the def's body for the common + -- congruence case (`f x ≡ f y` whenever `x ≡ y`). + -- + -- Sound: returns 1 only when args recursively def-eq; returns 0 to fall + -- through to the regular WHNF-based pipeline. + -- ============================================================================ + fn try_lazy_delta_app(a: KExpr, b: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> G { + match collect_spine(a) { + (ah, aa) => + match collect_spine(b) { + (bh, bb) => + match load(ah) { + KExprNode.Const(ai, al) => + match load(bh) { + KExprNode.Const(bi, bl) => + match ai - bi { + 0 => + match k_is_def_eq_levels(al, bl) { + 0 => 0, + 1 => + let len_a = list_length(aa); + let len_b = list_length(bb); + match len_a - len_b { + 0 => is_def_eq_arg_list(aa, bb, types, top, addrs), + _ => 0, + }, + }, + _ => 0, + }, + _ => 0, + }, + _ => 0, + }, + }, + } + } + + fn is_def_eq_arg_list(aa: List‹KExpr›, bb: List‹KExpr›, + types: List‹KExpr›, top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> G { + match load(aa) { + ListNode.Nil => + match load(bb) { + ListNode.Nil => 1, + _ => 0, + }, + ListNode.Cons(a, ar) => + match load(bb) { + ListNode.Nil => 0, + ListNode.Cons(b, br) => + match k_is_def_eq(a, b, types, top, addrs) { + 0 => 0, + 1 => is_def_eq_arg_list(ar, br, types, top, addrs), + }, + }, + } + } + + -- ============================================================================ + -- Lazy-delta unfold loop (Tier 4 of k_is_def_eq). + -- Mirror: src/ix/kernel/def_eq.rs:1418-1483 lazy_delta_reduction_step + -- + outer fuel loop in def_eq.rs:1383-1414. + -- + -- After Tier 2 whnf may leave both sides stuck on Const-headed + -- Defn(Opaque) or Thm. Iteratively unfolds one side per rank-cmp: + -- * Both delta-eligible, ar > br : unfold a. + -- * Both delta-eligible, ar < br : unfold b. + -- * Both delta-eligible, ar == br: unfold both. + -- * Only a eligible: unfold a. + -- * Only b eligible: unfold b. + -- * Neither: stuck → fall through to structural compare. + -- After each unfold, whnf the result and recurse. Fuel-bounded. 
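+  --
+  -- Illustrative: if delta_rank gives `f` hint 5 and `g` hint 3, the loop
+  -- unfolds `f` first (u32_less_than(3, 5) = 1), stepping the taller
+  -- definition toward the shorter one before re-comparing.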
+ -- ============================================================================ + fn is_delta_eligible(idx: G, top: List‹&KConstantInfo›) -> G { + let ci = load(list_lookup(top, idx)); + match ci { + KConstantInfo.Defn(_, _, _, _, _) => 1, + KConstantInfo.Thm(_, _, _) => 1, + _ => 0, + } + } + + fn delta_rank(idx: G, top: List‹&KConstantInfo›) -> G { + let ci = load(list_lookup(top, idx)); + match ci { + KConstantInfo.Defn(_, _, _, _, hint) => hint, + _ => 0, + } + } + + -- Unfold a delta-eligible Const-headed expr to its body[lvls] applied + -- to the spine. Returns (1, expr2) on success, (0, e) otherwise. + fn delta_unfold(e: KExpr, top: List‹&KConstantInfo›) -> (G, KExpr) { + match collect_spine(e) { + (head, spine) => + match load(head) { + KExprNode.Const(idx, lvls) => + let ci = load(list_lookup(top, idx)); + match ci { + KConstantInfo.Defn(_, _, value, _, _) => + let body = expr_inst_levels(value, lvls); + (1, apply_spine(body, spine)), + KConstantInfo.Thm(_, _, value) => + let body = expr_inst_levels(value, lvls); + (1, apply_spine(body, spine)), + _ => (0, e), + }, + _ => (0, e), + }, + } + } + + -- Mirror: src/ix/kernel/def_eq.rs:1539-1549 try_unfold_proj_app. + -- If e collects to App-spine on Proj(_, _, inner) where inner is itself + -- delta-eligible Const-headed, unfold inner's body and rewrap the Proj. + -- Returns (1, e2) on progress, (0, e) otherwise. + fn try_unfold_proj_app(e: KExpr, top: List‹&KConstantInfo›) -> (G, KExpr) { + match collect_spine(e) { + (head, spine) => + match load(head) { + KExprNode.Proj(tidx, fidx, inner) => + match delta_unfold(inner, top) { + (1, inner2) => + let new_head = store(KExprNode.Proj(tidx, fidx, inner2)); + (1, apply_spine(new_head, spine)), + (0, _) => (0, e), + }, + _ => (0, e), + }, + } + } + + fn lazy_delta_loop(a: KExpr, b: KExpr, fuel: G, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> (G, G) { + match fuel { + 0 => (0, 0), + _ => + match ptr_val(a) - ptr_val(b) { + 0 => (1, 1), + _ => + -- Hoist heads once; structural congruence Tier 1 only fires when + -- both heads are Const (avoids try_lazy_delta_app's redundant + -- collect_spine when one side is Proj/Lam/etc). + -- Mirror src/ix/kernel/def_eq.rs:1418-1495: when one side is + -- Const-delta-eligible and the other isn't (Proj, Sort, Lam, + -- Forall, Lit, …), unfold the Const side regardless of the + -- other head's shape. The (Const, Proj) / (Proj, Const) cases + -- are subsumed by the generalized step helpers, since + -- try_unfold_proj_app is a no-op on non-Proj heads. + match collect_spine(a) { + (ah, aa) => + match collect_spine(b) { + (bh, bb) => + match load(ah) { + KExprNode.Const(ai, al) => + match load(bh) { + KExprNode.Const(bi, bl) => + match try_const_app_congruence(ai, al, aa, bi, bl, bb, + types, top, addrs) { + 1 => (1, 1), + _ => lazy_delta_step_const_const(ai, bi, a, b, fuel, + types, top, addrs), + }, + _ => lazy_delta_step_a_const(ai, a, b, fuel, types, top, addrs), + }, + _ => + match load(bh) { + KExprNode.Const(bi, _) => + lazy_delta_step_b_const(bi, a, b, fuel, types, top, addrs), + _ => (0, 0), + }, + }, + }, + }, + }, + } + } + + -- Const-vs-Const spine congruence: same idx, def-eq levels, same arity, + -- pairwise def-eq args. Inner of try_lazy_delta_app without the head + -- loads (caller already did them). 
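+  -- Note: returning 0 here is not a rejection; the caller falls through to
+  -- the unfold step. E.g. for a hypothetical `k _ := 0`, `k x ≡ k y` with
+  -- x ≢ y fails arg congruence but still closes as `0 ≡ 0` after unfolding.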
+  fn try_const_app_congruence(ai: G, al: List‹&KLevel›, aa: List‹KExpr›,
+                              bi: G, bl: List‹&KLevel›, bb: List‹KExpr›,
+                              types: List‹KExpr›, top: List‹&KConstantInfo›,
+                              addrs: List‹[G; 32]›) -> G {
+    match ai - bi {
+      0 =>
+        match k_is_def_eq_levels(al, bl) {
+          0 => 0,
+          1 =>
+            let len_a = list_length(aa);
+            let len_b = list_length(bb);
+            match len_a - len_b {
+              0 => is_def_eq_arg_list(aa, bb, types, top, addrs),
+              _ => 0,
+            },
+        },
+      _ => 0,
+    }
+  }
+
+  fn lazy_delta_step_const_const(ai: G, bi: G, a: KExpr, b: KExpr, fuel: G,
+                                 types: List‹KExpr›, top: List‹&KConstantInfo›,
+                                 addrs: List‹[G; 32]›) -> (G, G) {
+    let ae = is_delta_eligible(ai, top);
+    let be = is_delta_eligible(bi, top);
+    match ae {
+      0 =>
+        match be {
+          0 => (0, 0),
+          1 => unfold_b_and_loop(a, b, fuel, types, top, addrs),
+          _ => (0, 0),
+        },
+      1 =>
+        match be {
+          0 => unfold_a_and_loop(a, b, fuel, types, top, addrs),
+          1 =>
+            let ar = delta_rank(ai, top);
+            let br = delta_rank(bi, top);
+            match u32_less_than(br, ar) {
+              1 => unfold_a_and_loop(a, b, fuel, types, top, addrs),
+              0 =>
+                match u32_less_than(ar, br) {
+                  1 => unfold_b_and_loop(a, b, fuel, types, top, addrs),
+                  0 => unfold_both_and_loop(a, b, fuel, types, top, addrs),
+                },
+            },
+          _ => (0, 0),
+        },
+      _ => (0, 0),
+    }
+  }
+
+  -- a is Const-headed at idx ai; b is anything else (Proj, Sort, Lam, …).
+  -- Mirror Rust def_eq.rs:1438-1445 (a_delta && !b_delta branch).
+  -- When a is delta-eligible, first attempt try_unfold_proj_app(b) (a no-op
+  -- for non-Proj b); if that makes no progress, unfold a instead.
+  fn lazy_delta_step_a_const(ai: G, a: KExpr, b: KExpr, fuel: G,
+                             types: List‹KExpr›, top: List‹&KConstantInfo›,
+                             addrs: List‹[G; 32]›) -> (G, G) {
+    match is_delta_eligible(ai, top) {
+      0 => (0, 0),
+      1 =>
+        match try_unfold_proj_app(b, top) {
+          (1, b2) =>
+            let bw = whnf(b2, types, top, addrs);
+            lazy_delta_loop(a, bw, fuel - 1, types, top, addrs),
+          (0, _) => unfold_a_and_loop(a, b, fuel, types, top, addrs),
+        },
+      _ => (0, 0),
+    }
+  }
+
+  -- b is Const-headed at idx bi; a is anything else. Symmetric to the above.
+  -- Mirror Rust def_eq.rs:1446-1453 (!a_delta && b_delta branch).
+ fn lazy_delta_step_b_const(bi: G, a: KExpr, b: KExpr, fuel: G, + types: List‹KExpr›, top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, G) { + match is_delta_eligible(bi, top) { + 0 => (0, 0), + 1 => + match try_unfold_proj_app(a, top) { + (1, a2) => + let aw = whnf(a2, types, top, addrs); + lazy_delta_loop(aw, b, fuel - 1, types, top, addrs), + (0, _) => unfold_b_and_loop(a, b, fuel, types, top, addrs), + }, + _ => (0, 0), + } + } + + fn unfold_a_and_loop(a: KExpr, b: KExpr, fuel: G, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> (G, G) { + match delta_unfold(a, top) { + (1, a2) => + let aw = whnf(a2, types, top, addrs); + lazy_delta_loop(aw, b, fuel - 1, types, top, addrs), + (0, _) => (0, 0), + } + } + + fn unfold_b_and_loop(a: KExpr, b: KExpr, fuel: G, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> (G, G) { + match delta_unfold(b, top) { + (1, b2) => + let bw = whnf(b2, types, top, addrs); + lazy_delta_loop(a, bw, fuel - 1, types, top, addrs), + (0, _) => (0, 0), + } + } + + fn unfold_both_and_loop(a: KExpr, b: KExpr, fuel: G, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> (G, G) { + match delta_unfold(a, top) { + (1, a2) => + match delta_unfold(b, top) { + (1, b2) => + let aw = whnf(a2, types, top, addrs); + let bw = whnf(b2, types, top, addrs); + lazy_delta_loop(aw, bw, fuel - 1, types, top, addrs), + (0, _) => (0, 0), + }, + (0, _) => (0, 0), + } + } +⟧ + +end IxVM + +end diff --git a/Ix/IxVM/Kernel/Inductive.lean b/Ix/IxVM/Kernel/Inductive.lean new file mode 100644 index 00000000..87a6f4b5 --- /dev/null +++ b/Ix/IxVM/Kernel/Inductive.lean @@ -0,0 +1,2405 @@ +module +public import Ix.Aiur.Meta +public import Ix.IxVM.KernelTypes + +public section + +namespace IxVM + +/-! ## Inductive block validation + +Mirror: `src/ix/kernel/inductive.rs` (parameter agreement, return-type +validation, universe constraints, strict positivity, recursor synthesis). + +Structure-only validation, no name diagnostics. +-/ + +def inductive_check := ⟦ + -- Mirror: src/ix/kernel/inductive.rs:1968-2080 check_ctor_return_type. + -- Validates that a ctor's declared type, after peeling + -- `n_params + n_fields` Foralls, is a syntactic `Indc(params, indices)` + -- application: + -- * head is `Const(ind_idx, lvls)` + -- * `lvls.len() == ind_num_lvls` + -- * each `lvls[i]` is `Param(i)` + -- * spine args count is `n_params + n_indices` + -- * first `n_params` args are the param BVars (de Bruijn equivalents + -- of Rust's param fvars at line 1986-1994). + -- + -- Failure path: `assert_eq!(0, 1)` panics the Aiur execution per the + -- kernel's accept/reject convention. + fn check_ctor_return_type(ctor_ty: KExpr, + n_params: G, n_indices: G, n_fields: G, + ind_idx: G, ind_num_lvls: G) { + let body = peel_n_foralls(ctor_ty, n_params + n_fields); + let pair = collect_spine_simple(body); + match pair { + (head, args) => + match load(head) { + KExprNode.Const(idx, lvls) => + assert_eq!(idx, ind_idx); + let _ = assert_lvls_are_params(lvls, ind_num_lvls, 0); + let args_len = list_length(args); + assert_eq!(args_len, n_params + n_indices); + let _ = assert_first_args_are_param_bvars(args, n_params, n_fields, 0); + (), + }, + } + } + + -- Peel `n` Foralls off the head, return the body. Panics if fewer + -- Foralls than requested. 
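+  -- Illustrative: peel_n_foralls(`∀(a : A), ∀(b : B a), C a b`, 2) returns
+  -- `C a b`, where `a`/`b` survive as the now-free BVar(1)/BVar(0) that
+  -- check_ctor_return_type's arg assertions then match positionally.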
+ fn peel_n_foralls(e: KExpr, n: G) -> KExpr { + match n { + 0 => e, + _ => + match load(e) { + KExprNode.Forall(_, body) => peel_n_foralls(body, n - 1), + }, + } + } + + -- Walk a left-associative App chain, return (head, args-in-application-order). + -- Inlined to avoid Whnf import here (module ordering — Inductive precedes Whnf + -- in the dependency-first build order); identical to whnf.lean's collect_spine. + fn collect_spine_simple_go(e: KExpr, acc: List‹KExpr›) -> (KExpr, List‹KExpr›) { + match load(e) { + KExprNode.App(f, a) => + collect_spine_simple_go(f, store(ListNode.Cons(a, acc))), + _ => (e, acc), + } + } + + fn collect_spine_simple(e: KExpr) -> (KExpr, List‹KExpr›) { + collect_spine_simple_go(e, store(ListNode.Nil)) + } + + -- Each `lvls[i]` must be `Param(expected_start + i)` for i in 0..count. + fn assert_lvls_are_params(lvls: List‹&KLevel›, count: G, idx: G) { + match count { + 0 => + -- Mirror: src/ix/kernel/inductive.rs:2018: us.len() == ind_lvls. + -- At base case all expected lvls consumed; remaining list must be empty. + assert_eq!(list_length(lvls), 0); + (), + _ => + match load(lvls) { + ListNode.Cons(&l, rest) => + match l { + KLevel.Param(i) => + assert_eq!(i, idx); + let _ = assert_lvls_are_params(rest, count - 1, idx + 1); + (), + }, + }, + } + } + + -- The first `n_params` args of the spine must be exactly + -- `BVar(n_fields + n_params - 1 - i)` for i in 0..n_params, i.e. the + -- de Bruijn references to the param binders peeled off the ctor's + -- type. The remaining args are the indices — those are unrestricted + -- here (per Rust 2046+). + fn assert_first_args_are_param_bvars(args: List‹KExpr›, + n_params: G, n_fields: G, i: G) { + -- `n_params` is the TOTAL param count (constant across recursion); + -- iterate i = 0..n_params. Expected j = (n_fields + n_params) - 1 - i + -- where i is the param index from outermost, so arg[0] points at the + -- outermost binder (highest BVar) and arg[n_params-1] at the innermost. + match n_params - i { + 0 => (), + _ => + match load(args) { + ListNode.Cons(arg, rest) => + match load(arg) { + KExprNode.BVar(j) => + assert_eq!(j, ((n_fields + n_params) - 1) - i); + let _ = assert_first_args_are_param_bvars(rest, n_params, n_fields, i + 1); + (), + }, + }, + } + } + + -- Extract the inductive's result-sort level: peel `n` (params + indices) + -- Foralls; the body must be `Srt(level)`. Returns the level value. + -- Mirror: src/ix/kernel/inductive.rs::get_result_sort_level (line 2089+). + fn get_result_sort_level(ind_ty: KExpr, n: G) -> KLevel { + match n { + 0 => + match load(ind_ty) { + KExprNode.Srt(&l) => l, + }, + _ => + match load(ind_ty) { + KExprNode.Forall(_, body) => get_result_sort_level(body, n - 1), + }, + } + } + + -- Mirror: src/ix/kernel/inductive.rs:1917-1965 check_field_universes. + -- Each ctor field's domain universe must be ≤ the inductive's + -- result universe. Skipped for Prop (Sort 0) per Rust line 1924. + -- + -- Walks the ctor type past `n_params` Foralls (param binders), + -- threading the binder types into `types`. Then on each remaining + -- Forall (a field), ensures `dom`'s sort level is ≤ ind_level via + -- `k_ensure_sort` + `level_leq`. + fn check_field_universes(ctor_ty: KExpr, n_params: G, ind_level: KLevel, + types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) { + -- Skip if inductive is Prop. 
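+    -- (Prop is impredicative: a Prop-valued inductive may store fields from
+    -- arbitrary universes, so no per-field level bound applies there.)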
+ match ind_level { + KLevel.Zero => (), + _ => check_field_universes_skip_params(ctor_ty, n_params, ind_level, types, top, addrs), + } + } + + fn check_field_universes_skip_params(ctor_ty: KExpr, n_params: G, ind_level: KLevel, + types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) { + match n_params { + 0 => check_field_universes_inner(ctor_ty, ind_level, types, top, addrs), + _ => + match load(ctor_ty) { + KExprNode.Forall(dom, body) => + let types2 = store(ListNode.Cons(dom, types)); + check_field_universes_skip_params(body, n_params - 1, ind_level, types2, top, addrs), + }, + } + } + + fn check_field_universes_inner(ty: KExpr, ind_level: KLevel, + types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) { + match load(ty) { + KExprNode.Forall(dom, body) => + let dom_level = k_ensure_sort(dom, types, top, addrs); + let ok = level_leq(load(dom_level), ind_level); + assert_eq!(ok, 1); + let types2 = store(ListNode.Cons(dom, types)); + check_field_universes_inner(body, ind_level, types2, top, addrs), + _ => (), + } + } + + -- Mirror: src/ix/kernel/inductive.rs:1702-1830 check_positivity. + -- Strict positivity: each ctor field's domain must not have any inductive + -- of `ind_idx`'s mutual block in a negative position (left of an arrow). + -- + -- For mutual blocks, the initial positivity context is the full set of + -- peer inductive idxs (derived via block_addr). Nested inductives are + -- handled by augment_block_idxs walking ctor bodies recursively. + fn check_positivity(ctor_ty: KExpr, n_params: G, ind_idx: G, + types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) { + let pair = peel_n_foralls_with_types(ctor_ty, n_params, types); + match pair { + (body, types_after) => + let block_idxs = derive_block_member_idxs(ind_idx, top); + check_positivity_fields(body, block_idxs, types_after, top, addrs), + } + } + + fn peel_n_foralls_tolerant(e: KExpr, n: G) -> KExpr { + match n { + 0 => e, + _ => + match load(e) { + KExprNode.Forall(_, body) => peel_n_foralls_tolerant(body, n - 1), + _ => e, + }, + } + } + + -- Like `peel_n_foralls_tolerant` but accumulates each binder's domain into + -- the types context so subsequent WHNF calls have the right local context. + fn peel_n_foralls_with_types(e: KExpr, n: G, types: List‹KExpr›) -> (KExpr, List‹KExpr›) { + match n { + 0 => (e, types), + _ => + match load(e) { + KExprNode.Forall(dom, body) => + let types2 = store(ListNode.Cons(dom, types)); + peel_n_foralls_with_types(body, n - 1, types2), + _ => (e, types), + }, + } + } + + fn check_positivity_fields(ty: KExpr, block_idxs: List‹G›, + types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) { + match load(ty) { + KExprNode.Forall(dom, body) => + let _ = check_positivity_aug(dom, block_idxs, types, top, addrs); + let types2 = store(ListNode.Cons(dom, types)); + check_positivity_fields(body, block_idxs, types2, top, addrs), + _ => (), + } + } + + -- Mirror src/ix/kernel/inductive.rs:1741-1850. WHNF `dom` first so that + -- ctor-field types written via reducible defs (e.g. `constType (n α) (n α)`, + -- `id Sort`) collapse to their underlying inductive head before we + -- classify them as block / nested / non-inductive. 
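+  -- Illustrative: for `inductive Bad | mk : (Bad → Nat) → Bad`, the field
+  -- domain `Bad → Nat` mentions `Bad` and WHNFs to a Forall whose own
+  -- domain mentions `Bad`, so the inner assert below rejects the block.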
+  fn check_positivity_aug(dom: KExpr, block_idxs: List‹G›,
+                          types: List‹KExpr›,
+                          top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) {
+    match expr_mentions_any_idx(dom, block_idxs) {
+      0 => (),
+      _ =>
+        let dom_w = whnf(dom, types, top, addrs);
+        match load(dom_w) {
+          KExprNode.Forall(inner_dom, inner_body) =>
+            assert_eq!(expr_mentions_any_idx(inner_dom, block_idxs), 0);
+            let types2 = store(ListNode.Cons(inner_dom, types));
+            check_positivity_aug(inner_body, block_idxs, types2, top, addrs),
+          _ =>
+            match collect_spine_simple(dom_w) {
+              (head, args) =>
+                match load(head) {
+                  KExprNode.Const(idx, _) =>
+                    match list_contains_g(block_idxs, idx) {
+                      1 => (),
+                      0 =>
+                        -- Nested: idx must be an Inductive in top. Mirror
+                        -- src/ix/kernel/inductive.rs:1781-1784: anything
+                        -- else (Defn/Thm/Axio/etc.) is "not a valid
+                        -- inductive app".
+                        let ci = load(list_lookup(top, idx));
+                        match ci {
+                          KConstantInfo.Induct(_, _, n_params, _, ctor_indices, _, _, _, _, ext_block_addr) =>
+                            let after_params = list_drop(args, n_params);
+                            assert_eq!(list_any_mentions(after_params, block_idxs), 0);
+                            let aug = augment_block_idxs(block_idxs, ext_block_addr, top);
+                            check_ctors_positivity(ctor_indices, args, aug, top, addrs),
+                          _ =>
+                            assert_eq!(0, 1);
+                            (),
+                        },
+                    },
+                  _ =>
+                    assert_eq!(0, 1);
+                    (),
+                },
+            },
+        },
+    }
+  }
+
+  fn list_contains_g(xs: List‹G›, target: G) -> G {
+    match load(xs) {
+      ListNode.Nil => 0,
+      ListNode.Cons(x, rest) =>
+        match x - target {
+          0 => 1,
+          _ => list_contains_g(rest, target),
+        },
+    }
+  }
+
+  -- Mirror: src/ix/kernel/inductive.rs:448-483 fn compute_is_rec.
+  -- Constructively recompute is_rec by scanning each ctor's field domains
+  -- (post n_params peeling) for any reference to a block member's idx,
+  -- via `expr_mentions_any_idx` (returns 1 iff `e` contains any Const(idx)
+  -- with idx in `idxs`). Returns 1 iff at least one field domain mentions
+  -- a block_idx.
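+  -- Illustrative: `Nat` is recursive (`Nat.succ : Nat → Nat` has a field
+  -- domain mentioning `Nat`); `Bool` is not (neither ctor has fields).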
+ fn compute_is_rec(ctors: List‹G›, n_params: G, block_idxs: List‹G›, + top: List‹&KConstantInfo›) -> G { + match load(ctors) { + ListNode.Nil => 0, + ListNode.Cons(ctor_idx, rest) => + let ctor_ci = load(list_lookup(top, ctor_idx)); + match ctor_ci { + KConstantInfo.Ctor(_, ctor_ty, _, _, _, _, _) => + let after_params = peel_n_foralls_tolerant(ctor_ty, n_params); + match scan_fields_for_block_ref(after_params, block_idxs) { + 1 => 1, + 0 => compute_is_rec(rest, n_params, block_idxs, top), + }, + _ => compute_is_rec(rest, n_params, block_idxs, top), + }, + } + } + + fn scan_fields_for_block_ref(ty: KExpr, block_idxs: List‹G›) -> G { + match load(ty) { + KExprNode.Forall(dom, body) => + match expr_mentions_any_idx(dom, block_idxs) { + 1 => 1, + 0 => scan_fields_for_block_ref(body, block_idxs), + }, + _ => 0, + } + } + + fn expr_mentions_any_idx(e: KExpr, idxs: List‹G›) -> G { + match load(e) { + KExprNode.BVar(_) => 0, + KExprNode.Srt(_) => 0, + KExprNode.Const(idx, _) => list_contains_g(idxs, idx), + KExprNode.App(f, a) => + let fm = expr_mentions_any_idx(f, idxs); + match fm { + 1 => 1, + 0 => expr_mentions_any_idx(a, idxs), + }, + KExprNode.Lam(t, b) => + let tm = expr_mentions_any_idx(t, idxs); + match tm { + 1 => 1, + 0 => expr_mentions_any_idx(b, idxs), + }, + KExprNode.Forall(t, b) => + let tm = expr_mentions_any_idx(t, idxs); + match tm { + 1 => 1, + 0 => expr_mentions_any_idx(b, idxs), + }, + KExprNode.Let(t, v, b) => + let tm = expr_mentions_any_idx(t, idxs); + match tm { + 1 => 1, + 0 => + let vm = expr_mentions_any_idx(v, idxs); + match vm { + 1 => 1, + 0 => expr_mentions_any_idx(b, idxs), + }, + }, + KExprNode.Lit(_) => 0, + KExprNode.Proj(_, _, e1) => expr_mentions_any_idx(e1, idxs), + } + } + + fn list_any_mentions(es: List‹KExpr›, idxs: List‹G›) -> G { + match load(es) { + ListNode.Nil => 0, + ListNode.Cons(e, rest) => + let m = expr_mentions_any_idx(e, idxs); + match m { + 1 => 1, + 0 => list_any_mentions(rest, idxs), + }, + } + } + + -- Augment block_idxs with the indices of all Inducts in `top` that share + -- `ext_block_addr` (so the ext Induct's own block becomes part of the + -- positivity context). + fn augment_block_idxs(block_idxs: List‹G›, ext_block_addr: [G; 32], + top: List‹&KConstantInfo›) -> List‹G› { + augment_walk(block_idxs, ext_block_addr, top, 0) + } + + fn augment_walk(block_idxs: List‹G›, ext_block_addr: [G; 32], + consts: List‹&KConstantInfo›, idx: G) -> List‹G› { + match load(consts) { + ListNode.Nil => block_idxs, + ListNode.Cons(&ci, rest) => + match ci { + KConstantInfo.Induct(_, _, _, _, _, _, _, _, _, ba) => + let same = address_eq(ba, ext_block_addr); + let nonzero = 1 - address_eq(ext_block_addr, [0; 32]); + let already = list_contains_g(block_idxs, idx); + let add = same * nonzero * (1 - already); + match add { + 1 => + let new_idxs = store(ListNode.Cons(idx, block_idxs)); + augment_walk(new_idxs, ext_block_addr, rest, idx + 1), + 0 => + augment_walk(block_idxs, ext_block_addr, rest, idx + 1), + }, + _ => + augment_walk(block_idxs, ext_block_addr, rest, idx + 1), + }, + } + } + + -- Walk ext inductive's ctors. For each, apply substituted-positivity check + -- on field types via `check_positivity_aug`. Param substitution is implicit + -- — ext ctors reference params via BVar; we treat ext's params as lifted + -- block_idxs since the nested ext fields can mention any of those. 
+ -- Simplification: walk ctor body fields directly; their refs to ext + -- params correspond positionally to args[0..n_params] which are checked + -- transitively when augmented. Sound for direct nested cases. + fn check_ctors_positivity(ctor_indices: List‹G›, args: List‹KExpr›, + aug: List‹G›, top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) { + match load(ctor_indices) { + ListNode.Nil => (), + ListNode.Cons(ctor_idx, rest) => + let ctor_ci = load(list_lookup(top, ctor_idx)); + match ctor_ci { + KConstantInfo.Ctor(_, ctor_ty, _, _, n_params, _, _) => + let pair = peel_n_foralls_with_types(ctor_ty, n_params, store(ListNode.Nil)); + match pair { + (body, types_after) => + let _ = check_positivity_fields_aug(body, aug, types_after, top, addrs); + check_ctors_positivity(rest, args, aug, top, addrs), + }, + _ => check_ctors_positivity(rest, args, aug, top, addrs), + }, + } + } + + fn check_positivity_fields_aug(ty: KExpr, aug: List‹G›, + types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) { + match load(ty) { + KExprNode.Forall(dom, body) => + let _ = check_positivity_aug(dom, aug, types, top, addrs); + let types2 = store(ListNode.Cons(dom, types)); + check_positivity_fields_aug(body, aug, types2, top, addrs), + _ => (), + } + } + + -- Returns 1 if `e` contains any Const(ind_idx, _), 0 otherwise. + fn expr_mentions_idx(e: KExpr, ind_idx: G) -> G { + match load(e) { + KExprNode.BVar(_) => 0, + KExprNode.Srt(_) => 0, + KExprNode.Const(idx, _) => + match idx - ind_idx { + 0 => 1, + _ => 0, + }, + KExprNode.App(f, a) => + g_or(expr_mentions_idx(f, ind_idx), expr_mentions_idx(a, ind_idx)), + KExprNode.Lam(t, b) => + g_or(expr_mentions_idx(t, ind_idx), expr_mentions_idx(b, ind_idx)), + KExprNode.Forall(t, b) => + g_or(expr_mentions_idx(t, ind_idx), expr_mentions_idx(b, ind_idx)), + KExprNode.Let(t, v, b) => + g_or(expr_mentions_idx(t, ind_idx), + g_or(expr_mentions_idx(v, ind_idx), expr_mentions_idx(b, ind_idx))), + KExprNode.Lit(_) => 0, + KExprNode.Proj(_, _, e1) => expr_mentions_idx(e1, ind_idx), + } + } + + -- ============================================================================ + -- Canonical recursor type generation (solo / mutual / nested-aux) + -- + -- Mirror: src/ix/kernel/inductive.rs::build_motive_type_flat (line 2475+). + -- For non-aux members (is_aux=0): peel own_params with BVar refs to + -- recursor's outer params. For aux members (is_aux=1): substitute first + -- |spec_params| with the concrete spec_params lifted to the current depth. + -- n_rec_params is the block-shared param count; univ_offset is 0 for + -- Prop-targeting inductives, 1 for large eliminators (motive output univ + -- added at position 0). + -- ============================================================================ + + -- Build motive type: + -- forall (i_0 : I_0_ty) ... (i_M : I_M_ty), + -- forall (major : Indc.{occurrence_us} params indices), + -- Sort elim_level + -- + -- Where: + -- * params come from peeling n_params Foralls off ind_ty, substituting + -- each binder with BVar(n_rec_params - 1 - j) (recursor outer-scope + -- param refs); + -- * occurrence_us = [Param(univ_offset), ..., Param(univ_offset + ind_lvls - 1)]; + -- * elim_level is Param(0) for large eliminators, Zero for Prop. + -- Mirror: src/ix/kernel/inductive.rs:2128-2205 is_large_eliminator. + -- Returns 1 if recursor for this Indc can target any universe (i.e. + -- gets a motive output univ param), 0 if must target Prop. + -- + -- Cases (mirrors lean4lean): + -- 1. Result level non-zero → always large. + -- 2. 
Result is Prop AND 0 ctors (Empty/False) → large. + -- 3. Result is Prop AND single ctor AND 0 fields → large. + -- 4. Result is Prop AND single ctor AND all fields are Prop-typed + -- → large (subsingleton). Covers And/Eq/Acc/Iff/etc. + -- (Approximation of Rust's full check: all non-trivial fields + -- must appear in return args; conservative when some field is + -- Type-typed since we then return 0 = not large.) + -- 5. Otherwise (multiple ctors in Prop) → not large. + fn is_large_eliminator(result_level: KLevel, + ctor_indices: List‹G›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> G { + let nz = level_is_not_zero(result_level); + match nz { + 1 => 1, + 0 => + let n_ctors = list_length(ctor_indices); + match n_ctors { + 0 => 1, + 1 => + let ctor_idx = list_lookup(ctor_indices, 0); + let ctor_ci = load(list_lookup(top, ctor_idx)); + match ctor_ci { + KConstantInfo.Ctor(_, ctor_ty, _, _, n_params, n_fields, _) => + match n_fields { + 0 => 1, + _ => + check_large_prop_ctor(ctor_ty, n_params, n_fields, + store(ListNode.Nil), top, addrs), + }, + }, + _ => 0, + }, + } + } + + -- Mirror: src/ix/kernel/inductive.rs:2148-2200 large-elim check on Prop + -- single-ctor inductive. Walk past `n_params` Foralls (skipping params), + -- then walk `n_fields` Foralls collecting de Bruijn indices of data fields + -- (those whose domain has sort != 0). Body after walk is the ctor's return + -- type; check each data field's BVar appears in the return-type's spine + -- args. If all do → large eliminator. + fn check_large_prop_ctor(ty: KExpr, n_params: G, n_fields: G, + types: List‹KExpr›, top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> G { + match n_params { + 0 => + check_large_walk_fields(ty, n_fields, 0, types, top, addrs, + store(ListNode.Nil)), + _ => + match load(ty) { + KExprNode.Forall(dom, body) => + let inner = store(ListNode.Cons(dom, types)); + check_large_prop_ctor(body, n_params - 1, n_fields, inner, top, addrs), + _ => 0, + }, + } + } + + -- Walk `n_fields` Foralls, threading list of data-field BVars (de Bruijn + -- indices in the post-walk ret context). After walk, collect ret spine + -- args and verify every data BVar appears. + fn check_large_walk_fields(ty: KExpr, n_fields: G, field_idx: G, + types: List‹KExpr›, top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›, + data_bvars: List‹G›) -> G { + match n_fields - field_idx { + 0 => + match collect_spine_simple(ty) { + (_, args) => all_bvars_in_args(data_bvars, args), + }, + _ => + match load(ty) { + KExprNode.Forall(dom, body) => + let lvl = k_ensure_sort(dom, types, top, addrs); + let is_data = 1 - level_equal(load(lvl), KLevel.Zero); + let bvar_idx = n_fields - 1 - field_idx; + let new_bvars = match is_data { + 0 => data_bvars, + _ => store(ListNode.Cons(bvar_idx, data_bvars)), + }; + let inner = store(ListNode.Cons(dom, types)); + check_large_walk_fields(body, n_fields, field_idx + 1, inner, top, addrs, + new_bvars), + _ => 0, + }, + } + } + + -- Returns 1 iff every BVar idx in `bvars` appears in `args` (as a syntactic + -- BVar at the ret-binder depth). + fn all_bvars_in_args(bvars: List‹G›, args: List‹KExpr›) -> G { + match load(bvars) { + ListNode.Nil => 1, + ListNode.Cons(b, rest) => + match args_contain_bvar(args, b) { + 0 => 0, + 1 => all_bvars_in_args(rest, args), + }, + } + } + + -- Returns 1 if any element of `args` is syntactically `BVar(target)`. 
+ fn args_contain_bvar(args: List‹KExpr›, target: G) -> G { + match load(args) { + ListNode.Nil => 0, + ListNode.Cons(a, rest) => + match load(a) { + KExprNode.BVar(i) => + match i - target { + 0 => 1, + _ => args_contain_bvar(rest, target), + }, + _ => args_contain_bvar(rest, target), + }, + } + } + + -- Build motive type for a flat block member. + -- is_aux=0 (original): peel n_own_params subst with BVar(n_rec_params-1-j). + -- is_aux=1 (aux for nested ext): peel ext.n_params; substitute first + -- spec_params.len() with spec_params[j] (lifted to current depth=0, + -- which equals identity when spec_params live in recursor-param frame), + -- the rest with BVar(n_rec_params-1-j). + -- Major: aux applies spec_params lifted by n_indices; non-aux applies + -- recursor-param BVars. + fn build_motive_type_flat(ind_idx: G, ind_ty: KExpr, + n_own_params: G, n_indices: G, + occurrence_us: List‹&KLevel›, + elim_level: KLevel, + n_rec_params: G, + is_aux: G, spec_params: List‹KExpr›) -> KExpr { + let ind_ty_inst = expr_inst_levels(ind_ty, occurrence_us); + let after_params = peel_motive_params_subst(ind_ty_inst, n_own_params, n_rec_params, + is_aux, spec_params, 0); + let index_doms = collect_index_doms(after_params, n_indices); + let head = store(KExprNode.Const(ind_idx, occurrence_us)); + let with_args = build_major_args_for_member(head, n_rec_params, n_indices, + is_aux, spec_params); + let major_ty = build_major_indices(with_args, n_indices, 0); + let sort_e = store(KExprNode.Srt(store(elim_level))); + let with_major = store(KExprNode.Forall(major_ty, sort_e)); + wrap_foralls(with_major, index_doms) + } + + -- For aux: apply spec_params (each lifted by depth=n_indices) to head. + -- For non-aux: apply n_rec_params recursor-param BVars to head. + fn build_major_args_for_member(head: KExpr, n_rec_params: G, depth: G, + is_aux: G, spec_params: List‹KExpr›) -> KExpr { + match is_aux { + 0 => build_major_params(head, n_rec_params, depth, 0), + _ => apply_spec_params_lifted(head, spec_params, depth), + } + } + + fn apply_spec_params_lifted(head: KExpr, spec_params: List‹KExpr›, + depth: G) -> KExpr { + match load(spec_params) { + ListNode.Nil => head, + ListNode.Cons(sp, rest) => + let lifted = expr_lift(sp, depth, 0); + apply_spec_params_lifted(store(KExprNode.App(head, lifted)), rest, depth), + } + } + + -- Peel n Foralls; for each binder j substitute per is_aux: + -- non-aux: BVar(n_rec_params - 1 - j). + -- aux: spec_params[j] when j < |spec_params|, else BVar(n_rec_params - 1 - j). + fn peel_motive_params_subst(ty: KExpr, n: G, n_rec_params: G, + is_aux: G, spec_params: List‹KExpr›, j: G) -> KExpr { + match n { + 0 => ty, + _ => + match load(ty) { + KExprNode.Forall(_, body) => + let p = subst_param_for(j, n_rec_params, is_aux, spec_params); + let body_substed = expr_inst1(body, p, 0); + peel_motive_params_subst(body_substed, n - 1, n_rec_params, + is_aux, spec_params, j + 1), + }, + } + } + + fn subst_param_for(j: G, n_rec_params: G, is_aux: G, + spec_params: List‹KExpr›) -> KExpr { + match is_aux { + 0 => store(KExprNode.BVar((n_rec_params - 1) - j)), + _ => + let len = list_length(spec_params); + let lt = u32_less_than(j, len); + match lt { + 1 => list_lookup(spec_params, j), + _ => store(KExprNode.BVar((n_rec_params - 1) - j)), + }, + } + } + + -- Peel ctor's leading own_params. Non-aux: plain peel. Aux: substitute + -- each with spec_params[j] (or BVar fallback if beyond spec). 
+ fn peel_rule_ctor_params(ty: KExpr, n: G, + is_aux: G, spec_params: List‹KExpr›) -> KExpr { + match is_aux { + 0 => peel_n_foralls(ty, n), + _ => peel_ctor_params_subst(ty, n, 0, 1, spec_params, 0), + } + } + + -- Look up Ctor.num_params from top. + fn ctor_num_params_of(ctor_idx: G, top: List‹&KConstantInfo›) -> G { + let ci = load(list_lookup(top, ctor_idx)); + match ci { + KConstantInfo.Ctor(_, _, _, _, num_params, _, _) => num_params, + _ => 0, + } + } + + -- Peel ctor's own_params with depth-aware substitution. For non-aux: + -- BVar(depth-1-j). For aux: spec_params[j] lifted by depth when j < |spec|; + -- BVar(depth-1-j) otherwise. + fn peel_ctor_params_subst(ty: KExpr, n: G, depth: G, + is_aux: G, spec_params: List‹KExpr›, j: G) -> KExpr { + match n { + 0 => ty, + _ => + match load(ty) { + KExprNode.Forall(_, body) => + let p = ctor_subst_param_for(j, depth, is_aux, spec_params); + let body_substed = expr_inst1(body, p, 0); + peel_ctor_params_subst(body_substed, n - 1, depth, + is_aux, spec_params, j + 1), + }, + } + } + + fn ctor_subst_param_for(j: G, depth: G, is_aux: G, + spec_params: List‹KExpr›) -> KExpr { + match is_aux { + 0 => store(KExprNode.BVar((depth - 1) - j)), + _ => + let len = list_length(spec_params); + let lt = u32_less_than(j, len); + match lt { + 1 => + let sp = list_lookup(spec_params, j); + expr_lift(sp, depth, 0), + _ => store(KExprNode.BVar((depth - 1) - j)), + }, + } + } + + fn collect_index_doms(ty: KExpr, n: G) -> List‹KExpr› { + match n { + 0 => store(ListNode.Nil), + _ => + match load(ty) { + KExprNode.Forall(dom, body) => + store(ListNode.Cons(dom, collect_index_doms(body, n - 1))), + }, + } + } + + -- [Param(start), Param(start+1), ..., Param(start+count-1)] as List‹&KLevel›. + fn build_param_lvls_range(start: G, count: G, i: G) -> List‹&KLevel› { + match count - i { + 0 => store(ListNode.Nil), + _ => + store(ListNode.Cons( + store(KLevel.Param(start + i)), + build_param_lvls_range(start, count, i + 1))), + } + } + + -- Apply head to recursor params: `App(... App(head, BVar(n_rec_params-1+depth)), ...)`. + -- Each param j is at BVar(n_rec_params - 1 - j + depth) where depth is + -- the index-binder count above the major position. + fn build_major_params(head: KExpr, n_rec_params: G, depth: G, j: G) -> KExpr { + match n_rec_params - j { + 0 => head, + _ => + let v = store(KExprNode.BVar(((n_rec_params - 1) - j) + depth)); + build_major_params(store(KExprNode.App(head, v)), n_rec_params, depth, j + 1), + } + } + + -- Apply head to indices: `App(... App(head, BVar(n_indices-1)), ...)`. + -- Index i (0-indexed from outermost) is BVar(n_indices - 1 - i) at the + -- major's scope. + fn build_major_indices(head: KExpr, n_indices: G, i: G) -> KExpr { + match n_indices - i { + 0 => head, + _ => + let v = store(KExprNode.BVar((n_indices - 1) - i)); + build_major_indices(store(KExprNode.App(head, v)), n_indices, i + 1), + } + } + + -- Wrap body in foralls outside-in: doms = [d0, d1, ..., dM] → + -- `forall (_ : d0), forall (_ : d1), ..., forall (_ : dM), body`. + fn wrap_foralls(body: KExpr, doms: List‹KExpr›) -> KExpr { + match load(doms) { + ListNode.Nil => body, + ListNode.Cons(dom, rest) => + store(KExprNode.Forall(dom, wrap_foralls(body, rest))), + } + } + + -- ============================================================================ + -- build_minor_at_depth (flat-aware: solo / mutual / nested-aux) + -- + -- Mirror: src/ix/kernel/inductive.rs:2596-2806. + -- Builds the minor binder type for a single ctor, including IHs for + -- recursive fields. 
Forall-wrapped recursive fields (e.g. + -- `(Nat → Foo) → Foo`) are handled via `is_rec_field` peeling leading + -- foralls + `build_ih_doms` wrapping IH body in matching foralls. + -- For aux ctors (is_aux=1): peel ext.n_params with spec_params subst. + -- Motive offset = `motive_base + member_local_idx_of(ind_idx)` for the + -- owning member. + -- ============================================================================ + fn build_minor_at_depth(ind_idx: G, ctor_idx: G, ctor_ty: KExpr, + is_aux: G, spec_params: List‹KExpr›, + occurrence_us: List‹&KLevel›, flat_idxs: List‹G›, + flat_own_params: List‹G›, + n_rec_params: G, n_motives: G, prev_minors: G, + motive_base: G, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> KExpr { + let ctor_ty_inst = expr_inst_levels(ctor_ty, occurrence_us); + let minor_saved = n_rec_params + n_motives + prev_minors; + -- Peel ctor's own_params. For non-aux: n_own_params == n_rec_params, all + -- substitute with BVar(minor_saved-1-j). For aux: n_own_params == ext.n_params; + -- first |spec_params| substitute with spec_params[j] lifted by minor_saved, + -- the rest with BVar(minor_saved-1-j). + let n_own_params = ctor_num_params_of(ctor_idx, top); + let after_params = peel_ctor_params_subst(ctor_ty_inst, n_own_params, + minor_saved, is_aux, spec_params, 0); + let self_mem_pair = find_member_local_idx(flat_idxs, ind_idx, 0); + let self_mem_idx = match self_mem_pair { (_, m) => m, }; + let walk = walk_fields_classify(after_params, flat_idxs, store(ListNode.Nil), + store(ListNode.Nil), store(ListNode.Nil), + store(ListNode.Nil), top, addrs, 0); + match walk { + (field_doms, rec_indices, rec_member_idxs, ret_ty) => + let n_fields = list_length(field_doms); + let n_ihs = list_length(rec_indices); + let n_binders = n_fields + n_ihs; + let depth_now = minor_saved + n_binders; + let ret_pair = collect_spine_simple(ret_ty); + match ret_pair { + (_ret_head, ret_args) => + -- Drop n_own_params from ret to expose indices. + let ret_indices = list_drop(ret_args, n_own_params); + let ret_indices_lifted = list_lift_each(ret_indices, n_ihs, 0); + let motive_var = (depth_now - 1) - (motive_base + self_mem_idx); + let motive_ref = store(KExprNode.BVar(motive_var)); + let with_indices = apply_spine(motive_ref, ret_indices_lifted); + let ctor_head = store(KExprNode.Const(ctor_idx, occurrence_us)); + -- For non-aux: apply n_rec_params recursor-param BVars. + -- For aux: apply spec_params lifted to body scope (depth_now-1+1 = depth_now). + let with_params = build_ctor_app_params(ctor_head, n_own_params, + n_rec_params, depth_now, + is_aux, spec_params); + let ctor_app = build_apply_field_bvars(with_params, n_fields, n_binders, 0); + let conclusion = store(KExprNode.App(with_indices, ctor_app)); + let ih_doms = build_ih_doms(rec_indices, rec_member_idxs, field_doms, + flat_own_params, motive_base, n_fields, + minor_saved, store(ListNode.Nil), top, addrs, 0); + let with_ihs = wrap_foralls(conclusion, ih_doms); + wrap_foralls(with_ihs, field_doms), + }, + } + } + + -- Apply ctor head with its own_params. Non-aux: BVar refs to recursor + -- params (depth_now-1 down to depth_now-n_rec_params). Aux: spec_params + -- lifted to depth_now. 
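+  -- Illustrative (non-aux): with n_rec_params = 2 and depth_now = 7 the
+  -- head becomes `App(App(head, BVar(6)), BVar(5))`, i.e. the recursor's
+  -- two param binders seen from under the motive/minor/field binders.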
+ fn build_ctor_app_params(head: KExpr, n_own_params: G, n_rec_params: G, + depth_now: G, + is_aux: G, spec_params: List‹KExpr›) -> KExpr { + match is_aux { + 0 => build_apply_bvars_decreasing(head, n_rec_params, depth_now - 1, 0), + _ => apply_spec_params_lifted(head, spec_params, depth_now), + } + } + + -- Peel n Foralls; substitute each binder with `BVar(depth - 1 - j)`. + fn peel_n_subst_at_depth(ty: KExpr, n: G, depth: G, j: G) -> KExpr { + match n { + 0 => ty, + _ => + match load(ty) { + KExprNode.Forall(_, body) => + let p = store(KExprNode.BVar((depth - 1) - j)); + let body_substed = expr_inst1(body, p, 0); + peel_n_subst_at_depth(body_substed, n - 1, depth, j + 1), + }, + } + } + + -- Walk Foralls of `ty` collecting (field_doms, rec_field_indices, ret_ty). + -- A field is recursive (direct case) when its spine head is Const(ind_idx). + -- Builds accumulators with O(1) cons (prepend) and reverses once at end — + -- O(F) total vs O(F²) with snoc. + fn walk_fields_classify(ty: KExpr, block_member_idxs: List‹G›, + doms_acc: List‹KExpr›, rec_acc: List‹G›, + rec_mem_acc: List‹G›, + types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›, + fidx: G) -> (List‹KExpr›, List‹G›, List‹G›, KExpr) { + match load(ty) { + KExprNode.Forall(dom, body) => + let r = is_rec_field(dom, block_member_idxs, types, top, addrs); + let new_doms = store(ListNode.Cons(dom, doms_acc)); + let types2 = store(ListNode.Cons(dom, types)); + match r { + (1, mem_idx) => + let new_rec = store(ListNode.Cons(fidx, rec_acc)); + let new_mem = store(ListNode.Cons(mem_idx, rec_mem_acc)); + walk_fields_classify(body, block_member_idxs, new_doms, new_rec, new_mem, + types2, top, addrs, fidx + 1), + _ => + walk_fields_classify(body, block_member_idxs, new_doms, rec_acc, rec_mem_acc, + types2, top, addrs, fidx + 1), + }, + _ => (list_reverse(doms_acc), list_reverse(rec_acc), list_reverse(rec_mem_acc), ty), + } + } + + -- Derive the list of block-member indices for a recursor's parent ind. + -- Solo (ind's block_addr = [0;32]) → `[ind_idx]`. Otherwise walks `top` + -- collecting all Inducts sharing the block_addr in positional order. + fn derive_block_member_idxs(ind_idx: G, top: List‹&KConstantInfo›) -> List‹G› { + let ci = load(list_lookup(top, ind_idx)); + match ci { + KConstantInfo.Induct(_, _, _, _, _, _, _, _, _, block_addr) => + match address_eq(block_addr, [0; 32]) { + 1 => store(ListNode.Cons(ind_idx, store(ListNode.Nil))), + 0 => collect_block_members(block_addr, top, 0), + }, + _ => store(ListNode.Cons(ind_idx, store(ListNode.Nil))), + } + } + + fn collect_block_members(block_addr: [G; 32], + consts: List‹&KConstantInfo›, idx: G) -> List‹G› { + match load(consts) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(&ci, rest) => + match ci { + KConstantInfo.Induct(_, _, _, _, _, _, _, _, _, ba) => + match address_eq(ba, block_addr) { + 1 => store(ListNode.Cons(idx, collect_block_members(block_addr, rest, idx + 1))), + 0 => collect_block_members(block_addr, rest, idx + 1), + }, + _ => collect_block_members(block_addr, rest, idx + 1), + }, + } + } + + -- Returns 1 iff idx ≠ ind_idx but Inducts at idx and ind_idx share the + -- same non-[0;32] block_addr. Used to classify peer/aux refs in block + -- as recursive for IH building. 
+ fn is_in_same_block(idx: G, ind_idx: G, top: List‹&KConstantInfo›) -> G { + let i_ci = load(list_lookup(top, ind_idx)); + match i_ci { + KConstantInfo.Induct(_, _, _, _, _, _, _, _, _, ind_ba) => + match address_eq(ind_ba, [0; 32]) { + 1 => 0, + 0 => + let other_ci = load(list_lookup(top, idx)); + match other_ci { + KConstantInfo.Induct(_, _, _, _, _, _, _, _, _, other_ba) => + address_eq(other_ba, ind_ba), + _ => 0, + }, + }, + _ => 0, + } + } + + -- Mirror: src/ix/kernel/inductive.rs:2968-3019 fn is_rec_field. + -- Returns (is_rec, member_local_idx) where member_local_idx is the + -- position of the head idx within block_member_idxs (0 for direct). + -- Returns (0, 0) if not recursive. WHNFs the per-field body so that + -- ctor field types written via reducible defs (e.g. `constType (n α) + -- (n α)`) collapse to expose the underlying inductive head, mirroring + -- the Rust kernel's whnf inside `is_rec_field`. + fn is_rec_field(dom: KExpr, block_member_idxs: List‹G›, + types: List‹KExpr›, top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, G) { + match peel_leading_foralls(dom) { + (doms, body) => + let inner_types = list_concat(list_reverse(doms), types); + let body_w = whnf(body, inner_types, top, addrs); + match collect_spine_simple(body_w) { + (head, _) => + match load(head) { + KExprNode.Const(idx, _) => find_member_local_idx(block_member_idxs, idx, 0), + _ => (0, 0), + }, + }, + } + } + + -- Find target's index within block_member_idxs. (1, idx) if found else (0, 0). + fn find_member_local_idx(idxs: List‹G›, target: G, i: G) -> (G, G) { + match load(idxs) { + ListNode.Nil => (0, 0), + ListNode.Cons(x, rest) => + match x - target { + 0 => (1, i), + _ => find_member_local_idx(rest, target, i + 1), + }, + } + } + + fn peel_leading_foralls(ty: KExpr) -> (List‹KExpr›, KExpr) { + let pair = peel_leading_foralls_acc(ty, store(ListNode.Nil)); + match pair { + (rev_acc, body) => (list_reverse(rev_acc), body), + } + } + + -- Builds doms in reverse via O(1) cons; caller reverses once. + fn peel_leading_foralls_acc(ty: KExpr, acc: List‹KExpr›) -> (List‹KExpr›, KExpr) { + match load(ty) { + KExprNode.Forall(dom, body) => + peel_leading_foralls_acc(body, store(ListNode.Cons(dom, acc))), + _ => (acc, ty), + } + } + + -- Apply head to xs from outermost: x_0 = BVar(n_xs - 1), ..., x_{n-1} = BVar(0). + fn build_apply_xs(head: KExpr, n_xs: G, i: G) -> KExpr { + match n_xs - i { + 0 => head, + _ => + let v = store(KExprNode.BVar((n_xs - 1) - i)); + build_apply_xs(store(KExprNode.App(head, v)), n_xs, i + 1), + } + } + + fn list_lift_each(es: List‹KExpr›, shift: G, cutoff: G) -> List‹KExpr› { + match load(es) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(e, rest) => + store(ListNode.Cons(expr_lift(e, shift, cutoff), + list_lift_each(rest, shift, cutoff))), + } + } + + -- Apply head to BVars at descending positions: `App(... App(head, BVar(start)), BVar(start-1)), ...` + -- Used for ctor params (BVar refs to recursor's outer params at depth_now). + fn build_apply_bvars_decreasing(head: KExpr, n: G, start: G, j: G) -> KExpr { + match n - j { + 0 => head, + _ => + let v = store(KExprNode.BVar(start - j)); + build_apply_bvars_decreasing(store(KExprNode.App(head, v)), n, start, j + 1), + } + } + + -- Apply head to ctor fields: `App(... 
App(head, BVar(n_binders-1)), BVar(n_binders-2)), ...` + fn build_apply_field_bvars(head: KExpr, n_fields: G, n_binders: G, i: G) -> KExpr { + match n_fields - i { + 0 => head, + _ => + let v = store(KExprNode.BVar((n_binders - 1) - i)); + build_apply_field_bvars(store(KExprNode.App(head, v)), n_fields, n_binders, i + 1), + } + } + + -- For each (rec_field_idx, k) pair, build IH dom: + -- `motive (lifted_idx_args) field_var` + -- at scope where fields are bound but k IHs already pushed. + -- depth_at_this_ih = minor_saved + n_fields + k + -- motive_var = depth_at_this_ih - 1 - motive_base + -- field_var = depth_at_this_ih - 1 - (minor_saved + field_idx) + -- Lift the field's dom by (n_fields - field_idx + k) so its BVars + -- map into current scope. Then strip first n_rec_params spine args. + -- Collect first n Forall doms; return (doms, remaining_body). + fn collect_n_doms(ty: KExpr, n: G) -> (List‹KExpr›, KExpr) { + let pair = collect_n_doms_acc(ty, n, store(ListNode.Nil)); + match pair { + (rev_acc, body) => (list_reverse(rev_acc), body), + } + } + + fn collect_n_doms_acc(ty: KExpr, n: G, acc: List‹KExpr›) -> (List‹KExpr›, KExpr) { + match n { + 0 => (acc, ty), + _ => + match load(ty) { + KExprNode.Forall(dom, body) => + collect_n_doms_acc(body, n - 1, store(ListNode.Cons(dom, acc))), + }, + } + } + + -- Apply head to indices in conclusion scope. + -- Index i (0-indexed from outer) at BVar(n_indices - i). + fn apply_indices_in_conclusion(head: KExpr, n_indices: G, i: G) -> KExpr { + match n_indices - i { + 0 => head, + _ => + let v = store(KExprNode.BVar(n_indices - i)); + apply_indices_in_conclusion(store(KExprNode.App(head, v)), n_indices, i + 1), + } + } + + -- Build per-ctor minor type list. is_aux + spec_params + occ_us control + -- how the ctor's leading own_params are substituted and what occurrence_us + -- to use for the ctor head; flat_idxs is used for rec field detection. + fn build_minor_doms(ctor_indices: List‹G›, ind_idx: G, + is_aux: G, spec_params: List‹KExpr›, + occurrence_us: List‹&KLevel›, flat_idxs: List‹G›, + flat_own_params: List‹G›, + n_rec_params: G, n_motives: G, + motive_base: G, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›, + prev_minors: G) -> List‹KExpr› { + match load(ctor_indices) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(ctor_idx, rest) => + let ctor_ci = load(list_lookup(top, ctor_idx)); + match ctor_ci { + KConstantInfo.Ctor(_, ctor_ty, _, _, _, _, _) => + let minor = build_minor_at_depth(ind_idx, ctor_idx, ctor_ty, + is_aux, spec_params, occurrence_us, + flat_idxs, flat_own_params, + n_rec_params, n_motives, prev_minors, + motive_base, top, addrs); + let rest_minors = build_minor_doms(rest, ind_idx, is_aux, spec_params, + occurrence_us, flat_idxs, + flat_own_params, + n_rec_params, n_motives, + motive_base, + top, addrs, prev_minors + 1); + store(ListNode.Cons(minor, rest_minors)), + }, + } + } + + -- Build motive types for every flat block member. Each member's motive_ty + -- references shared params (BVar 0..n_rec_params-1) and its own n_indices, + -- with elim_level + univ_offset shared. Motive j (j>0) lifted by j to + -- account for the j prior motives bound between params and motive j + -- (mirror src/ix/kernel/inductive.rs:3074-3082). 
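+  -- Illustrative (solo Nat, large elim, n_indices = 0): the single motive
+  -- type is `∀(major : Nat), Sort (Param 0)`, i.e. `Nat → Sort u`.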
+ fn build_all_motives(flat: List‹(G, G, List‹KExpr›, List‹&KLevel›)›, + n_params: G, + ind_lvls: G, elim_level: KLevel, univ_offset: G, + n_rec_params: G, + top: List‹&KConstantInfo›) -> List‹KExpr› { + build_all_motives_walk(flat, n_params, ind_lvls, elim_level, + univ_offset, n_rec_params, top, 0) + } + + fn build_all_motives_walk(flat: List‹(G, G, List‹KExpr›, List‹&KLevel›)›, + n_params: G, + ind_lvls: G, elim_level: KLevel, univ_offset: G, + n_rec_params: G, + top: List‹&KConstantInfo›, j: G) -> List‹KExpr› { + match load(flat) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(m, rest) => + match m { + (member_idx, is_aux, spec_params, occ_us) => + let ci = load(list_lookup(top, member_idx)); + match ci { + KConstantInfo.Induct(_, m_ind_ty, m_own_params, m_n_indices, + _, _, _, _, _, _) => + let mt = build_motive_type_flat(member_idx, m_ind_ty, m_own_params, + m_n_indices, occ_us, elim_level, + n_rec_params, + is_aux, spec_params); + let mt_lifted = expr_lift(mt, j, 0); + store(ListNode.Cons(mt_lifted, + build_all_motives_walk(rest, n_params, ind_lvls, elim_level, + univ_offset, n_rec_params, top, j + 1))), + _ => + build_all_motives_walk(rest, n_params, ind_lvls, elim_level, + univ_offset, n_rec_params, top, j), + }, + }, + } + } + + -- Aggregate minor types across all flat block members' ctors. prev_minors + -- is the count of minors already added from previous members; threaded + -- through so each minor's depth math is correct. flat carries (ind_idx, + -- is_aux, spec_params) so aux ctors can substitute spec_params during + -- their own-param peel. + fn build_all_minors(flat: List‹(G, G, List‹KExpr›, List‹&KLevel›)›, + flat_idxs: List‹G›, flat_own_params: List‹G›, + n_rec_params: G, n_motives: G, + ind_lvls: G, univ_offset: G, motive_base: G, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›, + prev_minors: G) -> List‹KExpr› { + match load(flat) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(m, rest) => + match m { + (member_idx, is_aux, spec_params, occ_us) => + let ci = load(list_lookup(top, member_idx)); + match ci { + KConstantInfo.Induct(_, _, _, _, m_ctor_indices, _, _, _, _, _) => + let m_minors = build_minor_doms(m_ctor_indices, member_idx, + is_aux, spec_params, occ_us, flat_idxs, + flat_own_params, + n_rec_params, n_motives, + motive_base, top, addrs, prev_minors); + let added = list_length(m_minors); + let rest_minors = build_all_minors(rest, flat_idxs, flat_own_params, + n_rec_params, + n_motives, ind_lvls, univ_offset, + motive_base, top, addrs, + prev_minors + added); + list_concat(m_minors, rest_minors), + _ => + build_all_minors(rest, flat_idxs, flat_own_params, + n_rec_params, n_motives, ind_lvls, + univ_offset, motive_base, top, addrs, prev_minors), + }, + }, + } + } + + -- ============================================================================ + -- build_rec_type (flat-aware: solo / mutual / nested-aux) + -- + -- Mirror: src/ix/kernel/inductive.rs:3027+ build_rec_type. + -- Assembles full recursor type: + -- + -- forall (params...), + -- forall (motive_0 : motive_ty_0) ... forall (motive_{N-1} : motive_ty_{N-1}), + -- forall (minor_0) ... forall (minor_{M-1}), + -- forall (indices...), + -- forall (major : Indc params indices), + -- motive_self indices major + -- + -- N motives = |flat| (one per original + nested-aux block member). + -- M minors = sum of |ctors| across all flat members. + -- Computes elim_level / univ_offset internally via is_large_eliminator. 
+ -- `primary_ind_idx` is the canonical block's source for `flat`; `ind_idx` + -- is self (= primary for solo/mutual; aux ext for nested aux recursors). + -- ============================================================================ + fn build_rec_type(ind_idx: G, ind_ty: KExpr, ctor_indices: List‹G›, + n_params: G, n_indices: G, ind_lvls: G, + self_own_params: G, + primary_ind_idx: G, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> KExpr { + let result_level = get_result_sort_level(ind_ty, self_own_params + n_indices); + let is_large = is_large_eliminator(result_level, ctor_indices, top, addrs); + let elim_level = match is_large { + 1 => KLevel.Param(0), + 0 => KLevel.Zero, + }; + let univ_offset = is_large; + let block_member_idxs = derive_block_member_idxs(primary_ind_idx, top); + let flat = build_flat_block(block_member_idxs, univ_offset, top); + let flat_idxs = flat_ind_idxs(flat); + let n_motives = list_length(flat); + let n_rec_params = n_params; + let motive_base = n_rec_params; + + -- Use self's concrete occurrence_us for ind_ty inst (so nested aux univ + -- args match what Lean stored). + let self_mem_pair0 = find_member_local_idx(flat_idxs, ind_idx, 0); + let self_mem_idx0 = match self_mem_pair0 { (_, m) => m, }; + let self_member0 = flat_member_at(flat, self_mem_idx0); + let self_occ_us0 = match self_member0 { (_, _, _, ou) => ou, }; + let ind_ty_inst = expr_inst_levels(ind_ty, self_occ_us0); + + let params_walk = collect_n_doms(ind_ty_inst, n_params); + match params_walk { + (param_doms, after_params) => + let flat_own_params = flat_own_params_of(flat, top); + let motive_doms = build_all_motives(flat, n_params, + ind_lvls, elim_level, univ_offset, + n_rec_params, top); + + let minor_doms = build_all_minors(flat, flat_idxs, flat_own_params, + n_rec_params, n_motives, + ind_lvls, univ_offset, motive_base, top, addrs, 0); + let n_minors = list_length(minor_doms); + + let indices_walk = collect_n_doms(after_params, n_indices); + match indices_walk { + (index_doms_raw, _ret_sort) => + let index_doms = list_lift_indices(index_doms_raw, n_motives + n_minors, 0); + let self_mem_pair = find_member_local_idx(flat_idxs, ind_idx, 0); + let self_mem_idx = match self_mem_pair { (_, m) => m, }; + let self_member = flat_member_at(flat, self_mem_idx); + let self_is_aux = match self_member { (_, ia, _, _) => ia, }; + let self_spec_params = match self_member { (_, _, sp, _) => sp, }; + let self_occ_us = match self_member { (_, _, _, ou) => ou, }; + let head = store(KExprNode.Const(ind_idx, self_occ_us)); + let pre_major_depth = n_rec_params + n_motives + n_minors + n_indices; + let with_args = build_major_args_for_self(head, n_rec_params, + pre_major_depth - 1, n_indices, + self_is_aux, self_spec_params); + let major_ty = build_major_indices(with_args, n_indices, 0); + + let depth_after_major = pre_major_depth + 1; + let motive_var = (depth_after_major - 1) - (motive_base + self_mem_idx); + let motive_ref = store(KExprNode.BVar(motive_var)); + let with_indices = apply_indices_in_conclusion(motive_ref, n_indices, 0); + let conclusion = store(KExprNode.App(with_indices, store(KExprNode.BVar(0)))); + + let with_major = store(KExprNode.Forall(major_ty, conclusion)); + let with_idx_foralls = wrap_foralls(with_major, index_doms); + let with_minors = wrap_foralls(with_idx_foralls, minor_doms); + let with_motives = wrap_foralls(with_minors, motive_doms); + wrap_foralls(with_motives, param_doms), + }, + } + } + + -- ============================================================================ + -- 
build_rule_rhs (flat-aware: solo / mutual / nested-aux, direct case) + -- + -- Mirror: src/ix/kernel/inductive.rs:3571+ build_rule_rhs. + -- Builds the RHS of the recursor rule for one ctor: + -- + -- λ (params...) λ (motives...) λ (minors...) λ (fields...), + -- minor_i field_0 ... field_{F-1} IH_0 ... IH_R + -- + -- Where IH_j (direct case for rec field of target member t) = + -- Const(peer_recs[t], lvls) (params) (motives) (minors) (idx_args) field_j + -- + -- For aux ctors (is_aux=1): peel ext.n_params with spec_params subst so + -- field doms become concrete. For non-aux mutual: peer_recs maps each + -- block member to its own recursor. No post-major args. + -- ============================================================================ + fn build_rule_rhs(rec_idx: G, ind_idx: G, ctor_idx: G, ctor_ty: KExpr, + ctor_minor_index: G, + n_params: G, n_motives: G, n_minors: G, + ind_lvls: G, univ_offset: G, + motive_doms: List‹KExpr›, minor_doms: List‹KExpr›, + param_doms: List‹KExpr›, peer_recs: List‹G›, + flat_idxs: List‹G›, flat_own_params: List‹G›, + is_aux: G, spec_params: List‹KExpr›, + occurrence_us: List‹&KLevel›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> KExpr { + let rec_lvls_list = build_rec_lvls(ind_lvls + univ_offset, 0); + let ctor_ty_inst = expr_inst_levels(ctor_ty, occurrence_us); + -- Peel n_params Foralls without substitution. Field doms collected + -- by walk live in their natural ctor-body scope: at walk position i, + -- peer fields 0..i-1 occupy BVar(0..i-1) (stack: latest peer at + -- BVar(0)) and params occupy BVar(i..i+n_params-1) (auto-shifted by + -- de Bruijn semantics as walk descends Foralls). Lifts to Lam-type + -- and body scopes happen at use sites. + -- Peel ctor's own_params. Non-aux: leave as natural BVars (caller lifts + -- via list_lift_indices). Aux: substitute with spec_params so field doms + -- become concrete; at this point, depth=0, fields will be lifted later. + let n_own_params = ctor_num_params_of(ctor_idx, top); + let after_params = peel_rule_ctor_params(ctor_ty_inst, n_own_params, + is_aux, spec_params); + let walk = walk_fields_classify(after_params, flat_idxs, store(ListNode.Nil), + store(ListNode.Nil), store(ListNode.Nil), + store(ListNode.Nil), top, addrs, 0); + match walk { + (field_doms, rec_indices, rec_member_idxs, _ret_ty) => + let n_fields = list_length(field_doms); + let n_ihs = list_length(rec_indices); + -- Body scope: all Lams pushed. depth = n_params + n_motives + n_minors + n_fields. + let body_depth = n_params + n_motives + n_minors + n_fields; + -- minor_i at BVar(body_depth - 1 - (n_params + n_motives + ctor_minor_index)) + let minor_var = (body_depth - 1) - (n_params + n_motives + ctor_minor_index); + let minor_ref = store(KExprNode.BVar(minor_var)); + -- Apply ctor fields: field j at BVar(n_fields - 1 - j) + let with_fields = build_apply_field_bvars(minor_ref, n_fields, n_fields, 0); + -- Apply IHs: for each rec field j, build IH using peer_recs[mem_idx] + let body = apply_ihs(with_fields, rec_indices, rec_member_idxs, field_doms, + peer_recs, flat_own_params, + n_params, n_motives, n_minors, n_fields, + rec_lvls_list, store(ListNode.Nil), top, addrs, 0); + -- Lift each field_dom (in walk-pos i scope) into its Lam-type + -- scope: peer refs (BVar < walk_pos i) stay; param refs + -- (BVar >= i) lift by n_motives + n_minors (= the additional + -- binders between params and field-Lams in rule-rhs). 
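+      -- Worked example (editor's illustration): with n_motives = 1 and
+      -- n_minors = 2, field 1's dom may refer to field 0 as BVar(0) (a
+      -- peer below the cutoff, kept as-is) and to a ctor param as
+      -- BVar(1) (at or above the cutoff, lifted by 1 + 2 = 3 to skip the
+      -- motive and minor binders now bound in between).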
+      let field_doms_for_lams = list_lift_indices(field_doms, n_motives + n_minors, 0);
+      let with_field_lams = wrap_lams(body, field_doms_for_lams);
+      -- Wrap with minor Lams (innermost first, but our minor_doms is outermost first)
+      let with_minor_lams = wrap_lams(with_field_lams, minor_doms);
+      -- Wrap with N motive Lams (one per block member; multi-member mutual)
+      let with_motive_lams = wrap_lams(with_minor_lams, motive_doms);
+      -- Wrap with param Lams
+      wrap_lams(with_motive_lams, param_doms),
+    }
+  }
+
+  -- Recursor's univ params: [Param(0), ..., Param(total_lvls-1)].
+  fn build_rec_lvls(total: G, i: G) -> List‹&KLevel› {
+    match total - i {
+      0 => store(ListNode.Nil),
+      _ =>
+        store(ListNode.Cons(
+          store(KLevel.Param(i)),
+          build_rec_lvls(total, i + 1))),
+    }
+  }
+
+  fn wrap_lams(body: KExpr, doms: List‹KExpr›) -> KExpr {
+    match load(doms) {
+      ListNode.Nil => body,
+      ListNode.Cons(dom, rest) =>
+        store(KExprNode.Lam(dom, wrap_lams(body, rest))),
+    }
+  }
+
+  -- For each rec field j, append IH_j applied to `head`.
+  -- IH_j = `Const(target_rec, rec_lvls) (params...) (motives...) (minors...) (idx_args) field_j`.
+  -- target_rec = peer_recs[mem_idx] — the recursor for the field's own type.
+  -- Mirror: src/ix/kernel/inductive.rs:3838-3956 fn build_rule_ih: WHNF the
+  -- field's lifted dom and the inner body so the head/args reflect the
+  -- true inductive occurrence (after reducing wrappers like
+  -- `constType (n α) (n α)` → `n α`).
+  fn apply_ihs(head: KExpr, rec_indices: List‹G›, rec_member_idxs: List‹G›,
+               field_doms: List‹KExpr›,
+               peer_recs: List‹G›, flat_own_params: List‹G›,
+               n_params: G, n_motives: G, n_minors: G, n_fields: G,
+               rec_lvls_list: List‹&KLevel›, types: List‹KExpr›,
+               top: List‹&KConstantInfo›, addrs: List‹[G; 32]›, k: G) -> KExpr {
+    match load(rec_indices) {
+      ListNode.Nil => head,
+      ListNode.Cons(field_idx, rest) =>
+        let mem_idx = list_lookup(rec_member_idxs, k);
+        let target_rec = list_lookup(peer_recs, mem_idx);
+        let target_n_params = list_lookup(flat_own_params, mem_idx);
+        let body_depth = n_params + n_motives + n_minors + n_fields;
+        let dom = list_lookup(field_doms, field_idx);
+        let dom_s1 = expr_lift(dom, n_fields - field_idx, 0);
+        let dom_lifted = expr_lift(dom_s1, n_motives + n_minors, n_fields);
+        let dom_w = whnf(dom_lifted, types, top, addrs);
+        match peel_leading_foralls(dom_w) {
+          (forall_doms, inner_body_raw) =>
+            let inner_types = list_concat(list_reverse(forall_doms), types);
+            let inner_body = whnf(inner_body_raw, inner_types, top, addrs);
+            let n_xs = list_length(forall_doms);
+            let inner_depth = body_depth + n_xs;
+            let rec_const = store(KExprNode.Const(target_rec, rec_lvls_list));
+            let with_params = build_apply_bvars_decreasing(rec_const, n_params, inner_depth - 1, 0);
+            let with_motives = build_apply_motives(with_params, n_motives,
+                                                   ((inner_depth - 1) - n_params), 0);
+            let with_minors = build_apply_minors(with_motives, n_minors,
+                                                 (((inner_depth - 1) - n_params) - n_motives), 0);
+            match collect_spine_simple(inner_body) {
+              (_dh, dargs) =>
+                let idx_args = list_drop(dargs, target_n_params);
+                let with_idx = apply_spine(with_minors, idx_args);
+                let field_base = ((n_fields - 1) - field_idx) + n_xs;
+                let field_ref = store(KExprNode.BVar(field_base));
+                let field_app = build_apply_xs(field_ref, n_xs, 0);
+                let ih_inner = store(KExprNode.App(with_idx, field_app));
+                let ih = wrap_lams(ih_inner, forall_doms);
+                let new_head = store(KExprNode.App(head, ih));
+                apply_ihs(new_head, rest, rec_member_idxs, field_doms, peer_recs,
+                          flat_own_params, n_params, n_motives, n_minors, n_fields,
+                          rec_lvls_list, types, top, addrs, k + 1),
+            },
+        },
+    }
+  }
+
+  -- Apply n motives to `head`, each as BVar(start - i).
+  fn build_apply_motives(head: KExpr, n_motives: G, start: G, i: G) -> KExpr {
+    match n_motives - i {
+      0 => head,
+      _ =>
+        let v = store(KExprNode.BVar(start - i));
+        build_apply_motives(store(KExprNode.App(head, v)), n_motives, start, i + 1),
+    }
+  }
+
+  fn build_apply_minors(head: KExpr, n_minors: G, start: G, i: G) -> KExpr {
+    match n_minors - i {
+      0 => head,
+      _ =>
+        let v = store(KExprNode.BVar(start - i));
+        build_apply_minors(store(KExprNode.App(head, v)), n_minors, start, i + 1),
+    }
+  }
+
+  -- Get the parent Indc's positional idx for a Recr. (Rule generation
+  -- itself lives in populate_rules below, which builds all KRecRules for
+  -- an Indc's ctors via build_rule_rhs.)
+  -- For solo recursors, all rules dispatch on ctors of the same Indc.
+  -- Handles recursors with rules and recursors with NO rules (inductives
+  -- with 0 ctors, e.g. False.rec / empty propositions) uniformly: parse
+  -- the recursor's type to extract the major's head Const(ind_idx).
+  -- Mirrors `get_major_inductive_id` in Rust.
+  fn rec_to_ind_idx_with_ty(rules: List‹KRecRule›, ty: KExpr,
+                            n_params: G, n_motives: G, n_minors: G,
+                            n_indices: G, top: List‹&KConstantInfo›) -> G {
+    -- Derive ind_idx from the recursor's typ ONLY (walk past
+    -- params+motives+minors+indices to reach `major`'s type, take its
+    -- head Const). The rule-path was unreliable: rule.ctor_idx points
+    -- to a Ctor whose own induct_idx was assigned at convert time using
+    -- `find_matching_block_addr` heuristic — when multiple in-scope
+    -- inductives share ctor count, that heuristic picks the wrong one.
+    -- Mirror: src/ix/kernel/inductive.rs::rec_to_ind_idx (typ-only path).
+    let skip = n_params + n_motives + n_minors + n_indices;
+    let after_skip = peel_n_foralls(ty, skip);
+    match load(after_skip) {
+      KExprNode.Forall(major_ty, _) =>
+        match collect_spine_simple(major_ty) {
+          (head, _) =>
+            match load(head) {
+              KExprNode.Const(idx, _) => idx,
+            },
+        },
+    }
+  }
+
+  -- Find the Rec idx in `top` whose major inductive is `target_ind_idx`.
+  -- If `rec_block` is non-zero, restrict search to recursors with matching
+  -- block_addr (so aux Recs in the same block resolve before sibling-namespace
+  -- Recs of the same external Indc). Linear scan; returns 0 (never a valid
+  -- Rec) if not found.
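+  -- Example (editor's illustration): for a Tree block whose aux
+  -- recursor Tree.rec_1 has the external List Indc as its major, the
+  -- first pass (require_block = 1) finds Tree.rec_1 inside `rec_block`;
+  -- only when no in-block Rec matches does the unrestricted second pass
+  -- run, which could otherwise return a standalone List.rec elsewhere
+  -- in `top`.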
+  fn find_rec_for_ind(target_ind_idx: G, rec_block: [G; 32],
+                      top: List‹&KConstantInfo›) -> G {
+    let in_block = find_rec_for_ind_walk(target_ind_idx, rec_block, 1, top, top, 0);
+    match in_block {
+      0 => find_rec_for_ind_walk(target_ind_idx, rec_block, 0, top, top, 0),
+      _ => in_block,
+    }
+  }
+
+  fn find_rec_for_ind_walk(target_ind_idx: G, rec_block: [G; 32],
+                           require_block: G,
+                           consts: List‹&KConstantInfo›,
+                           top: List‹&KConstantInfo›, idx: G) -> G {
+    match load(consts) {
+      ListNode.Nil => 0,
+      ListNode.Cons(&ci, rest) =>
+        match ci {
+          KConstantInfo.Rec(_, ty, n_p, n_i, n_m, n_min, rules, _, _, ba) =>
+            let rec_ind = rec_to_ind_idx_with_ty(rules, ty, n_p, n_m, n_min, n_i, top);
+            let ind_match = eq_zero(rec_ind - target_ind_idx);
+            let block_match = match require_block {
+              0 => 1,
+              _ => address_eq(ba, rec_block),
+            };
+            match ind_match * block_match {
+              1 => idx,
+              0 => find_rec_for_ind_walk(target_ind_idx, rec_block, require_block,
+                                         rest, top, idx + 1),
+            },
+          _ => find_rec_for_ind_walk(target_ind_idx, rec_block, require_block,
+                                     rest, top, idx + 1),
+        },
+    }
+  }
+
+  -- Build peer_recs[i] = rec_idx for flat_idxs[i], scoped to `rec_block`.
+  fn build_peer_recs(flat_idxs: List‹G›, rec_block: [G; 32],
+                     top: List‹&KConstantInfo›) -> List‹G› {
+    match load(flat_idxs) {
+      ListNode.Nil => store(ListNode.Nil),
+      ListNode.Cons(member_idx, rest) =>
+        store(ListNode.Cons(find_rec_for_ind(member_idx, rec_block, top),
+                            build_peer_recs(rest, rec_block, top))),
+    }
+  }
+
+  -- Rule-path variant: derive the parent Indc's idx from the first
+  -- rule's ctor. See the comment inside rec_to_ind_idx_with_ty above
+  -- for why the typ-only path is preferred.
+  fn rec_to_ind_idx(rules: List‹KRecRule›, top: List‹&KConstantInfo›) -> G {
+    match load(rules) {
+      ListNode.Cons(rule, _) =>
+        match rule {
+          KRecRule.Mk(ctor_idx, _, _) =>
+            let ctor_ci = load(list_lookup(top, ctor_idx));
+            match ctor_ci {
+              KConstantInfo.Ctor(_, _, induct_idx, _, _, _, _) => induct_idx,
+            },
+        },
+    }
+  }
+
+  fn list_lift_indices(doms: List‹KExpr›, lift: G, i: G) -> List‹KExpr› {
+    match load(doms) {
+      ListNode.Nil => store(ListNode.Nil),
+      ListNode.Cons(d, rest) =>
+        let lifted = expr_lift(d, lift, i);
+        store(ListNode.Cons(lifted, list_lift_indices(rest, lift, i + 1))),
+    }
+  }
+
+  -- Pairwise compare stored vs canonical KRecRule lists via k_is_def_eq.
+  fn compare_rules(stored: List‹KRecRule›, canonical: List‹KRecRule›,
+                   top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) {
+    match load(stored) {
+      ListNode.Nil =>
+        match load(canonical) {
+          ListNode.Nil => (),
+        },
+      ListNode.Cons(s, rs) =>
+        match load(canonical) {
+          ListNode.Cons(c, rc) =>
+            match s {
+              KRecRule.Mk(s_ctor, s_nf, s_rhs) =>
+                match c {
+                  KRecRule.Mk(c_ctor, c_nf, c_rhs) =>
+                    assert_eq!(s_ctor, c_ctor);
+                    assert_eq!(s_nf, c_nf);
+                    let eq = k_is_def_eq(s_rhs, c_rhs, store(ListNode.Nil), top, addrs);
+                    assert_eq!(eq, 1);
+                    compare_rules(rs, rc, top, addrs),
+                },
+            },
+        },
+    }
+  }
+
+  -- ============================================================================
+  -- check_recursor_member (flat-aware: solo / mutual / nested-aux).
+  --
+  -- Mirror: src/ix/kernel/inductive.rs::check_recursor_member.
+  -- For a stored Recr, regenerate canonical type + rules from the Indc
+  -- and compare via k_is_def_eq. Resolves the canonical block via
+  -- `resolve_primary_ind_for_rec` so aux recursors (e.g. Tree.rec_1) build
+  -- their canonical form against the original block (Tree's), not the
+  -- aux's external Indc block (List's).
+  -- ============================================================================
+  -- Mirror: src/ix/kernel/inductive.rs check_ctor_against_inductive_member.
+ -- Validates that the Ctor's (induct_idx, cidx) cross-references the + -- parent Indc's ctor_indices: `ctor_indices[cidx] == this_ctor_idx`. + fn check_ctor_against_inductive_member(ctor_idx: G, ci_ctor: KConstantInfo, + top: List‹&KConstantInfo›) { + match ci_ctor { + KConstantInfo.Ctor(_, _, induct_idx, cidx, _, _, _) => + let ind_ci = load(list_lookup(top, induct_idx)); + match ind_ci { + KConstantInfo.Induct(_, _, _, _, ctor_indices, _, _, _, _, _) => + let expected = list_lookup(ctor_indices, cidx); + assert_eq!(expected, ctor_idx); + (), + }, + } + } + + -- Mirror: src/ix/kernel/inductive.rs:4451-4489 compute_k_target. + -- K-target valid iff: solo block, result level == 0 (Prop), single ctor + -- with zero non-param fields. Returns 1 if K-target, else 0. + fn compute_k_target(ind_idx: G, top: List‹&KConstantInfo›) -> G { + let ind_ci = load(list_lookup(top, ind_idx)); + match ind_ci { + KConstantInfo.Induct(_, ind_ty, n_params, n_indices, ctor_indices, + _, _, _, _, _) => + let block_members = derive_block_member_idxs(ind_idx, top); + match list_length(block_members) - 1 { + 0 => + let result_level = get_result_sort_level(ind_ty, n_params + n_indices); + match level_equal(result_level, KLevel.Zero) { + 0 => 0, + 1 => + match list_length(ctor_indices) - 1 { + 0 => + let ctor_idx = list_lookup(ctor_indices, 0); + let ctor_ci = load(list_lookup(top, ctor_idx)); + match ctor_ci { + KConstantInfo.Ctor(_, _, _, _, _, n_fields, _) => eq_zero(n_fields), + _ => 0, + }, + _ => 0, + }, + }, + _ => 0, + }, + _ => 0, + } + } + + fn check_recursor_member(rec_idx: G, ci_rec: KConstantInfo, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) { + match ci_rec { + KConstantInfo.Rec(_, ty, n_p, n_i, n_mot, n_min, rules, k_flag, _, rec_block) => + -- For aux recursors (Tree.rec_1 etc), the "primary" inductive is the + -- one whose block carries the auxes — discoverable in the same rec_block. + -- Use it as the canonical-block source. Self's major may be different + -- (an external aux ind). + let self_major = rec_to_ind_idx_with_ty(rules, ty, n_p, n_mot, n_min, n_i, top); + let ind_idx = resolve_primary_ind_for_rec(self_major, rec_block, top); + let computed_k = compute_k_target(self_major, top); + assert_eq!(k_flag, computed_k); + let primary_ci = load(list_lookup(top, ind_idx)); + let self_ci = load(list_lookup(top, self_major)); + match primary_ci { + KConstantInfo.Induct(ind_lvls, ind_ty, ind_n_params, _, + _, _, _, _, _, _) => + match self_ci { + KConstantInfo.Induct(_, self_ind_ty, self_own_params, self_n_indices, + self_ctor_indices, _, _, _, _, _) => + let canonical_ty = build_rec_type(self_major, self_ind_ty, self_ctor_indices, + ind_n_params, self_n_indices, ind_lvls, + self_own_params, ind_idx, top, addrs); + let ty_eq = k_is_def_eq(ty, canonical_ty, store(ListNode.Nil), top, addrs); + assert_eq!(ty_eq, 1); + -- Re-derive elim_level / univ_offset using self's data. 
+ let result_level = get_result_sort_level(self_ind_ty, self_own_params + self_n_indices); + let univ_offset = is_large_eliminator(result_level, self_ctor_indices, top, addrs); + let elim_level = match univ_offset { + 1 => KLevel.Param(0), + 0 => KLevel.Zero, + }; + let block_member_idxs = derive_block_member_idxs(ind_idx, top); + let flat = build_flat_block(block_member_idxs, univ_offset, top); + let flat_idxs = flat_ind_idxs(flat); + let n_motives = list_length(flat); + let n_rec_params = ind_n_params; + let motive_base = n_rec_params; + let flat_own_params = flat_own_params_of(flat, top); + let motive_doms = build_all_motives(flat, ind_n_params, + ind_lvls, elim_level, univ_offset, + n_rec_params, top); + let minor_doms = build_all_minors(flat, flat_idxs, flat_own_params, + n_rec_params, n_motives, + ind_lvls, univ_offset, motive_base, top, addrs, 0); + let n_minors = list_length(minor_doms); + -- Rules cover SELF's ctors only. ctor_pos_offset = sum of + -- |ctors| for flat members preceding self in flat order. + let ctor_pos_offset = ctors_before_member(flat_idxs, self_major, top, 0); + let occ_us = build_param_lvls_range(univ_offset, ind_lvls, 0); + let ind_ty_inst = expr_inst_levels(ind_ty, occ_us); + let params_walk = collect_n_doms(ind_ty_inst, ind_n_params); + match params_walk { + (param_doms, _) => + let peer_recs = build_peer_recs(flat_idxs, rec_block, top); + -- Look up self's flat member for is_aux/spec_params/occ_us. + let self_mem_pair = find_member_local_idx(flat_idxs, self_major, 0); + let self_mem_idx = match self_mem_pair { (_, m) => m, }; + let self_member = flat_member_at(flat, self_mem_idx); + let self_is_aux = match self_member { (_, ia, _, _) => ia, }; + let self_spec_params = match self_member { (_, _, sp, _) => sp, }; + let self_occ_us = match self_member { (_, _, _, ou) => ou, }; + let canonical_rules = populate_rules(rec_idx, self_major, self_ctor_indices, + ind_n_params, n_motives, n_minors, + ind_lvls, univ_offset, + motive_doms, minor_doms, param_doms, + peer_recs, flat_idxs, flat_own_params, + self_is_aux, self_spec_params, self_occ_us, + top, addrs, ctor_pos_offset); + compare_rules(rules, canonical_rules, top, addrs), + }, + }, + }, + } + } + + -- Build flat block members: [originals from block_member_idxs] ++ + -- [auxes from gather_block_nested]. Each entry is (ind_idx, is_aux, + -- spec_params, occurrence_us). For originals: is_aux=0, spec_params=[], + -- occurrence_us = build_param_lvls_range(univ_offset, lvls, 0). For + -- auxes: is_aux=1, ind_idx=ext_ind_idx, spec_params=detected substitution + -- exprs, occurrence_us = univ args from the actual nested ref. + -- Mirror: src/ix/kernel/inductive.rs:490-601 build_flat_block. 
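+  -- Illustrative sketch (editor's note; Tree/List as in the examples
+  -- above): for a `Tree` whose ctor carries a `List Tree` field,
+  -- flat = [(Tree, 0, [], occ_us), (List, 1, [Tree arg], nested occ_us)]:
+  -- the original first with empty spec_params, then the aux entry
+  -- detected from the nested `List Tree` occurrence.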
+ fn build_flat_block(block_member_idxs: List‹G›, univ_offset: G, + top: List‹&KConstantInfo›) + -> List‹(G, G, List‹KExpr›, List‹&KLevel›)› { + let originals = build_flat_originals(block_member_idxs, univ_offset, top); + let nested = gather_block_nested(block_member_idxs, block_member_idxs, top); + let auxes = build_flat_auxes(nested); + list_concat(originals, auxes) + } + + fn build_flat_originals(block_member_idxs: List‹G›, univ_offset: G, + top: List‹&KConstantInfo›) + -> List‹(G, G, List‹KExpr›, List‹&KLevel›)› { + match load(block_member_idxs) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(idx, rest) => + let ci = load(list_lookup(top, idx)); + match ci { + KConstantInfo.Induct(lvls, _, _, _, _, _, _, _, _, _) => + let occ_us = build_param_lvls_range(univ_offset, lvls, 0); + store(ListNode.Cons((idx, 0, store(ListNode.Nil), occ_us), + build_flat_originals(rest, univ_offset, top))), + _ => + store(ListNode.Cons((idx, 0, store(ListNode.Nil), store(ListNode.Nil)), + build_flat_originals(rest, univ_offset, top))), + }, + } + } + + fn build_flat_auxes(nested: List‹(G, List‹KExpr›, List‹&KLevel›)›) + -> List‹(G, G, List‹KExpr›, List‹&KLevel›)› { + match load(nested) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(p, rest) => + match p { + (ext_idx, sps, occ_us) => + store(ListNode.Cons((ext_idx, 1, sps, occ_us), + build_flat_auxes(rest))), + }, + } + } + + -- Project per-member ind_idx from flat list. + fn flat_ind_idxs(flat: List‹(G, G, List‹KExpr›, List‹&KLevel›)›) -> List‹G› { + match load(flat) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(m, rest) => + match m { + (ind_idx, _, _, _) => + store(ListNode.Cons(ind_idx, flat_ind_idxs(rest))), + }, + } + } + + -- Look up nth flat member. + fn flat_member_at(flat: List‹(G, G, List‹KExpr›, List‹&KLevel›)›, n: G) + -> (G, G, List‹KExpr›, List‹&KLevel›) { + match load(flat) { + ListNode.Nil => (0, 0, store(ListNode.Nil), store(ListNode.Nil)), + ListNode.Cons(m, rest) => + match n { + 0 => m, + _ => flat_member_at(rest, n - 1), + }, + } + } + + -- Build major args. Non-aux: apply n_rec_params recursor-param BVars at + -- decreasing positions starting from `start`. Aux: apply each spec_param + -- lifted by `n_indices` (since major sits below n_indices binders). + fn build_major_args_for_self(head: KExpr, n_rec_params: G, start: G, + n_indices: G, + is_aux: G, spec_params: List‹KExpr›) -> KExpr { + match is_aux { + 0 => build_apply_bvars_decreasing(head, n_rec_params, start, 0), + _ => apply_spec_params_lifted(head, spec_params, n_indices), + } + } + + -- For each flat member, look up its own_params from top. + fn flat_own_params_of(flat: List‹(G, G, List‹KExpr›, List‹&KLevel›)›, + top: List‹&KConstantInfo›) -> List‹G› { + match load(flat) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(m, rest) => + match m { + (ind_idx, _, _, _) => + let ci = load(list_lookup(top, ind_idx)); + match ci { + KConstantInfo.Induct(_, _, np, _, _, _, _, _, _, _) => + store(ListNode.Cons(np, flat_own_params_of(rest, top))), + _ => + store(ListNode.Cons(0, flat_own_params_of(rest, top))), + }, + }, + } + } + + -- Find the primary inductive for the canonical block of a recursor. + -- Walk recs in `rec_block`; pick the first one whose major's `nested > 0` + -- (= the original Indc whose ctors carry nested occurrences). Returns + -- (found, ind_idx); on `found = 0` callers fall back to `self_major`. 
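+  -- Example (editor's illustration): Tree.rec_1's own major is the
+  -- external List Indc (nested = 0), so scanning `rec_block` finds
+  -- Tree.rec, whose major Tree has nested > 0, and Tree is returned as
+  -- the primary; if no member qualifies, the caller keeps `self_major`.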
+ fn resolve_primary_ind_for_rec(self_major: G, rec_block: [G; 32], + top: List‹&KConstantInfo›) -> G { + match address_eq(rec_block, [0; 32]) { + 1 => self_major, + 0 => + let p = scan_primary_in_rec_block(rec_block, top, top, 0); + match p { + (1, idx) => idx, + _ => self_major, + }, + } + } + + fn scan_primary_in_rec_block(rec_block: [G; 32], + consts: List‹&KConstantInfo›, + top: List‹&KConstantInfo›, idx: G) -> (G, G) { + match load(consts) { + ListNode.Nil => (0, 0), + ListNode.Cons(&ci, rest) => + match ci { + KConstantInfo.Rec(_, ty, n_p, n_i, n_m, n_min, rules, _, _, ba) => + match address_eq(ba, rec_block) { + 0 => scan_primary_in_rec_block(rec_block, rest, top, idx + 1), + 1 => + let r_ind = rec_to_ind_idx_with_ty(rules, ty, n_p, n_m, n_min, n_i, top); + let r_ci = load(list_lookup(top, r_ind)); + match r_ci { + KConstantInfo.Induct(_, _, _, _, _, _, _, _, ne, _) => + match ne { + 0 => scan_primary_in_rec_block(rec_block, rest, top, idx + 1), + _ => (1, r_ind), + }, + _ => scan_primary_in_rec_block(rec_block, rest, top, idx + 1), + }, + }, + _ => scan_primary_in_rec_block(rec_block, rest, top, idx + 1), + }, + } + } + + -- Sum of |ctors| over block members positioned before `target_ind_idx`. + fn ctors_before_member(block_member_idxs: List‹G›, target_ind_idx: G, + top: List‹&KConstantInfo›, acc: G) -> G { + match load(block_member_idxs) { + ListNode.Nil => acc, + ListNode.Cons(member_idx, rest) => + match eq_zero(member_idx - target_ind_idx) { + 1 => acc, + 0 => + let ci = load(list_lookup(top, member_idx)); + match ci { + KConstantInfo.Induct(_, _, _, _, m_ctors, _, _, _, _, _) => + ctors_before_member(rest, target_ind_idx, top, + acc + list_length(m_ctors)), + _ => ctors_before_member(rest, target_ind_idx, top, acc), + }, + }, + } + } + + -- Returns 1 iff `ci_idx` is an auxiliary + -- Inductive in its block. Aux iff: in non-solo block AND own nested=0 + -- AND some other member of the block has nested>0 (i.e., the block is + -- a nested-emitting block, not pure mutual). + fn is_aux_inductive(ci_idx: G, top: List‹&KConstantInfo›) -> G { + let ci = load(list_lookup(top, ci_idx)); + match ci { + KConstantInfo.Induct(_, _, _, _, _, _, _, _, this_nested, this_block_addr) => + match address_eq(this_block_addr, [0; 32]) { + 1 => 0, + 0 => + match this_nested { + 0 => block_has_some_nested(this_block_addr, top, 0), + _ => 0, + }, + }, + _ => 0, + } + } + + -- Returns 1 iff some Inductive in `top` shares `target_block` AND has + -- nested > 0 (i.e., it's an original in a nested-emitting block). + fn block_has_some_nested(target_block: [G; 32], + consts: List‹&KConstantInfo›, idx: G) -> G { + match load(consts) { + ListNode.Nil => 0, + ListNode.Cons(&ci, rest) => + match ci { + KConstantInfo.Induct(_, _, _, _, _, _, _, _, n, ba) => + let same = address_eq(ba, target_block); + match same * n { + 0 => block_has_some_nested(target_block, rest, idx + 1), + _ => 1, + }, + _ => block_has_some_nested(target_block, rest, idx + 1), + }, + } + } + + -- Mirror: src/ix/kernel/inductive.rs:619+ + -- fn try_detect_nested. For each ctor of `orig_idx`, walk its fields; + -- for each field's domain, peel leading Foralls + check spine head: + -- if head is non-block Inductive AND first ext_n_params args mention + -- block_idxs, record (ext_idx, spec_params). 
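+  -- Example (editor's illustration): for a ctor field of type
+  -- `List Tree` inside Tree's block, the spine head is the non-block
+  -- Const for List with ext_n_params = 1 and param_args = [Tree ...];
+  -- since the param args mention a block member, we record
+  -- (List's idx, [Tree arg], the occurrence's univ args).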
+ fn detect_nested_in_orig(orig_idx: G, block_idxs: List‹G›, + top: List‹&KConstantInfo›) + -> List‹(G, List‹KExpr›, List‹&KLevel›)› { + let orig_ci = load(list_lookup(top, orig_idx)); + match orig_ci { + KConstantInfo.Induct(_, _, n_params, _, ctor_indices, _, _, _, _, _) => + detect_nested_in_ctors(ctor_indices, n_params, block_idxs, top), + _ => store(ListNode.Nil), + } + } + + fn detect_nested_in_ctors(ctor_indices: List‹G›, n_params: G, + block_idxs: List‹G›, + top: List‹&KConstantInfo›) + -> List‹(G, List‹KExpr›, List‹&KLevel›)› { + match load(ctor_indices) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(ctor_idx, rest) => + let ctor_ci = load(list_lookup(top, ctor_idx)); + match ctor_ci { + KConstantInfo.Ctor(_, ctor_ty, _, _, _, _, _) => + let body = peel_n_foralls_tolerant(ctor_ty, n_params); + let from_this = detect_nested_in_field_chain(body, block_idxs, top); + let from_rest = detect_nested_in_ctors(rest, n_params, block_idxs, top); + list_concat(from_this, from_rest), + _ => detect_nested_in_ctors(rest, n_params, block_idxs, top), + }, + } + } + + fn detect_nested_in_field_chain(ty: KExpr, block_idxs: List‹G›, + top: List‹&KConstantInfo›) + -> List‹(G, List‹KExpr›, List‹&KLevel›)› { + match load(ty) { + KExprNode.Forall(dom, body) => + let from_dom = detect_nested_in_dom(dom, block_idxs, top); + let from_rest = detect_nested_in_field_chain(body, block_idxs, top); + list_concat(from_dom, from_rest), + _ => store(ListNode.Nil), + } + } + + fn detect_nested_in_dom(dom: KExpr, block_idxs: List‹G›, + top: List‹&KConstantInfo›) + -> List‹(G, List‹KExpr›, List‹&KLevel›)› { + match peel_leading_foralls(dom) { + (_doms, body) => + match collect_spine_simple(body) { + (head, args) => + match load(head) { + KExprNode.Const(idx, occ_us) => + match list_contains_g(block_idxs, idx) { + 1 => store(ListNode.Nil), + 0 => + let ci = load(list_lookup(top, idx)); + match ci { + KConstantInfo.Induct(_, _, ext_n_params, _, _, _, _, _, _, _) => + let n_args = list_length(args); + match u32_less_than(n_args, ext_n_params) { + 1 => store(ListNode.Nil), + 0 => + let param_args = list_take(args, ext_n_params); + match list_any_mentions(param_args, block_idxs) { + 0 => store(ListNode.Nil), + 1 => store(ListNode.Cons((idx, param_args, occ_us), + store(ListNode.Nil))), + }, + }, + _ => store(ListNode.Nil), + }, + }, + _ => store(ListNode.Nil), + }, + }, + } + } + + -- Synthesize canonical aux.ind_ty from + -- ext's ind_ty + spec_params. Mirror: src/ix/kernel/inductive.rs + -- canonical_aux_order's synthetic indc construction. + -- Mechanism: peel ext's first ext_n_params Pi binders, substituting each + -- with the corresponding spec_param. Result is the body (indices + sort) + -- with α-substitution applied — ext's signature specialized at spec_params. + fn synth_aux_ind_ty(ext_ind_ty: KExpr, ext_n_params: G, + spec_params: List‹KExpr›) -> KExpr { + synth_aux_subst(ext_ind_ty, ext_n_params, spec_params, 0) + } + + -- Walk first n Pi binders. For each, substitute the binder's BVar(0) with + -- spec_params[k] (lifted appropriately). + fn synth_aux_subst(ty: KExpr, n: G, spec_params: List‹KExpr›, k: G) -> KExpr { + match n { + 0 => ty, + _ => + match load(ty) { + KExprNode.Forall(_, body) => + let sp = list_lookup(spec_params, k); + let body_substed = expr_inst1(body, sp, 0); + synth_aux_subst(body_substed, n - 1, spec_params, k + 1), + }, + } + } + + -- Synthesize canonical aux.ctor_ty from ext.ctor_ty + spec_params. 
+ -- Same substitution but applied to a ctor type (which has ext's params + -- as leading Pi binders too). + fn synth_aux_ctor_ty(ext_ctor_ty: KExpr, ext_n_params: G, + spec_params: List‹KExpr›) -> KExpr { + synth_aux_subst(ext_ctor_ty, ext_n_params, spec_params, 0) + } + + -- Mirror: src/ix/kernel/inductive.rs + -- canonical_aux_order. For each aux in `block_idxs`, find a matching + -- nested occurrence in some non-aux original's ctors and verify aux's + -- ind_ty matches the synthesized canonical (ext.ind_ty with spec_params + -- substituted). + fn validate_block_auxes(block_idxs: List‹G›, top: List‹&KConstantInfo›) { + let nested_list = gather_block_nested(block_idxs, block_idxs, top); + -- Block_param_decls = first n_params Pi domains of first non-aux original. + let bp_pair = block_param_decls(block_idxs, top); + match bp_pair { + (n_block_params, block_param_doms) => + validate_auxes_walk(block_idxs, nested_list, n_block_params, + block_param_doms, top), + } + } + + -- Find first non-aux original; return (n_params, first n_params Pi domains). + fn block_param_decls(walk_idxs: List‹G›, + top: List‹&KConstantInfo›) -> (G, List‹KExpr›) { + match load(walk_idxs) { + ListNode.Nil => (0, store(ListNode.Nil)), + ListNode.Cons(idx, rest) => + match is_aux_inductive(idx, top) { + 1 => block_param_decls(rest, top), + 0 => + let ci = load(list_lookup(top, idx)); + match ci { + KConstantInfo.Induct(_, ind_ty, n_params, _, _, _, _, _, _, _) => + let walk = collect_n_doms(ind_ty, n_params); + match walk { + (doms, _) => (n_params, doms), + }, + _ => block_param_decls(rest, top), + }, + }, + } + } + + -- Collect detected nested occurrences across all non-aux originals in block. + fn gather_block_nested(walk_idxs: List‹G›, all_block_idxs: List‹G›, + top: List‹&KConstantInfo›) + -> List‹(G, List‹KExpr›, List‹&KLevel›)› { + match load(walk_idxs) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(idx, rest) => + match is_aux_inductive(idx, top) { + 1 => gather_block_nested(rest, all_block_idxs, top), + 0 => + let from_orig = detect_nested_in_orig(idx, all_block_idxs, top); + let from_rest = gather_block_nested(rest, all_block_idxs, top); + list_concat(from_orig, from_rest), + }, + } + } + + -- Walk auxes in block; for each, assert it matches some nested occurrence. + fn validate_auxes_walk(walk_idxs: List‹G›, + nested_list: List‹(G, List‹KExpr›, List‹&KLevel›)›, + n_block_params: G, + block_param_doms: List‹KExpr›, + top: List‹&KConstantInfo›) { + match load(walk_idxs) { + ListNode.Nil => (), + ListNode.Cons(idx, rest) => + match is_aux_inductive(idx, top) { + 0 => validate_auxes_walk(rest, nested_list, n_block_params, block_param_doms, top), + 1 => + let aux_ci = load(list_lookup(top, idx)); + match aux_ci { + KConstantInfo.Induct(_, aux_ind_ty, _, _, aux_ctor_indices, _, _, _, _, _) => + let matched = try_match_aux(aux_ind_ty, aux_ctor_indices, nested_list, + n_block_params, block_param_doms, top); + assert_eq!(matched, 1); + validate_auxes_walk(rest, nested_list, n_block_params, block_param_doms, top), + _ => validate_auxes_walk(rest, nested_list, n_block_params, block_param_doms, top), + }, + }, + } + } + + -- Returns 1 iff aux's ind_ty matches (block_params Pi → ext.ind_ty[α := spec_params]) + -- AND ctor count matches AND each ctor ty matches likewise. 
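+  -- Example (editor's illustration): for the Tree/List nesting, the
+  -- aux's ind_ty must structurally equal the block's param Foralls
+  -- wrapped around List's ind_ty with its own params substituted by the
+  -- detected spec_params, i.e. List's signature specialized at the
+  -- nested occurrence and re-abstracted over the block's params.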
+ fn try_match_aux(aux_ind_ty: KExpr, aux_ctor_indices: List‹G›, + nested_list: List‹(G, List‹KExpr›, List‹&KLevel›)›, + n_block_params: G, + block_param_doms: List‹KExpr›, + top: List‹&KConstantInfo›) -> G { + match load(nested_list) { + ListNode.Nil => 0, + ListNode.Cons(occ, rest) => + match occ { + (ext_idx, spec_params, _occ_us) => + let ext_ci = load(list_lookup(top, ext_idx)); + match ext_ci { + KConstantInfo.Induct(_, ext_ind_ty, ext_n_params, _, ext_ctor_indices, _, _, _, _, _) => + let body = synth_aux_ind_ty(ext_ind_ty, ext_n_params, spec_params); + let synth = wrap_foralls(body, block_param_doms); + let cmp = compare_kexpr(synth, aux_ind_ty); + match cmp { + 1 => + let aux_n = list_length(aux_ctor_indices); + let ext_n = list_length(ext_ctor_indices); + match aux_n - ext_n { + 0 => + let ctors_ok = compare_aux_ctors(aux_ctor_indices, ext_ctor_indices, + ext_n_params, spec_params, + block_param_doms, top); + match ctors_ok { + 1 => 1, + _ => try_match_aux(aux_ind_ty, aux_ctor_indices, rest, + n_block_params, block_param_doms, top), + }, + _ => try_match_aux(aux_ind_ty, aux_ctor_indices, rest, + n_block_params, block_param_doms, top), + }, + _ => try_match_aux(aux_ind_ty, aux_ctor_indices, rest, + n_block_params, block_param_doms, top), + }, + _ => try_match_aux(aux_ind_ty, aux_ctor_indices, rest, + n_block_params, block_param_doms, top), + }, + }, + } + } + + -- Pairwise compare aux ctors against ext ctors with spec_params substituted. + fn compare_aux_ctors(aux_ctor_indices: List‹G›, ext_ctor_indices: List‹G›, + ext_n_params: G, spec_params: List‹KExpr›, + block_param_doms: List‹KExpr›, + top: List‹&KConstantInfo›) -> G { + match load(aux_ctor_indices) { + ListNode.Nil => + match load(ext_ctor_indices) { + ListNode.Nil => 1, + _ => 0, + }, + ListNode.Cons(aux_c_idx, aux_rest) => + match load(ext_ctor_indices) { + ListNode.Nil => 0, + ListNode.Cons(ext_c_idx, ext_rest) => + let aux_c_ci = load(list_lookup(top, aux_c_idx)); + let ext_c_ci = load(list_lookup(top, ext_c_idx)); + match aux_c_ci { + KConstantInfo.Ctor(_, aux_c_ty, _, _, _, _, _) => + match ext_c_ci { + KConstantInfo.Ctor(_, ext_c_ty, _, _, _, _, _) => + let body = synth_aux_ctor_ty(ext_c_ty, ext_n_params, spec_params); + let synth_c = wrap_foralls(body, block_param_doms); + match compare_kexpr(synth_c, aux_c_ty) { + 1 => compare_aux_ctors(aux_rest, ext_rest, ext_n_params, spec_params, + block_param_doms, top), + _ => 0, + }, + _ => 0, + }, + _ => 0, + }, + }, + } + } + + fn populate_rules(rec_idx: G, ind_idx: G, ctor_indices: List‹G›, + n_params: G, n_motives: G, n_minors: G, + ind_lvls: G, univ_offset: G, + motive_doms: List‹KExpr›, minor_doms: List‹KExpr›, + param_doms: List‹KExpr›, peer_recs: List‹G›, + flat_idxs: List‹G›, flat_own_params: List‹G›, + is_aux: G, spec_params: List‹KExpr›, + occurrence_us: List‹&KLevel›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›, + ctor_pos: G) -> List‹KRecRule› { + match load(ctor_indices) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(ctor_idx, rest) => + let ctor_ci = load(list_lookup(top, ctor_idx)); + match ctor_ci { + KConstantInfo.Ctor(_, ctor_ty, owning_ind, _, _, n_fields, _) => + let rhs = build_rule_rhs(rec_idx, owning_ind, ctor_idx, ctor_ty, ctor_pos, + n_params, n_motives, n_minors, ind_lvls, univ_offset, + motive_doms, minor_doms, param_doms, peer_recs, + flat_idxs, flat_own_params, + is_aux, spec_params, occurrence_us, top, addrs); + let rule = KRecRule.Mk(ctor_idx, n_fields, rhs); + store(ListNode.Cons(rule, + populate_rules(rec_idx, ind_idx, rest, 
+                                        n_params, n_motives, n_minors,
+                                        ind_lvls, univ_offset, motive_doms, minor_doms, param_doms,
+                                        peer_recs, flat_idxs, flat_own_params,
+                                        is_aux, spec_params, occurrence_us,
+                                        top, addrs, ctor_pos + 1))),
+        },
+    }
+  }
+
+  -- Mirror: src/ix/kernel/inductive.rs:2817-2924 fn build_direct_ih.
+  -- Forall-wrapped recursive fields wrap IH body in matching Foralls.
+  -- For each rec field, motive at offset motive_base + member_local_idx so
+  -- multi-member blocks correctly select per-member motives.
+  -- Each rec-field's dom may be written via reducible defs (e.g.
+  -- `constType (n α) (n α)`); we WHNF before peeling Foralls and before
+  -- collecting the inner spine so the head/args reflect the *true*
+  -- inductive occurrence rather than the surface alias.
+  fn build_ih_doms(rec_indices: List‹G›, rec_member_idxs: List‹G›,
+                   field_doms: List‹KExpr›,
+                   flat_own_params: List‹G›,
+                   motive_base: G, n_fields: G,
+                   minor_saved: G, types: List‹KExpr›,
+                   top: List‹&KConstantInfo›, addrs: List‹[G; 32]›,
+                   k: G) -> List‹KExpr› {
+    match load(rec_indices) {
+      ListNode.Nil => store(ListNode.Nil),
+      ListNode.Cons(field_idx, rest) =>
+        let mem_idx = list_lookup(rec_member_idxs, k);
+        let target_n_params = list_lookup(flat_own_params, mem_idx);
+        let depth = minor_saved + n_fields + k;
+        let dom = list_lookup(field_doms, field_idx);
+        let dom_lifted = expr_lift(dom, (n_fields - field_idx) + k, 0);
+        let dom_w = whnf(dom_lifted, types, top, addrs);
+        match peel_leading_foralls(dom_w) {
+          (forall_doms, inner_body_raw) =>
+            let inner_types = list_concat(list_reverse(forall_doms), types);
+            let inner_body = whnf(inner_body_raw, inner_types, top, addrs);
+            let n_xs = list_length(forall_doms);
+            let inner_depth = depth + n_xs;
+            let motive_bvar = (inner_depth - 1) - (motive_base + mem_idx);
+            let field_bvar = (inner_depth - 1) - (minor_saved + field_idx);
+            match collect_spine_simple(inner_body) {
+              (_h, dom_args) =>
+                let idx_args = list_drop(dom_args, target_n_params);
+                let motive_ref = store(KExprNode.BVar(motive_bvar));
+                let with_indices = apply_spine(motive_ref, idx_args);
+                let field_ref = store(KExprNode.BVar(field_bvar));
+                let field_app = build_apply_xs(field_ref, n_xs, 0);
+                let ih_body = store(KExprNode.App(with_indices, field_app));
+                let ih_dom = wrap_foralls(ih_body, forall_doms);
+                store(ListNode.Cons(ih_dom,
+                  build_ih_doms(rest, rec_member_idxs, field_doms, flat_own_params,
+                                motive_base, n_fields, minor_saved, types, top, addrs, k + 1))),
+            },
+        },
+    }
+  }
+
+  -- Mirror: src/ix/kernel/inductive.rs:216-250 mutual peer-loop +
+  -- 1660-1700 fn check_param_agreement.
+  -- Solo (block_addr = [0;32]) is no-op.
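+  -- Illustrative sketch (hypothetical block): in a mutual block
+  -- `Even, Odd : Nat → Prop`, every peer pair must agree on n_params,
+  -- on each param domain up to def-eq, and on the result universe
+  -- (both Prop here); a peer landing in `Type` instead would fail the
+  -- level_equal assertion below.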
+ fn check_block_peer_param_agreement(self_pos: G, self_ty: KExpr, + self_n_params: G, self_n_indices: G, + block_addr: [G; 32], + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) { + match address_eq(block_addr, [0; 32]) { + 1 => (), + 0 => peer_param_loop(self_pos, self_ty, self_n_params, self_n_indices, + block_addr, top, top, addrs, 0), + } + } + + fn peer_param_loop(self_pos: G, self_ty: KExpr, + self_n_params: G, self_n_indices: G, + block_addr: [G; 32], + consts: List‹&KConstantInfo›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›, idx: G) { + match load(consts) { + ListNode.Nil => (), + ListNode.Cons(&ci, rest) => + match ci { + KConstantInfo.Induct(_, peer_ty, peer_n_params, peer_n_indices, _, + _, _, _, _, peer_block_addr) => + let same_block = address_eq(peer_block_addr, block_addr); + let is_self = eq_zero(idx - self_pos); + let should_check = same_block * (1 - is_self); + match should_check { + 1 => + -- S3b: param-count + param-domain agreement. + assert_eq!(peer_n_params, self_n_params); + let _ = check_param_agreement(self_ty, peer_ty, self_n_params, top, addrs); + -- S3: result-universe agreement. Mirror src/ix/kernel/inductive.rs:228-237. + let self_lvl = get_result_sort_level(self_ty, self_n_params + self_n_indices); + let peer_lvl = get_result_sort_level(peer_ty, peer_n_params + peer_n_indices); + assert_eq!(level_equal(self_lvl, peer_lvl), 1); + peer_param_loop(self_pos, self_ty, self_n_params, self_n_indices, + block_addr, rest, top, addrs, idx + 1), + 0 => + peer_param_loop(self_pos, self_ty, self_n_params, self_n_indices, + block_addr, rest, top, addrs, idx + 1), + }, + _ => + peer_param_loop(self_pos, self_ty, self_n_params, self_n_indices, + block_addr, rest, top, addrs, idx + 1), + }, + } + } + + -- Walk first n Foralls of both types asserting domain def-eq under the + -- accumulated param-binder context. + fn check_param_agreement(ta: KExpr, tb: KExpr, n: G, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) { + check_param_agreement_go(ta, tb, n, store(ListNode.Nil), top, addrs) + } + + fn check_param_agreement_go(ta: KExpr, tb: KExpr, n: G, + types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) { + match n { + 0 => (), + _ => + match load(ta) { + KExprNode.Forall(da, ba) => + match load(tb) { + KExprNode.Forall(db, bb) => + let eq = k_is_def_eq(da, db, types, top, addrs); + assert_eq!(eq, 1); + let inner = store(ListNode.Cons(da, types)); + check_param_agreement_go(ba, bb, n - 1, inner, top, addrs), + }, + }, + } + } +⟧ + +end IxVM + +end diff --git a/Ix/IxVM/Kernel/Infer.lean b/Ix/IxVM/Kernel/Infer.lean new file mode 100644 index 00000000..d1c995e4 --- /dev/null +++ b/Ix/IxVM/Kernel/Infer.lean @@ -0,0 +1,348 @@ +module +public import Ix.Aiur.Meta +public import Ix.IxVM.KernelTypes +public import Ix.IxVM.Kernel.Subst +public import Ix.IxVM.Kernel.Whnf +public import Ix.IxVM.Kernel.DefEq + +public section + +namespace IxVM + +/-! ## Type inference over `KExpr` + +Mirrors `src/ix/kernel/infer.rs::infer`. + +Local context is a `List‹KExpr›` of binder types in innermost-first order. +A `BVar(i)` looks up `types[i]` and lifts it by `i + 1` (since the binder +was bound `i + 1` levels up). + +For `Lam(ty, body)`, the inferred type is `Forall(ty, body_type)` where +`body_type` is computed under `types ++ [ty]` (which means `Cons(ty, types)`). +No fvars — we work directly with de Bruijn indices. 
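+
+For example (an editor's illustration): under `types = [B, A]`
+(innermost first), `BVar(1)` looks up `A` and lifts it by 2, since `A`
+was recorded two binders shallower than the current context.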
+-/ + +def infer := ⟦ + -- ============================================================================ + -- Context lookup with lift + -- + -- types[i] is the type of the i-th binder from the innermost. When we + -- look up BVar(i), we get types[i] but it was assembled at depth d-i-1 + -- (counting from outermost). The current context is at depth d, so we + -- need to lift the result by (i + 1). + -- ============================================================================ + fn types_lookup(types: List‹KExpr›, i: G) -> KExpr { + match load(types) { + ListNode.Nil => store(KExprNode.BVar(0)), + ListNode.Cons(ty, rest) => + match i { + 0 => expr_lift(ty, 1, 0), + _ => + let inner = types_lookup(rest, i - 1); + expr_lift(inner, 1, 0), + }, + } + } + + -- ============================================================================ + -- k_infer + -- + -- Mirror of `src/ix/kernel/infer.rs::infer`. Per-variant dispatch. + -- + -- Lvls placed at outermost; level-params instantiation handled where + -- the constant's declared type is fetched. + -- ============================================================================ + fn k_infer(e: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> KExpr { + match load(e) { + KExprNode.BVar(i) => types_lookup(types, i), + + -- Normalize the constructed `Succ(l)` so callers see canonical + -- forms (e.g. `Succ(IMax 0 1)` → `Succ(Succ Zero) = 2`). Without + -- this, `level_leq` for cases like `levelComp2` (`Sort(IMax 0 1)`) + -- has to compare `Succ(IMax 0 1)` against `Succ(Succ Zero)` + -- structurally and the case-split paths can diverge. + KExprNode.Srt(l) => + store(KExprNode.Srt(store(level_reduce(KLevel.Succ(l))))), + + -- Mirror: src/ix/kernel/check.rs:110-120 universe-arity validation. + -- Validates `lvls.len() == num_lvls(ci)`. + KExprNode.Const(idx, lvls) => + let ci = load(list_lookup(top, idx)); + let expected = const_num_lvls(ci); + let given = list_length(lvls); + assert_eq!(given, expected); + let ty = const_type_of(ci); + expr_inst_levels(ty, lvls), + + KExprNode.App(f, a) => + let f_ty = k_infer(f, types, top, addrs); + -- Mirror: src/ix/kernel/infer.rs:454-478 peel_proj_forall syntactic fast-path. + match load(f_ty) { + KExprNode.Forall(dom, cod) => + let _ = k_check(a, dom, types, top, addrs); + expr_inst1(cod, a, 0), + _ => + let f_ty_whnf = whnf(f_ty, types, top, addrs); + let triple = ensure_forall_post_whnf(f_ty_whnf); + match triple { + (ok, dom, cod) => + assert_eq!(ok, 1); + let _ = k_check(a, dom, types, top, addrs); + expr_inst1(cod, a, 0), + }, + }, + + KExprNode.Lam(ty, body) => + let _ = k_ensure_sort(ty, types, top, addrs); + let types2 = store(ListNode.Cons(ty, types)); + let body_ty = k_infer(body, types2, top, addrs); + store(KExprNode.Forall(ty, body_ty)), + + KExprNode.Forall(ty, body) => + let u1 = k_ensure_sort(ty, types, top, addrs); + let types2 = store(ListNode.Cons(ty, types)); + let u2 = k_ensure_sort(body, types2, top, addrs); + store(KExprNode.Srt(store(level_imax(load(u1), load(u2))))), + + KExprNode.Let(ty, val, body) => + let _ = k_ensure_sort(ty, types, top, addrs); + let _ = k_check(val, ty, types, top, addrs); + let body_substed = expr_inst1(body, val, 0); + k_infer(body_substed, types, top, addrs), + + KExprNode.Lit(lit) => + match lit { + KLiteral.Nat(_) => nat_const_type(addrs), + KLiteral.Str(_) => str_const_type(addrs), + }, + + -- Mirror: src/ix/kernel/infer.rs:331-450 infer_proj. 
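+      -- Example (editor's illustration; `Prod` indices hypothetical):
+      -- for `Proj(prod_idx, 1, p)` with `p : Prod A B`, the whnf'd spine
+      -- is `Const(prod_idx) A B`; peeling the ctor type's two param
+      -- Foralls against [A, B] leaves `Π (_ : A) (_ : B), ...`, and
+      -- stepping past field 0 (substituting `Proj(prod_idx, 0, p)` for
+      -- its BVar) returns the target field's dom `B`.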
+ KExprNode.Proj(tidx, fidx, e1) => + let val_ty = k_infer(e1, types, top, addrs); + let wty = whnf(val_ty, types, top, addrs); + let pair = collect_spine(wty); + match pair { + (head, args) => + match load(head) { + KExprNode.Const(idx, lvls) => + assert_eq!(idx, tidx); + let ind_ci = load(list_lookup(top, idx)); + match ind_ci { + KConstantInfo.Induct(_, ind_ty, n_params, n_indices, ctor_indices, _, _, _, _, _) => + -- Single-ctor structure required. + assert_eq!(list_length(ctor_indices), 1); + let is_prop = is_inductive_prop(ind_ty, lvls, n_params + n_indices, + types, top, addrs); + let ctor_idx = list_lookup(ctor_indices, 0); + let ctor_ci = load(list_lookup(top, ctor_idx)); + match ctor_ci { + KConstantInfo.Ctor(_, ctor_ty, _, _, _, _, _) => + let ctor_ty_inst = expr_inst_levels(ctor_ty, lvls); + let after_params = peel_params_subst(ctor_ty_inst, args, n_params); + peel_field_loop(after_params, fidx, 0, tidx, e1, is_prop, + types, top, addrs), + }, + }, + }, + }, + } + } + + fn k_ensure_sort(e: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> &KLevel { + let ty = k_infer(e, types, top, addrs); + -- Mirror: src/ix/kernel/infer.rs:454-478 syntactic Sort fast-path. + match load(ty) { + KExprNode.Srt(l) => l, + _ => + let ty_whnf = whnf(ty, types, top, addrs); + let pair = ensure_sort_post_whnf(ty_whnf); + match pair { + (ok, l) => + assert_eq!(ok, 1); + l, + }, + } + } + + -- Mirror: src/ix/kernel/infer.rs App-arg / Let-val pattern. Infer e's + -- type and compare against expected via k_is_def_eq. Mismatch panics. + fn k_check(e: KExpr, expected: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) { + let inferred = k_infer(e, types, top, addrs); + let eq = k_is_def_eq(inferred, expected, types, top, addrs); + assert_eq!(eq, 1); + () + } + + -- ============================================================================ + -- Helpers: extract const declared type, Nat/Str literal types. + -- ============================================================================ + fn const_type_of(ci: KConstantInfo) -> KExpr { + match ci { + KConstantInfo.Axiom(_, ty, _) => ty, + KConstantInfo.Defn(_, ty, _, _, _) => ty, + KConstantInfo.Thm(_, ty, _) => ty, + KConstantInfo.Opaque(_, ty, _, _) => ty, + KConstantInfo.Quot(_, ty, _) => ty, + KConstantInfo.Induct(_, ty, _, _, _, _, _, _, _, _) => ty, + KConstantInfo.Ctor(_, ty, _, _, _, _, _) => ty, + KConstantInfo.Rec(_, ty, _, _, _, _, _, _, _, _) => ty, + } + } + + -- Mirror: each KConstantInfo carries num_lvls as its first G field. + fn const_num_lvls(ci: KConstantInfo) -> G { + match ci { + KConstantInfo.Axiom(n, _, _) => n, + KConstantInfo.Defn(n, _, _, _, _) => n, + KConstantInfo.Thm(n, _, _) => n, + KConstantInfo.Opaque(n, _, _, _) => n, + KConstantInfo.Quot(n, _, _) => n, + KConstantInfo.Induct(n, _, _, _, _, _, _, _, _, _) => n, + KConstantInfo.Ctor(n, _, _, _, _, _, _) => n, + KConstantInfo.Rec(n, _, _, _, _, _, _, _, _, _) => n, + } + } + + -- Mirror: peel n_params Foralls off ctor_ty, substituting each + -- bound var with the corresponding `args[i]`. Used by infer_proj. + fn peel_params_subst(ty: KExpr, args: List‹KExpr›, n_params: G) -> KExpr { + match n_params { + 0 => ty, + _ => + match load(ty) { + KExprNode.Forall(_, body) => + match load(args) { + ListNode.Cons(arg, rest) => + let body_substed = expr_inst1(body, arg, 0); + peel_params_subst(body_substed, rest, n_params - 1), + }, + }, + } + } + + -- Mirror: src/ix/kernel/infer.rs:414-449 ctor-field walk loop. 
+ -- For each preceding field i < target_field, substitute body[Var(0) + -- := Proj(struct_idx, i, val)] before recursing. At i == target_field, + -- return the field's domain type. + -- + -- For Prop structures (`is_prop == 1`), enforce two soundness checks: + -- (a) preceding data field whose body depends on BVar(0) is forbidden + -- (no projection past dependent data fields, mirror Rust line 433-444); + -- (b) projected (target) field must itself be in Prop (mirror Rust line 418-427). + fn peel_field_loop(ty: KExpr, target_field: G, current: G, + struct_idx: G, val: KExpr, is_prop: G, + types: List‹KExpr›, top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> KExpr { + match load(ty) { + KExprNode.Forall(dom, body) => + match target_field - current { + 0 => + let _ = check_prop_field_if_prop(is_prop, dom, types, top, addrs); + dom, + _ => + let _ = check_no_dep_data_field_if_prop(is_prop, dom, body, types, top, addrs); + let proj_expr = store(KExprNode.Proj(struct_idx, current, val)); + let body_substed = expr_inst1(body, proj_expr, 0); + peel_field_loop(body_substed, target_field, current + 1, + struct_idx, val, is_prop, types, top, addrs), + }, + } + } + + -- Mirror: src/ix/kernel/infer.rs:431-444 dependent-data-field guard. + -- For Prop structures: a preceding data field (sort != 0) whose body + -- has any loose bvar (`body.lbr() > 0`) makes projection past it unsound. + -- Matches Rust's `body.lbr() > 0` check exactly. + fn check_no_dep_data_field_if_prop(is_prop: G, dom: KExpr, body: KExpr, + types: List‹KExpr›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) { + match is_prop { + 0 => (), + _ => + let lvl = k_ensure_sort(dom, types, top, addrs); + match level_equal(load(lvl), KLevel.Zero) { + 1 => (), + _ => + -- data field; body must have no loose bvars. + assert_eq!(expr_lbr(body), 0); + (), + }, + } + } + + -- Peel `n` Foralls, calling `whnf` on each step. Returns the whnf'd body. + fn peel_n_alls_whnf(e: KExpr, n: G, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> KExpr { + match n { + 0 => whnf(e, types, top, addrs), + _ => + let ew = whnf(e, types, top, addrs); + match load(ew) { + KExprNode.Forall(_, body) => + peel_n_alls_whnf(body, n - 1, types, top, addrs), + _ => ew, + }, + } + } + + -- Mirror: src/ix/kernel/infer.rs:488-520 inductive_app_is_prop. + -- Returns 1 iff the inductive lives in Prop (peeled past params + indices, + -- result sort = Zero). + fn is_inductive_prop(ind_ty: KExpr, lvls: List‹&KLevel›, n_skip: G, + types: List‹KExpr›, top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> G { + let ind_ty_inst = expr_inst_levels(ind_ty, lvls); + let result = peel_n_alls_whnf(ind_ty_inst, n_skip, types, top, addrs); + match load(result) { + KExprNode.Srt(l) => level_equal(load(l), KLevel.Zero), + _ => 0, + } + } + + -- Mirror: src/ix/kernel/infer.rs:418-427 Prop-projection guard. When + -- projecting a field from a Prop-typed structure, the field MUST itself + -- live in Prop. Otherwise projection violates proof irrelevance. + fn check_prop_field_if_prop(is_prop: G, dom: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) { + match is_prop { + 0 => (), + _ => + let lvl = k_ensure_sort(dom, types, top, addrs); + assert_eq!(level_equal(load(lvl), KLevel.Zero), 1); + (), + } + } + + -- Address-keyed literal-type lookup. + -- Walks `addrs` to find the kernel position of Nat / String. Builds + -- `Const(idx, [])` for the type of `Lit(Nat _)` / `Lit(Str _)`. 
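+  -- Example (editor's illustration; the position is hypothetical): if
+  -- Nat's content address sits at position 7 of `addrs`, a `Lit(Nat _)`
+  -- is typed as `Const(7, [])`. `find_addr_idx` has no Nil arm, so a
+  -- missing address is a hard failure rather than a wrong answer.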
+ fn nat_const_type(addrs: List‹[G; 32]›) -> KExpr { + let idx = find_addr_idx(nat_addr(), addrs, 0); + store(KExprNode.Const(idx, store(ListNode.Nil))) + } + + fn str_const_type(addrs: List‹[G; 32]›) -> KExpr { + let idx = find_addr_idx(str_addr(), addrs, 0); + store(KExprNode.Const(idx, store(ListNode.Nil))) + } + + -- Find the position of `target` in `addrs`. Panics (no Nil arm) if + -- not present — caller (literal typing) requires it. + fn find_addr_idx(target: [G; 32], addrs: List‹[G; 32]›, i: G) -> G { + match load(addrs) { + ListNode.Cons(a, rest) => + match address_eq(target, a) { + 1 => i, + 0 => find_addr_idx(target, rest, i + 1), + }, + } + } +⟧ + +end IxVM + +end diff --git a/Ix/IxVM/Kernel/Levels.lean b/Ix/IxVM/Kernel/Levels.lean new file mode 100644 index 00000000..7df032d6 --- /dev/null +++ b/Ix/IxVM/Kernel/Levels.lean @@ -0,0 +1,370 @@ +module +public import Ix.Aiur.Meta +public import Ix.IxVM.KernelTypes + +public section + +namespace IxVM + +/-! ## Level operations + literal equality + expr_inst_levels + +Mirror: src/ix/kernel/level.rs + literal-equality helpers from +src/ix/kernel/expr.rs. Self-contained; new kernel modules +(Subst/Whnf/Infer/DefEq/Check) import from here. +-/ + +def levels := ⟦ + fn level_is_not_zero(l: KLevel) -> G { + match l { + KLevel.Zero => 0, + KLevel.Param(_) => 0, + KLevel.Succ(_) => 1, + KLevel.Max(&a, &b) => match (level_is_not_zero(a), level_is_not_zero(b)) { + (0, 0) => 0, + _ => 1, + }, + KLevel.IMax(_, &b) => level_is_not_zero(b), + } + } + + fn level_eq(a: KLevel, b: KLevel) -> G { + match a { + KLevel.Zero => + match b { + KLevel.Zero => 1, + _ => 0, + }, + KLevel.Succ(&a1) => + match b { + KLevel.Succ(&b1) => level_eq(a1, b1), + _ => 0, + }, + KLevel.Max(&a1, &a2) => + match b { + KLevel.Max(&b1, &b2) => level_eq(a1, b1) * level_eq(a2, b2), + _ => 0, + }, + KLevel.IMax(&a1, &a2) => + match b { + KLevel.IMax(&b1, &b2) => level_eq(a1, b1) * level_eq(a2, b2), + _ => 0, + }, + KLevel.Param(i) => + match b { + KLevel.Param(j) => eq_zero(i - j), + _ => 0, + }, + } + } + + fn level_has_param(l: KLevel) -> G { + match l { + KLevel.Zero => 0, + KLevel.Param(_) => 1, + KLevel.Succ(&a) => level_has_param(a), + KLevel.Max(&a, &b) => + let ha = level_has_param(a); + match ha { + 1 => 1, + 0 => level_has_param(b), + }, + KLevel.IMax(&a, &b) => + let hb = level_has_param(b); + match hb { + 1 => 1, + 0 => level_has_param(a), + }, + } + } + + fn level_any_param(l: KLevel) -> G { + match l { + KLevel.Param(i) => i, + KLevel.Succ(&a) => level_any_param(a), + KLevel.Max(&a, &b) => + let ha = level_has_param(a); + match ha { + 1 => level_any_param(a), + 0 => level_any_param(b), + }, + KLevel.IMax(&a, &b) => + let hb = level_has_param(b); + match hb { + 1 => level_any_param(b), + 0 => level_any_param(a), + }, + KLevel.Zero => 0, + } + } + + fn level_subst_reduce(l: KLevel, p: G, repl: KLevel) -> KLevel { + match l { + KLevel.Zero => KLevel.Zero, + KLevel.Param(i) => + match i - p { + 0 => repl, + _ => KLevel.Param(i), + }, + KLevel.Succ(&a) => + KLevel.Succ(store(level_subst_reduce(a, p, repl))), + KLevel.Max(&a, &b) => + level_max(level_subst_reduce(a, p, repl), level_subst_reduce(b, p, repl)), + KLevel.IMax(&a, &b) => + level_imax(level_subst_reduce(a, p, repl), level_subst_reduce(b, p, repl)), + } + } + + fn level_leq(a: KLevel, b: KLevel) -> G { + match a { + KLevel.Zero => 1, + KLevel.Max(&a1, &a2) => + level_leq(a1, b) * level_leq(a2, b), + KLevel.Succ(&a1) => + match a1 { + KLevel.Max(&x, &y) => + level_leq(KLevel.Succ(store(x)), b) * 
level_leq(KLevel.Succ(store(y)), b), + _ => + match b { + KLevel.Succ(&b1) => level_leq(a1, b1), + KLevel.Max(&b1, &b2) => + let r1 = level_leq(a, b1); + match r1 { + 1 => 1, + 0 => + let r2 = level_leq(a, b2); + match r2 { + 1 => 1, + 0 => + let bfull = KLevel.Max(store(b1), store(b2)); + let hp = level_has_param(bfull); + match hp { + 0 => 0, + _ => + let p = level_any_param(bfull); + let sp = KLevel.Succ(store(KLevel.Param(p))); + let a0 = level_subst_reduce(a, p, KLevel.Zero); + let b0 = level_subst_reduce(bfull, p, KLevel.Zero); + let a1s = level_subst_reduce(a, p, sp); + let b1s = level_subst_reduce(bfull, p, sp); + level_leq(a0, b0) * level_leq(a1s, b1s), + }, + }, + }, + _ => 0, + }, + }, + KLevel.Param(i) => + match b { + KLevel.Param(j) => eq_zero(i - j), + KLevel.Succ(&b1) => level_leq(a, b1), + KLevel.Max(&b1, &b2) => + let r1 = level_leq(a, b1); + match r1 { + 1 => 1, + 0 => level_leq(a, b2), + }, + KLevel.IMax(&b1, &b2) => + let p = level_any_param(b2); + let sp = KLevel.Succ(store(KLevel.Param(p))); + let a0 = level_subst_reduce(a, p, KLevel.Zero); + let bfull = KLevel.IMax(store(b1), store(b2)); + let b0 = level_subst_reduce(bfull, p, KLevel.Zero); + let a1s = level_subst_reduce(a, p, sp); + let b1s = level_subst_reduce(bfull, p, sp); + level_leq(a0, b0) * level_leq(a1s, b1s), + KLevel.Zero => 0, + }, + KLevel.IMax(&a1, &a2) => + let not_zero = level_is_not_zero(a2); + match not_zero { + 1 => level_leq(a1, b) * level_leq(a2, b), + 0 => + let p = level_any_param(a2); + let sp = KLevel.Succ(store(KLevel.Param(p))); + let afull = KLevel.IMax(store(a1), store(a2)); + let a0 = level_subst_reduce(afull, p, KLevel.Zero); + let b0 = level_subst_reduce(b, p, KLevel.Zero); + let a1s = level_subst_reduce(afull, p, sp); + let b1s = level_subst_reduce(b, p, sp); + level_leq(a0, b0) * level_leq(a1s, b1s), + }, + } + } + + fn level_equal(a: KLevel, b: KLevel) -> G { + level_leq(a, b) * level_leq(b, a) + } + + fn level_max(a: KLevel, b: KLevel) -> KLevel { + match a { + KLevel.Zero => b, + _ => + match b { + KLevel.Zero => a, + _ => + let eq = level_eq(a, b); + match eq { + 1 => a, + 0 => + match a { + KLevel.Succ(&a1) => + match b { + KLevel.Succ(&b1) => KLevel.Succ(store(level_max(a1, b1))), + _ => KLevel.Max(store(a), store(b)), + }, + _ => KLevel.Max(store(a), store(b)), + }, + }, + }, + } + } + + fn level_imax(a: KLevel, b: KLevel) -> KLevel { + match b { + KLevel.Zero => KLevel.Zero, + KLevel.Succ(_) => level_max(a, b), + _ => + let not_zero = level_is_not_zero(b); + match not_zero { + 1 => level_max(a, b), + 0 => + match a { + KLevel.Zero => b, + _ => + let eq = level_eq(a, b); + match eq { + 1 => a, + 0 => KLevel.IMax(store(a), store(b)), + }, + }, + }, + } + } + + fn level_reduce(l: KLevel) -> KLevel { + match l { + KLevel.Zero => KLevel.Zero, + KLevel.Param(i) => KLevel.Param(i), + KLevel.Succ(&u) => KLevel.Succ(store(level_reduce(u))), + KLevel.Max(&a, &b) => level_max(level_reduce(a), level_reduce(b)), + KLevel.IMax(&a, &b) => level_imax(level_reduce(a), level_reduce(b)), + } + } + + fn level_inst_params(l: KLevel, params: List‹&KLevel›) -> KLevel { + match l { + KLevel.Zero => KLevel.Zero, + KLevel.Succ(&u) => KLevel.Succ(store(level_inst_params(u, params))), + KLevel.Max(&a, &b) => + level_max(level_inst_params(a, params), level_inst_params(b, params)), + KLevel.IMax(&a, &b) => + level_imax(level_inst_params(a, params), level_inst_params(b, params)), + KLevel.Param(i) => load(list_lookup(params, i)), + } + } + + fn level_list_inst(lvls: List‹&KLevel›, params: List‹&KLevel›) 
-> List‹&KLevel› { + match load(lvls) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(&l, rest) => + store(ListNode.Cons( + store(level_inst_params(l, params)), + level_list_inst(rest, params))), + } + } + + fn expr_inst_levels(e: KExpr, params: List‹&KLevel›) -> KExpr { + -- Fast path: empty param list = identity. Common when caller's lvls + -- list is Nil (constants with no universe params). + match load(params) { + ListNode.Nil => e, + _ => expr_inst_levels_walk(e, params), + } + } + + fn expr_inst_levels_walk(e: KExpr, params: List‹&KLevel›) -> KExpr { + match load(e) { + KExprNode.BVar(i) => store(KExprNode.BVar(i)), + KExprNode.Srt(&l) => + store(KExprNode.Srt(store(level_inst_params(l, params)))), + KExprNode.Const(idx, lvls) => + store(KExprNode.Const(idx, level_list_inst(lvls, params))), + KExprNode.App(f, a) => + store(KExprNode.App(expr_inst_levels(f, params), expr_inst_levels(a, params))), + KExprNode.Lam(ty, body) => + store(KExprNode.Lam(expr_inst_levels(ty, params), expr_inst_levels(body, params))), + KExprNode.Forall(ty, body) => + store(KExprNode.Forall(expr_inst_levels(ty, params), expr_inst_levels(body, params))), + KExprNode.Let(ty, val, body) => + store(KExprNode.Let( + expr_inst_levels(ty, params), + expr_inst_levels(val, params), + expr_inst_levels(body, params))), + KExprNode.Lit(lit) => store(KExprNode.Lit(lit)), + KExprNode.Proj(tidx, fidx, e1) => + store(KExprNode.Proj(tidx, fidx, expr_inst_levels(e1, params))), + } + } + + -- ============================================================================ + -- Literal equality + -- ============================================================================ + + fn klimbs_eq(a: KLimbs, b: KLimbs) -> G { + match load(a) { + ListNode.Nil => + match load(b) { + ListNode.Nil => 1, + _ => 0, + }, + ListNode.Cons(la, ra) => + match load(b) { + ListNode.Nil => 0, + ListNode.Cons(lb, rb) => + let eq = u64_eq(la, lb); + match eq { + 1 => klimbs_eq(ra, rb), + 0 => 0, + }, + }, + } + } + + fn bytestream_eq(a: ByteStream, b: ByteStream) -> G { + match load(a) { + ListNode.Nil => + match load(b) { + ListNode.Nil => 1, + _ => 0, + }, + ListNode.Cons(ba, ra) => + match load(b) { + ListNode.Nil => 0, + ListNode.Cons(bb, rb) => + match ba - bb { + 0 => bytestream_eq(ra, rb), + _ => 0, + }, + }, + } + } + + fn literal_eq(a: KLiteral, b: KLiteral) -> G { + match a { + KLiteral.Nat(na) => + match b { + KLiteral.Nat(nb) => klimbs_eq(na, nb), + _ => 0, + }, + KLiteral.Str(sa) => + match b { + KLiteral.Str(sb) => bytestream_eq(sa, sb), + _ => 0, + }, + } + } +⟧ + +end IxVM + +end diff --git a/Ix/IxVM/Kernel/Primitive.lean b/Ix/IxVM/Kernel/Primitive.lean new file mode 100644 index 00000000..ed571090 --- /dev/null +++ b/Ix/IxVM/Kernel/Primitive.lean @@ -0,0 +1,2405 @@ +module +public import Ix.Aiur.Meta +public import Ix.IxVM.KernelTypes + +set_option maxRecDepth 8192 + +public section + +namespace IxVM + +/-! ## Nat / String primitives + +Mirror: `src/ix/kernel/primitive.rs` (PrimAddrs struct + canonical +addresses) and `src/ix/kernel/whnf.rs:500-700` (Nat-on-literals +short-circuit dispatch). + +`KLimbs = List` little-endian (`KernelTypes.lean:28`). Rust +counterpart is `num_bigint::BigUint`. Klimbs ops here mirror BigUint +semantics on bounded inputs. +-/ + +set_option maxRecDepth 16384 in +def primitive := ⟦ + -- ============================================================================ + -- PrimIdxs: positional kernel const indices for dispatchable primitives. 
+ -- + -- Mirror: src/ix/kernel/primitive.rs::PrimAddrs (canonical addresses). + -- The Aiur kernel uses positional Const(idx) so we resolve addresses to + -- positions at ingress time. Slots: + -- 0: nat + -- 1: nat_zero + -- 2: nat_succ + -- 3: nat_pred + -- 4: nat_add + -- 5: nat_sub + -- 6: nat_mul + -- 7: nat_beq + -- 8: nat_ble + -- 9: str + -- A slot value of `0 - 1` (max G) means "primitive not present in the + -- current kernel const list". + -- ============================================================================ + + -- Canonical Anon-mode blake3 addresses (mirror of primitive.rs PrimAddrs). + fn quot_type_addr() -> [G; 32] { + [0xab, 0x68, 0x2c, 0x17, 0x78, 0xa1, 0x7b, 0xbe, + 0xae, 0x40, 0x32, 0x97, 0x4d, 0xf3, 0x64, 0x47, + 0xce, 0x8b, 0xfc, 0xab, 0x67, 0x64, 0xa3, 0x6d, + 0x37, 0x85, 0x66, 0xe3, 0xad, 0x63, 0xca, 0xb8] + } + + fn quot_ctor_addr() -> [G; 32] { + [0x88, 0x26, 0x66, 0x77, 0xfe, 0xe7, 0x74, 0xd1, + 0x09, 0x86, 0x7e, 0x4b, 0x22, 0x40, 0x28, 0x1a, + 0xa2, 0xee, 0x12, 0xd9, 0x79, 0x20, 0xc1, 0x17, + 0x1c, 0xf5, 0xc1, 0xf6, 0xc8, 0x7d, 0xec, 0xf6] + } + + fn quot_lift_addr() -> [G; 32] { + [0xaa, 0x57, 0xe8, 0xc3, 0xf4, 0xf9, 0xe1, 0xcf, + 0x6b, 0x02, 0xa0, 0x38, 0xac, 0x15, 0x81, 0x98, + 0xc3, 0xaf, 0x4b, 0x28, 0xd6, 0x1c, 0xea, 0x79, + 0x95, 0xbf, 0x5c, 0xa7, 0xc7, 0xb8, 0x2c, 0x29] + } + + fn quot_ind_addr() -> [G; 32] { + [0x12, 0x49, 0x84, 0xbc, 0xb9, 0x52, 0x08, 0xa0, + 0xf3, 0x0b, 0xb6, 0x9d, 0x67, 0x36, 0xd3, 0xd5, + 0x94, 0x04, 0xe1, 0x15, 0xe2, 0x20, 0x20, 0x43, + 0xfd, 0xa3, 0xd3, 0x4e, 0x01, 0xb0, 0xad, 0x16] + } + + fn bit_vec_addr() -> [G; 32] { + [0xcf, 0x55, 0x11, 0x5c, 0x75, 0x34, 0x3f, 0x82, + 0x4f, 0xdd, 0x93, 0x21, 0x78, 0xb0, 0xcb, 0xc7, + 0x5a, 0x86, 0xe5, 0x05, 0x2d, 0xe9, 0x3d, 0xb9, + 0x8f, 0x05, 0xb3, 0x78, 0x85, 0xff, 0xb0, 0x9b] + } + + fn bit_vec_to_nat_addr() -> [G; 32] { + [0x78, 0x34, 0x86, 0x5c, 0x1c, 0x6c, 0xd9, 0x63, + 0xb9, 0x36, 0x5c, 0xb0, 0x65, 0x00, 0x62, 0x38, + 0x80, 0xde, 0x4d, 0x99, 0x30, 0x34, 0x3e, 0x96, + 0xe1, 0x9e, 0x62, 0xa0, 0x26, 0xe7, 0xca, 0xce] + } + + fn bit_vec_of_nat_addr() -> [G; 32] { + [0xa0, 0x8a, 0xcf, 0x4c, 0xed, 0xb4, 0xc0, 0x5e, + 0xdd, 0xb5, 0x5b, 0xff, 0x36, 0x6c, 0xd9, 0x52, + 0xd5, 0xb7, 0xb8, 0x86, 0x02, 0xc3, 0xfc, 0x6d, + 0x87, 0x5e, 0x8e, 0xa7, 0x32, 0xa3, 0xc2, 0xf4] + } + + fn bit_vec_ult_addr() -> [G; 32] { + [0x6a, 0x3f, 0x26, 0x2c, 0x2f, 0x4a, 0x2c, 0x51, + 0x7a, 0x61, 0x6f, 0xba, 0xe5, 0x4a, 0x31, 0xec, + 0xcb, 0x85, 0x99, 0x8a, 0xd9, 0xc1, 0xf9, 0x3b, + 0xe8, 0xcc, 0x59, 0x0d, 0x97, 0x11, 0x7c, 0x04] + } + + fn decidable_decide_addr() -> [G; 32] { + [0x6d, 0xda, 0xae, 0xd2, 0x63, 0x74, 0x0b, 0x5d, + 0x5d, 0x67, 0xe6, 0xc1, 0x2e, 0xcf, 0xad, 0xb2, + 0x4a, 0xd8, 0x86, 0x7d, 0x4a, 0x09, 0xfe, 0x78, + 0x4b, 0x59, 0xda, 0xc7, 0xf7, 0x27, 0x54, 0xab] + } + + fn lt_lt_addr() -> [G; 32] { + [0x01, 0xd8, 0x71, 0xbc, 0xdf, 0xb2, 0xe7, 0x69, + 0xe1, 0xac, 0xa0, 0x0e, 0x7a, 0x3b, 0x3a, 0x21, + 0xa8, 0xd9, 0x02, 0xcc, 0x27, 0x37, 0x07, 0xc8, + 0x92, 0xeb, 0x86, 0x7b, 0x7f, 0xc7, 0x8a, 0xe2] + } + + fn bool_type_addr() -> [G; 32] { + [0x64, 0x05, 0xa4, 0x55, 0xba, 0x70, 0xc2, 0xb2, + 0x17, 0x9c, 0x79, 0x66, 0xc6, 0xf6, 0x10, 0xbf, + 0x34, 0x17, 0xbd, 0x0f, 0x3d, 0xd2, 0xba, 0x7a, + 0x52, 0x25, 0x33, 0xc2, 0xcd, 0x9e, 0x1d, 0x0b] + } + + fn eq_addr() -> [G; 32] { + [0x9c, 0x0a, 0xf2, 0xa3, 0x93, 0xcb, 0x5c, 0x08, + 0x35, 0xe4, 0x4e, 0x60, 0xe4, 0xc3, 0xe6, 0x8e, + 0xeb, 0x26, 0x6f, 0xd1, 0x6a, 0xff, 0xad, 0x32, + 0x16, 0x09, 0x6a, 0x35, 0xfe, 0x91, 0xb9, 0xc1] + } + + fn eq_refl_addr() -> [G; 
32] { + [0x1e, 0x25, 0x11, 0x98, 0xf3, 0x06, 0x25, 0x62, + 0x8e, 0x2e, 0xb0, 0x98, 0x3f, 0x7b, 0xe9, 0xef, + 0xe8, 0xd7, 0x19, 0xa1, 0x04, 0xa8, 0x61, 0xf2, + 0xbe, 0xf2, 0xf4, 0x7e, 0xab, 0xee, 0xd4, 0xf9] + } + + fn nat_dec_le_addr() -> [G; 32] { + [0xe0, 0x8c, 0x51, 0x41, 0xc4, 0x4b, 0x27, 0x65, + 0x39, 0x57, 0xae, 0x00, 0xa9, 0x26, 0xa2, 0xdd, + 0x68, 0xdc, 0xd7, 0x77, 0x9c, 0x4f, 0xdf, 0x85, + 0x0e, 0x66, 0x8f, 0xdc, 0x92, 0xb4, 0x08, 0xde] + } + + fn nat_dec_eq_addr() -> [G; 32] { + [0x38, 0x32, 0x3f, 0xd9, 0xe1, 0x7e, 0x9d, 0x1f, + 0x17, 0x53, 0x6d, 0xbb, 0x7f, 0x19, 0x6b, 0x94, + 0xb5, 0xba, 0x19, 0xe4, 0xbf, 0x62, 0x5d, 0x9e, + 0x7c, 0x60, 0x7c, 0x47, 0x36, 0x5c, 0x15, 0xad] + } + + fn nat_dec_lt_addr() -> [G; 32] { + [0xf4, 0x45, 0x08, 0x4f, 0x68, 0x05, 0xfa, 0xf9, + 0xbe, 0x62, 0xaa, 0x32, 0x84, 0x15, 0x65, 0x13, + 0x43, 0xc9, 0x8f, 0xfe, 0x52, 0xdb, 0x15, 0x9d, + 0xfb, 0x1b, 0x9a, 0x14, 0xcb, 0x28, 0xcf, 0x23] + } + + fn int_dec_eq_addr() -> [G; 32] { + [0x42, 0xd9, 0xb7, 0xa9, 0x4a, 0xef, 0xc7, 0x7a, + 0x66, 0x16, 0x93, 0x6b, 0xe3, 0x12, 0x64, 0xea, + 0xf8, 0xbe, 0xd7, 0xbd, 0x80, 0xf5, 0xd3, 0x49, + 0x67, 0xfc, 0x42, 0xaf, 0xaf, 0x29, 0xa7, 0xfd] + } + + fn int_dec_le_addr() -> [G; 32] { + [0xee, 0x03, 0x70, 0xe4, 0x26, 0xa4, 0x00, 0xc8, + 0xb1, 0x67, 0x82, 0xfa, 0xbf, 0xa0, 0xe4, 0x3f, + 0xf8, 0x7e, 0xca, 0xc1, 0xa0, 0xc1, 0xc7, 0x65, + 0xcc, 0x51, 0x79, 0xfc, 0x42, 0x3a, 0xb1, 0xbd] + } + + fn int_dec_lt_addr() -> [G; 32] { + [0x15, 0x07, 0x0e, 0x92, 0x02, 0x04, 0x27, 0x23, + 0x69, 0xf0, 0xf2, 0xe8, 0x0f, 0xf3, 0xf5, 0x03, + 0x5c, 0x05, 0xb3, 0x9e, 0xfa, 0x71, 0x4e, 0xc8, + 0xe6, 0xbb, 0xfc, 0xe9, 0x95, 0x06, 0x37, 0xaf] + } + + fn int_of_nat_addr() -> [G; 32] { + [0x46, 0xb5, 0xeb, 0x67, 0x68, 0xc1, 0xf4, 0x95, + 0x87, 0xd6, 0x53, 0xc1, 0x2e, 0x37, 0x33, 0x89, + 0x12, 0x15, 0x33, 0x86, 0x83, 0x2f, 0x0f, 0xd0, + 0xe4, 0x72, 0x48, 0x4e, 0x26, 0x32, 0x26, 0x32] + } + + fn int_neg_succ_addr() -> [G; 32] { + [0x25, 0xbb, 0xcd, 0x75, 0x6b, 0x52, 0xeb, 0x78, + 0xbc, 0xe1, 0x70, 0x41, 0x0d, 0xef, 0xa4, 0xc1, + 0x5b, 0x23, 0x8d, 0xed, 0xef, 0x5f, 0x7b, 0x89, + 0x69, 0x16, 0x21, 0xdc, 0xbe, 0x91, 0x97, 0x80] + } + + -- Address constants below registered in Rust primitive.rs but not wired + -- into any Aiur dispatch path. Commented out per "no unused code". + -- Restore + plug if a reduction/canonicalization tier needs them. 
+ /- + fn int_addr() -> [G; 32] { + [0xe7, 0xdc, 0x2d, 0x5a, 0x2e, 0x15, 0x3e, 0x1a, + 0xb0, 0xc7, 0x87, 0x97, 0xbc, 0xbf, 0xd5, 0x3a, + 0x2c, 0x01, 0xff, 0x40, 0x91, 0x88, 0x77, 0xcf, + 0xad, 0x8a, 0xde, 0x8c, 0x41, 0x69, 0xa4, 0x3a] + } + + fn int_add_addr() -> [G; 32] { + [0xd8, 0xe6, 0xcd, 0xc9, 0x88, 0xd4, 0x28, 0x8e, + 0x48, 0xcc, 0x60, 0x92, 0x73, 0x0b, 0xc5, 0x38, + 0x71, 0x76, 0xcf, 0xf6, 0x59, 0x24, 0x71, 0xa3, + 0x28, 0xcc, 0x43, 0x54, 0xf1, 0x87, 0x84, 0x12] + } + + fn int_sub_addr() -> [G; 32] { + [0x93, 0xb2, 0xd1, 0x2d, 0x77, 0x97, 0xfd, 0x62, + 0xc2, 0x0b, 0xec, 0x25, 0x53, 0x36, 0xc1, 0xe9, + 0x1c, 0xa1, 0xce, 0xf7, 0xa6, 0x95, 0x10, 0x71, + 0x29, 0x6f, 0xc1, 0xab, 0x5b, 0xd1, 0xd8, 0xc8] + } + + fn int_mul_addr() -> [G; 32] { + [0x9a, 0xd6, 0xee, 0x18, 0xef, 0x6d, 0x7d, 0x74, + 0xbb, 0xe4, 0x49, 0xab, 0x61, 0xaa, 0x31, 0xf8, + 0x4a, 0x0e, 0x78, 0x95, 0x1e, 0x95, 0x60, 0xd2, + 0x8f, 0xd8, 0x2e, 0x0c, 0x3b, 0x07, 0x1d, 0x01] + } + + fn int_neg_addr() -> [G; 32] { + [0x8c, 0x3f, 0x64, 0xe6, 0xb5, 0xba, 0xaa, 0xa1, + 0x25, 0xf0, 0x63, 0x7d, 0x7a, 0x82, 0x4d, 0xf6, + 0x27, 0xdb, 0xed, 0xe0, 0x11, 0x59, 0x68, 0xf3, + 0xc8, 0x0c, 0x55, 0xe0, 0x22, 0x55, 0x44, 0x62] + } + + fn int_emod_addr() -> [G; 32] { + [0x7c, 0xdb, 0x11, 0x27, 0x25, 0xd3, 0xa4, 0xf5, + 0x42, 0xbf, 0xb0, 0xcd, 0x30, 0x92, 0x68, 0x64, + 0x1b, 0xd8, 0x9d, 0xdc, 0x98, 0x90, 0xc7, 0x22, + 0x1e, 0xd0, 0x1f, 0x99, 0xb6, 0xa0, 0x0b, 0x63] + } + + fn int_ediv_addr() -> [G; 32] { + [0xba, 0x19, 0x4c, 0x0a, 0x36, 0x74, 0xe6, 0x7b, + 0x99, 0x68, 0xd0, 0xa6, 0x5c, 0xdd, 0xa3, 0xa4, + 0xdd, 0xb9, 0xdc, 0xdc, 0xe4, 0x8a, 0xd6, 0xc6, + 0x2e, 0x91, 0xd4, 0x78, 0xa1, 0x0a, 0x3d, 0xdd] + } + + fn int_bmod_addr() -> [G; 32] { + [0xc8, 0x43, 0x1b, 0x7a, 0xdb, 0x91, 0x89, 0x67, + 0xaa, 0x05, 0xba, 0x6f, 0xd8, 0x29, 0x7f, 0x33, + 0xe9, 0x7d, 0x67, 0x00, 0x3e, 0x41, 0x38, 0x02, + 0x1d, 0x91, 0x2e, 0xa9, 0x2c, 0xc1, 0x88, 0x7f] + } + + fn int_bdiv_addr() -> [G; 32] { + [0xab, 0x72, 0x47, 0x72, 0x54, 0xd1, 0xca, 0x47, + 0x38, 0x12, 0x3a, 0xd6, 0x12, 0xea, 0xe4, 0xdf, + 0xb9, 0x12, 0x6e, 0xf7, 0x83, 0x10, 0xed, 0x7d, + 0x2e, 0xbd, 0xe8, 0x10, 0x09, 0x63, 0xbf, 0xb1] + } + + fn int_nat_abs_addr() -> [G; 32] { + [0x60, 0x66, 0x2e, 0x33, 0x22, 0x4f, 0x55, 0xbe, + 0x9e, 0x36, 0x76, 0x83, 0x37, 0x8c, 0x7b, 0xf6, + 0x09, 0x3c, 0x12, 0x5c, 0x04, 0xff, 0x7c, 0x4e, + 0x3e, 0xca, 0x37, 0x01, 0x12, 0xe1, 0xc5, 0x62] + } + + fn int_pow_addr() -> [G; 32] { + [0x0d, 0xfe, 0x8f, 0x22, 0xbd, 0x6c, 0xb6, 0x7d, + 0x53, 0x8a, 0x2f, 0x01, 0x8f, 0x0e, 0x40, 0x6f, + 0xc0, 0xb5, 0xd7, 0x30, 0xca, 0xa6, 0x3e, 0x1a, + 0x79, 0x8d, 0xfa, 0x9a, 0xd7, 0x8b, 0xab, 0x07] + } + + fn bool_no_confusion_addr() -> [G; 32] { + [0x47, 0x3b, 0x2c, 0x94, 0x8d, 0xdb, 0xce, 0x4d, + 0xdb, 0x4b, 0x36, 0x9e, 0x5c, 0xf6, 0x19, 0x9f, + 0xf1, 0x85, 0xb6, 0x4e, 0x9f, 0xbb, 0x1e, 0x90, + 0x90, 0x1d, 0x74, 0x6d, 0xe5, 0x51, 0x90, 0xef] + } + + fn char_mk_addr() -> [G; 32] { + [0xe6, 0x22, 0x38, 0xc5, 0x4b, 0x91, 0x39, 0x5c, + 0x2c, 0x06, 0x19, 0x2c, 0xfc, 0xcb, 0x5e, 0x80, + 0xfc, 0xe4, 0x1e, 0xd1, 0x1d, 0x1b, 0xf6, 0xdb, + 0x14, 0x2d, 0x2c, 0x39, 0xd7, 0xc8, 0x1a, 0x20] + } + + fn nat_bitwise_addr() -> [G; 32] { + [0xf2, 0x1d, 0x74, 0x7a, 0xca, 0x3e, 0x08, 0xf5, + 0x29, 0x00, 0x93, 0xbf, 0x8f, 0x40, 0x20, 0x83, + 0x8d, 0x8e, 0x17, 0x42, 0xa7, 0x8b, 0x3e, 0x1f, + 0x48, 0xd8, 0x3e, 0xf1, 0x59, 0x39, 0x5e, 0x6a] + } + + fn nat_rec_addr() -> [G; 32] { + [0x6e, 0x85, 0x5f, 0x04, 0x48, 0x5d, 0xf8, 0xd9, + 0x77, 0x67, 0xf8, 0xaa, 0x89, 0xf2, 0x23, 0xbc, + 0xac, 0x97, 0x7e, 0x2a, 0x15, 0x5c, 
0x45, 0xc6, + 0x6d, 0x6e, 0x09, 0x4e, 0xc3, 0x16, 0x31, 0x94] + } + + fn nat_cases_on_addr() -> [G; 32] { + [0x9a, 0x6b, 0x32, 0xaf, 0x19, 0x4f, 0xdf, 0x0b, + 0x44, 0x76, 0x33, 0x07, 0x7d, 0x9f, 0xa8, 0x9c, + 0x24, 0x9d, 0x6d, 0x7d, 0xf2, 0x43, 0xd3, 0x00, + 0xb8, 0x9d, 0xd9, 0xb1, 0x4d, 0x92, 0xbb, 0x03] + } + + fn list_addr() -> [G; 32] { + [0xab, 0xed, 0x9f, 0xf1, 0xab, 0xa4, 0x63, 0x4a, + 0xbc, 0x0b, 0xd3, 0xaf, 0x76, 0xca, 0x54, 0x42, + 0x85, 0xa3, 0x2d, 0xcf, 0xe4, 0x3d, 0xc2, 0x7b, + 0x12, 0x9a, 0xea, 0x88, 0x67, 0x45, 0x76, 0x20] + } + + fn string_addr() -> [G; 32] { + [0xcb, 0x1b, 0xca, 0x7f, 0xc5, 0xdb, 0xb1, 0xbd, + 0xfb, 0xf6, 0x31, 0x9d, 0xf8, 0x9d, 0xa9, 0xfd, + 0xa3, 0xa6, 0x79, 0xd2, 0x25, 0x54, 0xb8, 0xa9, + 0xd5, 0xdd, 0x46, 0x63, 0xc0, 0xa9, 0x73, 0x12] + } + + fn string_mk_addr() -> [G; 32] { + [0x63, 0xd9, 0x5a, 0x0f, 0xd6, 0xa1, 0x14, 0x43, + 0x48, 0xd0, 0xf2, 0x0e, 0x20, 0xcc, 0x5c, 0x3a, + 0xf6, 0x1a, 0xc9, 0x55, 0x92, 0x3f, 0x45, 0xf4, + 0x2a, 0x78, 0x2d, 0xe9, 0x33, 0xaa, 0xd5, 0x94] + } + + fn of_nat_of_nat_addr() -> [G; 32] { + [0x8f, 0xdc, 0x86, 0x9f, 0x7b, 0x7a, 0xa2, 0xb7, + 0xb5, 0x92, 0x9b, 0xa2, 0x42, 0xed, 0x89, 0x9c, + 0xe2, 0xd7, 0xc5, 0xd4, 0x2d, 0xf1, 0xd4, 0xe2, + 0x39, 0x36, 0x90, 0xcf, 0xa8, 0x5e, 0x94, 0xd2] + } + + fn eager_reduce_addr() -> [G; 32] { + -- Aiur is permanently eager (no fvars, no fuel); this address is + -- registered for parity with Rust but never matched. + [0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00] + } + + fn pprod_addr() -> [G; 32] { + [0x6e, 0x99, 0xb0, 0x86, 0x70, 0x0f, 0x29, 0x01, + 0x80, 0x4a, 0x10, 0x7c, 0xad, 0x5e, 0xf0, 0xfe, + 0x87, 0x80, 0x77, 0xb1, 0x72, 0x3f, 0x4b, 0x82, + 0x46, 0x15, 0xdd, 0x02, 0x1d, 0x4d, 0x51, 0x57] + } + + fn pprod_mk_addr() -> [G; 32] { + [0x00, 0xdd, 0xf2, 0x6e, 0xfd, 0x5f, 0x7e, 0x5e, + 0xee, 0x55, 0x61, 0xc2, 0x46, 0x7b, 0x16, 0xac, + 0x85, 0x6e, 0xfc, 0xb3, 0xa1, 0x22, 0x65, 0x44, + 0x48, 0x76, 0x45, 0xdd, 0x46, 0x20, 0x85, 0x96] + } + -/ + + fn fin_addr() -> [G; 32] { + [0x27, 0x2a, 0xa9, 0xe1, 0x6c, 0x03, 0xe9, 0xad, + 0x73, 0x37, 0xe7, 0x06, 0xd7, 0x3e, 0xfd, 0x14, + 0xcc, 0xf1, 0xda, 0x10, 0xe2, 0xf8, 0x36, 0x7d, + 0xd3, 0x43, 0x74, 0xb6, 0x0e, 0x15, 0x56, 0xfa] + } + + fn decidable_rec_addr() -> [G; 32] { + [0xf3, 0x23, 0xa5, 0x49, 0xad, 0x4d, 0xf6, 0xb2, + 0xf3, 0x28, 0x99, 0x23, 0x7a, 0x28, 0x11, 0x36, + 0xf3, 0x4d, 0x43, 0x1e, 0xd7, 0x2b, 0x33, 0x85, + 0x7c, 0x08, 0x5e, 0x6c, 0x4d, 0x85, 0x27, 0x38] + } + + fn decidable_is_true_addr() -> [G; 32] { + [0x3a, 0xe2, 0xc7, 0x1d, 0xa2, 0xbf, 0x34, 0x17, + 0x9a, 0x5a, 0x88, 0x08, 0x85, 0x7c, 0x34, 0xa3, + 0xb7, 0x66, 0x2f, 0xf5, 0x65, 0x4d, 0x8c, 0x24, + 0x7c, 0x43, 0xe8, 0x5a, 0x7c, 0xde, 0x49, 0x3f] + } + + fn decidable_is_false_addr() -> [G; 32] { + [0x10, 0xac, 0x5f, 0x48, 0x79, 0x8b, 0x3f, 0xf0, + 0x1b, 0x0f, 0x74, 0xc0, 0xb5, 0x44, 0xd2, 0x27, + 0x96, 0xc9, 0x77, 0x5f, 0x6d, 0x43, 0xd3, 0x28, + 0x31, 0x6b, 0xbb, 0x3a, 0xa1, 0x63, 0x89, 0x99] + } + + fn nat_le_of_ble_eq_true_addr() -> [G; 32] { + [0x7e, 0x5d, 0x1f, 0x11, 0x18, 0xa8, 0x9f, 0x77, + 0xf8, 0x9d, 0x46, 0x9a, 0x27, 0x73, 0x1a, 0x75, + 0x4d, 0xe3, 0x36, 0xa0, 0x5e, 0x33, 0xf3, 0x83, + 0x05, 0x6b, 0xc9, 0x2b, 0x36, 0x94, 0x78, 0x12] + } + + fn nat_not_le_of_not_ble_eq_true_addr() -> [G; 32] { + [0xc1, 0xe2, 0x3b, 0x8d, 0xaf, 0xb3, 0x77, 0x8b, + 0x99, 0x63, 0x12, 0x06, 0x8a, 0x2b, 0xec, 0x3d, + 0xcb, 0xcc, 0x72, 0x13, 0x2e, 
0xfb, 0xf4, 0x3c, + 0x23, 0x5e, 0x57, 0x30, 0x84, 0x66, 0x82, 0x41] + } + + fn nat_eq_of_beq_eq_true_addr() -> [G; 32] { + [0xb9, 0xac, 0xc8, 0x1f, 0x28, 0x01, 0xaf, 0x89, + 0xb9, 0x5e, 0x09, 0x62, 0xaa, 0x9d, 0x73, 0x90, + 0xa3, 0xac, 0xfe, 0x8f, 0xb7, 0x60, 0x55, 0x9a, + 0x81, 0x1a, 0x82, 0xed, 0x74, 0x43, 0xdb, 0xb5] + } + + fn nat_ne_of_beq_eq_false_addr() -> [G; 32] { + [0x24, 0x87, 0x79, 0x88, 0x41, 0x09, 0xee, 0xd0, + 0x06, 0x00, 0xa0, 0xbd, 0x96, 0x8f, 0x74, 0x0d, + 0xb7, 0xf3, 0xd9, 0x24, 0xfb, 0x2b, 0x17, 0x06, + 0xab, 0x55, 0x2e, 0x78, 0x76, 0x06, 0x28, 0x55] + } + + fn reduce_bool_addr() -> [G; 32] { + [0x6e, 0x45, 0x3a, 0x7c, 0xed, 0xaf, 0xe2, 0xed, + 0xbb, 0xc1, 0xf0, 0x50, 0x34, 0x42, 0xbe, 0x49, + 0x9e, 0x4c, 0xbf, 0x18, 0xa6, 0xc0, 0x0d, 0xc9, + 0x9f, 0x39, 0x03, 0xee, 0x7f, 0x05, 0xdb, 0xaf] + } + + fn reduce_nat_addr() -> [G; 32] { + [0x54, 0x19, 0x18, 0x7f, 0xbf, 0x67, 0xef, 0x1c, + 0x4f, 0xf9, 0xab, 0x0b, 0xe1, 0xb0, 0x1d, 0x46, + 0x31, 0xa2, 0x70, 0x64, 0x7f, 0xfe, 0x43, 0x4b, + 0xf7, 0xe1, 0xf7, 0x88, 0xb3, 0xc8, 0x1d, 0xd4] + } + + fn system_platform_num_bits_addr() -> [G; 32] { + [0xd4, 0x83, 0x96, 0x64, 0x38, 0xad, 0x47, 0xce, + 0x41, 0x55, 0xb3, 0x48, 0x58, 0x19, 0xa3, 0x77, + 0xe2, 0x26, 0x05, 0xb5, 0x9a, 0x1a, 0xaf, 0xd0, + 0xb6, 0x81, 0xcb, 0x38, 0xac, 0xa8, 0x31, 0x07] + } + + fn system_platform_get_num_bits_addr() -> [G; 32] { + [0xad, 0x44, 0xc9, 0x04, 0x49, 0xfa, 0xf8, 0x6f, + 0x63, 0xc1, 0x70, 0xf0, 0x92, 0xe2, 0x24, 0x9b, + 0xcc, 0xab, 0x1e, 0x74, 0x1c, 0x1f, 0xe1, 0x0d, + 0xf8, 0x4c, 0x95, 0xb4, 0x4b, 0x38, 0x43, 0x71] + } + + fn subtype_val_addr() -> [G; 32] { + [0xad, 0x58, 0xc3, 0x65, 0x60, 0x44, 0xd7, 0xfa, + 0xef, 0x69, 0x76, 0x37, 0xf5, 0x16, 0xd7, 0x26, + 0x74, 0xd3, 0x5b, 0x18, 0x66, 0x3c, 0xb2, 0x63, + 0xf7, 0xcc, 0xca, 0x8c, 0xdd, 0x2e, 0x6f, 0x00] + } + + fn punit_size_of_1_addr() -> [G; 32] { + [0x8c, 0x2c, 0xbf, 0xe3, 0x28, 0x91, 0x0b, 0xfe, + 0x7f, 0xeb, 0x60, 0x07, 0x2b, 0x46, 0xf7, 0x48, + 0x76, 0x92, 0xcb, 0x37, 0x59, 0x96, 0x81, 0xb1, + 0x37, 0xa3, 0x1d, 0xd9, 0x9e, 0x70, 0x8f, 0x03] + } + + fn size_of_size_of_addr() -> [G; 32] { + [0x71, 0x05, 0xea, 0xf4, 0xc5, 0x2c, 0xe3, 0xa1, + 0x93, 0x72, 0xa8, 0x7f, 0xac, 0x57, 0xa8, 0xf9, + 0x59, 0x8a, 0x24, 0x63, 0x34, 0xce, 0x6e, 0xff, + 0xae, 0xe3, 0xe4, 0x8e, 0x7e, 0x6d, 0x3a, 0xad] + } + + fn punit_addr() -> [G; 32] { + [0x16, 0xa2, 0xdc, 0x76, 0xa2, 0xcf, 0xcc, 0x94, + 0x40, 0xf4, 0x43, 0xc6, 0x66, 0x53, 0x6f, 0x2f, + 0xa9, 0x9c, 0x02, 0x50, 0xb6, 0x42, 0xfd, 0x39, + 0x71, 0xfb, 0xad, 0x25, 0xd5, 0x31, 0x26, 0x2a] + } + + fn unit_addr() -> [G; 32] { + [0x21, 0x1b, 0xf5, 0xed, 0x2f, 0x4c, 0x51, 0xd4, + 0x57, 0x50, 0xe7, 0x5b, 0x89, 0x1f, 0xa2, 0x67, + 0xdb, 0x4d, 0x4e, 0x6f, 0x46, 0xc2, 0x07, 0x92, + 0x82, 0xfa, 0x2b, 0xe3, 0xe8, 0x87, 0x81, 0xa1] + } + + fn nat_addr() -> [G; 32] { + [0xfc, 0x0e, 0x1e, 0x91, 0x2f, 0x2d, 0x7f, 0x12, + 0x04, 0x9a, 0x5b, 0x31, 0x5d, 0x76, 0xee, 0xc2, + 0x95, 0x62, 0xe3, 0x4d, 0xc3, 0x9e, 0xbc, 0xa2, + 0x52, 0x87, 0xae, 0x58, 0x80, 0x7d, 0xb1, 0x37] + } + + fn nat_zero_addr() -> [G; 32] { + [0xfa, 0xc8, 0x2f, 0x0d, 0x25, 0x55, 0xd6, 0xa6, + 0x3e, 0x1b, 0x8a, 0x1f, 0xe8, 0xd8, 0x6b, 0xd2, + 0x93, 0x19, 0x7f, 0x39, 0xc3, 0x96, 0xfd, 0xc2, + 0x3c, 0x12, 0x75, 0xc6, 0x0f, 0x18, 0x2b, 0x37] + } + + fn nat_succ_addr() -> [G; 32] { + [0x71, 0x90, 0xce, 0x56, 0xf6, 0xa2, 0xa8, 0x47, + 0xb9, 0x44, 0xa3, 0x55, 0xe3, 0xec, 0x59, 0x5a, + 0x40, 0x36, 0xfb, 0x07, 0xe3, 0xc3, 0xdb, 0x9d, + 0x90, 0x64, 0xfc, 0x04, 0x1b, 0xe7, 0x2b, 0x64] + } + + fn nat_pred_addr() -> [G; 32] { 
+ [0x6b, 0x59, 0xcf, 0x44, 0x97, 0x81, 0xf0, 0x7b, + 0x04, 0x20, 0x7d, 0x66, 0x59, 0x78, 0xb5, 0xc5, + 0xef, 0x96, 0x88, 0xaf, 0xa7, 0x44, 0x85, 0x90, + 0xa6, 0x8f, 0x7d, 0xa7, 0xff, 0x88, 0xc5, 0x16] + } + + fn nat_add_addr() -> [G; 32] { + [0xf9, 0x41, 0x92, 0x05, 0x8e, 0x41, 0xbc, 0x29, + 0xe8, 0x89, 0x24, 0xd8, 0x57, 0xa6, 0xbd, 0x33, + 0xf8, 0xb3, 0xe0, 0xa9, 0x0f, 0x87, 0x86, 0x82, + 0x82, 0x70, 0xd1, 0xcc, 0x1d, 0xd0, 0xad, 0xc6] + } + + fn nat_sub_addr() -> [G; 32] { + [0xfa, 0x98, 0xda, 0xbf, 0x44, 0xd2, 0xa6, 0x30, + 0x7b, 0x49, 0x0a, 0xc9, 0xe8, 0x11, 0x43, 0x3e, + 0xfc, 0x2f, 0x95, 0x89, 0x96, 0xc6, 0x7b, 0xe1, + 0x39, 0x8c, 0xb4, 0xd1, 0xb2, 0x64, 0xcf, 0x39] + } + + fn nat_mul_addr() -> [G; 32] { + [0x9b, 0x5c, 0x57, 0xea, 0x1c, 0xf2, 0xfb, 0x1d, + 0xe6, 0x7e, 0xe5, 0xbe, 0xc1, 0x5e, 0x36, 0x0d, + 0x20, 0xa9, 0x63, 0x59, 0x90, 0x27, 0x30, 0x14, + 0xe6, 0x78, 0x51, 0xe0, 0x49, 0xff, 0x36, 0x19] + } + + fn nat_pow_addr() -> [G; 32] { + [0xd0, 0x15, 0x98, 0x7b, 0xb1, 0x0d, 0xd2, 0x28, + 0x63, 0xdd, 0xc4, 0x11, 0x60, 0xd2, 0x7d, 0xd3, + 0xd1, 0xea, 0x74, 0xf7, 0x54, 0xfb, 0x24, 0x12, + 0x43, 0x24, 0x36, 0xf3, 0xea, 0x5b, 0x50, 0x71] + } + + fn nat_gcd_addr() -> [G; 32] { + [0xee, 0x8b, 0xa9, 0x21, 0x6b, 0x3f, 0xc8, 0x1e, + 0x79, 0x68, 0x58, 0x6b, 0x43, 0xce, 0xbe, 0xa1, + 0x5d, 0x0e, 0x14, 0x3d, 0x5d, 0x4b, 0x1f, 0xde, + 0x1b, 0xd3, 0x01, 0xa7, 0x40, 0x93, 0xf6, 0x06] + } + + fn nat_mod_addr() -> [G; 32] { + [0x8e, 0xf8, 0xb2, 0x8b, 0x4e, 0x9e, 0x0a, 0x59, + 0xf3, 0x82, 0x2e, 0x24, 0x3e, 0x71, 0x29, 0x9f, + 0x06, 0xbb, 0x6e, 0x7a, 0xfd, 0xb6, 0xcd, 0xd9, + 0x79, 0x76, 0xfb, 0x29, 0x0b, 0x66, 0x7b, 0xb4] + } + + fn nat_div_addr() -> [G; 32] { + [0xfa, 0x58, 0x37, 0x94, 0xc8, 0xef, 0x36, 0x8e, + 0xff, 0x68, 0x81, 0xe8, 0x16, 0xa4, 0xe8, 0x89, + 0xf9, 0x50, 0x61, 0x11, 0x6c, 0xe4, 0x9b, 0x15, + 0x40, 0x56, 0xd3, 0x8f, 0xce, 0x4b, 0x7f, 0x52] + } + + fn nat_land_addr() -> [G; 32] { + [0xa0, 0xdb, 0x90, 0xe6, 0x8e, 0xe3, 0xb7, 0xa1, + 0x66, 0xe3, 0x5f, 0x61, 0x9b, 0xd7, 0xb0, 0x2c, + 0x08, 0x96, 0xef, 0xd6, 0x0e, 0xb4, 0x69, 0x14, + 0xff, 0x3e, 0x4f, 0xb8, 0x12, 0x52, 0xfb, 0x94] + } + + fn nat_lor_addr() -> [G; 32] { + [0xd1, 0x44, 0x19, 0xaa, 0xa4, 0x7a, 0x03, 0xbf, + 0x9a, 0x46, 0x93, 0x8b, 0xf7, 0x2e, 0x40, 0xf9, + 0x6c, 0xab, 0x85, 0x3f, 0x9c, 0xc5, 0x86, 0x98, + 0x79, 0xe7, 0x69, 0x9f, 0x45, 0x17, 0x17, 0x73] + } + + fn nat_xor_addr() -> [G; 32] { + [0xae, 0x68, 0xfd, 0x41, 0x6e, 0xcb, 0x9c, 0xe2, + 0x06, 0x12, 0x27, 0x2d, 0x43, 0xc2, 0xf8, 0x6e, + 0xaf, 0x21, 0xd9, 0x54, 0x7f, 0x56, 0x59, 0x68, + 0x39, 0x1e, 0x9e, 0x12, 0xe3, 0x93, 0x72, 0xdc] + } + + fn nat_shift_left_addr() -> [G; 32] { + [0xf6, 0x06, 0xb7, 0xc2, 0x31, 0x80, 0xa2, 0x0a, + 0xce, 0x60, 0xfe, 0x24, 0xd5, 0x2b, 0xc0, 0xea, + 0x38, 0x54, 0x69, 0x8d, 0x2d, 0x14, 0xda, 0x05, + 0xc4, 0x83, 0x7a, 0x97, 0xe1, 0xab, 0x44, 0x69] + } + + fn nat_shift_right_addr() -> [G; 32] { + [0xd8, 0x60, 0xb5, 0x60, 0x15, 0x6d, 0xa6, 0x8e, + 0x80, 0x1c, 0x8b, 0xd5, 0x1d, 0x89, 0x2e, 0x55, + 0x7f, 0xbe, 0x35, 0x26, 0xd7, 0xd1, 0x98, 0x69, + 0x6f, 0xfb, 0x4d, 0x55, 0x1a, 0xe0, 0x4b, 0xb7] + } + + fn nat_beq_addr() -> [G; 32] { + [0xe8, 0xb7, 0x14, 0x9d, 0x8a, 0x7d, 0x12, 0x41, + 0x4b, 0x06, 0x25, 0x2f, 0x31, 0x8d, 0x40, 0x82, + 0x04, 0x72, 0x3c, 0xa4, 0xc0, 0x2f, 0x3a, 0x38, + 0xed, 0xfa, 0x37, 0x79, 0x24, 0x48, 0xc0, 0xda] + } + + fn nat_ble_addr() -> [G; 32] { + [0x22, 0x75, 0x08, 0x0a, 0x89, 0xc3, 0x27, 0x90, + 0x4e, 0x3a, 0xd1, 0x27, 0xba, 0x44, 0x37, 0x0a, + 0x7c, 0x6c, 0x1b, 0xef, 0x3a, 0xa7, 0x47, 0x92, + 0x07, 0x9f, 0x8f, 0x31, 
0x59, 0x63, 0x69, 0x57] + } + + fn str_addr() -> [G; 32] { + [0xcb, 0x1b, 0xca, 0x7f, 0xc5, 0xdb, 0xb1, 0xbd, + 0xfb, 0xf6, 0x31, 0x9d, 0xf8, 0x9d, 0xa9, 0xfd, + 0xa3, 0xa6, 0x79, 0xd2, 0x25, 0x54, 0xb8, 0xa9, + 0xd5, 0xdd, 0x46, 0x63, 0xc0, 0xa9, 0x73, 0x12] + } + + fn string_utf8_byte_size_addr() -> [G; 32] { + [0x11, 0xea, 0x14, 0x32, 0x56, 0x2b, 0x11, 0x32, + 0x85, 0x3f, 0x17, 0x3f, 0xda, 0x9a, 0xdd, 0x59, + 0x1b, 0x06, 0x06, 0xa8, 0xde, 0xe3, 0x6b, 0x00, + 0xf7, 0x1b, 0xec, 0x29, 0x67, 0xfb, 0x64, 0x47] + } + + fn string_back_addr() -> [G; 32] { + [0x11, 0xba, 0xba, 0x55, 0xcb, 0xdf, 0x36, 0x49, + 0xfc, 0x1b, 0x69, 0x6c, 0x2e, 0x77, 0x56, 0x96, + 0xe9, 0x95, 0xc3, 0x8e, 0xf3, 0x13, 0xcf, 0x27, + 0x65, 0x53, 0xe1, 0x89, 0x8d, 0xa4, 0x5e, 0x0f] + } + + fn string_legacy_back_addr() -> [G; 32] { + [0x99, 0x8c, 0x3e, 0x64, 0x0c, 0x8b, 0x3a, 0x35, + 0xc6, 0x27, 0x20, 0x0d, 0xcd, 0x69, 0x4f, 0x67, + 0xf8, 0xb1, 0xd4, 0x1e, 0x68, 0x76, 0x0c, 0x90, + 0xe3, 0x61, 0xda, 0x24, 0x73, 0x4d, 0x39, 0xbc] + } + + fn string_to_byte_array_addr() -> [G; 32] { + [0x65, 0xf6, 0x44, 0x28, 0x6b, 0xc4, 0x94, 0x64, + 0xcc, 0x7a, 0x36, 0xb7, 0xd7, 0x95, 0x2f, 0x85, + 0x43, 0xab, 0x67, 0x56, 0x4c, 0xd5, 0x09, 0xee, + 0x87, 0x8a, 0x95, 0x37, 0x56, 0x09, 0x06, 0x9b] + } + + fn byte_array_empty_addr() -> [G; 32] { + [0xd9, 0x74, 0x17, 0xc4, 0x92, 0x06, 0xc6, 0x1f, + 0xe2, 0x8c, 0xbb, 0x7a, 0x0b, 0x60, 0x95, 0xf7, + 0x22, 0xcd, 0xfb, 0xc2, 0x13, 0xe0, 0x34, 0xaa, + 0x59, 0xde, 0x51, 0xb9, 0x21, 0x8a, 0xf0, 0x74] + } + + fn char_of_nat_addr() -> [G; 32] { + [0x7a, 0x57, 0x54, 0x38, 0x6b, 0x30, 0xbb, 0x86, + 0xf0, 0xb6, 0xf7, 0x0f, 0xd3, 0x68, 0xbb, 0x50, + 0xe6, 0x03, 0x27, 0x3a, 0x50, 0xad, 0x79, 0xd8, + 0xc1, 0x7f, 0xc3, 0xcb, 0x59, 0xf8, 0x0f, 0xac] + } + + fn char_type_addr() -> [G; 32] { + [0x38, 0xaa, 0x12, 0x05, 0x9f, 0xad, 0x3a, 0xfa, + 0x1e, 0x1e, 0x87, 0x40, 0xdc, 0x94, 0x70, 0xa4, + 0x7c, 0x26, 0x98, 0x63, 0x50, 0xf6, 0xcb, 0x3b, + 0xea, 0x1f, 0xae, 0x12, 0x76, 0xd7, 0xb5, 0xf1] + } + + fn string_of_list_addr() -> [G; 32] { + [0x63, 0xd9, 0x5a, 0x0f, 0xd6, 0xa1, 0x14, 0x43, + 0x48, 0xd0, 0xf2, 0x0e, 0x20, 0xcc, 0x5c, 0x3a, + 0xf6, 0x1a, 0xc9, 0x55, 0x92, 0x3f, 0x45, 0xf4, + 0x2a, 0x78, 0x2d, 0xe9, 0x33, 0xaa, 0xd5, 0x94] + } + + fn list_nil_addr() -> [G; 32] { + [0x0e, 0xbe, 0x34, 0x5d, 0xc4, 0x69, 0x17, 0xc8, + 0x24, 0xb6, 0xc3, 0xf6, 0xc4, 0x2b, 0x10, 0x1f, + 0x2a, 0xc8, 0xc0, 0xe2, 0xc9, 0x9f, 0x03, 0x3a, + 0x0e, 0xe3, 0xc6, 0x0a, 0xcb, 0x9c, 0xd8, 0x4d] + } + + fn list_cons_addr() -> [G; 32] { + [0xf7, 0x98, 0x42, 0xf1, 0x02, 0x06, 0x59, 0x89, + 0x29, 0xe6, 0xba, 0x60, 0xce, 0x3e, 0xba, 0xa0, + 0x0d, 0x11, 0xf2, 0x01, 0xc9, 0x9e, 0x80, 0x28, + 0x5f, 0x46, 0xcc, 0x0e, 0x90, 0x93, 0x28, 0x32] + } + + fn bool_true_addr() -> [G; 32] { + [0x42, 0x0d, 0xea, 0xd2, 0x16, 0x8a, 0xbd, 0x16, + 0xa7, 0x05, 0x0e, 0xdf, 0xd8, 0xe1, 0x7d, 0x45, + 0x15, 0x52, 0x37, 0xd3, 0x11, 0x87, 0x82, 0xd0, + 0xe6, 0x8b, 0x6d, 0xe8, 0x77, 0x42, 0xcb, 0x8d] + } + + fn bool_false_addr() -> [G; 32] { + [0xc1, 0x27, 0xf8, 0x9f, 0x92, 0xe0, 0x48, 0x1f, + 0x7a, 0x3e, 0x06, 0x31, 0xc5, 0x61, 0x5f, 0xe7, + 0xf6, 0xcb, 0xbf, 0x43, 0x9d, 0x5f, 0xd7, 0xeb, + 0xa4, 0x00, 0xfb, 0x06, 0x03, 0xae, 0xdf, 0x2f] + } + + -- Mirror: `u64_add` in ByteStream.lean expanded to expose final + -- carry. Used by klimbs_succ / klimbs_add. + -- + -- TODO: delete this once `ByteStream.lean::u64_add` is patched to + -- return `(U64, G)` and existing call sites updated. 
Tracking this + -- as a follow-up because the patch ripples beyond the kernel + -- (Blake3 / IxonSerialize / ByteStream itself). + fn u64_add_with_carry(a: U64, b: U64) -> (U64, G) { + let [a0, a1, a2, a3, a4, a5, a6, a7] = a; + let [b0, b1, b2, b3, b4, b5, b6, b7] = b; + let (s0, c1) = u8_add(a0, b0); + let (t1, o1) = u8_add(a1, b1); + let (s1, c1a) = u8_add(t1, c1); + let c2 = u8_xor(o1, c1a); + let (t2, o2) = u8_add(a2, b2); + let (s2, c2a) = u8_add(t2, c2); + let c3 = u8_xor(o2, c2a); + let (t3, o3) = u8_add(a3, b3); + let (s3, c3a) = u8_add(t3, c3); + let c4 = u8_xor(o3, c3a); + let (t4, o4) = u8_add(a4, b4); + let (s4, c4a) = u8_add(t4, c4); + let c5 = u8_xor(o4, c4a); + let (t5, o5) = u8_add(a5, b5); + let (s5, c5a) = u8_add(t5, c5); + let c6 = u8_xor(o5, c5a); + let (t6, o6) = u8_add(a6, b6); + let (s6, c6a) = u8_add(t6, c6); + let c7 = u8_xor(o6, c6a); + let (t7, o7) = u8_add(a7, b7); + let (s7, c7a) = u8_add(t7, c7); + let final_carry = u8_xor(o7, c7a); + ([s0, s1, s2, s3, s4, s5, s6, s7], final_carry) + } + + -- Mirror: BigUint::succ. Increment a KLimbs by 1; ripple carry. + fn klimbs_succ(n: KLimbs) -> KLimbs { + match load(n) { + ListNode.Nil => + store(ListNode.Cons([1, 0, 0, 0, 0, 0, 0, 0], store(ListNode.Nil))), + ListNode.Cons(limb, rest) => + let pair = u64_add_with_carry(limb, [1, 0, 0, 0, 0, 0, 0, 0]); + match pair { + (sum, carry) => + match carry { + 0 => store(ListNode.Cons(sum, rest)), + _ => store(ListNode.Cons(sum, klimbs_succ(rest))), + }, + }, + } + } + + -- Mirror: BigUint::add. Limb-wise add with ripple carry. + -- KLimbs are little-endian; head = least significant. + -- Asymmetric lengths handled by terminating on shorter list and + -- propagating carry into the longer. + fn klimbs_add_carry(a: KLimbs, b: KLimbs, carry: G) -> KLimbs { + match load(a) { + ListNode.Nil => + match carry { + 0 => b, + _ => klimbs_succ(b), + }, + ListNode.Cons(la, ra) => + match load(b) { + ListNode.Nil => + match carry { + 0 => a, + _ => klimbs_succ(a), + }, + ListNode.Cons(lb, rb) => + let pair1 = u64_add_with_carry(la, lb); + match pair1 { + (sum1, carry1) => + let pair2 = u64_add_with_carry(sum1, [carry, 0, 0, 0, 0, 0, 0, 0]); + match pair2 { + (sum2, carry2) => + let total_carry = g_or(carry1, carry2); + store(ListNode.Cons(sum2, klimbs_add_carry(ra, rb, total_carry))), + }, + }, + }, + } + } + + fn klimbs_add(a: KLimbs, b: KLimbs) -> KLimbs { + klimbs_add_carry(a, b, 0) + } + + -- Mirror: byte-wise u64_sub with explicit final borrow. + fn u64_sub_with_borrow(a: U64, b: U64) -> (U64, G) { + let [a0, a1, a2, a3, a4, a5, a6, a7] = a; + let [b0, b1, b2, b3, b4, b5, b6, b7] = b; + let (r0, br1) = u8_sub(a0, b0); + let (t1, u_t1) = u8_sub(a1, b1); + let (r1, u_r1) = u8_sub(t1, br1); + let br2 = g_or(u_t1, u_r1); + let (t2, u_t2) = u8_sub(a2, b2); + let (r2, u_r2) = u8_sub(t2, br2); + let br3 = g_or(u_t2, u_r2); + let (t3, u_t3) = u8_sub(a3, b3); + let (r3, u_r3) = u8_sub(t3, br3); + let br4 = g_or(u_t3, u_r3); + let (t4, u_t4) = u8_sub(a4, b4); + let (r4, u_r4) = u8_sub(t4, br4); + let br5 = g_or(u_t4, u_r4); + let (t5, u_t5) = u8_sub(a5, b5); + let (r5, u_r5) = u8_sub(t5, br5); + let br6 = g_or(u_t5, u_r5); + let (t6, u_t6) = u8_sub(a6, b6); + let (r6, u_r6) = u8_sub(t6, br6); + let br7 = g_or(u_t6, u_r6); + let (t7, u_t7) = u8_sub(a7, b7); + let (r7, u_r7) = u8_sub(t7, br7); + let final_borrow = g_or(u_t7, u_r7); + ([r0, r1, r2, r3, r4, r5, r6, r7], final_borrow) + } + + -- Mirror: BigUint::sub with saturating-at-zero (Lean Nat.sub semantics). + -- a - b clamped to 0 when b > a. 
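+  -- Worked single-limb example: klimbs_sub([5], [3]) = [2], while
+  -- klimbs_sub([3], [5]) sets the final borrow and collapses to Nil (= 0).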
+ -- + -- Walk both lists in parallel limb-by-limb with borrow ripple. If the + -- final borrow is 1 OR `b` has more limbs than `a`, return 0 (Nil). + -- Otherwise normalize trailing zero limbs. + fn klimbs_sub_borrow(a: KLimbs, b: KLimbs, borrow: G) -> (KLimbs, G) { + match load(a) { + ListNode.Nil => + match load(b) { + ListNode.Nil => + -- 0 - 0 - borrow: borrow=1 → underflow. + (store(ListNode.Nil), borrow), + ListNode.Cons(_, _) => + -- 0 - non-empty: definite underflow (b > 0 + carries). + (store(ListNode.Nil), 1), + }, + ListNode.Cons(la, ra) => + match load(b) { + ListNode.Nil => + -- a - 0 - borrow: subtract borrow from la, propagate. + match borrow { + 0 => (a, 0), + _ => + let pair = u64_sub_with_borrow(la, [1, 0, 0, 0, 0, 0, 0, 0]); + match pair { + (diff, br) => + let pair2 = klimbs_sub_borrow(ra, store(ListNode.Nil), br); + match pair2 { + (rest_res, br2) => + (store(ListNode.Cons(diff, rest_res)), br2), + }, + }, + }, + ListNode.Cons(lb, rb) => + let pair1 = u64_sub_with_borrow(la, lb); + match pair1 { + (sum1, br1) => + let pair2 = u64_sub_with_borrow(sum1, [borrow, 0, 0, 0, 0, 0, 0, 0]); + match pair2 { + (sum2, br2) => + let total = g_or(br1, br2); + let rec_pair = klimbs_sub_borrow(ra, rb, total); + match rec_pair { + (rest_res, br_final) => + (store(ListNode.Cons(sum2, rest_res)), br_final), + }, + }, + }, + }, + } + } + + -- Strip trailing zero limbs (canonicalize `[k, 0, 0]` → `[k]`). + fn klimbs_normalize(n: KLimbs) -> KLimbs { + match load(n) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(limb, rest) => + let normalized_rest = klimbs_normalize(rest); + match load(normalized_rest) { + ListNode.Nil => + match u64_is_zero(limb) { + 1 => store(ListNode.Nil), + 0 => store(ListNode.Cons(limb, store(ListNode.Nil))), + }, + _ => store(ListNode.Cons(limb, normalized_rest)), + }, + } + } + + fn klimbs_sub(a: KLimbs, b: KLimbs) -> KLimbs { + let pair = klimbs_sub_borrow(a, b, 0); + match pair { + (result, borrow) => + match borrow { + 1 => store(ListNode.Nil), + 0 => klimbs_normalize(result), + }, + } + } + + -- Mirror: Nat.le. Returns 1 if a ≤ b, 0 otherwise. + -- Uses saturating sub: a ≤ b iff (a - b) saturates to 0. + fn klimbs_le(a: KLimbs, b: KLimbs) -> G { + let diff = klimbs_sub(a, b); + match load(diff) { + ListNode.Nil => 1, + _ => 0, + } + } + + -- Mirror: Nat.pred. Saturating decrement; pred(0) = 0. + fn klimbs_dec(a: KLimbs) -> KLimbs { + let one = store(ListNode.Cons([1, 0, 0, 0, 0, 0, 0, 0], store(ListNode.Nil))); + klimbs_sub(a, one) + } + + -- TODO(u8_mul_gadget): replace `divmod_256` + byte-schoolbook `u64_mul` + -- with a proper u8_mul Aiur gadget once it lands. Tracking on a separate + -- branch. + -- Returns (remainder, quotient) where remainder = x mod 256, quotient = x / 256. + fn divmod_256(x: G, q: G) -> (G, G) { + match u32_less_than(x, 256) { + 1 => (x, q), + 0 => divmod_256(x - 256, q + 1), + } + } + + -- u64×u64 → (lo: U64, hi: U64) via byte schoolbook. 
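+  -- Worked byte-level example (little-endian): a = 258 = [2,1,0,..],
+  -- b = 259 = [3,1,0,..] gives pp0 = 6, pp1 = 2*1 + 1*3 = 5, pp2 = 1,
+  -- all higher pps 0; no divmod_256 carry fires, so lo = [6,5,1,0,..]
+  -- = 0x010506 = 66822 = 258 * 259 and hi = 0.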
+ fn u64_mul(a: U64, b: U64) -> (U64, U64) { + let [a0, a1, a2, a3, a4, a5, a6, a7] = a; + let [b0, b1, b2, b3, b4, b5, b6, b7] = b; + let pp0 = a0*b0; + let pp1 = a0*b1 + a1*b0; + let pp2 = a0*b2 + a1*b1 + a2*b0; + let pp3 = a0*b3 + a1*b2 + a2*b1 + a3*b0; + let pp4 = a0*b4 + a1*b3 + a2*b2 + a3*b1 + a4*b0; + let pp5 = a0*b5 + a1*b4 + a2*b3 + a3*b2 + a4*b1 + a5*b0; + let pp6 = a0*b6 + a1*b5 + a2*b4 + a3*b3 + a4*b2 + a5*b1 + a6*b0; + let pp7 = a0*b7 + a1*b6 + a2*b5 + a3*b4 + a4*b3 + a5*b2 + a6*b1 + a7*b0; + let pp8 = a1*b7 + a2*b6 + a3*b5 + a4*b4 + a5*b3 + a6*b2 + a7*b1; + let pp9 = a2*b7 + a3*b6 + a4*b5 + a5*b4 + a6*b3 + a7*b2; + let pp10 = a3*b7 + a4*b6 + a5*b5 + a6*b4 + a7*b3; + let pp11 = a4*b7 + a5*b6 + a6*b5 + a7*b4; + let pp12 = a5*b7 + a6*b6 + a7*b5; + let pp13 = a6*b7 + a7*b6; + let pp14 = a7*b7; + match divmod_256(pp0, 0) { + (r0, c1) => + match divmod_256(pp1 + c1, 0) { + (r1, c2) => + match divmod_256(pp2 + c2, 0) { + (r2, c3) => + match divmod_256(pp3 + c3, 0) { + (r3, c4) => + match divmod_256(pp4 + c4, 0) { + (r4, c5) => + match divmod_256(pp5 + c5, 0) { + (r5, c6) => + match divmod_256(pp6 + c6, 0) { + (r6, c7) => + match divmod_256(pp7 + c7, 0) { + (r7, c8) => + match divmod_256(pp8 + c8, 0) { + (r8, c9) => + match divmod_256(pp9 + c9, 0) { + (r9, c10) => + match divmod_256(pp10 + c10, 0) { + (r10, c11) => + match divmod_256(pp11 + c11, 0) { + (r11, c12) => + match divmod_256(pp12 + c12, 0) { + (r12, c13) => + match divmod_256(pp13 + c13, 0) { + (r13, c14) => + match divmod_256(pp14 + c14, 0) { + (r14, c15) => + match divmod_256(c15, 0) { + (r15, _) => + ([r0, r1, r2, r3, r4, r5, r6, r7], + [r8, r9, r10, r11, r12, r13, r14, r15]), + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + } + } + + -- Mirror: BigUint::mul. Limb-wise schoolbook multiply. + fn klimbs_mul(a: KLimbs, b: KLimbs) -> KLimbs { + klimbs_mul_outer(a, b, store(ListNode.Nil), 0) + } + + fn klimbs_mul_outer(a: KLimbs, b: KLimbs, acc: KLimbs, shift: G) -> KLimbs { + match load(a) { + ListNode.Nil => acc, + ListNode.Cons(a_limb, rest) => + let prod = klimbs_mul_single(a_limb, b, [0, 0, 0, 0, 0, 0, 0, 0], store(ListNode.Nil)); + let shifted = klimbs_shl_limbs(prod, shift); + let new_acc = klimbs_add(acc, shifted); + klimbs_mul_outer(rest, b, new_acc, shift + 1), + } + } + + fn klimbs_mul_single(a_limb: U64, b: KLimbs, carry: U64, acc: KLimbs) -> KLimbs { + match load(b) { + ListNode.Nil => + match u64_is_zero(carry) { + 1 => acc, + 0 => list_snoc(acc, carry), + }, + ListNode.Cons(b_limb, rest) => + match u64_mul(a_limb, b_limb) { + (lo, hi) => + match u64_add_with_carry(lo, carry) { + (sum, carry_out) => + match u64_add_with_carry(hi, [carry_out, 0, 0, 0, 0, 0, 0, 0]) { + (new_carry, _) => + let new_acc = list_snoc(acc, sum); + klimbs_mul_single(a_limb, rest, new_carry, new_acc), + }, + }, + }, + } + } + + fn klimbs_shl_limbs(x: KLimbs, shift: G) -> KLimbs { + match shift { + 0 => x, + _ => + let prepended = store(ListNode.Cons([0, 0, 0, 0, 0, 0, 0, 0], x)); + klimbs_shl_limbs(prepended, shift - 1), + } + } + + fn klimbs_is_zero(x: KLimbs) -> G { + match load(klimbs_normalize(x)) { + ListNode.Nil => 1, + _ => 0, + } + } + + -- Mirror: BigUint::div_mod via repeated subtraction. Returns (quotient, + -- remainder). For divisor 0, follows Lean Nat semantics: a / 0 = 0, + -- a % 0 = a. 
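+  -- Worked example: klimbs_div_mod(7, 3) subtracts 7 → 4 → 1, bumping
+  -- the quotient each round, and stops once klimbs_le(3, 1) = 0,
+  -- returning (2, 1); klimbs_div_mod(7, 0) short-circuits to (0, 7).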
+ fn klimbs_div_mod(a: KLimbs, b: KLimbs) -> (KLimbs, KLimbs) { + match klimbs_is_zero(b) { + 1 => (store(ListNode.Nil), a), + 0 => klimbs_div_mod_go(a, b, store(ListNode.Nil)), + } + } + + fn klimbs_div_mod_go(a: KLimbs, b: KLimbs, q: KLimbs) -> (KLimbs, KLimbs) { + match klimbs_le(b, a) { + 0 => (q, a), + 1 => klimbs_div_mod_go(klimbs_sub(a, b), b, klimbs_succ(q)), + } + } + + fn klimbs_div(a: KLimbs, b: KLimbs) -> KLimbs { + match klimbs_div_mod(a, b) { (q, _) => q, } + } + + fn klimbs_mod(a: KLimbs, b: KLimbs) -> KLimbs { + match klimbs_div_mod(a, b) { (_, r) => r, } + } + + fn klimbs_gcd(a: KLimbs, b: KLimbs) -> KLimbs { + match klimbs_is_zero(b) { + 1 => a, + 0 => klimbs_gcd(b, klimbs_mod(a, b)), + } + } + + fn klimbs_pow(base: KLimbs, exp: KLimbs) -> KLimbs { + match klimbs_is_zero(exp) { + 1 => store(ListNode.Cons([1, 0, 0, 0, 0, 0, 0, 0], store(ListNode.Nil))), + 0 => klimbs_mul(base, klimbs_pow(base, klimbs_dec(exp))), + } + } + + -- Byte-wise AND on two U64 limbs. + fn u64_and(a: U64, b: U64) -> U64 { + let [a0, a1, a2, a3, a4, a5, a6, a7] = a; + let [b0, b1, b2, b3, b4, b5, b6, b7] = b; + [u8_and(a0, b0), u8_and(a1, b1), u8_and(a2, b2), u8_and(a3, b3), + u8_and(a4, b4), u8_and(a5, b5), u8_and(a6, b6), u8_and(a7, b7)] + } + + fn u64_or(a: U64, b: U64) -> U64 { + let [a0, a1, a2, a3, a4, a5, a6, a7] = a; + let [b0, b1, b2, b3, b4, b5, b6, b7] = b; + [u8_or(a0, b0), u8_or(a1, b1), u8_or(a2, b2), u8_or(a3, b3), + u8_or(a4, b4), u8_or(a5, b5), u8_or(a6, b6), u8_or(a7, b7)] + } + + fn u64_xor_kbits(a: U64, b: U64) -> U64 { + let [a0, a1, a2, a3, a4, a5, a6, a7] = a; + let [b0, b1, b2, b3, b4, b5, b6, b7] = b; + [u8_xor(a0, b0), u8_xor(a1, b1), u8_xor(a2, b2), u8_xor(a3, b3), + u8_xor(a4, b4), u8_xor(a5, b5), u8_xor(a6, b6), u8_xor(a7, b7)] + } + + -- Mirror: BigUint::bitand. Walks parallel limbs; result length = min(len(a), len(b)). + fn klimbs_land(a: KLimbs, b: KLimbs) -> KLimbs { + match load(a) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(la, ra) => + match load(b) { + ListNode.Nil => store(ListNode.Nil), + ListNode.Cons(lb, rb) => + store(ListNode.Cons(u64_and(la, lb), klimbs_land(ra, rb))), + }, + } + } + + -- Mirror: BigUint::bitor. Result length = max(len(a), len(b)); shorter is zero-padded. + fn klimbs_lor(a: KLimbs, b: KLimbs) -> KLimbs { + match load(a) { + ListNode.Nil => b, + ListNode.Cons(la, ra) => + match load(b) { + ListNode.Nil => a, + ListNode.Cons(lb, rb) => + store(ListNode.Cons(u64_or(la, lb), klimbs_lor(ra, rb))), + }, + } + } + + -- Mirror: BigUint::bitxor. Result length = max(len(a), len(b)); zero-padded shorter. + fn klimbs_xor_op(a: KLimbs, b: KLimbs) -> KLimbs { + match load(a) { + ListNode.Nil => b, + ListNode.Cons(la, ra) => + match load(b) { + ListNode.Nil => a, + ListNode.Cons(lb, rb) => + store(ListNode.Cons(u64_xor_kbits(la, lb), klimbs_xor_op(ra, rb))), + }, + } + } + + -- Shift left by n bits via repeated multiplication by 2. + fn klimbs_shl(a: KLimbs, n: KLimbs) -> KLimbs { + let two = store(ListNode.Cons([2, 0, 0, 0, 0, 0, 0, 0], store(ListNode.Nil))); + klimbs_mul(a, klimbs_pow(two, n)) + } + + -- Shift right by n bits via integer division by 2^n. 
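+  -- e.g. klimbs_shr(13, 2) = 13 / 2^2 = 3: the two shifted-out low bits
+  -- are discarded by the integer division, mirroring Nat.shiftRight.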
+ fn klimbs_shr(a: KLimbs, n: KLimbs) -> KLimbs { + let two = store(ListNode.Cons([2, 0, 0, 0, 0, 0, 0, 0], store(ListNode.Nil))); + klimbs_div(a, klimbs_pow(two, n)) + } + + -- ============================================================================ + -- Lit(Nat) extraction + dispatch + -- ============================================================================ + + -- Find target's positional idx in addrs. Returns (1, idx) if found, + -- (0, _) if not. Used by Nat literal coercion in iota. + fn find_addr_idx_safe(target: [G; 32], addrs: List‹[G; 32]›, i: G) -> (G, G) { + match load(addrs) { + ListNode.Nil => (0, 0), + ListNode.Cons(a, rest) => + match address_eq(target, a) { + 1 => (1, i), + 0 => find_addr_idx_safe(target, rest, i + 1), + }, + } + } + + -- Convert a KLimbs n into a chain `App(Const(succ), App(Const(succ), + -- ... Const(zero)))` for n calls of succ. Used by nat-literal-to-ctor + -- coercion in iota. + fn klimbs_to_ctor_form(n: KLimbs, zero_idx: G, succ_idx: G) -> KExpr { + match load(n) { + ListNode.Nil => + store(KExprNode.Const(zero_idx, store(ListNode.Nil))), + ListNode.Cons(_, _) => + let dec = klimbs_dec(n); + let pred = klimbs_to_ctor_form(dec, zero_idx, succ_idx); + let succ_const = store(KExprNode.Const(succ_idx, store(ListNode.Nil))); + store(KExprNode.App(succ_const, pred)), + } + } + + -- If `e` is `Lit(Nat(klimbs))` and addrs contains both Nat.zero and + -- Nat.succ, expand to ctor chain. Else return `e` unchanged. Mirror: + -- src/ix/kernel/whnf.rs:929-946 nat_to_constructor. + fn nat_lit_to_ctor_or_self(e: KExpr, addrs: List‹[G; 32]›) -> KExpr { + match load(e) { + KExprNode.Lit(lit) => + match lit { + KLiteral.Nat(klimbs) => + let z = find_addr_idx_safe(nat_zero_addr(), addrs, 0); + let s = find_addr_idx_safe(nat_succ_addr(), addrs, 0); + match z { + (1, zero_idx) => + match s { + (1, succ_idx) => klimbs_to_ctor_form(klimbs, zero_idx, succ_idx), + _ => e, + }, + _ => e, + }, + _ => e, + }, + _ => e, + } + } + + -- Mirror: src/ix/kernel/whnf.rs::extract_nat_value. Accepts: + -- * `Lit(Nat klimbs)` → klimbs + -- * `Const(Nat.zero)` → 0 + -- * `App(Const(Nat.succ), x)` → 1 + extract(x) + fn try_extract_nat(e: KExpr, addrs: List‹[G; 32]›) -> (G, KLimbs) { + match load(e) { + KExprNode.Lit(lit) => + match lit { + KLiteral.Nat(limbs) => (1, limbs), + _ => (0, store(ListNode.Nil)), + }, + KExprNode.Const(idx, _) => + let const_addr = list_lookup(addrs, idx); + match address_eq(const_addr, nat_zero_addr()) { + 1 => (1, store(ListNode.Nil)), + 0 => (0, store(ListNode.Nil)), + }, + KExprNode.App(f, a) => + match load(f) { + KExprNode.Const(idx, _) => + let head_addr_ = list_lookup(addrs, idx); + match address_eq(head_addr_, nat_succ_addr()) { + 1 => + match try_extract_nat(a, addrs) { + (1, pred_limbs) => (1, klimbs_succ(pred_limbs)), + _ => (0, store(ListNode.Nil)), + }, + 0 => (0, store(ListNode.Nil)), + }, + _ => (0, store(ListNode.Nil)), + }, + _ => (0, store(ListNode.Nil)), + } + } + + -- Wrap a KLimbs in `Lit(Nat(...))`. + fn mk_nat_lit(n: KLimbs) -> KExpr { + store(KExprNode.Lit(KLiteral.Nat(n))) + } + + -- Mirror: src/ix/kernel/whnf.rs:500-700 Nat-on-literals dispatch. + -- Address-keyed (no positional prims): given the head Const's blake3 + -- address and the unreduced spine, fold a Nat primitive op when both + -- required args are literals. Returns (1, reduced) on hit, (0, _) on miss. 
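+  -- Example: a head resolving to nat_add_addr over spine
+  -- [Lit(Nat 2), Lit(Nat 3)] folds to (1, Lit(Nat 5)); if either arg
+  -- whnfs to something try_extract_nat rejects, the (0, _) miss lets
+  -- the caller fall back to ordinary unfolding of the head.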
+ fn try_nat_dispatch(head_addr: [G; 32], spine: List‹KExpr›, + types: List‹KExpr›, top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + let spine_len = list_length(spine); + let is_pred = address_eq(head_addr, nat_pred_addr()); + let is_succ = address_eq(head_addr, nat_succ_addr()); + match is_succ { + 1 => + -- Mirror: whnf.rs:1789-1822 try_reduce_nat_succ_iter. Single arg; + -- whnf, fold to Lit(n+1) on hit. + match u32_less_than(spine_len, 1) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let a0_w = whnf(list_lookup(spine, 0), types, top, addrs); + match try_extract_nat(a0_w, addrs) { + (1, na) => + let post = list_drop(spine, 1); + (1, apply_spine(mk_nat_lit(klimbs_succ(na)), post)), + _ => (0, store(KExprNode.BVar(0))), + }, + }, + 0 => + match is_pred { + 1 => + match u32_less_than(spine_len, 1) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let a0_w = whnf(list_lookup(spine, 0), types, top, addrs); + match try_extract_nat(a0_w, addrs) { + (1, na) => + let post = list_drop(spine, 1); + (1, apply_spine(mk_nat_lit(klimbs_dec(na)), post)), + _ => (0, store(KExprNode.BVar(0))), + }, + }, + 0 => + -- Binary ops: require 2 args. + match u32_less_than(spine_len, 2) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let a0_w = whnf(list_lookup(spine, 0), types, top, addrs); + let a1_w = whnf(list_lookup(spine, 1), types, top, addrs); + let pa = try_extract_nat(a0_w, addrs); + let pb = try_extract_nat(a1_w, addrs); + match pa { + (1, na) => + match pb { + (1, nb) => + match try_nat_binop_addr(head_addr, na, nb, addrs) { + (1, result) => + let post = list_drop(spine, 2); + (1, apply_spine(result, post)), + (0, _) => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + }, + }, + } + } + + -- Dispatch a Nat binary op by head address. Bool result for beq/ble + -- wraps via Bool.true / Bool.false ctors (mk_bool). 
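+  -- Note: every Nat-valued hit passes through klimbs_normalize, so the
+  -- resulting literal stays canonical: e.g. Nat.sub on equal operands
+  -- yields Lit(Nat Nil) (zero) rather than a padded [0] limb.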
+ fn try_nat_binop_addr(head_addr: [G; 32], a: KLimbs, b: KLimbs, + addrs: List‹[G; 32]›) -> (G, KExpr) { + match address_eq(head_addr, nat_add_addr()) { + 1 => (1, mk_nat_lit(klimbs_normalize(klimbs_add(a, b)))), + 0 => + match address_eq(head_addr, nat_sub_addr()) { + 1 => (1, mk_nat_lit(klimbs_normalize(klimbs_sub(a, b)))), + 0 => + match address_eq(head_addr, nat_mul_addr()) { + 1 => (1, mk_nat_lit(klimbs_normalize(klimbs_mul(a, b)))), + 0 => + match address_eq(head_addr, nat_div_addr()) { + 1 => (1, mk_nat_lit(klimbs_normalize(klimbs_div(a, b)))), + 0 => + match address_eq(head_addr, nat_mod_addr()) { + 1 => (1, mk_nat_lit(klimbs_normalize(klimbs_mod(a, b)))), + 0 => + match address_eq(head_addr, nat_gcd_addr()) { + 1 => (1, mk_nat_lit(klimbs_normalize(klimbs_gcd(a, b)))), + 0 => + match address_eq(head_addr, nat_pow_addr()) { + 1 => (1, mk_nat_lit(klimbs_normalize(klimbs_pow(a, b)))), + 0 => + match address_eq(head_addr, nat_land_addr()) { + 1 => (1, mk_nat_lit(klimbs_normalize(klimbs_land(a, b)))), + 0 => + match address_eq(head_addr, nat_lor_addr()) { + 1 => (1, mk_nat_lit(klimbs_normalize(klimbs_lor(a, b)))), + 0 => + match address_eq(head_addr, nat_xor_addr()) { + 1 => (1, mk_nat_lit(klimbs_normalize(klimbs_xor_op(a, b)))), + 0 => + match address_eq(head_addr, nat_shift_left_addr()) { + 1 => (1, mk_nat_lit(klimbs_normalize(klimbs_shl(a, b)))), + 0 => + match address_eq(head_addr, nat_shift_right_addr()) { + 1 => (1, mk_nat_lit(klimbs_normalize(klimbs_shr(a, b)))), + 0 => + match address_eq(head_addr, nat_beq_addr()) { + 1 => (1, mk_bool(klimbs_eq(a, b), addrs)), + 0 => + match address_eq(head_addr, nat_ble_addr()) { + 1 => (1, mk_bool(klimbs_le(a, b), addrs)), + 0 => (0, store(KExprNode.BVar(0))), + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + } + } + + -- Encode a boolean as `Const(Bool.true)` / `Const(Bool.false)` when + -- those ctors are present in addrs. Falls back to `Lit(Nat(0|1))` + -- when not (e.g., kernel const list lacks Bool — should not happen + -- in practice for typed beq/ble dispatch). + fn mk_bool(g: G, addrs: List‹[G; 32]›) -> KExpr { + let target = match g { + 0 => bool_false_addr(), + _ => bool_true_addr(), + }; + let pair = find_addr_idx_safe(target, addrs, 0); + match pair { + (1, idx) => store(KExprNode.Const(idx, store(ListNode.Nil))), + (0, _) => + match g { + 0 => store(KExprNode.Lit(KLiteral.Nat(store(ListNode.Nil)))), + _ => + store(KExprNode.Lit(KLiteral.Nat( + store(ListNode.Cons([1, 0, 0, 0, 0, 0, 0, 0], store(ListNode.Nil)))))), + }, + } + } + + -- Single-limb KLimbs to G value (low 4 bytes). Used for bitvec width + -- where width ≤ 2^24 fits in 24 bits. Returns 0 for empty KLimbs. + fn klimbs_lo_g(n: KLimbs) -> G { + match load(n) { + ListNode.Nil => 0, + ListNode.Cons(limb, _) => + let [b0, b1, b2, b3, _, _, _, _] = limb; + b0 + 256*b1 + 65536*b2 + 16777216*b3, + } + } + + -- Mirror: src/ix/kernel/whnf.rs:2546-2579 fn try_reduce_bitvec_to_nat. + -- `BitVec.toNat width (BitVec.ofNat width' n)` → `Lit(Nat (n mod 2^width))`. + -- Width must be ≤ 2^24 to bound klimbs_pow cost. + fn try_reduce_bit_vec_to_nat(spine: List‹KExpr›, types: List‹KExpr›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + match u32_less_than(list_length(spine), 2) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let width_e = list_lookup(spine, 0); + let val_e = list_lookup(spine, 1); + let val_w = whnf(val_e, types, top, addrs); + -- Mirror: src/ix/kernel/whnf.rs:2581-2602 bitvec_of_nat_args. 
+ -- Accepts both `BitVec.ofNat(W, N)` and `OfNat.ofNat(BitVec W, N)`. + let pair = bitvec_of_nat_args(val_w, addrs); + match pair { + (0, _, _) => (0, store(KExprNode.BVar(0))), + (1, val_width, n_e) => + let n_w = whnf(n_e, types, top, addrs); + let np = try_extract_nat(n_w, addrs); + match np { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, n_klimbs) => + let width_w = whnf(val_width, types, top, addrs); + let wp = try_extract_nat(width_w, addrs); + match wp { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, w_klimbs) => + let two = store(ListNode.Cons([2, 0, 0, 0, 0, 0, 0, 0], store(ListNode.Nil))); + let modulus = klimbs_pow(two, w_klimbs); + let result = klimbs_mod(n_klimbs, modulus); + (1, mk_nat_lit(result)), + }, + }, + }, + } + } + + -- Mirror: src/ix/kernel/whnf.rs:2581-2602 fn bitvec_of_nat_args. + -- Returns (1, width_e, n_e) if `e` is `BitVec.ofNat W N` or + -- `OfNat.ofNat (BitVec W) N _inst`. Else (0, _, _). + fn bitvec_of_nat_args(e: KExpr, addrs: List‹[G; 32]›) -> (G, KExpr, KExpr) { + match collect_spine_simple(e) { + (head, args) => + match load(head) { + KExprNode.Const(idx, _) => + let head_addr = list_lookup(addrs, idx); + match address_eq(head_addr, bit_vec_of_nat_addr()) { + 1 => + match list_length(args) - 2 { + 0 => (1, list_lookup(args, 0), list_lookup(args, 1)), + _ => (0, store(KExprNode.BVar(0)), store(KExprNode.BVar(0))), + }, + 0 => + let of_nat_addr = [0x8f, 0xdc, 0x86, 0x9f, 0x7b, 0x7a, 0xa2, 0xb7, + 0xb5, 0x92, 0x9b, 0xa2, 0x42, 0xed, 0x89, 0x9c, + 0xe2, 0xd7, 0xc5, 0xd4, 0x2d, 0xf1, 0xd4, 0xe2, + 0x39, 0x36, 0x90, 0xcf, 0xa8, 0x5e, 0x94, 0xd2]; + match address_eq(head_addr, of_nat_addr) { + 0 => (0, store(KExprNode.BVar(0)), store(KExprNode.BVar(0))), + 1 => + match u32_less_than(list_length(args), 2) { + 1 => (0, store(KExprNode.BVar(0)), store(KExprNode.BVar(0))), + 0 => + let ty_arg = list_lookup(args, 0); + match collect_spine_simple(ty_arg) { + (ty_head, ty_args) => + match load(ty_head) { + KExprNode.Const(ty_idx, _) => + let ty_addr = list_lookup(addrs, ty_idx); + match address_eq(ty_addr, bit_vec_addr()) { + 0 => (0, store(KExprNode.BVar(0)), store(KExprNode.BVar(0))), + 1 => + match list_length(ty_args) - 1 { + 0 => (1, list_lookup(ty_args, 0), list_lookup(args, 1)), + _ => (0, store(KExprNode.BVar(0)), store(KExprNode.BVar(0))), + }, + }, + _ => (0, store(KExprNode.BVar(0)), store(KExprNode.BVar(0))), + }, + }, + }, + }, + }, + _ => (0, store(KExprNode.BVar(0)), store(KExprNode.BVar(0))), + }, + } + } + + -- Mirror: src/ix/kernel/whnf.rs:2465-2506 fn try_reduce_bitvec_ult. + -- `BitVec.ult width lhs rhs` → `Bool.true/false`. Both sides converted + -- to nat via bit_vec_to_nat, then compared with `<` (= Nat.ble (lhs+1) rhs). + fn try_reduce_bit_vec_ult(spine: List‹KExpr›, types: List‹KExpr›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + match u32_less_than(list_length(spine), 3) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let width_e = list_lookup(spine, 0); + let lhs_e = list_lookup(spine, 1); + let rhs_e = list_lookup(spine, 2); + -- Build BitVec.toNat width lhs / rhs and reduce. 
+ let lhs_pair = bv_to_nat_via(width_e, lhs_e, types, top, addrs); + let rhs_pair = bv_to_nat_via(width_e, rhs_e, types, top, addrs); + match lhs_pair { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, lhs_n) => + match rhs_pair { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, rhs_n) => + -- lhs < rhs iff klimbs_le(lhs+1, rhs) iff !klimbs_le(rhs, lhs) + let r = 1 - klimbs_le(rhs_n, lhs_n); + (1, mk_bool(r, addrs)), + }, + }, + } + } + + -- Helper: invoke bit_vec_to_nat reduction on (width, val) pair, return + -- extracted nat KLimbs. Returns (1, klimbs) or (0, _). + fn bv_to_nat_via(width_e: KExpr, val_e: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KLimbs) { + let spine = store(ListNode.Cons(width_e, + store(ListNode.Cons(val_e, store(ListNode.Nil))))); + let r = try_reduce_bit_vec_to_nat(spine, types, top, addrs); + match r { + (0, _) => (0, store(ListNode.Nil)), + (1, lit_e) => + match load(lit_e) { + KExprNode.Lit(lit) => + match lit { + KLiteral.Nat(klimbs) => (1, klimbs), + _ => (0, store(ListNode.Nil)), + }, + _ => (0, store(ListNode.Nil)), + }, + } + } + + -- Top-level bitvec dispatch: routes head_addr to the right reduction. + fn try_bitvec_dispatch(head_addr: [G; 32], spine: List‹KExpr›, + types: List‹KExpr›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + match address_eq(head_addr, bit_vec_to_nat_addr()) { + 1 => try_reduce_bit_vec_to_nat(spine, types, top, addrs), + 0 => + match address_eq(head_addr, bit_vec_ult_addr()) { + 1 => try_reduce_bit_vec_ult(spine, types, top, addrs), + 0 => + -- decide (LT.lt BitVec width a b) → bitvec_ult. + -- Mirror: src/ix/kernel/whnf.rs:2455-2460. + match address_eq(head_addr, decidable_decide_addr()) { + 1 => try_reduce_decide_bitvec_lt(spine, types, top, addrs), + 0 => (0, store(KExprNode.BVar(0))), + }, + }, + } + } + + -- `decide (LT.lt BitVec a b) inst` → if LT.lt's type arg is BitVec, reduce + -- via bit_vec_ult. Mirror: src/ix/kernel/whnf.rs:2508-2529 fn try_reduce_bitvec_lt_prop. + fn try_reduce_decide_bitvec_lt(spine: List‹KExpr›, types: List‹KExpr›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + match u32_less_than(list_length(spine), 2) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let prop = list_lookup(spine, 0); + match collect_spine_simple(prop) { + (lt_head, lt_args) => + match load(lt_head) { + KExprNode.Const(lt_idx, _) => + let lt_addr = list_lookup(addrs, lt_idx); + match address_eq(lt_addr, lt_lt_addr()) { + 0 => (0, store(KExprNode.BVar(0))), + 1 => + match u32_less_than(list_length(lt_args), 4) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let ty_arg = list_lookup(lt_args, 0); + match collect_spine_simple(ty_arg) { + (ty_head, ty_args) => + match load(ty_head) { + KExprNode.Const(ty_idx, _) => + let ty_addr = list_lookup(addrs, ty_idx); + match address_eq(ty_addr, bit_vec_addr()) { + 0 => (0, store(KExprNode.BVar(0))), + 1 => + match u32_less_than(list_length(ty_args), 1) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let width = list_lookup(ty_args, 0); + let lhs = list_lookup(lt_args, 2); + let rhs = list_lookup(lt_args, 3); + let inner_spine = store(ListNode.Cons(width, + store(ListNode.Cons(lhs, + store(ListNode.Cons(rhs, store(ListNode.Nil))))))); + try_reduce_bit_vec_ult(inner_spine, types, top, addrs), + }, + }, + _ => (0, store(KExprNode.BVar(0))), + }, + }, + }, + }, + _ => (0, store(KExprNode.BVar(0))), + }, + }, + } + } + + -- Mirror: src/ix/kernel/whnf.rs:2637-2755 fn try_reduce_native. 
+ -- Handles compiler-emitted Nat/Bool reductions: + -- • `Lean.reduceBool c` / `Lean.reduceNat c`: unfold + accept ctor/lit. + -- • `System.Platform.numBits` (no args) → `Lit(Nat 64)`. + -- • `Subtype.val (System.Platform.getNumBits ())` → `Lit(Nat 64)`. + -- • `SizeOf.sizeOf Unit/PUnit ...` → `Lit(Nat 1)`. + -- • `PUnit.SizeOf.1 ...` → `Lit(Nat 1)`. + fn try_reduce_native(head_addr: [G; 32], spine: List‹KExpr›, + types: List‹KExpr›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + -- Nullary System.Platform.numBits + match address_eq(head_addr, system_platform_num_bits_addr()) { + 1 => (1, mk_nat_literal_64()), + 0 => + -- subtype_val (... getNumBits ()): 3 args, args[2] is `getNumBits ()`. + match address_eq(head_addr, subtype_val_addr()) { + 1 => try_reduce_subtype_val(spine, addrs), + 0 => + -- size_of_size_of Unit/PUnit ...: 3 args, first is type. + match address_eq(head_addr, size_of_size_of_addr()) { + 1 => try_reduce_size_of_unit(spine, addrs), + 0 => + -- PUnit's stored sizeOf instance. + match address_eq(head_addr, punit_size_of_1_addr()) { + 1 => (1, mk_nat_one()), + 0 => + let is_rb = address_eq(head_addr, reduce_bool_addr()); + let is_rn = address_eq(head_addr, reduce_nat_addr()); + match is_rb + is_rn { + 0 => (0, store(KExprNode.BVar(0))), + _ => + match u32_less_than(list_length(spine), 1) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let arg = list_lookup(spine, 0); + let result = whnf(arg, types, top, addrs); + match is_rb { + 1 => check_native_bool(result, addrs), + 0 => check_native_nat(result), + }, + }, + }, + }, + }, + }, + } + } + + fn mk_nat_literal_64() -> KExpr { + let limbs = store(ListNode.Cons([64, 0, 0, 0, 0, 0, 0, 0], store(ListNode.Nil))); + store(KExprNode.Lit(KLiteral.Nat(limbs))) + } + + fn mk_nat_one() -> KExpr { + let limbs = store(ListNode.Cons([1, 0, 0, 0, 0, 0, 0, 0], store(ListNode.Nil))); + store(KExprNode.Lit(KLiteral.Nat(limbs))) + } + + -- Subtype.val A P (System.Platform.getNumBits ()) → 64. + -- Spine: [A, P, val_arg]. If val_arg's spine head = getNumBits, return 64. + fn try_reduce_subtype_val(spine: List‹KExpr›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + match u32_less_than(list_length(spine), 3) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let val_arg = list_lookup(spine, 2); + match collect_spine_simple(val_arg) { + (head, _) => + match load(head) { + KExprNode.Const(idx, _) => + let head_addr = list_lookup(addrs, idx); + match address_eq(head_addr, system_platform_get_num_bits_addr()) { + 1 => (1, mk_nat_literal_64()), + 0 => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + }, + } + } + + -- size_of_size_of Unit/PUnit ... → 1. First arg is the type. + fn try_reduce_size_of_unit(spine: List‹KExpr›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + match u32_less_than(list_length(spine), 1) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let ty_arg = list_lookup(spine, 0); + match collect_spine_simple(ty_arg) { + (head, _) => + match load(head) { + KExprNode.Const(idx, _) => + let head_addr = list_lookup(addrs, idx); + let is_unit = address_eq(head_addr, unit_addr()); + let is_punit = address_eq(head_addr, punit_addr()); + match is_unit + is_punit { + 0 => (0, store(KExprNode.BVar(0))), + _ => (1, mk_nat_one()), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + }, + } + } + + -- For reduce_bool: result must be Const(bool_true|bool_false). 
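+  -- Illustrative: `Lean.reduceBool c` first whnfs `c`; only a bare
+  -- `Const(Bool.true)` / `Const(Bool.false)` head is accepted. Anything
+  -- still stuck (e.g. an open term) returns (0, _) and falls through to
+  -- the remaining dispatch chain in the caller.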
+ fn check_native_bool(e: KExpr, addrs: List‹[G; 32]›) -> (G, KExpr) { + match load(e) { + KExprNode.Const(idx, _) => + let head_addr = list_lookup(addrs, idx); + let is_t = address_eq(head_addr, bool_true_addr()); + let is_f = address_eq(head_addr, bool_false_addr()); + match is_t + is_f { + 0 => (0, store(KExprNode.BVar(0))), + _ => (1, e), + }, + _ => (0, store(KExprNode.BVar(0))), + } + } + + -- For reduce_nat: result must be Lit(Nat). + fn check_native_nat(e: KExpr) -> (G, KExpr) { + match load(e) { + KExprNode.Lit(lit) => + match lit { + KLiteral.Nat(_) => (1, e), + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + } + } + + -- Mirror: src/ix/kernel/whnf.rs:2173-2329 fn try_reduce_decidable. + -- Handles `Nat.decLe`/`decEq`/`decLt`. Constructs `Decidable.isTrue/isFalse` + -- proof terms with `Eq.refl Bool Bool.true/false` witnesses. + -- + -- For decLt: rewrites to `decLe (n+1) m` and recurses through whnf. + -- For decLe / decEq: extracts n, m as Nat lits; computes verdict; builds + -- proof term using compiler-emitted helper fns. + -- + -- Prop extraction: invokes `k_infer` on the call expression's spine head + -- + spine to recover the `Decidable prop` type. `prop` is the type's + -- first arg. Requires `types` to be the local context so k_infer is + -- valid under binders. + fn try_reduce_decidable(head_addr: [G; 32], head_idx: G, + head_lvls: List‹&KLevel›, + spine: List‹KExpr›, + types: List‹KExpr›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + let is_dec_le = address_eq(head_addr, nat_dec_le_addr()); + let is_dec_eq = address_eq(head_addr, nat_dec_eq_addr()); + let is_dec_lt = address_eq(head_addr, nat_dec_lt_addr()); + let is_int_dec_le = address_eq(head_addr, int_dec_le_addr()); + let is_int_dec_eq = address_eq(head_addr, int_dec_eq_addr()); + let is_int_dec_lt = address_eq(head_addr, int_dec_lt_addr()); + match is_int_dec_le + is_int_dec_eq + is_int_dec_lt { + 1 => try_normalize_int_decidable(head_idx, head_lvls, spine, types, top, addrs), + _ => + match is_dec_le + is_dec_eq + is_dec_lt { + 0 => (0, store(KExprNode.BVar(0))), + _ => + match u32_less_than(list_length(spine), 2) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => decidable_dispatch(is_dec_le, is_dec_eq, is_dec_lt, head_idx, head_lvls, + spine, types, top, addrs), + }, + }, + } + } + + -- Try-match `App(Const(int_of_nat | int_neg_succ), Lit(Nat _))`. Returns + -- `(found, sign, limbs)` where sign=0 for nonneg (Int.ofNat), 1 for + -- Int.negSucc. Mirror: src/ix/kernel/whnf.rs::extract_int_lit. + fn try_extract_int(e: KExpr, addrs: List‹[G; 32]›) -> (G, G, KLimbs) { + match load(e) { + KExprNode.App(f, a) => + match load(f) { + KExprNode.Const(idx, _) => + let head_addr = list_lookup(addrs, idx); + let is_ofnat = address_eq(head_addr, int_of_nat_addr()); + let is_negsucc = address_eq(head_addr, int_neg_succ_addr()); + match is_ofnat + is_negsucc { + 0 => (0, 0, store(ListNode.Nil)), + _ => + match load(a) { + KExprNode.Lit(lit) => + match lit { + KLiteral.Nat(limbs) => (1, is_negsucc, limbs), + _ => (0, 0, store(ListNode.Nil)), + }, + _ => (0, 0, store(ListNode.Nil)), + }, + }, + _ => (0, 0, store(ListNode.Nil)), + }, + _ => (0, 0, store(ListNode.Nil)), + } + } + + -- Build canonical `App(Const(int_of_nat | int_neg_succ), Lit(Nat n))`. 
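+  -- Example (sign/limb encoding as in try_extract_int above):
+  -- intern_int_lit(0, [5]) builds `Int.ofNat 5` (= 5), while
+  -- intern_int_lit(1, [2]) builds `Int.negSucc 2` (= -3).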
+ fn intern_int_lit(sign: G, limbs: KLimbs, addrs: List‹[G; 32]›) -> (G, KExpr) { + let target = match sign { + 0 => int_of_nat_addr(), + _ => int_neg_succ_addr(), + }; + match find_addr_idx_safe(target, addrs, 0) { + (1, idx) => + let head = store(KExprNode.Const(idx, store(ListNode.Nil))); + (1, store(KExprNode.App(head, mk_nat_lit(limbs)))), + (0, _) => (0, store(KExprNode.BVar(0))), + } + } + + -- Mirror: src/ix/kernel/whnf.rs:2331-2370 fn try_normalize_int_decidable. + -- For Int.decEq/decLe/decLt: whnf both args, extract Int literals, + -- rebuild canonical form `App(Const(int_dec_*), int_of_nat n, int_neg_succ k, ...)`. + -- Bails if both args already canonical (no normalization needed). + fn try_normalize_int_decidable(head_idx: G, head_lvls: List‹&KLevel›, + spine: List‹KExpr›, types: List‹KExpr›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + match u32_less_than(list_length(spine), 2) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let a0 = list_lookup(spine, 0); + let a1 = list_lookup(spine, 1); + match try_extract_int(a0, addrs) { + (1, _, _) => + match try_extract_int(a1, addrs) { + -- Both already canonical Int lits — no normalization needed. + (1, _, _) => (0, store(KExprNode.BVar(0))), + _ => normalize_int_dec_rebuild(head_idx, head_lvls, spine, a0, a1, types, top, addrs), + }, + _ => normalize_int_dec_rebuild(head_idx, head_lvls, spine, a0, a1, types, top, addrs), + }, + } + } + + fn normalize_int_dec_rebuild(head_idx: G, head_lvls: List‹&KLevel›, + spine: List‹KExpr›, a0: KExpr, a1: KExpr, + types: List‹KExpr›, top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + let wa = whnf(a0, types, top, addrs); + let wb = whnf(a1, types, top, addrs); + match try_extract_int(wa, addrs) { + (1, sa, na) => + match try_extract_int(wb, addrs) { + (1, sb, nb) => + match intern_int_lit(sa, na, addrs) { + (1, a_e) => + match intern_int_lit(sb, nb, addrs) { + (1, b_e) => + let head = store(KExprNode.Const(head_idx, head_lvls)); + let r1 = store(KExprNode.App(head, a_e)); + let r2 = store(KExprNode.App(r1, b_e)); + let post = list_drop(spine, 2); + (1, apply_spine(r2, post)), + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + } + } + + fn decidable_dispatch(is_dec_le: G, is_dec_eq: G, is_dec_lt: G, + head_idx: G, head_lvls: List‹&KLevel›, + spine: List‹KExpr›, types: List‹KExpr›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + -- decLt n m → decLe (n+1) m: rewrite spine. + match is_dec_lt { + 1 => + let n_e = list_lookup(spine, 0); + let m_e = list_lookup(spine, 1); + let n_w = whnf(n_e, types, top, addrs); + let np = try_extract_nat(n_w, addrs); + match np { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, n_klimbs) => + let succ_n = klimbs_succ(n_klimbs); + let succ_n_lit = mk_nat_lit(succ_n); + let dec_le_const = store(KExprNode.Const(0, store(ListNode.Nil))); + -- Find dec_le_idx via address. 
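+        -- Illustrative: `Nat.decLt 2 5` is rewritten here to
+        -- `Nat.decLe 3 5`, which the recursive whnf then resolves
+        -- through the decLe path.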
+ let pair = find_addr_idx_safe(nat_dec_le_addr(), addrs, 0); + match pair { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, dec_le_idx) => + let head = store(KExprNode.Const(dec_le_idx, store(ListNode.Nil))); + let app1 = store(KExprNode.App(head, succ_n_lit)); + let app2 = store(KExprNode.App(app1, m_e)); + let post = list_drop(spine, 2); + let result = apply_spine(app2, post); + (1, result), + }, + }, + 0 => + decidable_dispatch_le_eq(is_dec_le, is_dec_eq, head_idx, head_lvls, + spine, types, top, addrs), + } + } + + fn decidable_dispatch_le_eq(is_dec_le: G, is_dec_eq: G, + head_idx: G, head_lvls: List‹&KLevel›, + spine: List‹KExpr›, types: List‹KExpr›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + let n_e = list_lookup(spine, 0); + let m_e = list_lookup(spine, 1); + let n_w = whnf(n_e, types, top, addrs); + let m_w = whnf(m_e, types, top, addrs); + let np = try_extract_nat(n_w, addrs); + let mp = try_extract_nat(m_w, addrs); + match np { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, n_kl) => + match mp { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, m_kl) => + let verdict = match is_dec_le { + 1 => klimbs_le(n_kl, m_kl), + 0 => klimbs_eq(n_kl, m_kl), + }; + decidable_build_proof(is_dec_le, is_dec_eq, verdict, n_e, m_e, + head_idx, head_lvls, spine, types, top, addrs), + }, + } + } + + -- Build Decidable.isTrue/isFalse proof term. Prop slot recovered via + -- k_infer over the original call expression in `decidable_finish`. + fn decidable_build_proof(is_dec_le: G, is_dec_eq: G, verdict: G, + n_e: KExpr, m_e: KExpr, + head_idx: G, head_lvls: List‹&KLevel›, + spine: List‹KExpr›, + types: List‹KExpr›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + -- Build `Eq.refl.{1} Bool Bool.true_or_false`. + let eq_refl_pair = find_addr_idx_safe(eq_refl_addr(), addrs, 0); + match eq_refl_pair { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, eq_refl_idx) => + let bool_pair = find_addr_idx_safe(bool_type_addr(), addrs, 0); + match bool_pair { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, bool_idx) => + let bool_lit_pair = match verdict { + 1 => find_addr_idx_safe(bool_true_addr(), addrs, 0), + 0 => find_addr_idx_safe(bool_false_addr(), addrs, 0), + }; + match bool_lit_pair { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, bool_lit_idx) => + let one_lvl = store(KLevel.Succ(store(KLevel.Zero))); + let lvls = store(ListNode.Cons(one_lvl, store(ListNode.Nil))); + let eq_refl_const = store(KExprNode.Const(eq_refl_idx, lvls)); + let bool_const = store(KExprNode.Const(bool_idx, store(ListNode.Nil))); + let bool_lit_const = store(KExprNode.Const(bool_lit_idx, store(ListNode.Nil))); + let r1 = store(KExprNode.App(eq_refl_const, bool_const)); + let refl_proof = store(KExprNode.App(r1, bool_lit_const)); + -- proof_fn: nat_le_of_ble_eq_true / nat_eq_of_beq_eq_true / + -- nat_ne_of_beq_eq_false based on (is_dec_le, verdict). + let proof_fn_addr = match is_dec_le { + 1 => + match verdict { + 1 => nat_le_of_ble_eq_true_addr(), + 0 => nat_not_le_of_not_ble_eq_true_addr(), + }, + 0 => + match verdict { + 1 => nat_eq_of_beq_eq_true_addr(), + 0 => nat_ne_of_beq_eq_false_addr(), + }, + }; + let proof_fn_pair = find_addr_idx_safe(proof_fn_addr, addrs, 0); + match proof_fn_pair { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, proof_fn_idx) => + -- Bail decLe-false: needs Bool.noConfusion proof not yet + -- exposed. Mirror Rust whnf.rs:2317-2322. 
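+                    -- E.g. `Nat.decLe 5 3`: is_dec_le = 1, verdict = 0, so
+                    -- bail = 1 and the caller falls back to delta unfolding.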
+ let bail = is_dec_le * (1 - verdict); + match bail { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let proof_const = store(KExprNode.Const(proof_fn_idx, store(ListNode.Nil))); + let p1 = store(KExprNode.App(proof_const, n_e)); + let p2 = store(KExprNode.App(p1, m_e)); + let proof = store(KExprNode.App(p2, refl_proof)); + decidable_finish(verdict, proof, head_idx, head_lvls, + spine, types, top, addrs), + }, + }, + }, + }, + } + } + + fn decidable_finish(verdict: G, proof: KExpr, head_idx: G, + head_lvls: List‹&KLevel›, spine: List‹KExpr›, + types: List‹KExpr›, top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + let dec_addr = match verdict { + 1 => decidable_is_true_addr(), + 0 => decidable_is_false_addr(), + }; + let dec_pair = find_addr_idx_safe(dec_addr, addrs, 0); + match dec_pair { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, dec_idx) => + -- Reconstruct head Const + first 2 spine args, k_infer to get + -- `Decidable prop`, extract prop. Mirrors Rust producing the + -- elaborator-shaped `Decidable.isTrue (n ≤ m) ...` form. + let head_const = store(KExprNode.Const(head_idx, head_lvls)); + let two_args = list_take(spine, 2); + let call_expr = apply_spine(head_const, two_args); + let call_ty = k_infer(call_expr, types, top, addrs); + let call_ty_w = whnf(call_ty, types, top, addrs); + match collect_spine(call_ty_w) { + (_, dec_args) => + -- Guard against malformed inferred type. Rust returns + -- `Ok(None)` when the spine is empty; Aiur bails to (0, _) + -- so caller falls through to delta unfolding. + match u32_less_than(list_length(dec_args), 1) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let prop = list_lookup(dec_args, 0); + let dec_const = store(KExprNode.Const(dec_idx, store(ListNode.Nil))); + let r1 = store(KExprNode.App(dec_const, prop)); + let r2 = store(KExprNode.App(r1, proof)); + let post = list_drop(spine, 2); + (1, apply_spine(r2, post)), + }, + }, + } + } + + -- Mirror: src/ix/kernel/whnf.rs:2755-2807 fn try_reduce_string + + -- char_of_nat_expr. + -- + -- Dispatches on `String.utf8ByteSize`, `String.back`, `String.legacy_back`, + -- `String.toByteArray`. All require a single Lit(Str) arg. + fn try_str_dispatch(head_addr: [G; 32], spine: List‹KExpr›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + let spine_len = list_length(spine); + let lt1 = u32_less_than(spine_len, 1); + match lt1 { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let a0 = list_lookup(spine, 0); + match load(a0) { + KExprNode.Lit(lit) => + match lit { + KLiteral.Str(bs) => + match address_eq(head_addr, string_utf8_byte_size_addr()) { + 1 => + let len = list_length_u64(bs); + let limbs = store(ListNode.Cons(len, store(ListNode.Nil))); + (1, store(KExprNode.Lit(KLiteral.Nat(limbs)))), + 0 => + match address_eq(head_addr, string_to_byte_array_addr()) { + 1 => try_str_to_byte_array(bs, addrs), + 0 => + let is_back = address_eq(head_addr, string_back_addr()); + let is_legacy = address_eq(head_addr, string_legacy_back_addr()); + match is_back + is_legacy { + 0 => (0, store(KExprNode.BVar(0))), + _ => try_str_back(bs, addrs), + }, + }, + }, + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + } + } + + -- Empty Lit(Str) → Const(byte_array_empty). Non-empty: defer. 
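+  -- Illustrative: `String.toByteArray ""` folds to the `byte_array_empty`
+  -- constant; `String.toByteArray "a"` returns (0, _) and is left to
+  -- ordinary unfolding.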
+ fn try_str_to_byte_array(bs: ByteStream, addrs: List‹[G; 32]›) -> (G, KExpr) { + match load(bs) { + ListNode.Nil => + match find_addr_idx_safe(byte_array_empty_addr(), addrs, 0) { + (1, idx) => (1, store(KExprNode.Const(idx, store(ListNode.Nil)))), + (0, _) => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + } + } + + -- String.back/legacy_back over Lit(Str(s)) → + -- App(Const(char_of_nat), Lit(Nat(last_codepoint))). Empty → 65 ('A'). + fn try_str_back(bs: ByteStream, addrs: List‹[G; 32]›) -> (G, KExpr) { + match find_addr_idx_safe(char_of_nat_addr(), addrs, 0) { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, idx) => + let cp = utf8_last_codepoint(bs); + let cp_limbs = klimbs_from_g(cp); + let cp_lit = store(KExprNode.Lit(KLiteral.Nat(cp_limbs))); + let con = store(KExprNode.Const(idx, store(ListNode.Nil))); + (1, store(KExprNode.App(con, cp_lit))), + } + } + + -- Convert G value (≤ 2^32) into single-limb KLimbs via byte decomp. + fn klimbs_from_g(x: G) -> KLimbs { + match divmod_256(x, 0) { + (b0, q1) => + match divmod_256(q1, 0) { + (b1, q2) => + match divmod_256(q2, 0) { + (b2, q3) => + match divmod_256(q3, 0) { + (b3, _q4) => + store(ListNode.Cons([b0, b1, b2, b3, 0, 0, 0, 0], + store(ListNode.Nil))), + }, + }, + }, + } + } + + -- Walk byte stream forward decoding UTF-8 codepoints; return last. + -- Empty → 65 ('A') per Rust default. + fn utf8_last_codepoint(bs: ByteStream) -> G { + utf8_last_go(bs, 65) + } + + fn utf8_last_go(bs: ByteStream, prev: G) -> G { + match load(bs) { + ListNode.Nil => prev, + ListNode.Cons(b0, rest) => + match utf8_decode_one(b0, rest) { + (cp, remaining) => utf8_last_go(remaining, cp), + }, + } + } + + -- Decode one UTF-8 codepoint. Honors length prefix bits: + -- 0xxxxxxx → 1 byte; 110xxxxx 10xxxxxx → 2; 1110xxxx 10*2 → 3; + -- 11110xxx 10*3 → 4 bytes. + fn utf8_decode_one(b0: G, rest: ByteStream) -> (G, ByteStream) { + match u8_less_than(b0, 128) { + 1 => (b0, rest), + 0 => + match u8_less_than(b0, 224) { + 1 => + match load(rest) { + ListNode.Cons(b1, r1) => + let cp = (b0 - 192) * 64 + (b1 - 128); + (cp, r1), + }, + 0 => + match u8_less_than(b0, 240) { + 1 => + match load(rest) { + ListNode.Cons(b1, r1) => + match load(r1) { + ListNode.Cons(b2, r2) => + let cp = (b0 - 224) * 4096 + (b1 - 128) * 64 + (b2 - 128); + (cp, r2), + }, + }, + 0 => + match load(rest) { + ListNode.Cons(b1, r1) => + match load(r1) { + ListNode.Cons(b2, r2) => + match load(r2) { + ListNode.Cons(b3, r3) => + let cp = (b0 - 240) * 262144 + (b1 - 128) * 4096 + + (b2 - 128) * 64 + (b3 - 128); + (cp, r3), + }, + }, + }, + }, + }, + } + } + + -- Mirror: src/ix/kernel/def_eq.rs:1025-1060 fn str_lit_to_constructor. + -- Expand a Lit(Str(bs)) to ctor form + -- `String.ofList (List.cons.{0} Char (Char.ofNat c) (... List.nil.{0} Char))`. + -- Returns (1, expanded) when all required ctor addrs in `addrs`, else (0, _). 
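+  -- Worked example: `Lit(Str "ab")` expands to
+  --   `String.ofList (List.cons.{0} Char (Char.ofNat 97)
+  --      (List.cons.{0} Char (Char.ofNat 98) (List.nil.{0} Char)))`
+  -- (97 and 98 are the codepoints of 'a' and 'b').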
+ fn str_lit_to_ctor(bs: ByteStream, addrs: List‹[G; 32]›) -> (G, KExpr) { + match find_addr_idx_safe(list_nil_addr(), addrs, 0) { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, nil_idx) => + match find_addr_idx_safe(list_cons_addr(), addrs, 0) { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, cons_idx) => + match find_addr_idx_safe(char_type_addr(), addrs, 0) { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, char_idx) => + match find_addr_idx_safe(char_of_nat_addr(), addrs, 0) { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, con_idx) => + match find_addr_idx_safe(string_of_list_addr(), addrs, 0) { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, str_idx) => + let zero_lvl = store(KLevel.Zero); + let ulvls = store(ListNode.Cons(zero_lvl, store(ListNode.Nil))); + let nil_const = store(KExprNode.Const(nil_idx, ulvls)); + let cons_const = store(KExprNode.Const(cons_idx, ulvls)); + let char_const = store(KExprNode.Const(char_idx, store(ListNode.Nil))); + let con_const = store(KExprNode.Const(con_idx, store(ListNode.Nil))); + let str_const = store(KExprNode.Const(str_idx, store(ListNode.Nil))); + let nil_app = store(KExprNode.App(nil_const, char_const)); + let cons_partial = store(KExprNode.App(cons_const, char_const)); + let list_expr = build_char_list(bs, nil_app, cons_partial, con_const); + (1, store(KExprNode.App(str_const, list_expr))), + }, + }, + }, + }, + } + } + + fn build_char_list(bs: ByteStream, nil_app: KExpr, + cons_partial: KExpr, con_const: KExpr) -> KExpr { + match load(bs) { + ListNode.Nil => nil_app, + ListNode.Cons(b0, rest) => + match utf8_decode_one(b0, rest) { + (cp, remaining) => + let cp_limbs = klimbs_from_g(cp); + let cp_lit = store(KExprNode.Lit(KLiteral.Nat(cp_limbs))); + let char_val = store(KExprNode.App(con_const, cp_lit)); + let with_head = store(KExprNode.App(cons_partial, char_val)); + let tail = build_char_list(remaining, nil_app, cons_partial, con_const); + store(KExprNode.App(with_head, tail)), + }, + } + } + + -- If `e` is `Lit(Str(bs))` and addrs has required ctors, expand to ctor form. + -- Else return `e` unchanged. Used pre-iota. + fn str_lit_to_ctor_or_self(e: KExpr, addrs: List‹[G; 32]›) -> KExpr { + match load(e) { + KExprNode.Lit(lit) => + match lit { + KLiteral.Str(bs) => + match str_lit_to_ctor(bs, addrs) { + (1, expanded) => expanded, + (0, _) => e, + }, + _ => e, + }, + _ => e, + } + } + + -- Mirror: src/ix/kernel/whnf.rs:1531-1585 try_reduce_projection_definition + -- + projection_definition_info. + -- + -- Recognizes Defn-kind constants whose body is `λ x_1 ... x_n → Prj S i (BVar k)` + -- and shortcuts the unfolding to a direct `Prj S i args[arity-1-k]`. Pure + -- performance: standard delta+beta still produces same result. + fn try_reduce_projection_definition(head_idx: G, spine: List‹KExpr›, + top: List‹&KConstantInfo›) -> (G, KExpr) { + match load(list_lookup(top, head_idx)) { + KConstantInfo.Defn(_, _, value, _, _) => + match projection_definition_info(value, 0) { + (1, arity, struct_idx, field, struct_arg_idx) => + match u32_less_than(list_length(spine), arity) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let target_arg = list_lookup(spine, struct_arg_idx); + let proj_expr = store(KExprNode.Proj(struct_idx, field, target_arg)); + let post = list_drop(spine, arity); + (1, apply_spine(proj_expr, post)), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + } + } + + -- Mirror: src/ix/kernel/whnf.rs:1441-1500 try_reduce_fin_val_decidable_rec + -- + project_decidable_fin_val_minor. 
+ -- + -- Pushes `Fin.val` projection inside `Decidable.rec` minors when the + -- structure type is Fin and field 0. Allows iota to fire once major is + -- a concrete `Decidable.isTrue/isFalse`. + fn try_reduce_fin_val_decidable_rec(tidx: G, field: G, head: KExpr, + args: List‹KExpr›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + let tidx_addr = list_lookup(addrs, tidx); + match address_eq(tidx_addr, fin_addr()) { + 0 => (0, store(KExprNode.BVar(0))), + 1 => + match field { + 0 => + match load(head) { + KExprNode.Const(rec_idx, rec_us) => + let rec_addr_ = list_lookup(addrs, rec_idx); + match address_eq(rec_addr_, decidable_rec_addr()) { + 0 => (0, store(KExprNode.BVar(0))), + 1 => + match u32_less_than(list_length(args), 5) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let a0 = list_lookup(args, 0); + let a1 = list_lookup(args, 1); + let a2 = list_lookup(args, 2); + let a3 = list_lookup(args, 3); + let a4 = list_lookup(args, 4); + match load(a1) { + KExprNode.Lam(motive_dom, _) => + -- Inline project_decidable_fin_val_minor twice. + match load(a2) { + KExprNode.Lam(fdom, fbody) => + let fproj = store(KExprNode.Proj(tidx, field, fbody)); + let false_minor = store(KExprNode.Lam(fdom, fproj)); + match load(a3) { + KExprNode.Lam(tdom, tbody) => + let tproj = store(KExprNode.Proj(tidx, field, tbody)); + let true_minor = store(KExprNode.Lam(tdom, tproj)); + match find_addr_idx_safe(nat_addr(), addrs, 0) { + (1, nat_idx) => + let nat_ty = store(KExprNode.Const(nat_idx, store(ListNode.Nil))); + let new_motive = store(KExprNode.Lam(motive_dom, nat_ty)); + let head_const = store(KExprNode.Const(rec_idx, rec_us)); + let r1 = store(KExprNode.App(head_const, a0)); + let r2 = store(KExprNode.App(r1, new_motive)); + let r3 = store(KExprNode.App(r2, false_minor)); + let r4 = store(KExprNode.App(r3, true_minor)); + let r5 = store(KExprNode.App(r4, a4)); + let post = list_drop(args, 5); + (1, apply_spine(r5, post)), + (0, _) => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + }, + }, + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + } + } + + -- Returns (found, arity, struct_idx, field, struct_arg_idx). Walks Lam + -- bindings counting arity; expects body to be `Prj S i (BVar k)` with + -- `k < arity`. struct_arg_idx = (arity - 1) - k. + fn projection_definition_info(val: KExpr, arity: G) -> (G, G, G, G, G) { + match load(val) { + KExprNode.Lam(_, body) => + projection_definition_info(body, arity + 1), + KExprNode.Proj(struct_idx, field, projected) => + match load(projected) { + KExprNode.BVar(i) => + match u32_less_than(i, arity) { + 1 => (1, arity, struct_idx, field, (arity - 1) - i), + _ => (0, 0, 0, 0, 0), + }, + _ => (0, 0, 0, 0, 0), + }, + _ => (0, 0, 0, 0, 0), + } + } +⟧ + +end IxVM + +end diff --git a/Ix/IxVM/Kernel/Subst.lean b/Ix/IxVM/Kernel/Subst.lean new file mode 100644 index 00000000..b3776274 --- /dev/null +++ b/Ix/IxVM/Kernel/Subst.lean @@ -0,0 +1,195 @@ +module +public import Ix.Aiur.Meta +public import Ix.IxVM.KernelTypes + +public section + +namespace IxVM + +/-! ## Substitution primitives over `KExpr` + +Mirror: src/ix/kernel/subst.rs (full file). + +Two core operations: + +* `expr_lift(e, shift, cutoff)` — shift `BVar(i)` to `BVar(i+shift)` when + `i ≥ cutoff`. Crossing a binder bumps `cutoff` by 1. No-op when + `shift = 0`. + +* `expr_inst1(body, arg, depth)` — substitute `BVar(depth)` with + `expr_lift(arg, depth, 0)`. 
Decrement `BVar(i)` for `i > depth`. + Crossing a binder bumps `depth` by 1. + +Aiur memoization caches calls automatically; no explicit cache table. + +`KExprNode` lives in `Ix/IxVM/KernelTypes.lean`: +``` +enum KExprNode { + BVar(G), Srt(&KLevel), Const(G, List‹&KLevel›), + App(KExpr, KExpr), Lam(KExpr, KExpr), Forall(KExpr, KExpr), + Let(KExpr, KExpr, KExpr), Lit(KLiteral), Proj(G, G, KExpr), +} +``` +-/ + +def subst := ⟦ + -- ============================================================================ + -- expr_lbr + -- + -- Loose-bvar count: `1 + max(BVar idx)` over loose (non-binder-captured) + -- variables in `e`. Returns 0 iff `e` is closed (no loose BVars). Aiur + -- memoization caches per node, so repeat calls on the same KExpr ptr are + -- O(1). First traversal cost matches a single subst walk; the payoff is + -- short-circuiting subst that would otherwise re-walk closed subtrees. + -- + -- Mirror: src/ix/kernel/expr.rs::lbr field on KExprNode (precomputed at + -- intern time in Rust; computed on demand here, memoized via `store`). + -- ============================================================================ + fn expr_lbr(e: KExpr) -> G { + match load(e) { + KExprNode.BVar(i) => i + 1, + KExprNode.Srt(_) => 0, + KExprNode.Const(_, _) => 0, + KExprNode.Lit(_) => 0, + KExprNode.App(f, a) => lbr_max(expr_lbr(f), expr_lbr(a)), + KExprNode.Lam(ty, body) => + lbr_max(expr_lbr(ty), lbr_dec(expr_lbr(body))), + KExprNode.Forall(ty, body) => + lbr_max(expr_lbr(ty), lbr_dec(expr_lbr(body))), + KExprNode.Let(ty, val, body) => + lbr_max(lbr_max(expr_lbr(ty), expr_lbr(val)), + lbr_dec(expr_lbr(body))), + KExprNode.Proj(_, _, e1) => expr_lbr(e1), + } + } + + fn lbr_max(a: G, b: G) -> G { + match u32_less_than(a, b) { + 1 => b, + 0 => a, + } + } + + fn lbr_dec(n: G) -> G { + match n { + 0 => 0, + _ => n - 1, + } + } + + -- ============================================================================ + -- expr_lift + -- + -- Shift `BVar(i)` → `BVar(i + shift)` when `i ≥ cutoff`. Recursion bumps + -- `cutoff` by 1 when crossing a binder (Lam/Forall/Let). + -- + -- Mirrors `src/ix/kernel/subst.rs::lift_no_intern` (line 364-415). + -- Fast path: when `expr_lbr(e) <= cutoff`, no loose BVar at or above the + -- cutoff exists, so `e` is unchanged. 
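+  --
+  -- Worked example: lifting `Lam(ty, App(BVar(0), BVar(1)))` with a closed
+  -- domain type by shift = 2 at cutoff = 0 leaves `BVar(0)` alone (bound by
+  -- the Lam once the cutoff bumps to 1) and shifts the loose `BVar(1)` to
+  -- `BVar(3)`.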
+ -- ============================================================================ + fn expr_lift(e: KExpr, shift: G, cutoff: G) -> KExpr { + match shift { + 0 => e, + _ => + let l = expr_lbr(e); + match u32_less_than(cutoff, l) { + 0 => e, + 1 => expr_lift_walk(e, shift, cutoff), + }, + } + } + + fn expr_lift_walk(e: KExpr, shift: G, cutoff: G) -> KExpr { + match load(e) { + KExprNode.BVar(i) => + let lt = u32_less_than(i, cutoff); + match lt { + 1 => e, + 0 => store(KExprNode.BVar(i + shift)), + }, + KExprNode.Srt(l) => store(KExprNode.Srt(l)), + KExprNode.Const(idx, lvls) => store(KExprNode.Const(idx, lvls)), + KExprNode.App(f, a) => + store(KExprNode.App( + expr_lift(f, shift, cutoff), + expr_lift(a, shift, cutoff))), + KExprNode.Lam(ty, body) => + store(KExprNode.Lam( + expr_lift(ty, shift, cutoff), + expr_lift(body, shift, cutoff + 1))), + KExprNode.Forall(ty, body) => + store(KExprNode.Forall( + expr_lift(ty, shift, cutoff), + expr_lift(body, shift, cutoff + 1))), + KExprNode.Let(ty, val, body) => + store(KExprNode.Let( + expr_lift(ty, shift, cutoff), + expr_lift(val, shift, cutoff), + expr_lift(body, shift, cutoff + 1))), + KExprNode.Lit(lit) => store(KExprNode.Lit(lit)), + KExprNode.Proj(tidx, fidx, e1) => + store(KExprNode.Proj(tidx, fidx, expr_lift(e1, shift, cutoff))), + } + } + + -- ============================================================================ + -- expr_inst1 + -- + -- Substitute `BVar(depth)` with `expr_lift(arg, depth, 0)` and decrement + -- `BVar(i)` for `i > depth`. Crossing a binder bumps `depth`. + -- + -- Mirrors the single-arg form of `src/ix/kernel/subst.rs::instantiate_rev` + -- with a list of one element. For the lambda-eta case in def_eq we want + -- `body[BVar(0) := arg]` which is `expr_inst1(body, arg, 0)`. + -- ============================================================================ + fn expr_inst1(e: KExpr, arg: KExpr, depth: G) -> KExpr { + -- Fast path: when `expr_lbr(e) <= depth`, no BVar at or above depth + -- exists in `e`, so the substitution is a no-op. 
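+    -- Worked example at depth = 0: `expr_inst1(App(BVar(0), BVar(1)), arg, 0)`
+    -- replaces `BVar(0)` with `arg` and decrements the loose `BVar(1)` to
+    -- `BVar(0)`, yielding `App(arg, BVar(0))`.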
+    let l = expr_lbr(e);
+    match u32_less_than(depth, l) {
+      0 => e,
+      1 => expr_inst1_walk(e, arg, depth),
+    }
+  }
+
+  fn expr_inst1_walk(e: KExpr, arg: KExpr, depth: G) -> KExpr {
+    match load(e) {
+      KExprNode.BVar(i) =>
+        let lt = u32_less_than(i, depth);
+        match lt {
+          1 => e,
+          0 =>
+            match i - depth {
+              0 => expr_lift(arg, depth, 0),
+              _ => store(KExprNode.BVar(i - 1)),
+            },
+        },
+      KExprNode.Srt(l) => store(KExprNode.Srt(l)),
+      KExprNode.Const(idx, lvls) => store(KExprNode.Const(idx, lvls)),
+      KExprNode.App(f, a) =>
+        store(KExprNode.App(
+          expr_inst1(f, arg, depth),
+          expr_inst1(a, arg, depth))),
+      KExprNode.Lam(ty, body) =>
+        store(KExprNode.Lam(
+          expr_inst1(ty, arg, depth),
+          expr_inst1(body, arg, depth + 1))),
+      KExprNode.Forall(ty, body) =>
+        store(KExprNode.Forall(
+          expr_inst1(ty, arg, depth),
+          expr_inst1(body, arg, depth + 1))),
+      KExprNode.Let(ty, val, body) =>
+        store(KExprNode.Let(
+          expr_inst1(ty, arg, depth),
+          expr_inst1(val, arg, depth),
+          expr_inst1(body, arg, depth + 1))),
+      KExprNode.Lit(lit) => store(KExprNode.Lit(lit)),
+      KExprNode.Proj(tidx, fidx, e1) =>
+        store(KExprNode.Proj(tidx, fidx, expr_inst1(e1, arg, depth))),
+    }
+  }
+⟧
+
+end IxVM
+
+end
diff --git a/Ix/IxVM/Kernel/Whnf.lean b/Ix/IxVM/Kernel/Whnf.lean
new file mode 100644
index 00000000..41201ab9
--- /dev/null
+++ b/Ix/IxVM/Kernel/Whnf.lean
@@ -0,0 +1,745 @@
+module
+public import Ix.Aiur.Meta
+public import Ix.IxVM.KernelTypes
+public import Ix.IxVM.Kernel.Subst
+public import Ix.IxVM.Kernel.Levels
+public import Ix.IxVM.Kernel.Primitive
+
+public section
+
+namespace IxVM
+
+/-! ## Weak Head Normal Form over `KExpr`
+
+Mirrors `src/ix/kernel/whnf.rs`.
+
+Reduces an expression to head-canonical form. The head after WHNF is one of:
+
+* `Sort(level)` — universe.
+* `Lam(ty, body)` — function value.
+* `Forall(ty, body)` — function type.
+* `Const(idx, lvls)` applied to args where the constant is non-reducible
+  (Axiom, Induct, Ctor, or Rec stuck on a non-Ctor major).
+* `BVar(i)` applied to args (open term).
+* `Let(...)` — never (always reduces via zeta).
+* `Lit(...)` — already a value.
+* `Proj(...)` stuck on non-Ctor.
+
+This file implements the essential phases:
+
+1. **delta**: `Const(idx)` where the constant is a `Defn` → unfold value,
+   instantiate level params, reapply spine, recurse. (`Thm` heads are
+   deliberately left stuck in plain whnf; see the match arms in
+   `whnf_with_spine`.)
+2. **beta**: `App(Lam(_, body), arg)` → `body[BVar(0) := arg]`, recurse.
+3. **zeta**: `Let(_, val, body)` → `body[BVar(0) := val]`, recurse.
+4. **proj**: `Proj(tidx, fidx, e)` where `e` reduces to a `Ctor` →
+   pull field via the ctor's spine, recurse.
+5. **iota**: `Const(Rec) spine` where `spine[major_idx]` reduces to a
+   `Ctor` → look up rule by ctor's `cidx`, instantiate RHS with rec
+   levels, apply the params/motives/minors prefix of the spine, then
+   ctor's tail fields, then post-major spine; recurse.
+6. **nat-prim**: when the `Const(idx)` head's address (looked up in
+   `addrs`) matches a primitive Nat op and the spine carries `Lit(Nat(_))`
+   args, fold to the literal result.
+
+`addrs: List‹[G; 32]›` maps each constant's position to its 32-byte
+address, threaded from ingress (slot mapping in `Primitive.lean`).
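+
+Illustrative end-to-end: `whnf ((fun x => Nat.add x 3) 2)` beta-reduces
+(phase 2) to `Nat.add 2 3`, and the nat-prim fold (phase 6) then yields
+`Lit(Nat 5)`.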
+-/
+
+def whnf := ⟦
+  -- ============================================================================
+  -- Spine collection
+  -- ============================================================================
+  fn collect_spine_go(e: KExpr, acc: List‹KExpr›) -> (KExpr, List‹KExpr›) {
+    match load(e) {
+      KExprNode.App(f, a) =>
+        collect_spine_go(f, store(ListNode.Cons(a, acc))),
+      _ => (e, acc),
+    }
+  }
+
+  fn collect_spine(e: KExpr) -> (KExpr, List‹KExpr›) {
+    collect_spine_go(e, store(ListNode.Nil))
+  }
+
+  fn apply_spine(head: KExpr, spine: List‹KExpr›) -> KExpr {
+    match load(spine) {
+      ListNode.Nil => head,
+      ListNode.Cons(a, rest) =>
+        apply_spine(store(KExprNode.App(head, a)), rest),
+    }
+  }
+
+  -- ============================================================================
+  -- Beta
+  -- ============================================================================
+  fn beta_step(lam: KExpr, arg: KExpr) -> KExpr {
+    match load(lam) {
+      KExprNode.Lam(_, body) => expr_inst1(body, arg, 0),
+    }
+  }
+
+  -- ============================================================================
+  -- WHNF main loop; `addrs` drives the primitive dispatch.
+  -- ============================================================================
+
+  fn whnf_apply_beta(spine: List‹KExpr›, lam: KExpr, types: List‹KExpr›,
+                     top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> KExpr {
+    match load(spine) {
+      ListNode.Nil => lam,
+      ListNode.Cons(a, rest) =>
+        let next = beta_step(lam, a);
+        whnf_with_spine(next, rest, types, top, addrs),
+    }
+  }
+
+  fn whnf_with_spine(head: KExpr, spine: List‹KExpr›, types: List‹KExpr›,
+                     top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> KExpr {
+    match load(head) {
+      KExprNode.App(f, a) =>
+        -- Head is itself App-spine (post-beta result). Collect its spine
+        -- and prepend to the outer spine, then recurse. Without this,
+        -- post-delta beta-reduced bodies like `Nat.rec PUnit step n`
+        -- stay stuck and iota never fires.
+        match collect_spine(head) {
+          (inner_head, inner_spine) =>
+            whnf_with_spine(inner_head, list_concat(inner_spine, spine), types, top, addrs),
+        },
+      KExprNode.Lam(ty, body) =>
+        whnf_apply_beta(spine, head, types, top, addrs),
+      KExprNode.Const(idx, lvls) =>
+        let head_addr = list_lookup(addrs, idx);
+        let ci = load(list_lookup(top, idx));
+        -- Rec / Quot heads can never match a primitive address (Nat ops,
+        -- Str ops, BitVec, native, decidable, proj-def all live as Ctor or
+        -- Defn). Skip the primitive dispatch chain for those.
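+        -- Dispatch order below: nat binop → string → bitvec → native
+        -- (reduceBool/reduceNat etc.) → decidable → projection-definition
+        -- shortcut → unconditional Defn delta.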
+ match ci { + KConstantInfo.Rec(num_lvls, _, num_params, num_indices, num_motives, num_minors, rules, k_flag, _, _) => + let iota = try_iota(lvls, spine, num_lvls, num_params, num_indices, num_motives, num_minors, rules, k_flag, types, top, addrs); + match iota { + (1, reduced2) => whnf(reduced2, types, top, addrs), + (0, _) => apply_spine(head, spine), + }, + KConstantInfo.Quot(_, _, kind) => + let qiota = try_quot_iota(kind, spine, types, top, addrs); + match qiota { + (1, reduced_q) => whnf(reduced_q, types, top, addrs), + (0, _) => apply_spine(head, spine), + }, + _ => + let nat_pair = try_nat_dispatch(head_addr, spine, types, top, addrs); + match nat_pair { + (1, reduced) => whnf(reduced, types, top, addrs), + (0, _) => + let str_pair = try_str_dispatch(head_addr, spine, addrs); + match str_pair { + (1, reduced_s) => whnf(reduced_s, types, top, addrs), + (0, _) => + let bv_pair = try_bitvec_dispatch(head_addr, spine, types, top, addrs); + match bv_pair { + (1, reduced_b) => whnf(reduced_b, types, top, addrs), + (0, _) => + let nat_pair2 = try_reduce_native(head_addr, spine, types, top, addrs); + match nat_pair2 { + (1, reduced_n) => whnf(reduced_n, types, top, addrs), + (0, _) => + let dec_pair = try_reduce_decidable(head_addr, idx, lvls, spine, types, top, addrs); + match dec_pair { + (1, reduced_d) => whnf(reduced_d, types, top, addrs), + (0, _) => + let proj_def_pair = try_reduce_projection_definition(idx, spine, top); + match proj_def_pair { + (1, reduced_pd) => whnf(reduced_pd, types, top, addrs), + (0, _) => + -- Mirror src/ix/kernel/whnf.rs:756-774 + -- (`delta_unfold_one`): unfold any Defn + -- regardless of `ReducibilityHints`. The + -- hint is consulted by lazy-delta's + -- `delta_rank` for def-eq priority, not + -- as a gate on plain whnf delta. Without + -- unfolding here, ctor field types + -- written via opaque defs (e.g. + -- `constType (n α) (n α)`) stay stuck + -- and `check_positivity_aug` misclassifies. + match ci { + KConstantInfo.Defn(_, _, value, _, _) => + let body = expr_inst_levels(value, lvls); + whnf_with_spine(body, spine, types, top, addrs), + KConstantInfo.Thm(_, _, _) => apply_spine(head, spine), + _ => apply_spine(head, spine), + }, + }, + }, + }, + }, + }, + }, + }, + KExprNode.Let(_, val, body) => + let next = expr_inst1(body, val, 0); + whnf_with_spine(next, spine, types, top, addrs), + KExprNode.Proj(tidx, fidx, inner) => + let inner_whnf = whnf(inner, types, top, addrs); + let inner_pair = collect_spine(inner_whnf); + match inner_pair { + (inner_head, inner_args) => + -- Mirror: whnf.rs:1441-1500 try_reduce_fin_val_decidable_rec. + -- Pushes Fin.val inside Decidable.rec minors; allows iota. + let fvd_pair = try_reduce_fin_val_decidable_rec(tidx, fidx, inner_head, inner_args, addrs); + match fvd_pair { + (1, rewritten) => whnf_with_spine(rewritten, spine, types, top, addrs), + (0, _) => + match load(inner_head) { + KExprNode.Const(cidx, _) => + let cci = load(list_lookup(top, cidx)); + match cci { + KConstantInfo.Ctor(_, _, _, _, nparams, _, _) => + let field = list_lookup_or_nil(inner_args, nparams + fidx); + whnf_with_spine(field, spine, types, top, addrs), + _ => + let stuck = store(KExprNode.Proj(tidx, fidx, inner_whnf)); + apply_spine(stuck, spine), + }, + _ => + let stuck = store(KExprNode.Proj(tidx, fidx, inner_whnf)); + apply_spine(stuck, spine), + }, + }, + }, + _ => apply_spine(head, spine), + } + } + + -- No fuel limit (unlike Rust's `MAX_WHNF_FUEL = 10_000` in + -- `src/ix/kernel/tc.rs`). 
In a zk prover context, divergent input simply + -- fails to produce a proof — the caller guarantees termination, so a + -- soundness-preserving early abort is unnecessary. + fn whnf(e: KExpr, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> KExpr { + -- Fast path: trivial whnf normal forms. Srt / Lit / Lam / Forall / BVar + -- never reduce — skip collect_spine + dispatch. + match load(e) { + KExprNode.Srt(_) => e, + KExprNode.Lit(_) => e, + KExprNode.Lam(_, _) => e, + KExprNode.Forall(_, _) => e, + KExprNode.BVar(_) => e, + _ => + let pair = collect_spine(e); + match pair { + (head, spine) => whnf_with_spine(head, spine, types, top, addrs), + }, + } + } + + -- ============================================================================ + -- Iota (recursor on ctor) + -- ============================================================================ + fn try_iota(lvls: List‹&KLevel›, spine: List‹KExpr›, + num_lvls: G, num_params: G, num_indices: G, + num_motives: G, num_minors: G, + rules: List‹KRecRule›, k_flag: G, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> (G, KExpr) { + let major_idx = num_params + num_motives + num_minors + num_indices; + let spine_len = list_length(spine); + let major_lt = u32_less_than(major_idx, spine_len); + match major_lt { + 0 => (0, store(KExprNode.BVar(0))), + 1 => + let lvls_len = list_length(lvls); + match lvls_len - num_lvls { + 0 => + try_iota_with_major(lvls, spine, num_params, num_motives, num_minors, + major_idx, rules, k_flag, types, top, addrs), + _ => (0, store(KExprNode.BVar(0))), + }, + } + } + + -- Mirror src/ix/kernel/whnf.rs:1113-1143 cleanup_nat_offset_major. + -- If `e` is `Nat.add base lit` with lit > 0, return one-layer-exposed + -- `Nat.succ pred` where pred = base if lit == 1, else `Nat.add base (lit-1)`. + -- Else returns `e` unchanged. Skips Nat literals (already evaluable). + fn cleanup_nat_offset_major(e: KExpr, addrs: List‹[G; 32]›) -> KExpr { + match load(e) { + KExprNode.Lit(_) => e, + _ => + match collect_spine(e) { + (head, args) => + match load(head) { + KExprNode.Const(_, _) => + match try_match_nat_add(head, args, addrs) { + (1, base, lit) => + let lit_norm = klimbs_normalize(lit); + match klimbs_is_zero(lit_norm) { + 1 => e, + 0 => build_succ_offset(base, lit_norm, addrs), + }, + _ => e, + }, + _ => e, + }, + }, + } + } + + -- If head is Const(nat_add) and args has length 2 and args[1] is Lit Nat, + -- return (1, base, limbs). Else (0, _, _). + fn try_match_nat_add(head: KExpr, args: List‹KExpr›, + addrs: List‹[G; 32]›) -> (G, KExpr, KLimbs) { + match load(head) { + KExprNode.Const(idx, _) => + let head_addr = list_lookup(addrs, idx); + match address_eq(head_addr, nat_add_addr()) { + 0 => (0, head, store(ListNode.Nil)), + 1 => + match list_length(args) - 2 { + 0 => + let lhs = list_lookup(args, 0); + let rhs = list_lookup(args, 1); + match load(rhs) { + KExprNode.Lit(lit) => + match lit { + KLiteral.Nat(limbs) => (1, lhs, limbs), + _ => (0, head, store(ListNode.Nil)), + }, + _ => (0, head, store(ListNode.Nil)), + }, + _ => (0, head, store(ListNode.Nil)), + }, + }, + _ => (0, head, store(ListNode.Nil)), + } + } + + -- Build `Nat.succ pred` where pred = base if lit==1, else `Nat.add base (lit-1)`. 
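+  -- E.g. `Nat.add n 3` → `Nat.succ (Nat.add n 2)`;
+  --      `Nat.add n 1` → `Nat.succ n`.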
+ fn build_succ_offset(base: KExpr, lit: KLimbs, + addrs: List‹[G; 32]›) -> KExpr { + let succ_pair = find_addr_idx_safe(nat_succ_addr(), addrs, 0); + match succ_pair { + (0, _) => store(KExprNode.App(base, base)), -- impossible: Nat.succ must be in env + (1, succ_idx) => + let one = store(ListNode.Cons([1, 0, 0, 0, 0, 0, 0, 0], store(ListNode.Nil))); + let lit_minus_one = klimbs_sub(lit, one); + let pred_lit_norm = klimbs_normalize(lit_minus_one); + let succ_const = store(KExprNode.Const(succ_idx, store(ListNode.Nil))); + match klimbs_is_zero(pred_lit_norm) { + 1 => store(KExprNode.App(succ_const, base)), + 0 => + let add_pair = find_addr_idx_safe(nat_add_addr(), addrs, 0); + match add_pair { + (0, _) => store(KExprNode.App(succ_const, base)), + (1, add_idx) => + let add_const = store(KExprNode.Const(add_idx, store(ListNode.Nil))); + let pred_lit_expr = store(KExprNode.Lit(KLiteral.Nat(pred_lit_norm))); + let pred = store(KExprNode.App( + store(KExprNode.App(add_const, base)), + pred_lit_expr)); + store(KExprNode.App(succ_const, pred)), + }, + }, + } + } + + fn try_iota_with_major(lvls: List‹&KLevel›, spine: List‹KExpr›, + num_params: G, num_motives: G, num_minors: G, + major_idx: G, rules: List‹KRecRule›, k_flag: G, + types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> (G, KExpr) { + -- Mirror: src/ix/kernel/whnf.rs:1824-1869 try_reduce_nat_succ_linear_rec. + -- Fast path: `Nat.rec _ base (λ _ ih => succ ih) (Lit n)` → `base + n`. + -- O(1) instead of O(n) iota expansion of literal succ chain. + let lin = try_nat_linear_rec(spine, num_params, num_motives, num_minors, + major_idx, types, top, addrs); + match lin { + (1, r) => (1, r), + (0, _) => + let raw_major = list_lookup(spine, major_idx); + -- K-target shortcut: when recursor's k_flag is set, attempt to + -- synthesize a nullary ctor application from the spine's params, + -- enabling iota even when the major isn't reduced to a Ctor. + -- Mirror: src/ix/kernel/whnf.rs:1315-1380 synth_ctor_when_k. + let major = match k_flag { + 1 => + let synth = try_synth_k_ctor(raw_major, num_params, rules, + types, top, addrs); + match synth { + (1, ctor_app) => ctor_app, + (0, _) => raw_major, + }, + 0 => raw_major, + }; + -- Pre-WHNF cleanup: if major is `Nat.add base lit` with lit > 0, expose + -- one Nat.succ layer to enable iota without unfolding Nat.add into a + -- chain of `lit` intermediate literals. Mirror whnf.rs:904-907,1113-1143 + -- cleanup_nat_offset_major. + let major_clean1 = cleanup_nat_offset_major(major, addrs); + let major_whnf_raw = whnf(major_clean1, types, top, addrs); + let major_clean2 = cleanup_nat_offset_major(major_whnf_raw, addrs); + -- Coerce Nat / Str literals to ctor chain so iota fires + -- (mirror whnf.rs:929-946 nat, whnf.rs:953-960 string). 
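+      -- E.g. a major of `Lit(Nat 2)` is re-expressed with a `Nat.succ`
+      -- ctor head so the rec-rule lookup below can select a rule by cidx.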
+ let major_whnf_nat = nat_lit_to_ctor_or_self(major_clean2, addrs); + let major_whnf = str_lit_to_ctor_or_self(major_whnf_nat, addrs); + let major_pair = collect_spine(major_whnf); + match major_pair { + (ctor_head, ctor_args) => + match load(ctor_head) { + KExprNode.Const(ctor_idx, _) => + let ctor_ci = load(list_lookup(top, ctor_idx)); + match ctor_ci { + KConstantInfo.Ctor(_, _, _, cidx, _, ctor_nfields, _) => + let rules_len = list_length(rules); + let cidx_in_range = u32_less_than(cidx, rules_len); + match cidx_in_range { + 0 => (0, store(KExprNode.BVar(0))), + 1 => + let ctor_args_len = list_length(ctor_args); + let too_few_fields = u32_less_than(ctor_args_len, ctor_nfields); + match too_few_fields { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let rule = list_lookup(rules, cidx); + match rule { + KRecRule.Mk(_, _, rhs) => + let pmm_end = num_params + num_motives + num_minors; + let pmm = list_take(spine, pmm_end); + let field_start = ctor_args_len - ctor_nfields; + let field_args = list_drop(ctor_args, field_start); + let post_major = list_drop(spine, major_idx + 1); + let rhs_inst = expr_inst_levels(rhs, lvls); + let r1 = apply_spine(rhs_inst, pmm); + let r2 = apply_spine(r1, field_args); + let r3 = apply_spine(r2, post_major); + (1, r3), + }, + }, + }, + _ => + -- Not a Ctor; fall through to struct-eta-iota. + try_struct_eta_iota(spine, num_params, num_motives, num_minors, + major_idx, rules, lvls, types, top, addrs), + }, + _ => + -- head not a Const; fall through to struct-eta-iota. + try_struct_eta_iota(spine, num_params, num_motives, num_minors, + major_idx, rules, lvls, types, top, addrs), + }, + }, + } + } + + -- Mirror: src/ix/kernel/whnf.rs:1824-1869 try_reduce_nat_succ_linear_rec. + -- Detects `Nat.rec _ base step (Lit n)` with step = `λ _ ih => Nat.succ ih` + -- and reduces directly to `Lit (base + n)`. Returns (1, result) on hit, + -- (0, _) on miss. + fn try_nat_linear_rec(spine: List‹KExpr›, num_params: G, num_motives: G, + num_minors: G, major_idx: G, + types: List‹KExpr›, top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + match u32_less_than(num_minors, 2) { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let raw_major = list_lookup(spine, major_idx); + let major_w = whnf(raw_major, types, top, addrs); + match try_extract_nat(major_w, addrs) { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, n_klimbs) => + let base_idx = num_params + num_motives; + let raw_step = list_lookup(spine, base_idx + 1); + let step_w = whnf(raw_step, types, top, addrs); + match is_nat_succ_ih_step(step_w, addrs) { + 0 => (0, store(KExprNode.BVar(0))), + 1 => + let raw_base = list_lookup(spine, base_idx); + let base_w = whnf(raw_base, types, top, addrs); + match try_extract_nat(base_w, addrs) { + (0, _) => (0, store(KExprNode.BVar(0))), + (1, b_klimbs) => + let post = list_drop(spine, major_idx + 1); + (1, apply_spine(mk_nat_lit(klimbs_add(b_klimbs, n_klimbs)), post)), + }, + }, + }, + } + } + + -- 1 iff `step` whnf-shape is `λ _ (λ _ (Nat.succ #0))`. 
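+  -- That is the de Bruijn shape of `fun n ih => Nat.succ ih`: `BVar(0)`
+  -- under the two lambdas is the inner binder `ih`.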
+ fn is_nat_succ_ih_step(step: KExpr, addrs: List‹[G; 32]›) -> G { + match load(step) { + KExprNode.Lam(_, body1) => + match load(body1) { + KExprNode.Lam(_, body2) => + match collect_spine(body2) { + (head, args) => + match load(head) { + KExprNode.Const(idx, _) => + let head_addr = list_lookup(addrs, idx); + match address_eq(head_addr, nat_succ_addr()) { + 0 => 0, + 1 => + match list_length(args) - 1 { + 0 => + match load(list_lookup(args, 0)) { + KExprNode.BVar(i) => eq_zero(i), + _ => 0, + }, + _ => 0, + }, + }, + _ => 0, + }, + }, + _ => 0, + }, + _ => 0, + } + } + + -- Mirror: src/ix/kernel/whnf.rs:1244-1301 try_struct_eta_iota. + -- Fires when major doesn't reduce to ctor BUT inductive is struct-like + -- (1 ctor, 0 indices, non-recursive). Synthesizes ctor_app via field + -- projections from the major. Refuses on Prop-typed structures (Rust + -- whnf.rs:1283 H3 Prop guard via lean4lean toCtorWhenStruct:51). + fn try_struct_eta_iota(spine: List‹KExpr›, + num_params: G, num_motives: G, num_minors: G, + major_idx: G, rules: List‹KRecRule›, + lvls: List‹&KLevel›, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> (G, KExpr) { + let n_rules = list_length(rules); + match n_rules { + 1 => + let rule = list_lookup(rules, 0); + match rule { + KRecRule.Mk(ctor_idx, n_fields, rhs) => + let ctor_ci = load(list_lookup(top, ctor_idx)); + match ctor_ci { + KConstantInfo.Ctor(_, _, induct_idx, _, _, _, _) => + let ind_ci = load(list_lookup(top, induct_idx)); + match ind_ci { + KConstantInfo.Induct(_, _, _, n_indices, ctor_indices, is_rec, _, _, _, _) => + let n_ctors = list_length(ctor_indices); + match n_ctors { + 1 => + match n_indices { + 0 => + match is_rec { + 0 => + let major = list_lookup(spine, major_idx); + -- Prop guard: refuse if major's type is Prop. + let major_ty = k_infer(major, types, top, addrs); + let prop_p = is_prop_type(major_ty, types, top, addrs); + match prop_p { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let rhs_inst = expr_inst_levels(rhs, lvls); + let pmm_end = (num_params + num_motives) + num_minors; + let pmm = list_take(spine, pmm_end); + let after_pmm = apply_spine(rhs_inst, pmm); + let with_projs = apply_n_projs(after_pmm, induct_idx, major, n_fields, 0); + let post_major = list_drop(spine, major_idx + 1); + let result = apply_spine(with_projs, post_major); + (1, result), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + }, + }, + }, + _ => (0, store(KExprNode.BVar(0))), + } + } + + -- Mirror: Lean kernel Quot.lift / Quot.ind iota. + -- Quot.lift α r β f sound (Quot.mk α' r' a) = f a + -- Quot.ind α r motive m (Quot.mk α' r' a) = m a + -- + -- spine layout: + -- Lift: [α, r, β, f, sound, q] (6 args) + -- Ind: [α, r, motive, m, q] (5 args) + -- + -- For both: locate q (last arg), whnf it, expect head Const that's + -- Quot(_, _, Ctor) (= Quot.mk). Extract `a` (last arg of Quot.mk's + -- spine), apply to f or m. 
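+  -- Schematically (implicits elided): `Quot.lift f h (Quot.mk r a)`
+  -- whnf-steps to `f a`; any spine args past the quotient value are
+  -- reapplied afterwards.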
+ fn try_quot_iota(kind: QuotKind, spine: List‹KExpr›, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> (G, KExpr) { + match kind { + QuotKind.Lift => try_quot_lift(spine, types, top, addrs), + QuotKind.Ind => try_quot_ind(spine, types, top, addrs), + _ => (0, store(KExprNode.BVar(0))), + } + } + + fn try_quot_lift(spine: List‹KExpr›, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> (G, KExpr) { + let n = list_length(spine); + let lt6 = u32_less_than(n, 6); + match lt6 { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let f = list_lookup(spine, 3); + let q = list_lookup(spine, 5); + let a_opt = quot_extract_arg(q, types, top, addrs); + match a_opt { + (1, a) => + let post = list_drop(spine, 6); + let result = apply_spine(store(KExprNode.App(f, a)), post); + (1, result), + (0, _) => (0, store(KExprNode.BVar(0))), + }, + } + } + + fn try_quot_ind(spine: List‹KExpr›, types: List‹KExpr›, + top: List‹&KConstantInfo›, addrs: List‹[G; 32]›) -> (G, KExpr) { + let n = list_length(spine); + let lt5 = u32_less_than(n, 5); + match lt5 { + 1 => (0, store(KExprNode.BVar(0))), + 0 => + let m = list_lookup(spine, 3); + let q = list_lookup(spine, 4); + let a_opt = quot_extract_arg(q, types, top, addrs); + match a_opt { + (1, a) => + let post = list_drop(spine, 5); + let result = apply_spine(store(KExprNode.App(m, a)), post); + (1, result), + (0, _) => (0, store(KExprNode.BVar(0))), + }, + } + } + + -- WHNF q; if q reduces to `App-spine(Const(Quot.mk), [α, r, a])`, + -- return (1, a). Else (0, _). + fn quot_extract_arg(q: KExpr, types: List‹KExpr›, top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + let q_whnf = whnf(q, types, top, addrs); + let pair = collect_spine(q_whnf); + match pair { + (head, args) => + match load(head) { + KExprNode.Const(idx, _) => + let ci = load(list_lookup(top, idx)); + match ci { + KConstantInfo.Quot(_, _, kind) => + match kind { + QuotKind.Ctor => + -- Mirror: src/ix/kernel/whnf.rs:2410 Quot.mk has exactly 3 args. + -- Strict equality required: extra spine args are ill-typed + -- and must not silently reduce. + let nargs = list_length(args); + match nargs - 3 { + 0 => + let a = list_lookup(args, 2); + (1, a), + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + } + } + + fn apply_n_projs(head: KExpr, induct_idx: G, major: KExpr, n_fields: G, i: G) -> KExpr { + match n_fields - i { + 0 => head, + _ => + let proj_node = store(KExprNode.Proj(induct_idx, i, major)); + apply_n_projs(store(KExprNode.App(head, proj_node)), induct_idx, major, n_fields, i + 1), + } + } + + -- Mirror: src/ix/kernel/whnf.rs:1315-1380 synth_ctor_when_k. + -- For K-eliminable recursors (1 ctor in Prop, no fields), synthesize + -- a nullary ctor application from the spine's params so iota fires. + -- Returns (1, ctor_app) on success, (0, _) on miss. Extracts ctor + -- universes + params from the major's inferred type (rather than slicing + -- recursor's lvls or spine), then verifies the synthesized ctor's type + -- is def-eq with major's type. 
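+  -- Illustrative: for `Eq.rec` with a stuck major `h : a = b`, the major's
+  -- inferred type `Eq α a b` supplies the universes and params used to
+  -- synthesize `@Eq.refl α a`; the final def-eq check against the major's
+  -- type only passes when `b` is def-eq to `a`, and iota can then fire.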
+ fn try_synth_k_ctor(raw_major: KExpr, num_params: G, + rules: List‹KRecRule›, types: List‹KExpr›, + top: List‹&KConstantInfo›, + addrs: List‹[G; 32]›) -> (G, KExpr) { + match load(rules) { + ListNode.Nil => (0, store(KExprNode.BVar(0))), + ListNode.Cons(rule, _) => + match rule { + KRecRule.Mk(ctor_idx, _, _) => + let ctor_ci = load(list_lookup(top, ctor_idx)); + match ctor_ci { + KConstantInfo.Ctor(_, _, induct_idx, _, _, _, _) => + let ind_ci = load(list_lookup(top, induct_idx)); + match ind_ci { + KConstantInfo.Induct(_, _, _, _, ctor_indices, _, _, _, _, _) => + let n_ctors = list_length(ctor_indices); + match n_ctors { + 0 => (0, store(KExprNode.BVar(0))), + _ => + let first_ctor = list_lookup(ctor_indices, 0); + let major_ty = k_infer(raw_major, types, top, addrs); + let major_ty_w = whnf(major_ty, types, top, addrs); + match collect_spine(major_ty_w) { + (ty_head, ty_args) => + match load(ty_head) { + KExprNode.Const(ty_ind_idx, ctor_us) => + match ty_ind_idx - induct_idx { + 0 => + let ctor_head = store(KExprNode.Const(first_ctor, ctor_us)); + let params = list_take(ty_args, num_params); + let ctor_app = apply_spine(ctor_head, params); + -- Verify: ctor's inferred type def-eq major's type. + let ctor_ty = k_infer(ctor_app, types, top, addrs); + match k_is_def_eq(major_ty_w, ctor_ty, types, top, addrs) { + 1 => (1, ctor_app), + 0 => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + _ => (0, store(KExprNode.BVar(0))), + }, + }, + }, + }, + }, + }, + } + } + + -- ============================================================================ + -- Helpers + -- ============================================================================ + fn list_lookup_or_nil(list: List‹KExpr›, idx: G) -> KExpr { + match load(list) { + ListNode.Nil => store(KExprNode.BVar(0)), + ListNode.Cons(v, rest) => + match idx { + 0 => v, + _ => list_lookup_or_nil(rest, idx - 1), + }, + } + } + + fn ensure_sort_post_whnf(e: KExpr) -> (G, &KLevel) { + match load(e) { + KExprNode.Srt(l) => (1, l), + _ => (0, store(KLevel.Zero)), + } + } + + fn ensure_forall_post_whnf(e: KExpr) -> (G, KExpr, KExpr) { + match load(e) { + KExprNode.Forall(ty, body) => (1, ty, body), + _ => (0, store(KExprNode.BVar(0)), store(KExprNode.BVar(0))), + } + } +⟧ + +end IxVM + +end diff --git a/Ix/IxVM/KernelTypes.lean b/Ix/IxVM/KernelTypes.lean index 25b93d67..515f1b58 100644 --- a/Ix/IxVM/KernelTypes.lean +++ b/Ix/IxVM/KernelTypes.lean @@ -78,7 +78,14 @@ def kernelTypes := ⟦ type KValEnv = List‹KVal› -- ============================================================================ - -- Recursor Rule: (ctor_const_idx, num_fields, rhs) + -- Recursor Rule + -- + -- Mirror: src/ix/kernel/constant.rs::RecRule { ctor, fields, rhs }. + -- Aiur keeps a global ctor idx in the first slot for direct lookup + -- convenience. Could be simplified to (fields, rhs) at the cost of an + -- ingress refactor. + -- + -- Layout: (global_ctor_idx, num_fields, rhs). 
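+  --
+  -- Example (illustrative; `succ_idx` is hypothetical shorthand for
+  -- Nat.succ's global index in `top`): Nat.rec's rule for Nat.succ is
+  --   KRecRule.Mk(succ_idx, 1, rhs)
+  -- since Nat.succ carries a single field; the iota step instantiates
+  -- rhs's levels and applies it to the spine.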
  -- ============================================================================
  enum KRecRule {
@@ -89,27 +96,36 @@ def kernelTypes := ⟦
  -- Constant Info
  --
  -- CIAxiom:  (num_levels, type, is_unsafe)
-  -- CIDefn:   (num_levels, type, value, hints, safety)
+  -- CIDefn:   (num_levels, type, value, safety, hints)
+  --
+  -- `hints` is a packed G encoding of `Lean.ReducibilityHints`:
+  --   0        = Opaque     (never unfold in whnf; lazy-delta-only)
+  --   1 + h    = Regular(h) (h up to 2^32 - 2; height drives lazy-delta priority)
+  --   2^32 - 1 = Abbrev     (always unfold, highest priority in lazy-delta)
+  -- Larger value = higher delta-rank (unfold first); e.g. Regular(7) packs to 8.
+  -- Plumbed from the Lean side under the suffix-tagged IOBuffer key
+  -- `addr ++ [1]`.
  -- CIThm:    (num_levels, type, value)
  -- CIOpaque: (num_levels, type, value, is_unsafe)
  -- CIQuot:   (num_levels, type, kind)
  -- CIInduct: (num_levels, type, num_params, num_indices,
-  --            ctor_indices, is_rec, is_reflexive, is_unsafe)
+  --            ctor_indices, is_rec, is_reflexive, is_unsafe, nested,
+  --            block_addr)
  -- CICtor:   (num_levels, type, induct_idx, cidx,
  --            num_params, num_fields, is_unsafe)
  -- CIRec:    (num_levels, type, num_params, num_indices,
-  --            num_motives, num_minors, rules, k_flag, is_unsafe)
+  --            num_motives, num_minors, rules, k_flag, is_unsafe, block_addr)
+  -- block_addr = address of the Muts wrapper the constant lives in. Used
+  -- by canonical_block_sort to validate recursor-block ordering.
  -- ============================================================================
  enum KConstantInfo {
    Axiom(G, KExpr, G),
-    Defn(G, KExpr, KExpr, DefinitionSafety),
+    Defn(G, KExpr, KExpr, DefinitionSafety, G),
    Thm(G, KExpr, KExpr),
    Opaque(G, KExpr, KExpr, G),
    Quot(G, KExpr, QuotKind),
-    Induct(G, KExpr, G, G, List‹G›, G, G, G),
+    Induct(G, KExpr, G, G, List‹G›, G, G, G, G, [G; 32]),
    Ctor(G, KExpr, G, G, G, G, G),
-    Rec(G, KExpr, G, G, G, G, List‹KRecRule›, G, G)
+    Rec(G, KExpr, G, G, G, G, List‹KRecRule›, G, G, [G; 32])
  }
⟧
diff --git a/Kernel.lean b/Kernel.lean
index fa2634d9..9e1855a1 100644
--- a/Kernel.lean
+++ b/Kernel.lean
@@ -48,17 +48,20 @@ where
    (← IO.getStdout).flush
    let testCase ← kernelCheck name env
    let funIdx := compiled.getFuncIdx testCase.functionName |>.get!
- let (output, ioBuffer, queryCounts) := - compiled.bytecode.execute funIdx testCase.input testCase.inputIOBuffer - if output != testCase.expectedOutput then - IO.eprintln s!"{name}: output mismatch" - return 1 - if ioBuffer != testCase.expectedIOBuffer then - IO.eprintln s!"{name}: IOBuffer mismatch" + match compiled.bytecode.execute funIdx testCase.input testCase.inputIOBuffer with + | .error e => + IO.eprintln s!"{name}: Aiur execution error: {e}" return 1 - let stats := Aiur.computeStats compiled queryCounts - Aiur.printStats stats - pure 0 + | .ok (output, ioBuffer, queryCounts) => + if output != testCase.expectedOutput then + IO.eprintln s!"{name}: output mismatch" + return 1 + if ioBuffer != testCase.expectedIOBuffer then + IO.eprintln s!"{name}: IOBuffer mismatch" + return 1 + let stats := Aiur.computeStats compiled queryCounts + Aiur.printStats stats + pure 0 interpCheck decls name env : IO UInt32 := do IO.println s!"Interpreting {name}" (← IO.getStdout).flush diff --git a/Tests/Aiur/Common.lean b/Tests/Aiur/Common.lean index 682ff653..85c10b67 100644 --- a/Tests/Aiur/Common.lean +++ b/Tests/Aiur/Common.lean @@ -79,28 +79,29 @@ def AiurTestEnv.interpTest (env : AiurTestEnv) (testCase : AiurTestCase) def AiurTestEnv.runTestCase (env : AiurTestEnv) (testCase : AiurTestCase) : TestSeq := let label := testCase.label let funIdx := env.compiled.getFuncIdx testCase.functionName |>.get! - let (execOutput, execIOBuffer, _queryCounts) := env.compiled.bytecode.execute - funIdx testCase.input testCase.inputIOBuffer - let execOutputTest := test s!"Execute output matches for {label}" - (execOutput == testCase.expectedOutput) - let execIOTest := test s!"Execute IOBuffer matches for {label}" - (execIOBuffer == testCase.expectedIOBuffer) - let execTest := execOutputTest ++ execIOTest - let interpTest := - if testCase.interpret then env.interpTest testCase execOutput execIOBuffer - else .done - if testCase.executionOnly then execTest ++ interpTest - else - let (claim, proof, ioBuffer) := env.aiurSystem.prove - friParameters funIdx testCase.input testCase.inputIOBuffer - let claimTest := test s!"Claim matches for {label}" - (claim == Aiur.buildClaim funIdx testCase.input testCase.expectedOutput) - let ioTest := test s!"IOBuffer matches for {label}" - (ioBuffer == testCase.expectedIOBuffer) - let proof := .ofBytes proof.toBytes - let pvTest := withExceptOk s!"Prove/verify works for {label}" - (env.aiurSystem.verify friParameters claim proof) fun _ => .done - execTest ++ interpTest ++ claimTest ++ ioTest ++ pvTest + match env.compiled.bytecode.execute funIdx testCase.input testCase.inputIOBuffer with + | .error e => test s!"Execute succeeds for {label}: {e}" false + | .ok (execOutput, execIOBuffer, _queryCounts) => + let execOutputTest := test s!"Execute output matches for {label}" + (execOutput == testCase.expectedOutput) + let execIOTest := test s!"Execute IOBuffer matches for {label}" + (execIOBuffer == testCase.expectedIOBuffer) + let execTest := execOutputTest ++ execIOTest + let interpTest := + if testCase.interpret then env.interpTest testCase execOutput execIOBuffer + else .done + if testCase.executionOnly then execTest ++ interpTest + else + let (claim, proof, ioBuffer) := env.aiurSystem.prove + friParameters funIdx testCase.input testCase.inputIOBuffer + let claimTest := test s!"Claim matches for {label}" + (claim == Aiur.buildClaim funIdx testCase.input testCase.expectedOutput) + let ioTest := test s!"IOBuffer matches for {label}" + (ioBuffer == testCase.expectedIOBuffer) + let proof := 
.ofBytes proof.toBytes + let pvTest := withExceptOk s!"Prove/verify works for {label}" + (env.aiurSystem.verify friParameters claim proof) fun _ => .done + execTest ++ interpTest ++ claimTest ++ ioTest ++ pvTest def mkAiurTests (toplevelFn : Except Aiur.Global Aiur.Source.Toplevel) (cases : List AiurTestCase) : TestSeq := diff --git a/Tests/Ix/IxVM.lean b/Tests/Ix/IxVM.lean index 2183f10b..ef1df320 100644 --- a/Tests/Ix/IxVM.lean +++ b/Tests/Ix/IxVM.lean @@ -1,65 +1,137 @@ module public import Ix.Meta +public import Ix.IxVM.CheckHarness public import Tests.Aiur.Common +open IxVM.CheckHarness + +/-! # Aiur kernel test fixtures + +Test-purpose declarations exercised by the `ixvm` test runner. Layout: + + * `IxVMPrim` — theorems whose `rfl` proofs force the kernel to drive + each primitive reduction (Nat / String / BitVec / Decidable). + * `IxVMInd` — sample inductives covering mutual blocks and nested + parameters; their auto-generated recursors round-trip through the + Aiur kernel. + * `serdeNatAddComm` — Ixon serialize/deserialize round-trip via the + `ixon_serde_test` Aiur entrypoint. + * `kernelCheck` / `kernelChecks` — single-constant and curated-list + runners against the `kernel_check_test` Aiur entrypoint. +-/ + +/-! ## Primitive reduction theorems -/ + +namespace IxVMPrim + +-- Nat arithmetic (try_nat_dispatch / try_nat_binop_addr) +public theorem nat_add_lit : 100 + 200 = 300 := rfl +public theorem nat_sub_lit : 1000 - 250 = 750 := rfl +public theorem nat_mul_lit : Nat.mul 6 7 = 42 := rfl +public theorem nat_div_lit : Nat.div 100 7 = 14 := rfl +public theorem nat_mod_lit : Nat.mod 100 7 = 2 := rfl +public theorem nat_succ_lit : Nat.succ 41 = 42 := rfl +public theorem nat_pred_lit : Nat.pred 42 = 41 := rfl +public theorem nat_gcd_lit : Nat.gcd 144 60 = 12 := rfl + +-- Nat bitwise ops +public theorem nat_land_lit : Nat.land 0xff 0x0f = 0x0f := rfl +public theorem nat_lor_lit : Nat.lor 0xf0 0x0f = 0xff := rfl +public theorem nat_xor_lit : Nat.xor 0xff 0x0f = 0xf0 := rfl +public theorem nat_shl_lit : Nat.shiftLeft 1 8 = 256 := rfl +public theorem nat_shr_lit : Nat.shiftRight 256 4 = 16 := rfl + +-- Nat predicates (return Bool ctors) +public theorem nat_beq_lit : Nat.beq 42 42 = true := rfl +public theorem nat_ble_lit : Nat.ble 5 10 = true := rfl + +-- Decidable instances (try_reduce_decidable) +public theorem nat_dec_le : decide (5 ≤ 10) = true := rfl +public theorem nat_dec_lt : decide (5 < 10) = true := rfl +public theorem nat_dec_eq : decide (5 = 5 : Prop) = true := rfl + +-- String primitives (try_str_dispatch) +public theorem str_size_lit : "hello".utf8ByteSize = 5 := rfl + +-- BitVec primitives (try_bitvec_dispatch) +public theorem bv_to_nat_lit : (BitVec.ofNat 16 1234).toNat = 1234 := rfl + +end IxVMPrim + +/-! ## Inductive shape fixtures -/ + +namespace IxVMInd + +-- Mutual inductive (true mutual block: Even/Odd reference each other). +mutual + public inductive Even : Nat → Prop where + | zero : Even 0 + | succ : ∀ n, Odd n → Even (n + 1) + public inductive Odd : Nat → Prop where + | succ : ∀ n, Even n → Odd (n + 1) +end + +-- Nested inductive (Tree with List Tree → aux _nested.List_Tree). +public inductive Tree where + | mk : List Tree → Tree + +end IxVMInd + +/-! ## Test runners -/ + +/-- Round-trip `Nat.add_comm`'s Ixon env through the + `ixon_serde_test` entrypoint. 
-/ public def serdeNatAddComm (env : Lean.Environment) : IO AiurTestCase := do - let natAddCommName := ``Nat.add_comm - let constList := Lean.collectDependencies natAddCommName env.constants - let rawEnv ← Ix.CompileM.rsCompileEnvFFI constList - let ixonEnv := rawEnv.toEnv - let ixonConsts := ixonEnv.consts.valuesIter - let (ioBuffer, n) := ixonConsts.fold (init := (default, 0)) fun (ioBuffer, i) c => - let (_, bytes) := Ixon.Serialize.put c |>.run default - (ioBuffer.extend #[.ofNat i] (bytes.data.map .ofUInt8), i + 1) + let ixonEnv ← loadIxonEnv ``Nat.add_comm env + let (ioBuffer, n) := buildSerdeIOBuffer ixonEnv pure { functionName := `ixon_serde_test, label := "Ixon serde test" - input := #[.ofNat n], inputIOBuffer := ioBuffer, expectedIOBuffer := ioBuffer + input := #[.ofNat n], inputIOBuffer := ioBuffer + expectedIOBuffer := ioBuffer interpret := false, executionOnly := true } -public def kernelCheck (name : Lean.Name) (env : Lean.Environment) : IO AiurTestCase := do - let constList := Lean.collectDependencies name env.constants - let rawEnv ← Ix.CompileM.rsCompileEnvFFI constList - let ixonEnv := rawEnv.toEnv - - let mut ioBuffer : Aiur.IOBuffer := default - - -- Store ALL constants (including muts blocks) by Blake3 hash - for (addr, c) in ixonEnv.consts do - let (_, bytes) := Ixon.Serialize.put c |>.run default - let key : Array Aiur.G := addr.hash.data.map .ofUInt8 - ioBuffer := ioBuffer.extend key (bytes.data.map .ofUInt8) - - -- Store each blob: - -- 1. Raw bytes under prefixed key [1] ++ blake3_hash (for on-demand verified loading) - -- 2. Empty sentinel under plain blake3_hash (so io_get_info returns len=0, marking as blob) - for (addr, rawBytes) in ixonEnv.blobs do - let hashKey : Array Aiur.G := addr.hash.data.map .ofUInt8 - let prefixedKey : Array Aiur.G := #[1] ++ hashKey - ioBuffer := ioBuffer.extend prefixedKey (rawBytes.data.map fun b => .ofNat b.toNat) - ioBuffer := ioBuffer.extend hashKey #[] - - -- Get the blake3 address of `name` as the target - let targetAddr := match ixonEnv.getAddr? (Ix.Name.fromLeanName name) with - | some addr => addr - | none => panic! s!"{name} not found in Ixon environment" - let targetAddrBytes : Array Aiur.G := targetAddr.hash.data.map .ofUInt8 - +/-- Build a `kernel_check_test` invocation for `name` with full + transitive checking (`check_deps = 1`). -/ +public def kernelCheck (name : Lean.Name) (env : Lean.Environment) : + IO AiurTestCase := do + let ixonEnv ← loadIxonEnv name env + let ioBuffer := buildKernelCheckIOBuffer ixonEnv + let targetAddrBytes := kernelCheckTarget name ixonEnv pure { functionName := `kernel_check_test, label := s!"Kernel check {name}" - input := targetAddrBytes, inputIOBuffer := ioBuffer, expectedIOBuffer := ioBuffer, + input := targetAddrBytes.push 1, inputIOBuffer := ioBuffer + expectedIOBuffer := ioBuffer interpret := false, executionOnly := true } +/-- Names listed as strings to dodge name-quotation parser issues with + numeric components (e.g. `_private...0...`). 
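+
+    Each string is rebuilt into a `Lean.Name` by `nameOfString` below;
+    e.g. (hypothetical input) `"_private.Init.0.f"` folds to
+    `.mkStr (.mkNum (.mkStr (.mkStr .anonymous "_private") "Init") 0) "f"`,
+    turning the numeric limb into an `.mkNum` component.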
-/ +private def kernelCheckNames : List String := [ + -- Stdlib + "HEq", "HEq.rec", "Eq.rec", + "Nat", "Nat.add", "Nat.add_comm", "Nat.decEq", "Nat.decLe", + "Nat.sub_le_of_le_add", + -- Primitive reduction theorems (`IxVMPrim`) + "IxVMPrim.nat_add_lit", "IxVMPrim.nat_sub_lit", "IxVMPrim.nat_mul_lit", + "IxVMPrim.nat_div_lit", "IxVMPrim.nat_mod_lit", "IxVMPrim.nat_succ_lit", + "IxVMPrim.nat_pred_lit", "IxVMPrim.nat_gcd_lit", + "IxVMPrim.nat_land_lit", "IxVMPrim.nat_lor_lit", "IxVMPrim.nat_xor_lit", + "IxVMPrim.nat_shl_lit", "IxVMPrim.nat_shr_lit", + "IxVMPrim.nat_beq_lit", "IxVMPrim.nat_ble_lit", + "IxVMPrim.nat_dec_le", "IxVMPrim.nat_dec_lt", "IxVMPrim.nat_dec_eq", + "IxVMPrim.str_size_lit", "IxVMPrim.bv_to_nat_lit", + -- Mutual block + multi-member recursors + "IxVMInd.Even", "IxVMInd.Odd", "IxVMInd.Even.rec", "IxVMInd.Odd.rec", + -- Nested inductive + aux recursor (Tree.mk : List Tree → Tree) + "IxVMInd.Tree", "IxVMInd.Tree.rec", + -- Edge cases from prelude + "String.Internal.append", + "_private.Init.Prelude.0.Lean.extractMainModule._unsafe_rec" +] + +private def nameOfString (str : String) : Lean.Name := + str.splitOn "." |>.foldl (init := .anonymous) fun acc s => + match s.toNat? with + | some n => .mkNum acc n + | none => .mkStr acc s + public def kernelChecks (env : Lean.Environment) : IO (List AiurTestCase) := - -- List in strings to prevent instantiation errors (e.g. with numerical limbs) - let constNamesStr := [ - "Nat.add_comm", - "Nat.sub_le_of_le_add", - "String.Internal.append", - "_private.Init.Prelude.0.Lean.extractMainModule._unsafe_rec", - ] - constNamesStr.map nameOfString |>.mapM (kernelCheck · env) -where - nameOfString str := - str.splitOn "." |>.foldl (init := .anonymous) - fun acc s => match s.toNat? with - | some n => .mkNum acc n - | none => .mkStr acc s + kernelCheckNames.map nameOfString |>.mapM (kernelCheck · env) diff --git a/Tests/Ix/Kernel/Arena.lean b/Tests/Ix/Kernel/Arena.lean new file mode 100644 index 00000000..86297e86 --- /dev/null +++ b/Tests/Ix/Kernel/Arena.lean @@ -0,0 +1,118 @@ +/- + Drives the Aiur kernel through every lean-kernel-arena tutorial + fixture (`Tests.Ix.Kernel.TutorialDefs` + `NatReduction`) using the + shared `IxVM.ixVM` toplevel + `kernel_check_test` entrypoint. + + Each fixture's outcome is classified against the test case's expected + outcome (good must typecheck; bad must be rejected via Aiur execution + error, where the error originates from an `assert_eq!` failure inside + the Aiur kernel source). + + Skips: + - test cases registered via `bad_raw_consts` (decls live in + `TutorialMeta.rawConstsExt`, not `env.constants`, so Aiur ingress + can't address them); + - renaming test cases (collision tests, not single-constant + typechecks); + - constants filtered by `compile_env` (ungrounded blocks); + - constants in `knownIncompatible` (meta-only Lean kernel checks the + Aiur kernel structurally cannot see). +-/ +import Ix.Meta +import Ix.Aiur.Protocol +import Ix.Aiur.Compiler +import Ix.IxVM +import Ix.IxVM.CheckHarness +import Tests.Aiur.Common +import Tests.Ix.Kernel.TutorialMeta +import Tests.Ix.Kernel.TutorialDefs +import Tests.Ix.Kernel.NatReduction +import LSpec + +open LSpec +open Tests.Ix.Kernel.TutorialMeta +open IxVM.CheckHarness + +namespace Tests.Ix.Kernel.Arena + +structure ArenaCheck where + name : Lean.Name + expectPass : Bool + +/-- Constants the Aiur kernel structurally cannot adjudicate. Skipped + rather than counted as pass/fail. 
-/ +def knownIncompatible : Array (Lean.Name × String) := #[ + -- Duplicate `levelParams` is a meta-mode hygiene check (Lean's + -- `Level.Param` is name-keyed). Ixon Anon erases the structural + -- duplication pattern (only `lvls : UInt64` count survives) and the + -- Ixon compiler resolves `Param u` via first-occurrence, silently + -- making the second binder dead. Rejection happens only in the Rust + -- kernel via `has_duplicate_level_params` (Meta-mode only). + (`tut06_bad01, + "duplicate levelParams: Anon-mode hygiene check, see src/ix/kernel/check.rs:107"), + -- AdvNat.rec is a malformed raw recursor payload that aux-gen would + -- sanitize before it reaches Ixon. Tests.Ix.Kernel.Tutorial uses a + -- dedicated FFI (`rs_kernel_check_malformed_rec_rule_ixon`) to inject + -- the bad rule post-aux-gen. Standard Lean→Ixon→Aiur path never + -- exposes the malformed rule. + (`Tests.Ix.Kernel.TutorialDefs.AdvNat.rec, + "malformed rec rule sanitized by aux-gen; Tutorial uses bespoke FFI") +] + +private def collectChecks (env : Lean.Environment) : Array ArenaCheck := Id.run do + let skipSet : Std.HashSet Lean.Name := + knownIncompatible.foldl (init := {}) (fun s (n, _) => s.insert n) + let mut out : Array ArenaCheck := #[] + let mut seen : Std.HashSet Lean.Name := {} + for tc in getTestCases env do + if tc.renamings.size > 0 then continue + let pass := tc.outcome == .good + for n in tc.decls do + if seen.contains n then continue + seen := seen.insert n + if !env.constants.contains n then continue + if skipSet.contains n then continue + out := out.push { name := n, expectPass := pass } + return out + +/-- Build the `kernel_check_test` input for `name` against the shared + `ixonEnv`. Returns `error` when `compile_env` filtered the + constant (no Ixon address) — caller treats that as a skip. -/ +private def buildInput (ixonEnv : Ixon.Env) (name : Lean.Name) : + Except String (Array Aiur.G × Aiur.IOBuffer) := + match ixonEnv.getAddr? (Ix.Name.fromLeanName name) with + | none => .error "ungrounded by compile_env" + | some addr => + let ioBuffer := buildKernelCheckIOBufferFor ixonEnv addr + let targetAddrBytes : Array Aiur.G := addr.hash.data.map .ofUInt8 + -- check_deps=0: only the target const is type-checked. Each fixture + -- runs in isolation; we don't need to revalidate every transitive + -- dep N times. + .ok (targetAddrBytes.push 0, ioBuffer) + +/-- Run the arena suite against `compiled` (already-compiled Aiur + `IxVM.ixVM` toplevel) using a single shared Ixon env. Returns one + `TestSeq` entry per fixture. 
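+
+    Usage sketch (as wired in `Tests/Main.lean`'s `ixvm` runner):
+
+      let arenaSeq ← Tests.Ix.Kernel.Arena.arenaTests env aiurEnv.compiled
+
+    where `aiurEnv : AiurTestEnv` is built once from `IxVM.ixVM`.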
-/ +def arenaTests (env : Lean.Environment) + (compiled : Aiur.CompiledToplevel) : IO TestSeq := do + let funIdx ← match compiled.getFuncIdx `kernel_check_test with + | some i => pure i + | none => throw <| IO.userError "kernel_check_test entrypoint missing" + let checks := collectChecks env + let ixonEnv ← loadSharedIxonEnv (checks.map (·.name)) env + let mut tests : TestSeq := .done + for c in checks do + let label := s!"arena {if c.expectPass then "GOOD" else "BAD"} {c.name}" + match buildInput ixonEnv c.name with + | .error reason => tests := tests ++ test s!"{label}: skipped ({reason})" true + | .ok (input, ioBuffer) => + match compiled.bytecode.execute funIdx input ioBuffer with + | .ok _ => + tests := tests ++ + test label (c.expectPass) + | .error e => + tests := tests ++ + test s!"{label} ({e})" (!c.expectPass) + return tests + +end Tests.Ix.Kernel.Arena diff --git a/Tests/Main.lean b/Tests/Main.lean index 00941e36..73eb720d 100644 --- a/Tests/Main.lean +++ b/Tests/Main.lean @@ -13,6 +13,7 @@ import Tests.Ix.Kernel.CheckEnv import Tests.Ix.Kernel.Roundtrip import Tests.Ix.Kernel.RoundtripNoCompile import Tests.Ix.Kernel.Tutorial +import Tests.Ix.Kernel.Arena import Tests.Ix.RustSerialize import Tests.Ix.RustDecompile import Tests.Ix.Sharing @@ -86,13 +87,21 @@ def ignoredRunners (env : Lean.Environment) : List (String × IO UInt32) := [ | IO.eprintln "SHA256 setup failed"; return 1 let r2 ← LSpec.lspecEachIO sha256TestCases fun tc => pure (sha256Env.runTestCase tc) return if r1 == 0 && r2 == 0 then 0 else 1), - -- ixvm tests temporarily disabled while Aiur kernel port lands on ap/kernel - -- ("ixvm", do - -- let kernelUnitTests := .exec `kernel_unit_tests - -- let serdeNatAddCommTest ← serdeNatAddComm env - -- let kernelChecks ← kernelChecks env - -- let tests := [kernelUnitTests, serdeNatAddCommTest] ++ kernelChecks - -- LSpec.lspecIO (.ofList [("ixvm", [mkAiurTests IxVM.ixVM tests])]) []), + ("ixvm", do + let kernelUnitTests := .exec `kernel_unit_tests + let serdeNatAddCommTest ← serdeNatAddComm env + let kernelChecks ← kernelChecks env + let aiurTests := [kernelUnitTests, serdeNatAddCommTest] ++ kernelChecks + -- The arena suite shares the compiled toplevel with the AiurTestCase + -- runs above; build it once here and weave the resulting TestSeq in + -- alongside `mkAiurTests`'s output. 
+    match AiurTestEnv.build IxVM.ixVM with
+    | .error e => IO.eprintln s!"IxVM env build failed: {e}"; return 1
+    | .ok aiurEnv =>
+      let arenaSeq ← Tests.Ix.Kernel.Arena.arenaTests env aiurEnv.compiled
+      let aiurSeq := aiurTests.foldl (init := .done) fun s tc =>
+        s ++ aiurEnv.runTestCase tc
+      LSpec.lspecIO (.ofList [("ixvm", [aiurSeq, arenaSeq])]) []),
  ("rbtree-map", do
    IO.println "rbtree-map"
    match AiurTestEnv.build (pure IxVM.rbTreeMap) with
diff --git a/src/aiur/execute.rs b/src/aiur/execute.rs
index b98a6707..9fec900b 100644
--- a/src/aiur/execute.rs
+++ b/src/aiur/execute.rs
@@ -56,36 +56,105 @@ pub struct IOBuffer {

 impl IOBuffer {
   #[inline]
-  pub(crate) fn get_info(&self, key: &[G]) -> &IOKeyInfo {
-    self.map.get(key).expect("Invalid IO key")
+  pub(crate) fn get_info(&self, key: &[G]) -> Result<&IOKeyInfo, ExecError> {
+    self.map.get(key).ok_or(ExecError::InvalidIOKey)
   }

-  fn set_info(&mut self, key: Vec<G>, idx: usize, len: usize) {
+  fn set_info(
+    &mut self,
+    key: Vec<G>,
+    idx: usize,
+    len: usize,
+  ) -> Result<(), ExecError> {
     let Entry::Vacant(e) = self.map.entry(key) else {
-      panic!("Mapping already set for key");
+      return Err(ExecError::IOMappingAlreadySet);
     };
     e.insert(IOKeyInfo { idx, len });
+    Ok(())
   }

   #[inline]
-  pub(crate) fn read(&self, idx: usize, len: usize) -> &[G] {
-    &self.data[idx..idx + len]
+  pub(crate) fn read(&self, idx: usize, len: usize) -> Result<&[G], ExecError> {
+    self
+      .data
+      .get(idx..idx.saturating_add(len))
+      .ok_or(ExecError::IOReadOutOfBounds { idx, len })
   }

   fn write(&mut self, data: impl Iterator<Item = G>) {
     self.data.extend(data)
   }
 }

+/// Errors raised by Aiur bytecode execution. Mirrors the panic/assert sites
+/// in `Function::execute` so callers (tests, kernel-arena runner) can
+/// distinguish recoverable rejections (`AssertEq`, `MatchNoCase`) from
+/// genuine bytecode bugs.
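+///
+/// Sketch of the intended caller-side classification (hypothetical call
+/// site; the error variants are the ones defined below):
+///
+/// ```ignore
+/// match toplevel.execute(fun_idx, args, &mut io_buffer) {
+///     Ok((record, output)) => { /* accepted: use the trace and output */ }
+///     Err(ExecError::AssertEqMismatch { .. } | ExecError::MatchNoCase(_)) => {
+///         /* expected rejection of an ill-typed constant */
+///     }
+///     Err(e) => panic!("bytecode bug: {e}"),
+/// }
+/// ```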
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub enum ExecError {
+  NotEntryFunction(FunIdx),
+  InvalidMemorySize(usize),
+  UnboundPointer { ptr: u64, size: usize },
+  PointerTooLarge(u64),
+  IndexTooLarge(u64),
+  U32OutOfRange(u64),
+  AssertEqLengthMismatch { lhs: usize, rhs: usize },
+  AssertEqMismatch { lhs: u64, rhs: u64 },
+  MatchNoCase(u64),
+  NoContinuation,
+  StackNotEmpty,
+  InvalidIOKey,
+  IOMappingAlreadySet,
+  IOReadOutOfBounds { idx: usize, len: usize },
+}
+
+impl std::fmt::Display for ExecError {
+  fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+    match self {
+      Self::NotEntryFunction(idx) => {
+        write!(f, "cannot execute non-entry function {idx}")
+      },
+      Self::InvalidMemorySize(s) => write!(f, "invalid memory size {s}"),
+      Self::UnboundPointer { ptr, size } => {
+        write!(f, "unbound pointer {ptr} for memory size {size}")
+      },
+      Self::PointerTooLarge(p) => write!(f, "pointer {p} too large for usize"),
+      Self::IndexTooLarge(i) => write!(f, "index {i} too large for usize"),
+      Self::U32OutOfRange(v) => write!(f, "value {v} out of u32 range"),
+      Self::AssertEqLengthMismatch { lhs, rhs } => {
+        write!(f, "assert_eq length mismatch: lhs={lhs}, rhs={rhs}")
+      },
+      Self::AssertEqMismatch { lhs, rhs } => {
+        write!(f, "assert_eq mismatch: {lhs} != {rhs}")
+      },
+      Self::MatchNoCase(v) => write!(f, "no match case for value {v}"),
+      Self::NoContinuation => write!(f, "yield without continuation"),
+      Self::StackNotEmpty => {
+        write!(f, "exec entries stack not empty at return")
+      },
+      Self::InvalidIOKey => write!(f, "invalid IO key"),
+      Self::IOMappingAlreadySet => write!(f, "IO mapping already set for key"),
+      Self::IOReadOutOfBounds { idx, len } => {
+        write!(f, "IO read out of bounds: idx={idx}, len={len}")
+      },
+    }
+  }
+}
+
+impl std::error::Error for ExecError {}
+
 impl Toplevel {
   pub fn execute(
     &self,
     fun_idx: FunIdx,
     args: Vec<G>,
     io_buffer: &mut IOBuffer,
-  ) -> (QueryRecord, Vec<G>) {
-    assert!(self.functions[fun_idx].entry, "Cannot execute non-entry function");
+  ) -> Result<(QueryRecord, Vec<G>), ExecError> {
+    if !self.functions[fun_idx].entry {
+      return Err(ExecError::NotEntryFunction(fun_idx));
+    }
     let mut record = QueryRecord::new(self);
     let function = &self.functions[fun_idx];
-    let output = function.execute(fun_idx, args, self, &mut record, io_buffer);
-    (record, output)
+    let output =
+      function.execute(fun_idx, args, self, &mut record, io_buffer)?;
+    Ok((record, output))
   }
 }

@@ -114,7 +183,7 @@ impl Function {
     toplevel: &Toplevel,
     record: &mut QueryRecord,
     io_buffer: &mut IOBuffer,
-  ) -> Vec<G> {
+  ) -> Result<Vec<G>, ExecError> {
     let mut exec_entries_stack = vec![];
     let mut callers_states_stack = vec![];
     let mut continuation_stack: Vec<Continuation<G>> = vec![];
@@ -173,8 +242,10 @@
       ExecEntry::Op(Op::Store(values)) => {
         let values = values.iter().map(|v| map[*v]).collect::<Vec<_>>();
         let size = values.len();
-        let memory_queries =
-          record.memory_queries.get_mut(&size).expect("Invalid memory size");
+        let memory_queries = record
+          .memory_queries
+          .get_mut(&size)
+          .ok_or(ExecError::InvalidMemorySize(size))?;
         if let Some(result) = memory_queries.get_mut(&values) {
           if !unconstrained {
             result.multiplicity += G::ONE;
@@ -191,46 +262,61 @@
         }
       },
       ExecEntry::Op(Op::Load(size, ptr)) => {
-        let memory_queries =
-          record.memory_queries.get_mut(size).expect("Invalid memory size");
+        let memory_queries = record
+          .memory_queries
+          .get_mut(size)
+          .ok_or(ExecError::InvalidMemorySize(*size))?;
         let ptr = &map[*ptr];
         let ptr_u64 = ptr.as_canonical_u64();
-        let ptr_usize = usize::try_from(ptr_u64).expect("Pointer is too big");
-        let (args, result) =
-          memory_queries.get_index_mut(ptr_usize).expect("Unbound pointer");
+        let ptr_usize = usize::try_from(ptr_u64)
+          .ok()
+          .ok_or(ExecError::PointerTooLarge(ptr_u64))?;
+        let (args, result) = memory_queries
+          .get_index_mut(ptr_usize)
+          .ok_or(ExecError::UnboundPointer { ptr: ptr_u64, size: *size })?;
         if !unconstrained {
           result.multiplicity += G::ONE;
         }
         map.extend(args);
       },
       ExecEntry::Op(Op::AssertEq(xs, ys)) => {
-        assert_eq!(xs.len(), ys.len());
+        if xs.len() != ys.len() {
+          return Err(ExecError::AssertEqLengthMismatch {
+            lhs: xs.len(),
+            rhs: ys.len(),
+          });
+        }
         for (x, y) in xs.iter().zip(ys) {
-          assert_eq!(map[*x], map[*y]);
+          let lhs = map[*x];
+          let rhs = map[*y];
+          if lhs != rhs {
+            return Err(ExecError::AssertEqMismatch {
+              lhs: lhs.as_canonical_u64(),
+              rhs: rhs.as_canonical_u64(),
+            });
+          }
         }
       },
       ExecEntry::Op(Op::IOGetInfo(key)) => {
         let key = key.iter().map(|v| map[*v]).collect::<Vec<_>>();
-        let IOKeyInfo { idx, len } = io_buffer.get_info(&key);
+        let IOKeyInfo { idx, len } = io_buffer.get_info(&key)?;
         map.push(G::from_usize(*idx));
         map.push(G::from_usize(*len));
       },
       ExecEntry::Op(Op::IOSetInfo(key, idx, len)) => {
         let key = key.iter().map(|v| map[*v]).collect::<Vec<_>>();
         let get = |x: &usize| {
-          map[*x]
-            .as_canonical_u64()
-            .try_into()
-            .expect("Index is too big for an usize")
+          let v = map[*x].as_canonical_u64();
+          usize::try_from(v).ok().ok_or(ExecError::IndexTooLarge(v))
         };
-        io_buffer.set_info(key, get(idx), get(len));
+        io_buffer.set_info(key, get(idx)?, get(len)?)?;
       },
       ExecEntry::Op(Op::IORead(idx, len)) => {
-        let idx = map[*idx]
-          .as_canonical_u64()
-          .try_into()
-          .expect("Index is too big for an usize");
-        let data = io_buffer.read(idx, *len);
+        let idx_val = map[*idx].as_canonical_u64();
+        let idx = usize::try_from(idx_val)
+          .ok()
+          .ok_or(ExecError::IndexTooLarge(idx_val))?;
+        let data = io_buffer.read(idx, *len)?;
         map.extend(data);
       },
       ExecEntry::Op(Op::IOWrite(data)) => {
@@ -307,12 +393,12 @@
       ExecEntry::Op(Op::U32LessThan(x_idx, y_idx)) => {
-        let a_val = map[*x_idx];
-        let b_val = map[*y_idx];
+        let a_val = map[*x_idx].as_canonical_u64();
+        let b_val = map[*y_idx].as_canonical_u64();
         let a_u32 =
-          u32::try_from(a_val.as_canonical_u64()).expect("Out of range");
+          u32::try_from(a_val).ok().ok_or(ExecError::U32OutOfRange(a_val))?;
         let b_u32 =
-          u32::try_from(b_val.as_canonical_u64()).expect("Out of range");
+          u32::try_from(b_val).ok().ok_or(ExecError::U32OutOfRange(b_val))?;
         let result = G::from_bool(a_u32 < b_u32);
         map.push(result);
         if !unconstrained {
@@ -349,7 +435,9 @@
         if let Some(block) = cases.get(val) {
           push_block_exec_entries!(block);
         } else {
-          let default = default.as_ref().expect("No match");
+          let default = default
+            .as_ref()
+            .ok_or_else(|| ExecError::MatchNoCase(val.as_canonical_u64()))?;
           push_block_exec_entries!(default);
         }
       },
@@ -370,12 +458,15 @@
         if let Some(block) = cases.get(val) {
           push_block_exec_entries!(block);
         } else {
-          let default = default.as_ref().expect("No match");
+          let default = default
+            .as_ref()
+            .ok_or_else(|| ExecError::MatchNoCase(val.as_canonical_u64()))?;
           push_block_exec_entries!(default);
         }
       },
       ExecEntry::Ctrl(Ctrl::Yield(_, output)) => {
-        let cont = continuation_stack.pop().expect("No continuation");
+        let cont =
+          continuation_stack.pop().ok_or(ExecError::NoContinuation)?;
         let yielded: Vec<G> = output.iter().map(|&v| map[v]).collect();
         map.truncate(cont.map_len);
         map.extend(yielded);
@@ -405,14 +496,16 @@ impl Function {
            unconstrained = caller_unconstrained;
          } else {
            continuation_stack.clear();
-            assert!(exec_entries_stack.is_empty());
+            if !exec_entries_stack.is_empty() {
+              return Err(ExecError::StackNotEmpty);
+            }
            map = output;
            break;
          }
        },
      }
    }
-    map
+    Ok(map)
  }
}
diff --git a/src/aiur/synthesis.rs b/src/aiur/synthesis.rs
index 6699c811..f5fbaf0a 100644
--- a/src/aiur/synthesis.rs
+++ b/src/aiur/synthesis.rs
@@ -116,9 +116,12 @@ impl AiurSystem {
     input: &[G],
     io_buffer: &mut IOBuffer,
   ) -> (Vec<G>, Proof) {
-    // Execute the Aiur bytecode.
-    let (query_record, output) =
-      self.toplevel.execute(fun_idx, input.to_vec(), io_buffer);
+    // Execute the Aiur bytecode. The prover assumes inputs are valid; any
+    // execution error here is a programmer bug, so we unwrap.
+    let (query_record, output) = self
+      .toplevel
+      .execute(fun_idx, input.to_vec(), io_buffer)
+      .expect("Aiur execution failed during prove");

     // Build the `SystemWitness`
     let functions =
diff --git a/src/aiur/trace.rs b/src/aiur/trace.rs
index 36b37656..5cc6f9f6 100644
--- a/src/aiur/trace.rs
+++ b/src/aiur/trace.rs
@@ -361,7 +361,8 @@ impl Op {
      },
      Op::IOGetInfo(key) => {
        let key = key.iter().map(|a| map[*a].0).collect::<Vec<_>>();
-        let IOKeyInfo { idx, len } = io_buffer.get_info(&key);
+        let IOKeyInfo { idx, len } =
+          io_buffer.get_info(&key).expect("Invalid IO key");
        for f in [G::from_usize(*idx), G::from_usize(*len)] {
          map.push((f, 1));
          slice.push_auxiliary(index, f);
@@ -373,7 +374,7 @@
          .as_canonical_u64()
          .try_into()
          .expect("Index is too big for an usize");
-        for &f in io_buffer.read(idx, *len) {
+        for &f in io_buffer.read(idx, *len).expect("IO read out of bounds") {
          map.push((f, 1));
          slice.push_auxiliary(index, f);
        }
diff --git a/src/ffi/aiur/protocol.rs b/src/ffi/aiur/protocol.rs
index 4e3352e4..c6a1f9fe 100644
--- a/src/ffi/aiur/protocol.rs
+++ b/src/ffi/aiur/protocol.rs
@@ -87,7 +87,10 @@
 }

 /// `Bytecode.Toplevel.execute`: runs execution only (no proof) and returns
-/// `Array G × Array G × Array (Array G × IOKeyInfo)`
+/// `Except String (Array G × (Array G × Array (Array G × IOKeyInfo)) × Array Nat)`.
+/// On execution failure (e.g. an assertion mismatch raised while the
+/// typechecker rejects a constant), returns `Except.error msg` instead of
+/// panicking, letting Lean test runners (`Tests/Ix/Kernel/Arena.lean`)
+/// classify failures.
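+///
+/// Lean-side consumption (sketch; mirrors the call sites in `Kernel.lean`
+/// and `Tests/Aiur/Common.lean`):
+///
+/// ```text
+/// match compiled.bytecode.execute funIdx input ioBuffer with
+/// | .ok (output, ioBuffer, queryCounts) => ...
+/// | .error msg => ...
+/// ```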
#[unsafe(no_mangle)] extern "C" fn rs_aiur_toplevel_execute( toplevel: LeanAiurToplevel>, @@ -95,13 +98,19 @@ extern "C" fn rs_aiur_toplevel_execute( args: LeanArray>, io_data_arr: LeanArray>, io_map_arr: LeanArray>, -) -> LeanOwned { +) -> LeanExcept { let toplevel = decode_toplevel(&toplevel); let fun_idx = lean_unbox_nat_as_usize(fun_idx.inner()); let mut io_buffer = decode_io_buffer(&io_data_arr, &io_map_arr); - let (query_record, output) = - toplevel.execute(fun_idx, args.map(|x| lean_unbox_g(&x)), &mut io_buffer); + let (query_record, output) = match toplevel.execute( + fun_idx, + args.map(|x| lean_unbox_g(&x)), + &mut io_buffer, + ) { + Ok(pair) => pair, + Err(err) => return LeanExcept::error_string(&err.to_string()), + }; // Build query counts: one per function, then one per memory size let mut query_counts: Vec = Vec::with_capacity( @@ -130,7 +139,7 @@ extern "C" fn rs_aiur_toplevel_execute( // (Array G, (Array G × Array (Array G × IOKeyInfo), Array Nat)) let io_counts = LeanProd::new(lean_io, lean_query_counts); let result = LeanProd::new(build_g_array(&output), io_counts); - result.into() + LeanExcept::ok(result) } /// `AiurSystem.prove`: runs the prover and returns