Rationale
Sheaf started with a simple observation: modern machine learning did not emerge from a language designed for it.
Python became dominant because it was accessible. Over time, layers of frameworks accumulated to express what the language itself could not: computation graphs, differentiation, vectorization, and compilation. The result works. However, much of the mathematical structure of models is now expressed indirectly, through imperative control flow and auxiliary plumbing.
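A short JAX sketch makes this layering concrete. The loss function and shapes below are illustrative; the point is that differentiation, vectorization, compilation, and the computation graph itself all come from framework transformations applied to Python code, not from the language:

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # The model's mathematics, written as ordinary Python.
    return jnp.mean((jnp.dot(x, w) - y) ** 2)

# Structure the language cannot express is recovered by the framework:
grad_fn = jax.grad(loss)                 # differentiation
batched = jax.vmap(loss, (None, 0, 0))   # vectorization
fast    = jax.jit(loss)                  # compilation

# The computation graph only becomes visible by tracing the function:
print(jax.make_jaxpr(loss)(jnp.ones(3), jnp.ones((4, 3)), jnp.ones(4)))
```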
This stands in contrast with the Lisp lineage. For decades, Lisp served as an environment for both symbolic and connectionist artificial intelligence research, from early systems on DEC PDP-10 machines to neural network frameworks such as LeCun’s Lush.
Across these domains, Lisp offered something essential: a way to represent computation explicitly. In this tradition, code is data, a property known as homoiconicity: programs are ordinary data structures that a system can inspect, transform, and compose. This made Lisp particularly well suited to building systems that operate on their own representations.
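Python can only simulate this idea, but the sketch below (using nested tuples as stand-in S-expressions; the representation and names are illustrative) shows what it means for a program to be a value that other code can transform:

```python
# An expression represented as plain data, Lisp-style: (op, arg, arg)
expr = ("add", ("mul", "x", "x"), ("mul", 3.0, "x"))  # x*x + 3x

def diff(e, var):
    """Symbolic derivative of an s-expression with respect to var."""
    if e == var:
        return 1.0
    if not isinstance(e, tuple):
        return 0.0           # a constant
    op, a, b = e
    if op == "add":
        return ("add", diff(a, var), diff(b, var))
    if op == "mul":          # product rule
        return ("add", ("mul", diff(a, var), b), ("mul", a, diff(b, var)))
    raise ValueError(op)

# The program is a value: inspected and transformed like any other data.
print(diff(expr, "x"))       # yields a new expression built from the first
```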
Sheaf aims to bring homoiconicity to modern machine learning, and with it the ability for a model to remain plain data: something that can observe and modify itself with the same tools it uses to manipulate tensors.
The technical inspiration for Sheaf came from Clojure. Clojure demonstrated that a modern Lisp can coexist with a dominant ecosystem rather than replace it, leveraging the host runtime directly. Similarly, Sheaf was envisioned as a functional layer for model description that delegates numerical execution to JAX. Its syntax is inspired by Clojure, adapted for tensor operations.
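This is not Sheaf's actual syntax or API, but a hypothetical sketch of the architecture: a model described as data, reduced against JAX primitives, so that gradients and compilation come for free from the host while the description stays inspectable:

```python
import jax
import jax.numpy as jnp

# Hypothetical: symbols map to JAX primitives; the table is illustrative.
OPS = {"add": jnp.add, "mul": jnp.multiply, "tanh": jnp.tanh}

def evaluate(form, env):
    """Reduce an s-expression to a JAX value; symbols resolve via env."""
    if isinstance(form, str):
        return env[form]            # a symbol
    if not isinstance(form, tuple):
        return form                 # a literal
    op, *args = form
    return OPS[op](*(evaluate(a, env) for a in args))

# A one-neuron model as data: tanh(w*x + b)
model = ("tanh", ("add", ("mul", "w", "x"), "b"))

def loss(params, x, y):
    out = evaluate(model, {"w": params["w"], "b": params["b"], "x": x})
    return jnp.mean((out - y) ** 2)

# Because evaluation bottoms out in JAX, the model remains data while
# differentiation is delegated to the host runtime.
grads = jax.grad(loss)({"w": 0.5, "b": 0.1}, jnp.ones(4), jnp.zeros(4))
```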
Sheaf does not aim to replace existing AI tooling, and never will. The goal is to provide a space where a model's high-level representation and its machine-manipulable data structure are one and the same.