"Intelligence is not defined by scale, but by the purity of its logic."
We are a specialized research collective focused on the frontier of Small Language Models (SLMs). Our mission is to democratize high-level reasoning by building models powerful enough to understand the world, yet small enough to run on-device, 100% offline.
Our Horizon-Zero series represents models trained from a "tabula rasa" (blank slate). We don't just fine-tune; we architect the weights from step zero, optimizing for deep semantic understanding within sub-1B-parameter constraints.
The Axiom-Free series focuses on "Sanctified Intelligence." These models undergo rigorous filtering and alignment to ensure they are safe, professional, and suitable for environments requiring the highest level of content purity.
The Horizon-Axiom initiative is dedicated to the art of surgical refinement. We take high-potential Small Language Models and subject them to our proprietary fine-tuning pipelines. By optimizing weight distribution and enhancing latent reasoning patterns, we push the mathematical boundaries of what sub-7B models can achieve, transforming raw silicon into precision instruments.
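The proprietary pipelines themselves are not described here, but the general idea of surgical refinement — updating a small, targeted set of parameters while leaving the pretrained weights frozen — can be illustrated with a generic low-rank adapter (LoRA-style) sketch in plain PyTorch. Everything below (class names, ranks, dimensions) is an illustrative assumption, not the collective's actual method:

```python
# Generic low-rank adapter sketch: freeze the base weights, train only a
# small low-rank update. Illustrative only -- not a proprietary pipeline.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable update: y = Wx + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weights stay untouched
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: no change at step 0
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Tiny demo: adapt a single 16x16 projection layer.
torch.manual_seed(0)
layer = LoRALinear(nn.Linear(16, 16))
x = torch.randn(2, 16)
out = layer(x)
print(out.shape)  # the adapter preserves the layer's output shape
```

Because `B` is zero-initialized, the adapted layer starts out numerically identical to the frozen base layer; training then nudges only the small `A`/`B` matrices, which is what makes this kind of refinement cheap enough to apply to sub-7B models on modest hardware.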
Optimization is our craft. Every model we release is converted and tested natively for local inference engines such as LM Studio and for private RAG systems.
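To make "private RAG" concrete, here is a minimal, stdlib-only retrieval sketch: documents and queries are scored locally, with nothing leaving the machine. The bag-of-words similarity, the sample documents, and the `retrieve` helper are all illustrative assumptions — a real deployment would pair an on-device embedding model with one of the SLMs described above:

```python
# Minimal local retrieval sketch for a private RAG setup (stdlib only).
# All documents, queries, and function names here are illustrative placeholders.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding'; a real system would use a local embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "small language models run fully offline on device",
    "fine tuning adjusts weight distribution for reasoning",
    "retrieval augmented generation grounds answers in local documents",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query, scored entirely locally."""
    return sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)[:k]

print(retrieve("offline on device models"))
```

The retrieved passages would then be prepended to the prompt of a locally served model, keeping the whole question-answering loop offline.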