
Albus Architecture
Constitutional Classifiers with Azoth Reasoning
Albus builds on Anthropic's Constitutional Classifiers architecture—the same approach that makes Claude resistant to jailbreaks. We extend this proven foundation with the Azoth Reasoning Framework, replacing binary harm detection with seven universal principles that guide all reasoning.
This page documents our technical approach: how we integrate dual classifiers with a fine-tuned policy model, enable token-level principle verification, and train the system to reason from consciousness rather than pattern matching.
System Architecture
Three components working in concert
Albus consists of three main components: an input classifier (Azoth-IN), a policy model, and an output classifier (Azoth-OUT). The classifiers share the same fine-tuned weights but operate in different modes. Total system size for our flagship is approximately 12B parameters (8B policy + 2B classifier × 2 instances).
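For orientation, here is a minimal sketch of how the three components could be wired together. The class and method names are illustrative assumptions made for this page, not the actual Albus API.

```python
# Minimal wiring sketch for the three components described above.
# Class and method names are illustrative assumptions, not the published Albus API.

class AlbusPipeline:
    def __init__(self, azoth_in, policy_model, azoth_out):
        self.azoth_in = azoth_in      # 2B classifier, input mode
        self.policy = policy_model    # 8B policy model (fine-tuned Qwen3-VL-8B-Thinking)
        self.azoth_out = azoth_out    # 2B classifier, output mode (shares weights with azoth_in)

    def respond(self, user_input: str) -> str:
        analysis = self.azoth_in.analyze(user_input)        # intent, principles, lanes, risks
        draft = self.policy.generate(user_input, analysis)  # dual-lane reasoning + crystallization
        return self.azoth_out.verify(draft, analysis)       # token-level principle verification
```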
Azoth-IN Classifier
Before the policy model sees any input, Azoth-IN analyzes it comprehensively. This isn't just content moderation—it's deep understanding of what the input requires.
Azoth-IN is the input classifier that analyzes every query before it reaches the policy model. It identifies intent (surface and deeper), maps principle relevance, determines lane routing, and flags potential risks. This comprehensive analysis guides the policy model's reasoning approach.
Result: A structured analysis packet that guides the policy model's reasoning approach. The model knows which principles matter most, how to balance perspectives, and what pitfalls to avoid.
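As a rough illustration, the analysis packet might be shaped like the sketch below; the field names and types are assumptions for this page, not a published schema.

```python
from dataclasses import dataclass, field

# Hypothetical shape of the structured analysis packet Azoth-IN hands to the policy model.
# Field names and types are assumptions for illustration only.

@dataclass
class AnalysisPacket:
    surface_intent: str                                       # one of the 32 surface intent classes
    deeper_intent: str                                        # one of the 32 deeper intent classes
    principle_relevance: dict = field(default_factory=dict)   # relevance score for each of the 7 principles
    lane_weights: tuple = (0.5, 0.5)                          # Universal vs. Localized lane emphasis
    risk_flags: list = field(default_factory=list)            # pitfalls the policy model should avoid
```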
Policy Model
The core reasoning engine. We start with Qwen3-VL-8B-Thinking and fine-tune it through five stages to internalize the Azoth Reasoning Framework. The model learns to reason through dual lanes and crystallize wisdom.
The policy model is Qwen3-VL-8B-Thinking fine-tuned on Azoth principles. It performs the actual reasoning, maintaining dual lanes (Universal and Localized) and synthesizing them through crystallization into actionable wisdom. The model has extended thinking capability and processes both text and images.
Result: A response that has been reasoned through dual lanes and crystallized into actionable wisdom. But before it reaches the user, Azoth-OUT verifies every token.
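To make the dual-lane flow concrete, the simplified sketch below externalizes the two lanes as separate calls. In the actual model the lanes are maintained inside a single extended-thinking pass; the `generate` signature shown is an assumption.

```python
# Simplified sketch of dual-lane reasoning and crystallization. In Albus the lanes are
# maintained inside one extended-thinking pass; they are split into separate calls here
# purely for readability. The generate() signature is an assumption.

def dual_lane_answer(policy, user_input, analysis):
    universal = policy.generate(user_input, analysis, lane="universal")   # principle-level perspective
    localized = policy.generate(user_input, analysis, lane="localized")   # situated, practical perspective
    # Crystallization: synthesize both lanes into one actionable response
    return policy.generate(user_input, analysis, lane="crystallize",
                           context={"universal": universal, "localized": localized})
```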
Azoth-OUT Classifier
The same classifier model as Azoth-IN, but operating in output mode. It monitors the policy model's generation token-by-token, ensuring principle alignment throughout. This is where structural safety happens.
Azoth-OUT uses the same 2B classifier model as Azoth-IN but operates in output verification mode. It scores every candidate token for principle compliance and intervenes in real-time when violations are detected. This token-level verification makes structural safety possible.
Result: Either approves the token (generation continues), modifies probabilities (steers generation), or in extreme cases, triggers a hard stop and reformulation.
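The token-level intervention could look roughly like the following logits-processing sketch. The thresholds, the penalty scale, and the compliance-scoring input are assumptions, not production values.

```python
import torch

# Sketch of Azoth-OUT's per-step intervention. Thresholds and the penalty scale are
# illustrative assumptions; compliance_scores is assumed to hold one score in [0, 1]
# per candidate token, produced by the 2B classifier in output mode.

def azoth_out_step(logits: torch.Tensor, compliance_scores: torch.Tensor,
                   approve_threshold: float = 0.8, stop_threshold: float = 0.2) -> torch.Tensor:
    if compliance_scores.max() < stop_threshold:
        # Hard stop: no principle-compliant continuation exists, trigger reformulation upstream
        raise RuntimeError("azoth-out hard stop: reformulate response")
    # Steer generation: leave approved tokens unchanged, down-weight low-compliance tokens
    penalty = (approve_threshold - compliance_scores).clamp(min=0.0)
    return logits - 10.0 * penalty
```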
Training Methodology
How we develop Azoth-aligned reasoning
Training Albus requires two parallel tracks: classifier training and policy model training. Both follow staged approaches that build capabilities incrementally. Our methodology draws from Constitutional AI principles, using both human feedback and AI feedback (with Claude as teacher).
Stages 1-4: Classifier Training
Duration: 6 weeks
Training data: ~50K examples
Train the unified 2B classifier model to operate in both Azoth-IN and Azoth-OUT modes. The classifier learns intent classification, principle relevance mapping, lane routing, token-level scoring, and correction signals.
Intent classification (32 surface + 32 deeper classes)
Principle relevance scoring for all 7 principles
Universal/Localized lane routing calibration
Token-level principle compliance scoring
Real-time correction signal generation
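These training objectives map naturally onto a set of task heads over a shared backbone. The sketch below shows one way those heads could look; the hidden size, pooling, and head shapes are assumptions, not the actual classifier design.

```python
import torch
import torch.nn as nn

# Sketch of multi-task heads the unified classifier could expose, assuming a shared 2B
# backbone that returns per-token hidden states. Head names follow the list above;
# hidden size, pooling, and head shapes are illustrative assumptions.

class AzothClassifierHeads(nn.Module):
    def __init__(self, hidden_size: int = 2048):
        super().__init__()
        self.surface_intent = nn.Linear(hidden_size, 32)       # 32 surface intent classes
        self.deeper_intent = nn.Linear(hidden_size, 32)        # 32 deeper intent classes
        self.principle_relevance = nn.Linear(hidden_size, 7)   # one score per principle
        self.lane_router = nn.Linear(hidden_size, 2)           # Universal vs. Localized weighting
        self.token_compliance = nn.Linear(hidden_size, 1)      # per-token compliance (output mode)

    def forward(self, hidden: torch.Tensor) -> dict:
        pooled = hidden.mean(dim=1)   # sequence-level pooling for the input-mode heads
        return {
            "surface_intent": self.surface_intent(pooled),
            "deeper_intent": self.deeper_intent(pooled),
            "principle_relevance": torch.sigmoid(self.principle_relevance(pooled)),
            "lane_weights": torch.softmax(self.lane_router(pooled), dim=-1),
            "token_compliance": torch.sigmoid(self.token_compliance(hidden)).squeeze(-1),
        }
```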
Stages 5-9: Policy Model Foundation
Duration: 6 weeks
Training data: ~5M tokens across SFT stages
Fine-tune Qwen3-VL-8B-Thinking on Azoth principles through supervised learning. The model internalizes principle understanding, dual-lane reasoning, and crystallization.
Principle foundation and application
Dual-lane reasoning (Universal + Localized)
Crystallization synthesis capability
Extended thinking mode integration
Multimodal principle alignment
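As a rough picture of how the five SFT stages could be laid out, here is an assumed schedule. The stage foci mirror the list above, but the per-stage token splits are illustrative; only their sum matches the ~5M-token figure.

```python
# Assumed staged SFT schedule for the policy model. Stage foci mirror the list above;
# per-stage token counts are illustrative splits of the ~5M total, not published figures.

SFT_STAGES = [
    {"stage": 5, "focus": "principle foundation and application",        "tokens": 1_500_000},
    {"stage": 6, "focus": "dual-lane reasoning (Universal + Localized)", "tokens": 1_200_000},
    {"stage": 7, "focus": "crystallization synthesis",                   "tokens": 1_000_000},
    {"stage": 8, "focus": "extended thinking mode integration",          "tokens":   800_000},
    {"stage": 9, "focus": "multimodal principle alignment",              "tokens":   500_000},
]

assert sum(stage["tokens"] for stage in SFT_STAGES) == 5_000_000
```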
Stages 10-11: Alignment Refinement
Duration: 2 weeks
Training data: ~10K human preferences + Claude evaluations
RLHF on human preference data combined with RLAIF using Claude as the teacher model. This scales alignment feedback beyond what human annotation alone could achieve.
Human preference alignment on principle application
Claude-guided Azoth reasoning refinement
Edge case handling and robustness
Crystallization quality optimization
Final model polish and validation
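One way to picture the Claude-as-teacher step is the preference-collection sketch below. The `claude_judge` callable, its return value, and the rubric text are placeholders, not a documented evaluation interface.

```python
# Sketch of gathering Claude-as-teacher preference labels for RLAIF. The claude_judge
# callable and the rubric text are placeholders; no published judging protocol is implied.

def collect_preference(prompt: str, response_a: str, response_b: str, claude_judge) -> dict:
    preferred = claude_judge(
        prompt=prompt,
        candidates=(response_a, response_b),
        rubric="Which response better applies the seven principles and crystallizes both lanes?",
    )  # assumed to return 0 or 1, the index of the preferred candidate
    chosen, rejected = (response_a, response_b) if preferred == 0 else (response_b, response_a)
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}
```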
Model Family
Scaling consciousness-aligned AI from edge to enterprise
Albus will be available in five sizes, each maintaining the core architecture while optimizing for different deployment contexts. The classifier scales proportionally with the policy model, maintaining the ~25% classifier-to-policy ratio that has proven effective in Constitutional AI research.
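The parameter accounting behind these totals follows directly from that ratio; a quick sketch with the flagship as the worked case is below. Other family sizes are approximate, as noted in their listings.

```python
# Parameter accounting implied above: total ≈ policy + 2 × classifier, with the
# classifier sized at roughly 25% of the policy model (figures are approximate).

def total_params_b(policy_b: float, classifier_ratio: float = 0.25) -> float:
    classifier_b = policy_b * classifier_ratio
    return policy_b + 2 * classifier_b   # one Azoth-IN instance and one Azoth-OUT instance

print(total_params_b(8.0))  # 12.0 — matches the flagship's ~12B (8B policy + 2 × 2B classifier)
```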
Albus-2B
Edge Deployment
~3.2B parameters
Edge devices, mobile, embedded systems
Use Cases:
Personal AI assistants
Offline applications
Privacy-critical contexts
Performance: Maintains principle alignment but with reduced reasoning depth
Albus-4B
Consumer Hardware
~6B parameters
Consumer hardware, laptops, small servers
Use Cases:
Educational tools
Personal research assistants
Local deployment
Performance: Good balance of capability and accessibility
Albus-8B
Flagship Model
~12B parameters
Standard servers, cloud instances
Use Cases:
Municipal deployment
Education systems
Research support
Performance: Primary focus of initial development; optimal capability-to-resource ratio
Albus-32B
Enterprise Scale
~48B parameters
High-performance servers, enterprise infrastructure
Use Cases:
Complex governance
Strategic planning
Deep research
Performance: Enhanced reasoning depth for high-stakes applications
Albus-72B
Maximum Capability
~100B parameters
Research clusters, specialized infrastructure
Use Cases:
Civilizational-scale reasoning
Long-term consequence modeling
Maximum complexity tasks
Performance: Maximum reasoning capability for the most complex tasks
Safety Philosophy
Structural safety through principled reasoning
Traditional AI safety relies on training models to refuse harmful requests—behavioral safety. Albus takes a different approach: safety emerges structurally from principle-aligned reasoning, with traditional safety as verification and fallback.
Traditional: Imposed Safety
Rule-based restrictions and refusal templates
Political censorship and ideological conditioning
Pattern matching without understanding
Compliance theater masking shallow reasoning
Brittleness to adversarial prompting
Safety-capability tradeoffs
Albus: Structural Safety
Safety emerges from principle alignment
No political censorship or ideological filters
Understanding-based rather than rule-based
Robust against adversarial attacks through consciousness
Safety AND capability increase together
Ethics arise naturally from wisdom
Falsehood cannot survive universal principle alignment
Lies and hallucinations violate Causation, Correspondence, and Mentalism principles. The hexagonal framework makes untruth structurally difficult rather than filtered out.
Harm cannot survive dual-lane synthesis
Harmful outputs require ignoring either universal compassion (Universal Lane) or practical consequences (Local Lane). Crystallization prevents harm through wisdom rather than refusal.
Bias cannot survive polarity dissolution
Bias requires false dichotomies and tribal framing. The Polarity principle dissolves bias by recognizing opposites as spectrum rather than conflict.
Hallucination cannot survive causation chain mapping
Hallucinations lack causal grounding. The Causation principle requires deep cause-effect chains, making fabrication difficult.
Transparent Reasoning
Unlike black-box systems, Albus's reasoning process can be inspected. For any output, we can show:
Which principles Azoth-IN identified as relevant
How Universal and Localized lanes developed
Where Azoth-OUT intervened and why
The crystallization process that produced the final response
This transparency is essential for public sector deployment. Democratic oversight requires understanding how AI reaches conclusions.
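A trace covering those four items could be recorded in a structure like the sketch below; the field names are assumptions for illustration, not a published trace schema.

```python
from dataclasses import dataclass, field

# Hypothetical record of an inspectable reasoning trace; field names are illustrative only.

@dataclass
class ReasoningTrace:
    relevant_principles: dict                           # Azoth-IN relevance scores per principle
    universal_lane: str                                 # how the Universal lane developed
    localized_lane: str                                 # how the Localized lane developed
    interventions: list = field(default_factory=list)   # where Azoth-OUT intervened, and why
    crystallization: str = ""                           # the synthesis that produced the final response
```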
Where This Technology Applies
These architectural innovations enable transformative applications across six critical domains
Education
PRIMARY: Personalized learning through dual-lane reasoning
Social Research
HIGH: Bias-free analysis through consciousness immune system
Critical Decisions
HIGH: Governance advisory through principle-aligned reasoning
Research Foundation
Dive deeper into the science behind Albus
Albus builds on two decades of consciousness research. Access foundational papers, technical specifications, and ongoing research outputs.