Models
All models are open source on Hugging Face and were trained on our GPU cluster.
AgentGuard-2.8B
Local AI Agent Security via Mamba-2
Mamba-2 SSM fine-tuned to detect prompt injection, exfiltration, and tool-call hijacking in AI agent sessions. Runs as a local sidecar with O(1) memory — monitors arbitrarily long agent trajectories without truncation.
CBD-LLM-PoC-V1
Causal Block Diffusion for Large Language Models - Proof of Concept v1
Hybrid diffusion architecture enabling block-parallel text generation while retaining standard causal attention and KV caching.
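A toy sketch of the block-causal generation loop, under the assumption that blocks attend causally to all previous blocks (whose keys/values can be cached) while tokens within a block are produced in parallel over a few refinement steps. Token names and the refinement rule are placeholders, not the actual denoiser.

```python
def generate(prompt, n_blocks, block_size, refine_steps=2):
    """Block-parallel decoding: refine one masked block at a time,
    conditioning only on the cached prefix (causal across blocks)."""
    cache = list(prompt)               # stands in for the KV cache
    out = []
    for _ in range(n_blocks):
        block = ["<mask>"] * block_size
        for _ in range(refine_steps):  # parallel denoising steps
            # Placeholder refinement: fill every position at once,
            # conditioned only on the cached prefix length.
            block = [f"tok{len(cache) + i}" for i in range(block_size)]
        cache.extend(block)            # cache grows once per block
        out.extend(block)
    return out
```

Note the cache is extended only after a block is finalized, which is what lets standard causal attention and KV caching carry over to the diffusion setting.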
Qwen3-0.6B-Tool-Router
Efficient Tool Routing System
Lightweight tool-call router for agentic systems. 29.2% overall accuracy on BFCL, vs. 23.93% for comparable ~600M-parameter industry models.
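The router's job shape can be illustrated as: given a query and a set of tool schemas, emit a structured tool call. Everything here is hypothetical; a trivial keyword scorer stands in for the fine-tuned Qwen3-0.6B, and the tool names are invented for the example.

```python
import json

# Illustrative tool registry (names and params are made up).
TOOLS = {
    "get_weather": {"keywords": ["weather", "temperature", "forecast"],
                    "params": ["city"]},
    "send_email":  {"keywords": ["email", "send", "message"],
                    "params": ["to", "body"]},
}

def route(query: str) -> str:
    """Pick the best-matching tool and return a JSON call skeleton."""
    q = query.lower()
    scores = {name: sum(k in q for k in spec["keywords"])
              for name, spec in TOOLS.items()}
    best = max(scores, key=scores.get)
    return json.dumps({"tool": best, "params": TOOLS[best]["params"]})
```

In the real system the model replaces the scorer, but the contract is the same: text in, one well-formed tool call out.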
IND-QWENTTS-V1
Indian Language Text-to-Speech v1
Text-to-speech for two Indian languages. MOS 3.8/5.0. Cross-lingual transfer from high-resource anchor languages. Edge-deployable.
STRM-4B-v1
Stateful Reasoning Model
LoRA fine-tune of Qwen3-4B for parsing unstructured spoken-language input into structured JSON. Maintains running state to handle corrections, cancellations, and quantity changes in a single forward pass. ~94% exact-match accuracy.
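The running-state idea can be sketched as a reducer over structured updates: the model emits one update per utterance, and the state absorbs additions, corrections, and cancellations. The update format below is illustrative, not the model's actual output schema.

```python
def apply_updates(updates):
    """Fold a stream of spoken-order updates into a final state."""
    state = {}
    for u in updates:
        item, qty = u["item"], u.get("qty", 0)
        if u["op"] == "add":
            state[item] = state.get(item, 0) + qty
        elif u["op"] == "set":        # correction: "make that 3"
            state[item] = qty
        elif u["op"] == "cancel":     # "actually, no samosas"
            state.pop(item, None)
    return {k: v for k, v in state.items() if v > 0}

final = apply_updates([
    {"op": "add",    "item": "chai",   "qty": 2},
    {"op": "set",    "item": "chai",   "qty": 3},   # quantity change
    {"op": "add",    "item": "samosa", "qty": 1},
    {"op": "cancel", "item": "samosa"},              # cancellation
])
```

Because the state is carried forward, a correction like "make that 3" needs no re-parse of earlier utterances.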