# Changelog
Release history and development roadmap. Uneven AI follows semantic versioning.
The first public npm release is v0.4.0 — April 11, 2026. Versions v0.1–v0.3 represent the internal development history shipped as part of that release.
- Rust core scaffold + napi-rs bridge
- TypeScript public API + type declarations
- Basic CLI: `uneven-ai init`, `uneven-ai ask`
- Local LLM inference via Candle (Llama 3.2 1B, Q8)
- External brain providers: OpenAI, Claude, Gemini, Ollama
- Embeddings generation via Candle (1024-dim, L2 normalized)
- Vector store integration (usearch HNSW)
- File and directory indexing (Rust)
- Database connectors: PostgreSQL, MySQL, SQLite, MongoDB
- URL loader and docs scraping (undici + cheerio)
- Terminal watcher via `tokio::process`
- Stack trace error parser (TS, JS, Python, Rust, Go, Java, PHP, Ruby)
- Retrieval-Augmented Fix (RAF): knowledge-base semantic search before pattern-match fallback
- Auto-fix engine via the `similar` crate (surgical diffs)
- File watcher with auto re-indexing (`notify`)
- Pentester static mode (OWASP Top 10, secrets, CVEs, injections, headers)
- Pentester active mode with SHA-256 signed authorization scope
- Malware scanner: 8 categories, CI-compatible exit code (`uneven scan`)
- AI data analyst: natural language → SQL → Excel/dashboard (`uneven analyze`)
- CI/CD headless pipeline with GitHub Actions integration (`uneven ci`)
- Session state protocol (`.uneven/session.json`)
- Security report generator: HTML + Markdown output
- License system: free / Pro tiers with machine fingerprinting and offline grace period
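The active pentest mode above is gated behind a SHA-256 signed authorization scope. One plausible shape for such a scheme is an HMAC-SHA256 signature over a canonical scope document; the sketch below assumes that approach, and the field names, key handling, and helper names are illustrative, not Uneven's actual format:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical scope document: what an active scan is authorized to touch.
interface AuthScope {
  targets: string[];   // hosts the operator has explicitly authorized
  expiresAt: string;   // ISO timestamp after which the scope is invalid
}

// Sign the canonical JSON of the scope with HMAC-SHA256.
function signScope(scope: AuthScope, key: string): string {
  return createHmac("sha256", key)
    .update(JSON.stringify(scope))
    .digest("hex");
}

// Verify with a constant-time comparison to avoid timing side channels.
function verifyScope(scope: AuthScope, signature: string, key: string): boolean {
  const expected = Buffer.from(signScope(scope, key), "hex");
  const actual = Buffer.from(signature, "hex");
  return expected.length === actual.length && timingSafeEqual(expected, actual);
}
```

Because the signature covers the serialized scope, any edit to the target list or expiry invalidates it, so a scan run cannot silently widen its own authorization.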
- Process lock protocol: an exclusive `.uneven/uneven.lock` prevents concurrent Uneven instances from corrupting state
- Atomic index saves — crash during indexing no longer corrupts the previous state
- Logger write queue — serializes all log writes to prevent race conditions
- Throttled security scanners — pentest runs at most 3 concurrent filesystem walkers (was 8)
- Timeout kill switch — all LLM inference, DB queries, web fetches and git operations have hard deadlines
- Index preview — cost and time estimate per brain provider before committing to indexing
- User-selectable DB tables for data analysis + LLM-suggested analysis ideas
- Excel folder reading for data analysis (`uneven analyze`)
- Humanized CLI help screen — plain-language descriptions for every command
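The atomic index save above relies on the standard write-then-rename pattern: the new index is written to a sibling temp file and then renamed over the old one, so a crash mid-write leaves the previous file intact. A minimal sketch of the pattern (file names here are illustrative, not Uneven's actual layout):

```typescript
import { writeFileSync, renameSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Atomically replace the file at `path`: write the new contents to a temp
// file first, then rename it over the target. rename() is atomic on POSIX
// filesystems (same volume), so a crash mid-write leaves the old file intact.
function atomicSave(path: string, data: string): void {
  const tmp = `${path}.tmp`;
  writeFileSync(tmp, data);
  renameSync(tmp, path);
}

// Usage: save an index snapshot and read it back.
const indexPath = join(tmpdir(), "uneven-demo-index.json");
atomicSave(indexPath, JSON.stringify({ files: [] }));
console.log(readFileSync(indexPath, "utf8")); // prints {"files":[]}
```

The rename is the commit point: readers see either the complete old index or the complete new one, never a half-written file.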
- Hardware Acceleration: Automatic GPU/CUDA offloading (up to 32 layers) for faster local inference
- Ethical Guardrails: Built-in defensive system that prevents malware, exploit, and cracking generation
- Advanced Document Intelligence: Native support for PDF, Word (.docx), Excel (.xlsx), and CSV (industrial-grade parsing, Pro)
- Strategic AI Recommendations: Local AI and Google Gemini Flash recommended as the preferred paths for privacy and cost-efficiency
- System Diagnostics: New `uneven info` command for real-time hardware, VRAM, and version auditing
- Memory Hardening: Silent model auto-unload (10 min) and anti-loop safety (5-try ceiling)
- Local Data Analyst: Direct natural language queries over local CSV and Excel files without DB requirements
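The 10-minute model auto-unload above is a resettable idle timer: every inference restarts the countdown, and only a full quiet period triggers the unload. A sketch of that pattern, where the class name, callback, and wiring are my illustration rather than Uneven's API:

```typescript
// Resettable idle timer: fires `onIdle` once after `timeoutMs` milliseconds
// with no activity; every touch() restarts the countdown.
class IdleUnloader {
  private timer: ReturnType<typeof setTimeout> | undefined;

  constructor(
    private readonly timeoutMs: number,  // e.g. 10 * 60 * 1000 for a 10-minute window
    private readonly onIdle: () => void, // e.g. drop the loaded model from memory
  ) {}

  // Call on every inference request: keeps the model resident while active.
  touch(): void {
    if (this.timer !== undefined) clearTimeout(this.timer);
    this.timer = setTimeout(this.onIdle, this.timeoutMs);
  }

  // Cancel any pending unload (e.g. on shutdown).
  stop(): void {
    if (this.timer !== undefined) clearTimeout(this.timer);
  }
}
```

The anti-loop 5-try ceiling would sit alongside this as a simple retry counter; the timer alone is what keeps an idle model from pinning RAM or VRAM indefinitely.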
Roadmap:

- Stable public API
- Full test coverage
- Community plugins API
- Real-time diff approval UI
- Multi-project workspace support
- Apache 2.0 license transition begins (target: 2029-01-01)