Find the perfect model with our new search functionality: filter by size, name, or description
Attach multiple files to your AI chat for better context-aware assistance
Models automatically sorted by size with visual indicators for installation status
Real-time streaming responses, with the option to stop generation at any time
AI-powered command suggestions with real-time execution in the terminal
Generate comprehensive documentation for entire folders with ASCII diagrams
Quick access to AI features through VS Code's right-click menu
Real-time download progress with visual indicators for model installation
< 100MB • Perfect for quick tasks
100MB - 500MB • Balanced performance
500MB - 1.5GB • Enhanced capabilities
> 1.5GB • Maximum capability
Use the new search feature to find the perfect model for your needs
Download it with visual progress tracking; once installed, the model is selected automatically
Attach relevant files and get context-aware assistance
Review AI suggestions and apply them with one click
Your AI-powered coding assistant for VS Code
Powered by local LLMs using ONNX and HuggingFace models
Run AI models locally on your machine for privacy and speed.
Advanced language models from onnx-community for intelligent assistance (see the sketch below).
Identify and fix potential bugs in your code with intelligent analysis.
Generate comprehensive unit tests for your code automatically.
Create detailed documentation for your codebase with AI assistance.
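To make the local-model claim concrete, here is a minimal sketch of the stack described above: an onnx-community model running fully on-device through the transformers.js library. This illustrates the underlying technology only; it is not NoaxAI's internal API, and the model ID used is an assumption.

```javascript
// Sketch of the described stack, not NoaxAI's actual internals:
// an onnx-community model executed locally via transformers.js,
// which runs ONNX Runtime under the hood.
// Requires: npm install @huggingface/transformers
import { pipeline } from "@huggingface/transformers";

// The first call downloads and caches the model; inference then runs on-device.
const generator = await pipeline(
  "text-generation",
  "onnx-community/Llama-3.2-1B-Instruct"
);

const output = await generator(
  "Write a JavaScript function that reverses a string:",
  { max_new_tokens: 64 }
);
console.log(output[0].generated_text);
```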
Install NoaxAI from the VS Code marketplace
ext install noaxai
Choose an AI model from the Explore Models panel
Use the command palette or context menu to access NoaxAI features
Ctrl+Shift+P → NoaxAI: Refactor Code
Ctrl+Shift+P → NoaxAI: Fix Bug
Ctrl+Shift+P → NoaxAI: Generate Unit Test
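If you trigger these commands often, they can be bound to keys in VS Code's keybindings.json. A minimal sketch follows; the command IDs (noaxai.refactorCode and friends) are hypothetical placeholders, since the extension's real IDs are not listed here, so copy the actual ones from the Keyboard Shortcuts editor:

```jsonc
// keybindings.json sketch: the "command" IDs below are assumptions, not
// NoaxAI's documented IDs; look up the real ones in the Keyboard Shortcuts
// editor (Ctrl+K Ctrl+S) before using this.
[
  { "key": "ctrl+alt+r", "command": "noaxai.refactorCode", "when": "editorHasSelection" },
  { "key": "ctrl+alt+b", "command": "noaxai.fixBug", "when": "editorHasSelection" },
  { "key": "ctrl+alt+t", "command": "noaxai.generateUnitTest", "when": "editorHasSelection" }
]
```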
Powered by HuggingFace and ONNX optimization
| Model | Size | Speed | RAM Usage | Best For |
|---|---|---|---|---|
| SmolLM2-135M | 135MB | Fast | ~300MB | Quick code completions, syntax fixes |
| Llama-3.2-1B | 1.1GB | Standard | ~2GB | Code generation, refactoring |
| Phi-3-mini | 4K | Very Fast | ~100MB | Real-time completions |
| Qwen2.5-1.5B | 1.5GB | Standard | ~3GB | Complex code analysis |
Use Phi-3-mini or SmolLM2-135M for: real-time completions, quick syntax fixes, and machines with limited RAM
Use Llama-3.2-1B or Qwen2.5-1.5B for: code generation, refactoring, and complex code analysis
Start with a smaller model and upgrade based on your needs. All models can be switched instantly!
```javascript
// Before: collects duplicate values, but the nested loops make it O(n^2)
function processData(data) {
  // TODO: Optimize this function
  let result = [];
  for (let i = 0; i < data.length; i++) {
    for (let j = i + 1; j < data.length; j++) {
      if (data[i] === data[j] && !result.includes(data[i])) {
        result.push(data[i]);
      }
    }
  }
  return result;
}
```

```javascript
// Optimized version with O(n) complexity: Set lookups are O(1),
// so a single pass over the data is enough
function processData(data) {
  const seen = new Set();
  const duplicates = new Set();
  for (const item of data) {
    // Any value seen before is a duplicate; a Set keeps the result unique
    if (seen.has(item)) {
      duplicates.add(item);
    }
    seen.add(item);
  }
  return [...duplicates];
}
```
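Both versions can be sanity-checked the same way; a quick illustrative run (not part of the extension's output) shows each duplicate reported once:

```javascript
// Both versions above return the same duplicates for these inputs
console.log(processData([2, 1, 2, 3, 1])); // [2, 1]
console.log(processData(["a", "b", "a", "a"])); // ["a"]
```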