A powerful code editor that uses OpenAI's latest models for intelligent code editing, AI chat assistance, and comprehensive development tools.
- AI-Powered Code Editing: Intelligent code modifications using OpenAI models
- Multi-Model Support: GPT-4, GPT-5, O3 series, and more
- File History Management: Complete version tracking with revert capabilities
- Tabbed Interface: Code editing, AI chat, and debug console
- Context-Aware Chat: AI remembers conversation history for continuity
- Live Cost Tracking: Real-time token usage and cost estimation
- Debug Console: Monitor API calls, requests, and system events
This editor is designed as a lightweight, focused way to use your OpenAI API key for coding tasks. Instead of being tied to subscription-based IDEs like Cursor or complex setups in VS Code, you can plug in your own API key and immediately get AI-powered code assistance. It’s not a replacement for a full IDE, but a fast, cost-flexible tool: you only pay for the API usage you actually consume, making it a great option for rapid prototyping, learning new languages, or experimenting with AI-assisted coding without long-term commitments or extra subscriptions.
- Python 3.8+
- OpenAI API key
- Internet connection
git clone https://github.com/cev-api/ai-code-editor.git
cd ai-code-editor
pip install "openai>=1.0.0"
python code_editor.py
- Enter your OpenAI API key
- Select your preferred model
- Click "Save Config"
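Behind the scenes, the saved configuration is just the key and model the editor reuses on startup. The sketch below shows roughly what loading it and creating a client could look like; the file name `config.json` and the key names are illustrative assumptions, not the editor's actual schema.

```python
# Minimal sketch of loading a saved configuration and creating a client.
# The "config.json" file name and key names are assumptions, not the
# editor's real schema. Requires openai>=1.0.0.
import json
from openai import OpenAI

def load_client(path="config.json"):
    with open(path, "r", encoding="utf-8") as f:
        cfg = json.load(f)  # e.g. {"api_key": "sk-...", "model": "gpt-4.1"}
    client = OpenAI(api_key=cfg["api_key"])
    return client, cfg.get("model", "gpt-4.1")
```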
- Select a folder to start working
- Open a file in the editor
- Type your request in the AI Prompt area
- Press Enter or click "Edit Code"
- AI modifies your code based on instructions
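An edit request of this kind typically boils down to one chat-completion call that sends the open file plus your instruction and gets back the rewritten code. The prompt wording and return-format convention below are assumptions for illustration, not the editor's actual prompt.

```python
# Hedged sketch of a single AI edit round-trip (openai>=1.0.0).
# The system prompt and "return the whole file" convention are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def edit_code(source: str, instruction: str, model: str = "gpt-4.1") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a code editor. Return only the full, modified file."},
            {"role": "user", "content": f"Instruction: {instruction}\n\nFile:\n{source}"},
        ],
        temperature=0.2,  # low temperature for precise edits
    )
    return response.choices[0].message.content
```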
- Switch to "AI Chat" tab
- Type your question or request
- Check "Include file context" if needed
- Press Enter to send
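Conversation memory and the optional file context can be pictured as an ordinary message list that is trimmed to the most recent N exchanges before each request. The sketch below illustrates the idea only; names and structure are assumptions, not the editor's exact implementation.

```python
# Illustrative sketch of context-aware chat: keep a rolling history and
# optionally prepend the open file as extra context.
history = []   # [{"role": "user"/"assistant", "content": ...}, ...]
MEMORY = 10    # "Conversation Memory" setting

def ask(client, model, question, file_text=None):
    messages = [{"role": "system", "content": "You are a helpful coding assistant."}]
    if file_text:  # "Include file context" checkbox
        messages.append({"role": "user", "content": f"Current file:\n{file_text}"})
    messages += history[-MEMORY:] + [{"role": "user", "content": question}]
    reply = client.chat.completions.create(model=model, messages=messages)
    answer = reply.choices[0].message.content
    history.extend([{"role": "user", "content": question},
                    {"role": "assistant", "content": answer}])
    return answer
```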
- Use "Select Folder" to choose project directory
- Right-click files or use "History" button for version control
- Files are automatically tracked in version history
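File history can be thought of as timestamped snapshots taken before each edit, which is what makes revert possible. The `.history` directory layout below is an assumption for illustration, not the editor's actual storage format.

```python
# Sketch of snapshot-based history: copy the file before each edit and
# restore a chosen snapshot to revert. The ".history" layout is assumed.
import shutil, time
from pathlib import Path

def snapshot(path: str) -> Path:
    src = Path(path)
    dest = src.parent / ".history" / f"{src.name}.{int(time.time())}"
    dest.parent.mkdir(exist_ok=True)
    shutil.copy2(src, dest)
    return dest

def revert(path: str, snapshot_path: Path) -> None:
    shutil.copy2(snapshot_path, path)
```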
| Parameter | Description | Range | Default |
|---|---|---|---|
| Temperature | Controls randomness | 0.0 - 2.0 | 1.0 |
| Max Tokens | Token limit for responses | 100 - 8000 | 4000 |
| Conversation Memory | Messages to retain | 5 - 100+ | 10 |
- GPT-5: `gpt-5` (uses `max_completion_tokens`)
- GPT-4 Series: `gpt-4`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`
- O3 Series: `o3-pro`, `o3-mini`, `o3-mini-high`
- Legacy: `gpt-3.5-turbo`
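Because `gpt-5` expects `max_completion_tokens` where older chat models take `max_tokens`, the editor presumably switches the parameter name per model. A hedged sketch of that switch (the prefix check is an illustrative assumption):

```python
# Pick the token-limit argument name for the selected model.
# Only gpt-5 is noted above as needing max_completion_tokens; the prefix
# check is an assumption, adjust it for other models as required.
def token_kwargs(model: str, limit: int = 4000) -> dict:
    if model.startswith("gpt-5"):
        return {"max_completion_tokens": limit}
    return {"max_tokens": limit}

# usage:
# client.chat.completions.create(model=model, messages=msgs, **token_kwargs(model))
```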
Supported file types: `.py`, `.js`, `.ts`, `.html`, `.css`, `.java`, `.cpp`, `.c`, `.h`, `.json`, `.xml`, `.md`, `.txt`, `.ino`
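A file browser that honors this list only needs to filter on suffix; a minimal sketch (the set literal simply mirrors the list above, the function name is illustrative):

```python
# Filter a project folder down to the supported file types listed above.
from pathlib import Path

SUPPORTED = {".py", ".js", ".ts", ".html", ".css", ".java", ".cpp",
             ".c", ".h", ".json", ".xml", ".md", ".txt", ".ino"}

def editable_files(folder: str):
    return sorted(p for p in Path(folder).rglob("*") if p.suffix in SUPPORTED)
```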
- Cost Optimization: Use lower temperature (0.0-0.5) for precise editing, higher (1.0-2.0) for creativity
- Token Usage: Uncheck file context for general questions, lower conversation memory for cost-conscious usage
- Workflow: Start with chat to discuss approach, use file context only when needed
- Keyboard: Shift+Enter for multi-line input, Enter to send/submit
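The live cost tracking and cost tips above come down to multiplying the token counts the API reports by a per-token price. The prices in this sketch are placeholders, not real rates; swap in your model's current pricing.

```python
# Rough cost estimate from the usage block returned by the API.
# PRICES holds placeholder numbers (USD per 1M tokens), not real rates.
PRICES = {"gpt-4.1": (2.00, 8.00)}   # (input, output) -- illustrative only

def estimate_cost(model: str, usage) -> float:
    inp, out = PRICES.get(model, (0.0, 0.0))
    return (usage.prompt_tokens * inp + usage.completion_tokens * out) / 1_000_000
```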
| Issue | Solution |
|---|---|
| API Key Error | Verify the API key is valid and has credits |
| Model Errors | Check whether the model is available on your account |
| High Token Usage | Reduce conversation memory or file context inclusion |
| Performance Issues | Check the debug console for error logs |
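Most of the issues in the table surface as exceptions from the OpenAI client, so wrapping calls and logging to the debug console is usually enough to diagnose them. A hedged sketch (the `log` function is a stand-in for whatever the debug console actually uses):

```python
# Sketch of surfacing API problems in a debug console (openai>=1.0.0).
# `log` is a placeholder for the editor's real debug logging.
import openai

def log(msg):
    print(f"[debug] {msg}")

def safe_call(fn, *args, **kwargs):
    try:
        return fn(*args, **kwargs)
    except openai.AuthenticationError:
        log("API Key Error: verify the key is valid and has credits")
    except openai.NotFoundError:
        log("Model Error: model not available on this account")
    except openai.RateLimitError:
        log("Rate limited: reduce request frequency or token usage")
    except openai.APIError as exc:
        log(f"API error: {exc}")
```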
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
MIT License - see LICENSE file for details.
Made with ❤️ (and some AI) by CevAPI
The age of the programmer is dead!