Coding Assistants

Cursor IDE

Cursor is an AI-first code editor (a VS Code fork) with deep LLM integration: inline edits, codebase-aware chat, multi-file refactoring, and an Agent mode that can autonomously make changes across a project. It was one of the fastest-growing developer tools of 2024.



SECTION 01

What makes Cursor different

Cursor is a fork of VS Code that replaces the standard GitHub Copilot integration with a much deeper LLM layer. Where Copilot suggests single-line or function-level completions, Cursor adds: a Chat panel aware of your entire codebase, a Composer that can edit multiple files simultaneously, Agent mode for autonomous multi-step tasks, and inline edit shortcuts that can rewrite whole functions or files on request.

The key architectural advantage: Cursor indexes your entire codebase and uses semantic search to pull relevant context into the LLM prompt automatically. When you ask "why is this function slow?", Cursor doesn't just see the function — it sees the callers, the data structures, and the relevant imports, all injected into the prompt without you having to specify them.
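That context-injection step behaves like a nearest-neighbor lookup over embedded code chunks. The sketch below uses a toy bag-of-words "embedding" in place of the neural embeddings a real indexer uses; the file names and chunk contents are invented for illustration:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words vector. Real indexers use neural embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: dict[str, str], k: int = 2) -> list[str]:
    """Return the names of the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda name: cosine(q, embed(chunks[name])),
                    reverse=True)
    return ranked[:k]
```

The retrieved chunk names would then be expanded into file contents and prepended to the LLM prompt, which is why asking about a slow function surfaces its callers and data structures without you naming them.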

SECTION 02

Tab completion vs Chat vs Agent

Cursor has three interaction modes for different tasks:

Tab completion: Like GitHub Copilot, suggests code as you type. Cursor's version is notable for Cursor Tab: multi-line completions that predict entire blocks, not just the next line. It also suggests where to jump next in the file after you accept a suggestion. Use for: writing new code from scratch, filling in boilerplate.

Chat (Cmd+L): Opens a side panel chat where you can ask questions about your code, request explanations, ask for refactoring suggestions. Chat is aware of the current file and can reference other files via @-symbols. Responses include code blocks you can apply with one click. Use for: understanding unfamiliar code, debugging, asking architectural questions.

Agent / Composer (Cmd+I): Creates and edits multiple files in response to a natural-language request. Can run terminal commands, write tests, and make coordinated changes across the codebase. Use for: adding new features, implementing a spec, large-scale refactoring.

SECTION 03

Codebase indexing

Cursor automatically indexes your project files using embeddings stored locally. This index is used to retrieve relevant context for every query — similar to a RAG pipeline built specifically for code.

The indexing respects .gitignore and .cursorignore files. For large monorepos, you can create .cursorignore to exclude directories that are irrelevant to the current task (e.g., vendor/, generated/, build/ directories) to keep the index focused.
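As a concrete sketch, a .cursorignore for a monorepo might look like this (directory names are illustrative; the syntax is the same as .gitignore):

```gitignore
# Keep the index focused on first-party source
vendor/
generated/
build/
dist/
*.min.js
```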

Privacy note: by default, code is sent to Cursor's servers and to the underlying LLM provider (Anthropic, OpenAI, Google depending on the model selected). For sensitive codebases, Cursor offers a "Privacy Mode" where code is not stored on their servers, and an enterprise option for using your own API keys with your own model provider.

SECTION 04

Composer for multi-file edits

Composer (Cmd+I in a file, or Cmd+Shift+I for a full-panel composer) is Cursor's most powerful feature. You describe what you want in natural language and Cursor makes changes across multiple files simultaneously.

Typical workflow: open Composer, describe the feature ("Add a /users/me endpoint to the FastAPI app that returns the current user's profile, with JWT authentication"), and Cursor will create new files, modify existing ones, and show you a diff of all changes before applying them. You can accept all, reject all, or review file by file.
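As a sketch of the JWT mechanics such an endpoint depends on, here is an HS256 sign/verify pair in plain Python. A real implementation would use a maintained library such as PyJWT and also check registered claims like exp; this shows only the signature check, and all names are illustrative:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def _b64url_decode(data: bytes) -> bytes:
    return base64.urlsafe_b64decode(data + b"=" * (-len(data) % 4))

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build an HS256 JWT: base64url(header).base64url(payload).signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_jwt(token: str, secret: bytes) -> dict:
    """Return the payload if the signature checks out; raise ValueError otherwise."""
    header_b64, body_b64, sig_b64 = token.encode().split(b".")
    signing_input = header_b64 + b"." + body_b64
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig_b64, expected):
        raise ValueError("invalid JWT signature")
    return json.loads(_b64url_decode(body_b64))
```

In the generated endpoint, verify_jwt would run in a FastAPI dependency that reads the Authorization header and returns 401 on failure.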

Agent mode (checkbox in Composer) lets Cursor run terminal commands — install packages, run tests, read error output, and iterate. It can do a full TDD loop: write a failing test, implement the feature, run the test, fix failures, until all tests pass.
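The shape of that loop can be sketched as follows; run_tests and propose_fix are placeholders for the agent's tooling, not Cursor internals:

```python
def agent_tdd_loop(run_tests, propose_fix, max_iterations: int = 5) -> bool:
    """Run the test suite; while it fails, feed the failures back to the
    model for a fix and retry. Gives up after max_iterations attempts."""
    for _ in range(max_iterations):
        passed, errors = run_tests()
        if passed:
            return True
        propose_fix(errors)  # the agent edits files based on the failures
    return False
```

The iteration cap matters in practice: without it, an agent can loop indefinitely on a test it cannot satisfy.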

SECTION 05

@-symbols and context

Cursor's @-symbol system lets you explicitly pull specific context into any prompt:

@Files / @Folders: attach a specific file or directory
@Code: reference a specific function, class, or symbol
@Docs: search indexed library documentation
@Web: pull in live web results, or paste a URL to fetch a specific page
@Codebase: run a semantic search over the project index

Combining @-symbols gives you precise control over what goes into the LLM context window, reducing hallucination caused by missing context and keeping token costs down.

SECTION 06

Using custom models

Cursor supports multiple model backends, and you can configure which model handles completions, chat, and agent tasks independently: for example, a small fast model for tab completion and a stronger reasoning model for Composer and Agent work.

Cursor's pricing: the $20/month Pro plan includes rate-limited Claude Sonnet usage plus 500 fast premium requests per month. Beyond that, usage falls back to slower models, or you can bring your own API key. For heavy Agent usage, a BYO API key is often more economical. (Plans and limits change frequently; check Cursor's pricing page for current details.)

SECTION 07

Gotchas

Context window limits: Even with codebase indexing, there's a limit to how much code fits in the prompt. For very large projects, be specific with @-symbols rather than relying on auto-retrieval — the retrieval isn't perfect and can miss the most relevant files.

Agent mode can make destructive changes: Always review Agent diffs before accepting. Agent mode can delete files, overwrite existing code, and make changes that break other parts of the codebase. Use git (preferably with a clean branch) before running agents on complex tasks.

Privacy with proprietary code: Default mode sends your code to third-party LLM providers. Enable Privacy Mode or use local models (via Ollama with a custom API endpoint) for sensitive IP.

VS Code extension compatibility: Most VS Code extensions work in Cursor since it's a fork. Occasionally an extension has compatibility issues. The Cursor team tracks these; check their GitHub issues if an extension breaks.

Cursor AI Features Comparison

Cursor is an AI-native code editor built on VS Code that deeply integrates LLM assistance into the development workflow. Unlike IDE plugins that add AI as an afterthought, Cursor is designed from the ground up around AI-assisted coding, with codebase indexing, context-aware completions, and multi-file editing as core features rather than add-ons.

Feature          | How It Works                  | Model Used                   | Best For
-----------------|-------------------------------|------------------------------|------------------------------
Tab autocomplete | Multi-line inline suggestions | Cursor-optimized small model | Boilerplate, repetitive code
Chat (Cmd+L)     | Codebase-aware conversation   | GPT-4o / Claude Sonnet       | Explanation, planning
Composer (Cmd+I) | Multi-file editing agent      | Claude Opus / GPT-4o         | Large refactors, new features
@ Context        | Manual context attachment     | Any                          | Precise context control
Codebase index   | Semantic code search          | Embedding model              | Large repo navigation

Cursor's codebase indexing builds a semantic search index of the entire repository, enabling the AI to retrieve relevant files, functions, and classes as context when answering questions or making edits. The index is updated incrementally as files change, keeping the AI's knowledge of the codebase current without full reindexing. When the AI needs to understand how a function is used or find related code patterns, the index retrieval is transparent to the user — the AI simply has access to the relevant code without requiring manual @file references for every question.
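The incremental update can be sketched as a small refresh function keyed on modification times. The read_file and embed callables and the mtime bookkeeping are illustrative assumptions, not Cursor's actual mechanism:

```python
def refresh_index(index: dict, seen: dict, current_mtimes: dict,
                  read_file, embed) -> list:
    """Re-embed only files whose mtime changed since the last pass,
    and drop index entries for files that no longer exist."""
    changed = []
    for path, mtime in current_mtimes.items():
        if seen.get(path) != mtime:
            index[path] = embed(read_file(path))
            seen[path] = mtime
            changed.append(path)
    for path in list(index):        # prune deleted files
        if path not in current_mtimes:
            del index[path]
            seen.pop(path, None)
    return changed
```

Because only changed files are re-embedded, the index stays current at a cost proportional to the edit, not to the size of the repository.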

The Cursor Composer feature enables multi-file editing where the AI proposes and applies changes across multiple files in a single operation. The developer reviews the proposed changes in a unified diff view before accepting, maintaining oversight of what the AI modifies. This workflow is particularly effective for large refactoring tasks — renaming an interface and updating all implementations, migrating an API version, or restructuring a module's export pattern — where manually applying consistent changes across dozens of files would be tedious and error-prone.

# Cursor .cursorrules — project-specific AI instructions
# Placed in project root to guide Cursor's behavior

You are an expert Python developer. Follow these conventions:
- Use type hints on all function signatures
- Write docstrings in Google style
- Prefer dataclasses over plain dicts for structured data
- Use pathlib.Path instead of os.path
- Write pytest unit tests for all public functions
- Never use mutable default arguments

# Example: Using Cursor's @ context references in chat
# Type @ in the chat to reference specific files, functions, or docs

# Reference a specific file:
# @src/utils/auth.py  - include this file as context

# Reference the current file's function:
# @getCurrentUser  - reference this function

# Reference documentation:
# @docs  - search your indexed documentation

# Reference web resources:
# @https://docs.fastapi.tiangolo.com  - fetch and include URL

Cursor's AI-generated commit messages analyze the staged diff and produce descriptive messages summarizing what changed and why. For complex diffs spanning multiple files, the AI synthesizes the changes into a coherent narrative rather than listing individual modifications. This can yield a more informative commit history without the overhead of writing detailed messages by hand, and the generated message remains editable before committing, combining AI efficiency with the developer intent and context that the AI cannot infer from the diff alone.

Cursor Rules (.cursorrules) files let you define project-specific coding conventions, architectural patterns, and guidelines that influence AI suggestions throughout a repository. Placed in the project root, the rules file provides persistent context that shapes how the AI responds to every request in that project, similar to a persistent system prompt. Teams that maintain detailed .cursorrules files describing coding standards, testing requirements, naming conventions, and dependency preferences tend to get noticeably more consistent AI suggestions that align with the project's established patterns.