Getting the best out of Cursor AI
AI is rapidly changing the way we code, and Cursor is one tool in particular that has changed the game. Here are some tips on how to get the best out of it.
I've been using Cursor daily for some time now and it's been an interesting learning experience. Sometimes it feels like my greatest co-worker - it reads my mind and saves me a heap of time; then suddenly it turns on me and starts producing convoluted, broken solutions to what should be a simple problem, or creates a bunch of new components rather than using what we already have.
Be explicit
The general rule is: the more precise you are with your instructions to Cursor, the better the output. Cursor isn't a mind reader; it needs context if you want it to perform in a particular way.
For example, if you know it is going to need to call a particular API or function, let it know by @-mentioning the relevant file. This will help avoid it duplicating code that already exists in your code base.
Also make sure your prompt doesn't ask for too many things at once - Cursor is much better at implementing functionality in smaller pieces. Plus, with smaller pieces, you are likely to be more precise with your ask.
Making better prompts
Regardless of how clear you make your prompt, it's unlikely the AI will be on exactly the same wavelength as you, and it's unlikely it will get everything right on its first go.
Test Driven Development (TDD) can help you get a better handle on this. The TDD cycle follows three primary steps:
- Write a failing test that defines the functionality you want to implement.
- Write the minimal amount of code to make the test pass.
- Clean up and optimize the code while ensuring tests still pass.
To get Cursor to do this for us, we can suffix our build prompt with an instruction to write the tests before writing the code. Furthermore, if we enable and configure YOLO mode in Cursor's Settings > Cursor Settings > Features, we can give it permission to run npm test, which means it can run our tests and iterate on the code until they all pass. We can also use this method to specify any test edge cases we need the AI to consider.
Get Cursor to think it through
Cursor's chat box has both 'Agent' and 'Ask' functionality. While the 'Agent' is useful for most things, if Cursor isn't giving you the results you want, get it to think the problem through in 'Ask' mode. Ask for three different solutions to the problem, and get Cursor to discuss the pros and cons of each. Then you can get it to build the solution you think will work best.
Try something else
Sometimes Cursor just can't fix a problem. If it's tried two or three times and it's not finding a suitable resolution, then it's likely you need to change something about the ask. There are three ways we can change this:
- present the problem differently
- try another tool
- do it ourselves
I've actually found a lot of success with taking the problem statement away from the context built up in Cursor and instead using a fresh Claude or OpenAI chat window. Talking to an AI that isn't living in your app can give a second perspective on the problem, which will either help you come to a better solution or help you phrase things to Cursor in a way it finds easier to understand.
Try a different model
Speaking of using something else, there is a lot of value in trying a different model within the Cursor IDE itself.
Currently, the new Claude 3.7 model in particular seems to cause a lot of extra unwanted activity and change when used in agent mode. For example, it may add the requested functionality to a file and then decide to go on a refactoring journey across the codebase whilst you're frantically looking for the 'stop' button.
You can swap models within Cursor whenever you like - simply hit the dropdown next to Agent (which has the model name or 'default'); and try again with another model. You'll find some AI models are better for planning, some are better for coding, some are better for debugging.
Create rules
Previously we'd create a .cursorrules file in our root directory, but Cursor has recently moved to supporting multiple rules through a new MDC format hosted in /.cursor/rules. With MDC, you can break your rules up into separate files and apply them to specific files or directories using globs. For example, if I wanted to apply a set of rules only to React components, I could create a file like:
---
description: Generating React Components
globs: *.tsx, *.jsx
---
# Coding style guide
- Style guide goes here
As well as style guides you can also add other information to assist Cursor around your code base - for example, consider adding information on the following:
- App Flow - how users navigate through your app
- Backend - the structure of the backend and any techniques you want to consistently use
- Frontend - the structure of front end code
- Requirements - how the app should behave
- Implementation plan - a numbered implementation plan, broken down into well defined, small steps so the AI can concentrate on one thing at a time
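Putting this together, a rules directory covering those areas might be laid out something like the following (the file names here are illustrative, not a Cursor convention):

```
.cursor/
  rules/
    react-components.mdc      # applied via globs: *.tsx, *.jsx
    backend-conventions.mdc
    app-flow.mdc
    requirements.mdc
    implementation-plan.mdc
```

Each .mdc file carries its own description and globs frontmatter, as in the React example above, so rules only load where they're relevant.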
You can get AI to help you create these through conversational AI such as Claude, though make sure you review the documents manually too.
Commit often, review your commits
Though it is possible to roll back to previous versions with Cursor, it's quite easy to get lost within lots of untracked changes. Having a sensible git commit history allows you to quickly revert if something goes badly wrong.
You can commit changes from within Cursor's source control panel and then @-tag your Git commit within Cursor's Ask chat prompt to ask for improvements and potential bugs.
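The commit-often loop itself is plain git; a minimal sketch looks like the following, demonstrated in a throwaway repository so it's safe to run anywhere (the file name and commit messages are illustrative):

```shell
# Work in a temporary repo so this sketch doesn't touch a real project.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"

# After each AI-assisted change you've reviewed, stage and commit it.
echo "export const price = 1;" > price.ts
git add price.ts
git commit -q -m "feat: add price module (AI-assisted, reviewed)"

# Periodically review the history you could revert to.
git log --oneline -n 3
```

Small, reviewed commits like this give you clean checkpoints to revert to when an agent run goes off the rails.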
Will Cursor replace me?
While it's undeniable that the speed of progression in AI coding has been phenomenal, the need for real developers is unlikely to go away, at least in the short term. The advancements AI is making in code quality are starting to reach diminishing returns, and solutions built with AI tend to ignore the sensibilities of writing good code.
Developers are likely to take more of an orchestrator role with AI, ensuring that the code that is written is of good quality and fit for purpose. Remember, code readability is far more important than how quickly we write, so don't blindly accept code - make sure it is something you will understand when the context of the chat window is long past and you're debugging an issue in production.
Good luck out there!