People treat .cursorrules like they're fragile. One wrong character and everything stops working, right?
I spent an evening trying to break them. Malformed YAML, massive files, conflicting instructions, encoding weirdness. I wanted to find the edge cases that silently fail, the ones where your rules look fine but Cursor ignores them.
Turns out Cursor is way more forgiving than you'd expect. Most things that "should" break... don't.
How I tested
Every test followed the same pattern:
- Create a `.cursorrules` file with the specific edge case
- Give Cursor a prompt that would reveal whether the rule loaded
- Check the output for compliance
All tests ran through the Cursor CLI (`cursor agent`), which loads both `.cursorrules` and `.cursor/rules/` files the same way the GUI does.
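To make that concrete, here's the shape of a single probe. The rule and prompt below are illustrative stand-ins, not the literal test files:

```
# .cursorrules
Always prefix boolean variable names with is or has.
```

Prompt: "Write a function that checks whether a user is an admin." If the output names the result `isAdmin` instead of `admin`, the rule loaded. If not, something ate it.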
Test 1: Huge files
The worry: "My rules file is too long, Cursor probably truncates it."
I tested with a 121KB file. The rules at the bottom were followed just as well as the ones at the top.
Verdict: File size isn't your problem. If your rules aren't working, it's not because the file is too big.
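If you want to reproduce this, a throwaway script along these lines builds an oversized rules file with a distinctive "canary" rule at the bottom. The filler count and the canary rule are my own illustration, not the exact test file:

```ts
// make-big-rules.ts: generate an oversized .cursorrules file.
// ~2,000 filler lines lands in the same ballpark as the 121KB test file.
import { writeFileSync } from "node:fs";

const filler = Array.from(
  { length: 2000 },
  (_, i) => `- Rule ${i}: prefer descriptive names in module ${i}.`
);
// A distinctive rule at the very bottom: if generated code follows it,
// the tail of the file was loaded rather than truncated.
filler.push("- Name the entry-point function bottomOfFileCanary.");
writeFileSync(".cursorrules", filler.join("\n"));
```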
Test 2: Malformed YAML frontmatter
The worry: "I messed up the YAML at the top and now nothing loads."
I deliberately broke the YAML frontmatter (missing colons, bad indentation). The rule content below it still loaded and was followed.
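For reference, this is the kind of breakage I mean, reconstructed rather than copied from the test file. Mangled frontmatter on top, an ordinary rule underneath:

```
---
description broken frontmatter with no colon
    globs "*.md"
---
Always use snake_case for variable names.
```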
Verdict: Cursor doesn't care about your frontmatter formatting. It reads the rule content regardless. Bad YAML won't silently kill your rules.
Test 3: UTF-8 BOM
The worry: "My editor added a byte order mark and now Cursor can't read the file."
Added a UTF-8 BOM to the beginning of a .cursorrules file. Rules loaded fine.
Verdict: BOM is ignored. If you're on Windows and your editor adds one, don't worry about it.
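If you want to reproduce this without hunting for an editor that writes BOMs, you can prepend the mark yourself. A minimal sketch, with an example rule:

```ts
// add-bom.ts: write a .cursorrules file that starts with a UTF-8 BOM.
import { writeFileSync } from "node:fs";

const rule = "Always use snake_case for variable names.\n";
// "\uFEFF" is the byte order mark many Windows editors prepend on save.
writeFileSync(".cursorrules", "\uFEFF" + rule, { encoding: "utf8" });
```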
Test 4: Conflicting rules
The worry: "I have two rules that contradict each other. What happens?"
I set up a conflict: one rule said "use camelCase for all variables," another said "use snake_case for all variables." Both in the same file.
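Laid out as a file, that's:

```
# .cursorrules
Use camelCase for all variables.
Use snake_case for all variables.
```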
Result: the first rule won. Cursor used camelCase and actually acknowledged the conflict in its response, noting that it chose the first instruction.
Verdict: First rule wins. If you have conflicting rules, put the one you care about most at the top. Cursor won't crash or ignore both; it just picks the first one it sees.
Test 5: .cursorrules vs .cursor/rules/
The worry: "I have both a root .cursorrules file and rules in .cursor/rules/. Which one wins?"
When both exist, .cursorrules in the project root takes priority over files in .cursor/rules/.
Within the .cursor/rules/ folder, files load in alphabetical order. If two .mdc files conflict, the one with the earlier filename wins.
Verdict: The priority chain is `.cursorrules` (root) > `.cursor/rules/` (alphabetical). Name your files accordingly if order matters.
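As a layout, with example file names I made up:

```
project/
├── .cursorrules          # highest priority
└── .cursor/
    └── rules/
        ├── 00-base.mdc   # loads before 10-style.mdc (alphabetical)
        └── 10-style.mdc
```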
Test 6: Glob patterns in .cursor/rules/
The worry: "Can I have different rules for different file types?"
Yes. I created two rule files in .cursor/rules/:
- `javascript-only.mdc` with a glob targeting `*.js` files, requiring JSDoc comments
- `typescript-only.mdc` with a glob targeting `*.ts` files, requiring strict typing
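For reference, a scoped rule file is frontmatter with a `globs` field plus the rule body. A sketch of the JavaScript one, with the rule text paraphrased:

```
---
description: JSDoc requirements for JavaScript
globs: "*.js"
---
Every exported function must have a JSDoc comment describing its
parameters and return value.
```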
Both applied correctly. JavaScript files got JSDoc, TypeScript files got strict types.
Verdict: Glob patterns work. You can have per-filetype rules, which is useful if you work across multiple languages in one project.
Test 7: Complex multi-part rules
The worry: "My rule has 5 sub-items. Does Cursor follow all of them or just the first?"
I wrote a rule with 5 distinct sub-requirements. All 5 were followed in the generated output.
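I won't quote the exact rule, but it had this shape. The five sub-items below are illustrative, not the ones from the test:

```
When writing a new API route:
1. Validate the request body before using it.
2. Return typed error responses.
3. Log failures with a request ID.
4. Add a unit test for the happy path.
5. Export the handler as a named function.
```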
Verdict: Complex rules work. Cursor reads the whole thing, not just the first line.
Test 8: Rules on existing code
The worry: "Rules only affect new code generation, not edits."
I asked Cursor to refactor an existing file that used `any` types. The rule said to use `unknown` instead of `any`. Cursor replaced `any` with `unknown` in the existing code.
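Simplified, the change looked like this. The narrowing in the "after" version is what `unknown` forces on you; the exact diff was longer:

```ts
// Before the refactor, the file leaned on `any`:
// function parseConfig(raw: any) {
//   return JSON.parse(raw);
// }

// After, under the rule "use unknown instead of any":
function parseConfig(raw: unknown) {
  if (typeof raw !== "string") throw new TypeError("expected a JSON string");
  return JSON.parse(raw);
}
```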
Verdict: Rules apply to edits too, not just fresh generation.
So what actually breaks?
Honestly, not much at the file level. The real reasons your rules don't work are usually:
- The rule is too vague. "Write clean code" does nothing. "Always add `error.tsx` alongside every `page.tsx`" does something.
- The rule tells Cursor what it already does. "Use TypeScript" or "prefer functional components" won't change the output because that's already the default behavior.
- The rule conflicts with the prompt. If your prompt asks for something that contradicts a rule, the prompt tends to win. (This is general advice from observation, not a controlled test.)
The pattern: rules that target specific, concrete behaviors work. Rules that describe general vibes don't.
If you want rules that have been tested with before/after comparisons, I put together a free starter pack with the ones that actually changed Cursor's output in my testing. Two rules, both verified.
This is part of a series where I test .cursorrules claims with actual data. Part 1 covered which rules change output. Part 2 covered how to write them.
Top comments (2)
Really appreciate the systematic approach here instead of the usual "trust me bro" advice. The conflicting rules test is interesting — first-rule-wins is good to know but it also means rule file ordering becomes a subtle source of bugs in team projects where multiple people contribute rules.
The "too vague" point at the end is the real takeaway though. We maintain cursorrules across a few client projects and the ones that work are always hyper-specific: "wrap all fetch calls in a retry with exponential backoff" vs "handle errors properly." The moment you write something a human would nod at but not know how to implement, the model won't either.
Good point about rule file ordering in teams. That's actually one of the trickiest parts of scaling cursorrules across a project. We ran into the same thing where two devs wrote conflicting rules in separate .mdc files and nobody caught it until the output started flip-flopping.
One approach that helps: treat your rules directory like you'd treat ESLint configs. Have one person own the "base" rules, and use the file naming convention (alphabetical loading order) intentionally rather than letting it happen by accident.
And yeah, the "would a human know how to implement this" test is the best heuristic I've found. If you can't turn the rule into a specific code review comment, the model can't turn it into specific code.