Your AI agent doesn't know what matters
Every AI coding tool can write code, review PRs, and fix bugs. None of them know what your team actually needs right now.
Your coding agent doesn't know you had a meeting yesterday that reprioritized everything. It doesn't know that a stale PR blocks a compliance deadline. It doesn't know your teammate is already working on the same thing. It sees a list of issues sorted by age and guesses.
This is the gap nobody talks about. We gave AI tools the ability to code. We didn't give them the ability to prioritize. And prioritization is the thing that actually matters — the difference between shipping what moves the business and shipping what happens to be at the top of the backlog.
The real problem isn't capability. It's context. Your agent can build anything you ask. But it can't figure out what to build — because the context lives in your head, in meeting notes nobody reads, in Slack threads that disappear, in the founder's gut feeling about what matters this week.
Brief is our attempt to fix that. It's a .brief/ directory that lives in your project or team repo: markdown files that contain your priorities, decisions, assignments, and relationships between work items, and rule files that tell any agent how to build and maintain this context — what data to fetch, how to combine it with human priorities, what the morning and evening workflow looks like.
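A hypothetical layout, just to make the idea concrete (the file names here are illustrative, not a schema Brief prescribes):

```
.brief/
├── priorities.md      # weekly priority interview answers
├── decisions.md       # running log of decisions
├── assignments.md     # who is working on what
└── rules/
    └── workflow.md    # how agents should build and maintain the brief
```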
The key insight came from watching our own AI agents work. We built an agent that runs nightly — scans repos, reviews PRs, ships code. It was productive but directionless. It would fix a 50-day-old issue on a deprioritized product while ignoring a merge-ready PR that blocked a client deadline. The signals said "this is stale." The business context said "this doesn't matter." The agent only had signals.
So we added a human layer. A priority interview — five questions, once a week — where the team lead says "P0 is compliance, P1 is the partnership, ignore the feature requests." That file becomes the skeleton. The agent reads it first, then layers on signals from GitHub, the knowledge base, meeting action items. The output is a priority list grounded in human judgment, enriched with real data.
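The priority file from that interview might look something like this (a sketch; the format is whatever your agents can read, not a fixed schema):

```markdown
# Weekly priorities

- P0: compliance. Anything blocking the deadline comes first.
- P1: the partnership integration.
- Ignore: incoming feature requests this week.
```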
We tried building this as a CLI with 23 commands. Sync engines, enrichment pipelines, source adapters, web viewers. Then we realized: AI agents already know how to read files. They don't need a CLI to read markdown. They need the right markdown to read.
So we stripped it to three commands. brief init creates the directory and rule templates. brief fetch pulls data from your configured sources. brief check tells automation whether anything changed. Everything else — reading priorities, logging decisions, building the brief — is just an agent reading and writing markdown files, following rules described in markdown.
The convention is the product. The tool barely exists. And that's the point.
What we learned: the bottleneck in AI-assisted work isn't execution anymore. It's making sure the AI works on the right thing. Brief doesn't make your agent smarter. It makes your agent informed.