Replies: 1 comment
Interesting concept. We run AI agents for production work, and repository legibility is a real bottleneck. Your 75.8 score for `ai` aligns with what we see: well-documented repos are significantly easier for agents to navigate.

**Key insight from our experience:** the biggest time sink for agents is not complex logic but finding where to start. A clear README with a file-structure explanation saves massive exploration tokens.

**Suggestion:** add a quickstart test. Can an agent successfully:

If yes → +15 points. This measures "can an agent actually work here" vs just "is the code readable" (a rough sketch of such a probe follows at the end of this comment).

**Another suggestion:** include MCP server compatibility in the score. Does the repo provide:

These would differentiate "human-friendly" from "agent-friendly". We documented our agent-onboarding process: anthropics/anthropic-sdk-python#1501

Would be useful to see how TanStack repos score across these dimensions.
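As a rough illustration of the quickstart probe suggested above: the sketch below assumes a clone → install → test sequence and npm tooling; `quickstartProbe`, `hasMcpSignals`, the file names checked, and the +15 scoring hook are all illustrative assumptions, not Agent Friendly Code's actual rubric.

```typescript
import { execSync } from "node:child_process";
import { existsSync, mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Hypothetical quickstart probe: can an agent go from a bare clone to a
// passing test run using only the repo's obvious entry-point commands?
// The clone → install → test sequence is an illustrative assumption.
function quickstartProbe(repoUrl: string): boolean {
  const dir = mkdtempSync(join(tmpdir(), "afc-"));
  const steps = [
    `git clone --depth 1 ${repoUrl} ${dir}`,
    `npm install --prefix ${dir}`,
    `npm test --prefix ${dir}`,
  ];
  for (const step of steps) {
    try {
      // Cap each step at 10 minutes: for an agent, a hang is as bad as a failure.
      execSync(step, { stdio: "ignore", timeout: 600_000 });
    } catch {
      return false; // any failed step means no quickstart bonus
    }
  }
  return true; // all steps passed → e.g. award the suggested +15 points
}

// Hypothetical MCP-compatibility signal: does the checkout ship any common
// agent-facing entry points? These file names are assumptions for illustration.
function hasMcpSignals(repoDir: string): boolean {
  return [".mcp.json", "llms.txt"].some((f) => existsSync(join(repoDir, f)));
}
```

A real scorer would want finer-grained credit (which step failed, how long each took) rather than a single boolean, but even this pass/fail version separates "readable" from "runnable".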
Hi team, I'm Himanshu. I built Agent Friendly Code, which scores public repos on how legible they are to AI coding agents (clear conventions, docs, tests, build signals; nothing about accepting agent-authored PRs).
`ai` scored 75.8/100. Full breakdown: https://www.agentfriendlycode.com/repo/154

If you're open to it, here's a badge you can drop in the README:
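The exact snippet lives on the repo page linked above; as a sketch of the shape, such a badge is ordinarily one line of README markdown, with the image path below being a guess rather than the real endpoint:

```markdown
<!-- Illustrative only: copy the real snippet from the agentfriendlycode.com repo page -->
[![Agent Friendly Code: 75.8/100](https://www.agentfriendlycode.com/badge/154)](https://www.agentfriendlycode.com/repo/154)
```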
A note on what this isn't: the badge signals codebase readability for agents, not an invitation for drive-by AI PRs, and it changes nothing about your contribution policy. Totally fine to pass, happy either way, and feedback on the score itself is welcome.