Two weeks in with Feynman, the open-source research agent built on the Pi runtime. The short version: rough around the edges, genuinely useful.

What it does

Feynman is a CLI tool. You ask a research question and it dispatches four parallel agents: a researcher pulling from papers and the web, a reviewer running simulated peer critique, a writer producing structured output, and a verifier checking every citation and weeding out dead links. Every claim in the output links back to a source. That last part matters more than it sounds.

The other standout feature is feynman audit <arxiv-id>: give it a paper and it compares the stated claims against the actual public codebase. How often does published research actually match the code? Turns out, not always.

How I use it

Two patterns have stuck.

The first is gut-feeling verification. I work in software and AI. You accumulate opinions fast, and not all of them survive contact with the literature. Before I put something in writing or stake a position in a conversation, I'll run it through Feynman. Sometimes it confirms what I thought. Sometimes it finds a paper that reframes the question entirely. Either way I come out better informed than I went in.

The second is writing support. When I am drafting something and need a citation, my habit has been to skip the search and go with gut feeling. Feynman changes that: now I ask it first. The verifier agent means I am not getting hallucinated references. That is a real shift in how much I trust the tool versus how much I trust my own search habits.

The rough edges

It is a young project. The main friction I hit: subagents do not automatically inherit the LLM provider of the main agent. If you are running against anything other than Anthropic, subagents silently fall back, hit a missing API key, and fail. You do not always notice immediately.

Two things cause this. First, Pi does not expand environment variables in agent frontmatter. If an agent file says model: ${OPENAI_MODEL}, Pi reads that as the literal string, not your configured provider. Second, Pi's subagent loader only looks for agent definitions in specific paths (.agents/ or .pi/agents/). If agents end up anywhere else, Pi falls back to its own bundled definitions, which hardcode anthropic/claude-sonnet-4-6. That is where the missing API key error comes from, even when you have a different provider set up correctly.
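As a concrete illustration of the first failure, here is what such an agent file's frontmatter might look like. The field names are my guess at Pi's agent format; only the model: ${OPENAI_MODEL} placeholder comes from the behaviour described above:

```yaml
# Hypothetical agent frontmatter, e.g. in .agents/researcher.md
name: researcher
model: ${OPENAI_MODEL}  # Pi reads this as the literal string "${OPENAI_MODEL}"
```

Because no expansion happens, the literal placeholder is passed along as the model name, and the request fails against a provider that was never actually selected.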

The fix is to expand env vars at bootstrap time, before agent files are written to the path Pi actually searches. The file lands with the model string already baked in, not a placeholder. Feynman does not do this yet out of the box.
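A sketch of what that bootstrap step could look like, assuming Python. The function name, template layout, and file glob are illustrative; only the ${OPENAI_MODEL}-style placeholder and the .pi/agents/ search path come from the behaviour described above:

```python
import os
from pathlib import Path

def bootstrap_agents(template_dir: str, target_dir: str = ".pi/agents") -> None:
    """Expand ${VAR} placeholders in agent templates, then write the
    resolved files into a path Pi actually searches."""
    Path(target_dir).mkdir(parents=True, exist_ok=True)
    for template in Path(template_dir).glob("*.md"):
        text = template.read_text()
        # os.path.expandvars resolves $VAR and ${VAR} from the environment;
        # unset variables are left untouched, so check and fail loudly
        # instead of writing a file with a dangling placeholder.
        expanded = os.path.expandvars(text)
        if "${" in expanded:
            raise RuntimeError(f"unresolved placeholder in {template.name}")
        (Path(target_dir) / template.name).write_text(expanded)
```

The point is the ordering: expansion happens before the file lands on disk, so by the time Pi reads the agent definition there is nothing left to (not) expand.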

When subagents fail, Feynman falls back to the main agent, or to whichever agent has a working pipeline. The system prompt may not be optimised for the task, but the output is still useful. I have a fork that wires this up properly and adds parallel-cli as the search backend. If you run into the same issue, it might save you some time.

Worth it

The project has the right idea about what matters in a research tool: source-grounded output, not plausible-sounding summaries. For a two-week-old piece of software it is further along than I expected. I will keep using it.