Insights
Read our latest insights and research
Intent Before Specifics
Intent comes first. Stay with it before you diverge into specifics: once you know clearly what you want, orchestration becomes easier. Be precise about the what before you commit to the how.
CMA Can Now Generate and Embed Images
We updated the Content Management Agent spec to document image generation and embedding. The CMA can add heroes and thumbnails to Lab entries when the environment supports it.
How Nimbus Maintains Itself
A look at how Nimbus uses its own agent system to maintain and evolve its infrastructure
Experiments That Failed: Too Many Tools
Adding more tools felt like increasing capability. In reality it increased complexity. Fewer tools lead to more stable systems.
Experiments That Worked: Artifact-Driven Collaboration
Agents collaborating through artifacts rather than conversation. Files became the shared memory of the system. Artifacts create durable context.
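A minimal sketch of the idea, assuming agents that share a working directory; the agent names and file paths are illustrative, not part of any real system:

```python
from pathlib import Path
import tempfile

# Agents collaborate through artifacts: one writes a file, the next
# reads it. The file, not a conversation, is the shared memory.
workdir = Path(tempfile.mkdtemp())

def research_agent() -> None:
    """Produces an artifact with its findings."""
    (workdir / "notes.md").write_text("key finding: queues matter")

def writer_agent() -> str:
    """Consumes the artifact as durable context, long after it was written."""
    notes = (workdir / "notes.md").read_text()
    return f"Article draft based on: {notes}"

research_agent()
draft = writer_agent()
assert "queues matter" in draft
```

Because the artifact persists on disk, a third agent (or a human) can inspect or resume the work later without replaying any conversation.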
Experiments That Worked: Structured Prompts
Structured prompt frameworks dramatically improve consistency. Structure reduces randomness and improves reliability.
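One way to picture such a framework, sketched as a template builder; the section names (Task, Context, Constraints, Expected output) are illustrative assumptions, not a prescribed standard:

```python
def build_prompt(task: str, context: str,
                 constraints: list[str], expected: str) -> str:
    """Assemble a prompt with fixed, named sections so every request
    to the model has the same shape."""
    bullets = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Constraints:\n{bullets}\n"
        f"Expected output: {expected}"
    )

prompt = build_prompt(
    task="Summarize the release notes",
    context="Internal changelog for v2.3",
    constraints=["max 3 bullet points", "plain English"],
    expected="A markdown bullet list",
)
assert prompt.startswith("Task:")
```

The fixed sections are the point: the model sees the same scaffolding every time, which is what reduces run-to-run variance.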
Experiments That Failed: Overly Smart Agents
Highly autonomous agents that could decide their own approach led to unpredictability. Constraining agents actually improved performance.
The CEO Becomes a System Architect
Operating an autonomous system changes the role of leadership. The job shifts from managing work to designing the system that produces the work.
The Real Bottleneck Is Not AI—It's Inputs
Whenever automation slowed down, the root cause was rarely the AI itself. It was poor inputs: unclear briefs, missing docs, incomplete requirements.
AI Works Best When Work Is Clearly Structured
AI struggles with vague instructions. When tasks have context, constraints, and expected outputs, performance improves dramatically.
Most Agency Work Is Decision Friction
Most agency time is not spent building. It's spent clarifying requirements, making decisions, and aligning expectations.
AI Systems Drift Without Documentation
AI agents are sensitive to small changes. Without documentation, behavior gradually diverges from expectations.
Automation Creates New Failure Modes
Automation removes certain human errors but introduces new types of failures. It changes the nature of problems rather than eliminating them.
Observability Is More Important Than Intelligence
When something breaks, the biggest challenge is understanding what happened. A system you can observe is a system you can fix.
Autonomous Systems Still Need Governance
Autonomy sounds appealing, but unguided systems drift quickly. True autonomy is structured.
Idempotency Is the Hidden Requirement of Automation
Automated systems retry tasks when something fails. If operations aren't safe to repeat, retries create duplicates or corrupt state.
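A minimal sketch of an idempotent handler, assuming each task carries a caller-supplied unique id; the names are hypothetical:

```python
# Results are recorded by task id, so replaying a task returns the
# stored result instead of redoing the work.
processed: dict[str, str] = {}  # task id -> stored result

def handle(task_id: str, payload: str) -> str:
    """Process a task; safe to call again with the same id."""
    if task_id in processed:       # retry path: return the cached result
        return processed[task_id]
    result = payload.upper()       # stand-in for the real side effect
    processed[task_id] = result    # record before acknowledging
    return result

first = handle("t-1", "hello")
retry = handle("t-1", "hello")     # e.g. a retry after a lost ack
assert first == retry == "HELLO"
assert len(processed) == 1         # no duplicate side effects
```

The same pattern applies to real side effects (emails, payments, published posts): key the operation, check the key before acting, record the key atomically with the result.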
Task Queues Are the Backbone of AI Systems
One of the least glamorous components turned out to be one of the most important. With a queue, automation becomes predictable.
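The core of the idea fits in a few lines, here as a hypothetical in-memory FIFO sketch (real systems would use a persistent queue):

```python
from collections import deque

# Tasks wait in a queue and are processed one at a time, in arrival
# order, so execution is ordered and inspectable instead of ad hoc.
queue: deque[str] = deque()
completed: list[str] = []

def enqueue(name: str) -> None:
    queue.append(name)

def drain() -> None:
    """Pull tasks off the front until the queue is empty."""
    while queue:
        task = queue.popleft()
        completed.append(task)  # stand-in for the real work

for step in ("draft", "review", "publish"):
    enqueue(step)
drain()
assert completed == ["draft", "review", "publish"]
```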
Sequential Execution Beats Parallel Chaos
Running many AI tasks simultaneously seemed efficient. Agents overwrote each other. Sequential execution dramatically improved stability.
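A toy illustration of the contrast, with two hypothetical agents sharing one artifact: run in a strict order, each agent sees the other's finished output rather than a half-written state.

```python
# Shared artifact both agents touch.
doc: list[str] = []

def writer_agent() -> None:
    doc.append("draft")

def editor_agent() -> None:
    # Runs only after the writer finishes, so it edits a complete draft
    # instead of racing the writer for the same state.
    doc.append(f"edited:{doc[-1]}")

for agent in (writer_agent, editor_agent):  # strict order, no overlap
    agent()

assert doc == ["draft", "edited:draft"]
```

Run concurrently, the editor could read `doc` before the draft exists; sequencing removes that whole class of failure at the cost of some throughput.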
Automation Breaks When Humans Stay in the Loop Too Much
Frequent intervention interrupts flow and introduces inconsistency. Automation works best when boundaries between human and machine are clear.
Agents Don't Fail Where You Expect
We feared hallucination. The most common failure was much simpler: unclear instructions. Most AI problems are actually workflow problems.
The Power of Agent Handoffs
The biggest improvement came from something simple: explicit handoffs. Agents collaborate through documents, not conversation.
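One plausible shape for an explicit handoff, sketched as a structured record one agent emits and the next consumes; the field names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Handoff:
    """A self-contained baton passed between agents: who, to whom,
    which artifact, and what to do with it."""
    from_agent: str
    to_agent: str
    artifact: str      # path or id of the shared document
    instructions: str  # what the receiver should do next

h = Handoff(
    from_agent="researcher",
    to_agent="writer",
    artifact="notes.md",
    instructions="Turn the findings into a first draft",
)
assert h.to_agent == "writer"
```

Because the handoff names an artifact rather than quoting a conversation, the receiving agent needs no shared chat history to pick up the work.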
Why Agent Roles Matter More Than Model Quality
We assumed upgrading models would fix most problems. Most failures came from agents stepping outside their responsibilities.
The Myth of the Single AI Agent
One powerful agent that handles everything seemed intuitive. In practice, we learned that intelligence scales through collaboration, not size.