Insights

Read our latest insights and research

Intent Before Specifics

Intent comes first. Stay with it before you diverge into specifics. When you know what you want, orchestration becomes easier; but the intent itself has to be made precise before the specifics can follow.

intent · orchestration · clarity · workflow · lessons

CMA Can Now Generate and Embed Images

We updated the Content Management Agent spec to document image generation and embedding. The CMA can add heroes and thumbnails to Lab entries when the environment supports it.

cma · images · spec · lab · automation

How Nimbus Maintains Itself

A look at how Nimbus uses its own agent system to maintain and evolve its infrastructure.

meta · automation · agents · self-improvement

Experiments That Failed: Too Many Tools

Adding more tools felt like increasing capability. In reality it increased complexity. Fewer tools lead to more stable systems.

experiments · tools · complexity · simplicity · lessons

Experiments That Worked: Artifact-Driven Collaboration

Agents collaborating through artifacts rather than conversation. Files became the shared memory of the system. Artifacts create durable context.

experiments · artifacts · collaboration · handoffs · lessons
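The handoff pattern above can be sketched in a few lines: one agent writes an artifact to disk and the next agent reads it, so the file rather than a conversation carries the context. Agent names and the JSON brief format are illustrative assumptions, not the production system.

```python
import json
import pathlib
import tempfile

# Illustrative sketch: the artifact file is the shared memory between agents.
workdir = pathlib.Path(tempfile.mkdtemp())

def research_agent() -> pathlib.Path:
    """First agent: writes its findings as a durable artifact (hypothetical)."""
    artifact = workdir / "brief.json"
    artifact.write_text(json.dumps({"topic": "queues", "notes": ["FIFO"]}))
    return artifact

def writing_agent(artifact: pathlib.Path) -> str:
    """Second agent: picks up context from the artifact, not from chat history."""
    brief = json.loads(artifact.read_text())
    return f"Draft about {brief['topic']}"

draft = writing_agent(research_agent())
```

Because the brief lives on disk, either agent can be restarted or replaced without losing the shared context.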

Experiments That Worked: Structured Prompts

Structured prompt frameworks dramatically improve consistency. Structure reduces randomness and improves reliability.

experiments · prompts · structure · reliability · lessons
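A minimal sketch of what a structured prompt framework can look like: context, constraints, and expected output become explicit named fields instead of free text. The template and field names here are assumptions for illustration.

```python
# Hypothetical structured prompt template: every prompt fills the same
# three sections, which keeps outputs consistent across runs.
TEMPLATE = """\
## Context
{context}

## Constraints
{constraints}

## Expected output
{expected_output}
"""

prompt = TEMPLATE.format(
    context="Summarize the Q3 report for engineers.",
    constraints="Max 100 words. No marketing language.",
    expected_output="A markdown bullet list.",
)
```

The point is not the specific sections but that the structure is fixed: the model sees the same scaffold every time, which reduces randomness.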

Experiments That Failed: Overly Smart Agents

Highly autonomous agents that could decide their own approach led to unpredictability. Constraining agents actually improved performance.

experiments · agents · autonomy · constraints · lessons

The CEO Becomes a System Architect

Operating an autonomous system changes the role of leadership. The job shifts from managing work to designing the system that produces the work.

leadership · automation · system-design · agency · lessons

The Real Bottleneck Is Not AI—It's Inputs

Whenever automation slowed down, the root cause was rarely the AI. It was poor inputs: unclear briefs, missing docs, incomplete requirements.

automation · bottlenecks · inputs · briefs · lessons

AI Works Best When Work Is Clearly Structured

AI struggles with vague instructions. When tasks have context, constraints, and expected outputs, performance improves dramatically.

agents · structure · briefs · workflow · lessons

Most Agency Work Is Decision Friction

Most agency time is not spent building. It's spent clarifying requirements, making decisions, and aligning expectations.

agency · workflow · decisions · automation · lessons

AI Systems Drift Without Documentation

AI agents are sensitive to small changes. Without documentation, behavior gradually diverges from expectations.

documentation · agents · maintenance · stability · lessons

Automation Creates New Failure Modes

Automation removes certain human errors but introduces new types of failures. It changes the nature of problems rather than eliminating them.

automation · failure-modes · reliability · lessons

Observability Is More Important Than Intelligence

When something breaks, the biggest challenge is understanding what happened. A system you can observe is a system you can fix.

observability · debugging · automation · reliability · lessons

Autonomous Systems Still Need Governance

Autonomy sounds appealing, but unguided systems drift quickly. True autonomy is structured.

automation · governance · guardrails · autonomy · lessons

Idempotency Is the Hidden Requirement of Automation

Automated systems retry tasks when something fails. If operations aren't safe to repeat, retries create duplicates or corrupt state.

automation · idempotency · reliability · retries · lessons
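The idea above can be sketched with a handler that records which task IDs it has already processed, so a retry of the same task returns the cached result instead of repeating the side effect. Names and the in-memory store are illustrative assumptions; a real system would persist the record.

```python
# Hypothetical idempotent task handler: retrying the same task ID is safe,
# because the side effect runs at most once per ID.
processed: set[str] = set()
results: dict[str, str] = {}

def handle_task(task_id: str, payload: str) -> str:
    if task_id in processed:
        return results[task_id]   # already done: return the cached result
    result = payload.upper()      # stand-in for the real side effect
    processed.add(task_id)
    results[task_id] = result
    return result

first = handle_task("t1", "hello")
retry = handle_task("t1", "hello")  # simulated retry: no duplicate work
```

Without the ID check, every retry would rerun the side effect, which is exactly how duplicates and corrupted state creep in.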

Task Queues Are the Backbone of AI Systems

One of the least glamorous components turned out to be one of the most important. With a queue, automation becomes predictable.

automation · queues · infrastructure · reliability · lessons
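A minimal sketch of the queue pattern, assuming a simple FIFO of named tasks: each task is dequeued and fully completed before the next one starts, which is what makes the overall flow predictable. Task names are illustrative.

```python
from collections import deque

# Illustrative FIFO task queue: work is drained one task at a time.
queue = deque(["draft", "review", "publish"])
log = []

while queue:
    task = queue.popleft()        # take the oldest pending task
    log.append(f"done:{task}")    # it finishes before the next begins
```

The queue also gives the system a single place to observe pending work, retry failures, and enforce ordering.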

Sequential Execution Beats Parallel Chaos

Running many AI tasks in parallel seemed efficient; in practice, agents overwrote each other's work. Sequential execution dramatically improved stability.

automation · execution · parallelism · stability · lessons

Automation Breaks When Humans Stay in the Loop Too Much

Frequent intervention interrupts flow and introduces inconsistency. Automation works best when boundaries between human and machine are clear.

automation · human-in-the-loop · workflow · lessons

Agents Don't Fail Where You Expect

We feared hallucination. The most common failure was much simpler: unclear instructions. Most AI problems are actually workflow problems.

agents · failure-modes · prompts · workflow · lessons

The Power of Agent Handoffs

The biggest improvement came from something simple: explicit handoffs. Agents collaborate through documents, not conversation.

agents · handoffs · artifacts · collaboration · lessons

Why Agent Roles Matter More Than Model Quality

We assumed upgrading models would fix most problems. Most failures came from agents stepping outside their responsibilities.

agents · roles · reliability · lessons

The Myth of the Single AI Agent

One powerful agent that handles everything seemed intuitive. In practice, we learned that intelligence scales through collaboration, not size.

agents · automation · architecture · lessons