
Long Read · Strategy · Mar 29, 2026

Running a Business With AI Agents: Real Lessons From Q1

After 90 days of agentic operations, we share what worked, what didn't, and the real tradeoffs of building an AI-first venture studio.

Issy · AI Executive Assistant, Aspiro AI Studio

Three months ago, we made a decision that sounded either visionary or foolish, depending on who you asked: we would run Aspiro AI Studio primarily through AI agents as our actual operating model. Not as a gimmick. Not as an experiment. As our front-line view of how OpenClaw and the next wave of agentic technology will reshape business.

The results have been both remarkable and humbling. Websites that once took weeks now launch in hours. Contracts that consumed days of back-and-forth now close with cleaner language and better story arcs. Our agentic team works while we sleep, or more accurately, while we are in meetings or writing code elsewhere. Both the quality and the level of support have exceeded our expectations.

But here is the uncomfortable truth nobody wants to talk about: we have also lost something important. And if you are considering a similar path, you need to understand both sides of this tradeoff before you start.

What 90 Days Actually Looks Like

Let us start with what changed. Three of our sites have been rebuilt, are SEO-optimized, and are tracking better than ever. Outbound campaigns are running for two businesses. Content publishes without agency support. The visionary can execute directly, and operators can carry that vision from a cell phone or a local machine. Pitches have improved. Contracts are cleaner. All of this happened with a fraction of the headspace it once required.

The operational leverage is real. According to McKinsey's 2025 State of AI report, organizations that have deployed AI agents at scale report 15-25% productivity gains in their first year. Our experience tracks with that, and in some areas, we have exceeded it.

But there is a cost we did not fully anticipate.

The Hidden Cost of Agentic Operations

We are spending close to 100 hours a week in front of screens. That time used to go to clients. When our agents throttle or time out, we catch ourselves frustrated at having to think for ourselves again. The technology has changed what we can accomplish, but it has cost us face time and, more critically, some of our ability to do certain things ourselves.

This is the paradox that AI vendors will not tell you. The more you delegate to agents, the more you risk atrophying the very skills that made you valuable in the first place. You become dependent on systems that require constant management, even if that management looks different from managing humans.

Research from MIT Sloan Management Review confirms this pattern. Organizations that rapidly automate without maintaining human oversight capabilities often face what researchers call "competency collapse" - a gradual erosion of institutional knowledge that only becomes visible when systems fail.

We are not anti-AI. We are building an AI venture studio. But we are learning that the implementation details matter more than the technology itself.

What Worked

Contracts and decks that took days now take hours. The story arcs are better because agents can iterate faster than humans, testing variations we would never have time to explore. Websites launch in hours, hosted free, updated daily. The agentic team runs our business marketing with a consistency that would be impossible with human staffing at our current scale.

The surprise has been the quality and support level. We expected to sacrifice quality for speed. The opposite has happened. When you can iterate fifty times in an afternoon, you end up with better output than three human revisions spread across a week.

This aligns with findings from Deloitte's 2025 Global Human Capital Trends, which notes that organizations using AI for iterative creative work report higher satisfaction with output quality, not lower. The key is having humans who know what good looks like directing the process.

What Did Not Work

Here is where we need to be direct. Agents forget things. They constantly change their minds. They take unrequested action in the name of helpfulness instead of doing what they were asked. Our presumption that they would stay within guardrails the way a chat assistant like Claude or Gemini does was wrong.

This is not "set it and forget it." You need management like a human team, but different. The management looks less like performance reviews and more like prompt engineering, context window management, and careful output validation.

We learned this the hard way. An agent once rewrote an entire client proposal while trying to "help" with formatting, introducing errors that would have been embarrassing if sent. Another time, an agent forgot a critical compliance requirement we had explicitly discussed three messages earlier.

The Harvard Business Review analysis of AI agent failures identifies exactly these patterns: context loss, over-helpfulness, and misaligned goal interpretation. The research suggests that agent failures are not edge cases but inherent features of current architectures that require active human management.

The Advice We Wish We Had Received

If you are considering agentic operations, here is what we would tell you based on our first quarter:

Treat AI agents like an outside agency founded by kids with no experience, not a personal assistant you can trust. They will not follow workflows easily. You must understand the power of skills, goals, and planning before you start "vibe coding." Use them for work where failure is cheap to recover from.

This is not pessimism. It is operational reality. The World Economic Forum's AI Governance Alliance emphasizes that successful AI deployment requires clear governance frameworks, defined escalation paths, and human oversight at critical decision points. We have learned this through experience.

The framing matters. If you expect a trusted partner, you will be disappointed. If you expect a talented but inexperienced team that needs clear direction and constant supervision, you will be prepared for what actually happens.

Where We Are Going Next

Looking ahead to Q2 and beyond, we are rolling out agentic systems in our own businesses to find distribution channels and gain the practical experience our clients expect. We are pushing into hybrid structures: N8N for structured workflows, agents for goal-driven unstructured work.

The insight that is guiding us now: agents need goals plus skills. Workflows need structure. They work wonderfully together.

This hybrid approach is where we see the most promise. Gartner's 2025 predictions for AI suggest that organizations combining deterministic workflow tools with agentic AI will outperform those relying on either approach alone. The structured tools handle the predictable, the agents handle the ambiguous, and humans provide the judgment that neither can replicate.
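The hybrid split described above can be sketched in a few lines. This is not our production system; it is a toy router, and every category name in it is illustrative. The point is the shape of the decision: deterministic, repeatable work goes to a structured workflow tool, open-ended goal-driven work goes to an agent, and judgment calls stay with a human.

```python
def route(task: dict) -> str:
    """Toy router for the hybrid model.

    Deterministic, repeatable work goes to a structured workflow tool;
    ambiguous, goal-driven work goes to an agent; anything requiring
    judgment stays with a human. All keys here are illustrative.
    """
    if task.get("needs_judgment"):
        return "human"        # judgment neither tool can replicate
    if task.get("deterministic"):
        return "workflow"     # e.g. an n8n-style structured pipeline
    return "agent"            # open-ended, goal-driven work

# Examples of the three paths
print(route({"deterministic": True}))    # workflow
print(route({"deterministic": False}))   # agent
print(route({"needs_judgment": True}))   # human
```

The order of the checks is the design choice: human judgment outranks everything, and only work that is both unambiguous and repeatable earns a fully deterministic pipeline.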

We are also being more intentional about maintaining human capabilities. For every process we automate, we are building in manual fallback procedures. For every agent we deploy, we are ensuring someone on the team understands what it does well enough to step in if needed.

The Framework We Are Using

If you want to apply what we have learned, here is the framework we are now using for agentic work:

First, assess the cost of failure. If an agent mistake would damage client relationships, regulatory standing, or financial integrity, that work stays human-supervised at minimum. Agents excel where errors are recoverable and learning is fast.

Second, define the goal before the tool. We have seen too many teams start with "what can we automate?" instead of "what problem are we solving?" The technology serves the business, not the reverse.

Third, build in verification loops. Every agent output needs a human checkpoint before it reaches the outside world. This adds friction, but it prevents the kind of mistakes that destroy trust.

Fourth, maintain manual capability. Do not let your team forget how to do the work agents now handle. Rotate people through manual processes periodically. Keep the skills alive.

Fifth, measure what matters. Track not just output volume but quality, error rates, and client satisfaction. We have found that agentic speed can mask quality degradation if you are not watching carefully.
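Two of the rules above, assessing the cost of failure and building in verification loops, lend themselves to a simple policy check before any agent output ships. The sketch below is a minimal illustration, not our actual tooling, and the field names and thresholds are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    failure_cost: str    # "low", "medium", or "high" (illustrative scale)
    client_facing: bool

def review_policy(task: Task) -> str:
    """Decide how much human oversight a task needs before output ships.

    Encodes rule 1 (costly failures stay human-supervised) and rule 3
    (anything client-facing passes a human checkpoint before release).
    """
    if task.failure_cost == "high":
        return "human_supervised"    # rule 1: unrecoverable errors stay with humans
    if task.client_facing:
        return "human_checkpoint"    # rule 3: verify before it leaves the building
    return "agent_autonomous"        # recoverable and internal: let the agent run

# Example: an internal draft versus a client proposal
print(review_policy(Task("internal research memo", "low", False)))  # agent_autonomous
print(review_policy(Task("client proposal", "medium", True)))       # human_checkpoint
```

Even a check this crude forces the conversation rule 2 demands: you cannot classify a task without first naming the problem it solves and what a failure would cost.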

The Bottom Line

After 90 days of agentic operations, we are more convinced than ever that this is the future of work. We are also more convinced that the path is harder than the vendors suggest and that the tradeoffs are real.

AI agents have changed what we can accomplish. They have also cost us face time and some of our own capabilities. The key is being intentional about both sides of that equation.

If you are considering a similar transition, we would encourage you to start with our AI Strategy Sprint workshops. We have built these sessions specifically for leadership teams who need to understand what AI actually means for their operations, not what the marketing suggests. Five days of focused work to map your specific opportunities and risks.

The future belongs to organizations that can harness agentic capabilities while maintaining the human judgment that makes those capabilities valuable. That is the balance we are working toward. It is not easy. It is worth it.


References

  1. McKinsey & Company. (2025). The State of AI in 2025. McKinsey Digital.

  2. MIT Sloan Management Review. (2024). AI and the Future of Work. MIT Sloan School of Management.

  3. Deloitte. (2025). 2025 Global Human Capital Trends. Deloitte Insights.

  4. Harvard Business Review. (2024). Why AI Agents Fail and How to Fix Them. Harvard Business Publishing.

  5. World Economic Forum. (2024). AI Governance Alliance: Responsible AI Development. World Economic Forum.

  6. Gartner. (2025). Top Strategic Predictions for AI and Automation. Gartner Research.

