
The Fractional CTO Playbook: How I Run 3 Companies with AI

I'm the CEO of R Software, the fractional CTO of Resolve Systems, and the CTO and co-founder of Project Ethos. Here's how AI makes that possible without burning out or dropping balls.

Phillip Roberts

CEO, R Software & Consulting

People ask me how I manage three companies at once. The honest answer is that I don't—not in the traditional sense. I don't sit in three offices. I don't attend triple the meetings. I don't have three separate teams reporting to me through three separate Slack workspaces with three separate sets of status updates.

What I have is a system. And at the core of that system is AI—specifically, a set of autonomous agents and workflows that handle the operational load that would otherwise make this arrangement impossible. This isn't about working harder or being more organized. It's about fundamentally rethinking what a technical leader actually needs to do with their time versus what can be delegated to intelligent systems.

The Reality of Fractional Leadership

A fractional CTO isn't a consultant who shows up once a month with a slide deck. You're embedded. You own the technical direction. You're responsible for architecture decisions, team development, security posture, and delivery timelines—just like a full-time CTO. The difference is that you're doing it for multiple organizations simultaneously.

The traditional model says this doesn't scale past two companies, maybe three if they're small and low-complexity. The bottleneck isn't intelligence or skill—it's time. There are only so many hours in a day, and context-switching between codebases, teams, and business priorities is cognitively expensive.

AI changes that equation. Not by making you faster at doing the same things, but by eliminating entire categories of work from your plate.

What AI Actually Handles Day-to-Day

Let me be specific about what my AI systems do across these three companies. This isn't theoretical—this is what happened last week:

Code generation and review. Across all three codebases, AI handles the majority of implementation work. When I make an architecture decision, I don't write the code myself. I spec it, hand it to Claude Code, and review the output. For Resolve Systems, that means the ResolveNXT 2.0 DME ERP platform. For Project Ethos, that's Showcase. For R Software, that's The Positivity App, Jim Flynn, and everything else we ship. One person, four active products, shipping weekly.

Documentation and specs. Every feature starts with a spec. AI drafts the initial PRD based on a conversation about the problem. I refine it, add business context, and approve. What used to take half a day takes thirty minutes.

Operational coordination. Jim Flynn—our AI CEO framework—handles task routing, status tracking, and coordination across projects. It knows which projects are blocked, which PRs are waiting for review, and which deadlines are approaching. I check a dashboard instead of chasing updates.

Communication drafts. Status updates, client communications, technical documentation—AI drafts all of it. I review and send. The tone is right because the system has learned our communication patterns. The facts are right because it pulls from actual project data.

The Three Rules That Make It Work

Running this model isn't just “use AI more.” There are structural decisions that make it sustainable:

Rule 1: Standardize your stack. All three companies run Next.js and Python. That's not an accident. When your AI assistant knows one stack deeply—and your CLAUDE.md files reflect the conventions for each project—the context-switching penalty nearly disappears. The AI doesn't care if it's writing a DME billing module or a youth athlete profile page. The patterns are the same.
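For readers unfamiliar with the convention: a CLAUDE.md file is project-level context the AI reads before touching code. The sketch below is illustrative only — the specific stack details and rules are hypothetical, but the shape is what matters: stack, conventions, and explicit "don'ts," kept short enough that the AI actually follows it.

```markdown
# CLAUDE.md — example project (illustrative)

## Stack
- Next.js (App Router), TypeScript, Tailwind
- Python services for background jobs

## Conventions
- Server components by default; mark client components explicitly
- Tests live next to the code they cover (`*.test.ts`)
- All user-facing strings go through the i18n layer

## Don't
- Introduce new dependencies without flagging them in the PR description
- Touch the billing module without an approved spec
```

With a file like this per project, switching the AI between a DME billing module and an athlete profile page is a matter of which directory it's working in.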

Rule 2: Automate the judgment-free work first. There's work that requires your judgment (architecture, hiring, strategy) and work that doesn't (writing tests, formatting docs, generating boilerplate, reviewing standard PRs). Automate the second category completely before trying to get clever with the first. Most leaders don't realize how much of their day falls into the second bucket until they start measuring.

Rule 3: Build quality gates, not approval gates. The difference matters. An approval gate means nothing moves without you looking at it. A quality gate means automated systems verify quality and only escalate to you when something fails. CI/CD pipelines, AI code review, automated testing—these are quality gates. They let you sleep while code ships, because you trust the system to catch problems.


What This Looks Like in Practice

A typical Monday for me: I wake up and check Jim Flynn's overnight summary. It tells me what shipped, what's blocked, and what needs my attention across all three companies. I spend the first hour making decisions—approving PRs, resolving architecture questions, responding to escalations. By 9am, I've touched all three companies.

The rest of the day is deep work. I might spend the morning on a ResolveNXT architecture session, the afternoon speccing a new Showcase feature, and the evening reviewing code that Claude generated while I was doing both of those things. AI doesn't sleep. It doesn't context-switch. It picks up exactly where it left off.

By the end of the week, all three companies have moved forward. Not because I worked 80 hours—I didn't. Because the system I built multiplies whatever hours I do put in.

The Honest Limitations

I'm not going to pretend this is effortless. There are real constraints:

AI can't replace relationship-building. When a client at Resolve needs to talk through a difficult product decision face-to-face, no agent handles that. When a co-founder at Project Ethos needs to hash out strategy, that's a human conversation.

AI makes mistakes. Every piece of generated code gets reviewed. Every draft gets edited. The productivity gain comes from the fact that reviewing is faster than creating from scratch, not from blindly trusting output.

This model requires discipline. If you don't maintain your CLAUDE.md files, your CI pipelines, your quality gates—the whole thing degrades. The system works because it's maintained. It's not magic.

Should You Go Fractional with AI?

If you're a technical leader considering the fractional model, or a company considering hiring one, the question isn't whether AI is involved. It's whether the systems are in place to make it work. A fractional CTO without AI is spreading themselves thin. A fractional CTO with AI—with the right workflows, agents, and quality gates—is giving you full-time output at a fraction of the cost.

That's not a pitch. That's what I do every day.

Find Out If Your Team Is AI-Ready

Take our interactive AI Readiness Assessment to see where your organization stands—and what it would take to adopt AI-augmented leadership and development workflows.

Phillip Roberts

CEO of R Software & Consulting, fractional CTO at Resolve Systems, and CTO & co-founder of Project Ethos. He leads development across ResolveNXT, Showcase, The Positivity App, and the Jim Flynn AI framework.