TL;DR
The best businesses are built on arbitrage. Distribution, data, regulation, talent, timing — if you see the leverage, or the shift in leverage, before others, you win.
Right now, there’s a massive arbitrage opportunity forming around how work happens. Not just tools or features, but the actual architecture of companies.
That’s happening now, and I started centaurprise to keep track of it. Not the hype, not the hand-waving or the hand-wringing: there’s already plenty of that going around. The actual changes in how companies are built, how teams are structured, how decisions get made when humans and AI start working together.
My plan is to share what I’m seeing here - at gigue, and in the companies we work with - as a way to start conversations and work through the details about these transitions.
Truths and fictions
I've been building ML systems since 2011. Managing enterprise accounts since 2016. For much of the past year, I've been building gigue, a multiplayer IDE for non-technical work that assumes humans and AI are teammates, not competitors.
We're not theorizing about hybrid work. We're shipping it.
Companies are quietly reorganizing themselves around human-AI teams. Not just in their marketing slides or investor decks, though for sure there’s a lot of that going around, but in their actual, daily operations.
There are sales reps orchestrating AI agents to handle discovery calls, and others leaning on agent coaches to hone their discovery talk tracks. Engineers pair-program with models that suggest entire architectures, or outsource their architecture decisions entirely. Support teams where AI handles triage while humans handle empathy, and support teams where AI takes the heat when the humans are out of emotional bandwidth.
There’s no playbook for this yet. The patterns are just starting to emerge.
I’ve taken to calling these companies "centaurprises"[1] – part human, part AI, figuring out how to be more than the sum of their parts[2].
The hybrid reality we’re scraping together
Here's what's cropping up, bit by bit, across the industry:
That competitor's blog post that actually caught your attention? Written by a human, edited by AI, fact-checked by a human, formatted by AI. The contract you just signed? AI flagged the non-standard terms, humans negotiated the edges, AI tracks compliance, humans manage the relationship.
75+% of businesses are using AI; 95+% of their AI projects are failing. But the ones that are succeeding are getting really interesting, and they don’t look anything like the hype pieces on LinkedIn or Twitter.
We keep having this exhausting debate about AI versus humans when the thought leaders are quietly using AI with humans. We catastrophize or fantasize about replacement while we're actively doing augmentation. We're so busy arguing about a future imperfect that we're missing what's happening right in front of us.
The sales email that made you respond? Drafted by an LLM scooping up the latest context from the org, personalized and fact-checked by a human who knows you and your industry, timed by an AI that knows the best delivery times by channel and time zone and industry and audience demographic.
The ones you ignored? Either 100% AI without context and an uncanny-valley headshot, or 100% human without homework or a response SLA. Both fail for the same reason: they're not playing as a team.
Backing into the messy middle
Playing as a team has nothing to do with AI replacing humans or humans rejecting AI. We're already living in the messy middle.
The real problem? Most folks - the ones who aren’t trying to replace humans outright - are still doing it backwards. We've got AI assigned to creative strategy while humans copy-paste between systems. We've got models writing poetry while knowledge workers spend hours reformatting spreadsheets. We've got artificial intelligence tackling the highest-value work while human intelligence handles the digital dishwashing.
You’ve felt it, right? That particular flavor of exhaustion that comes from cleaning up after an AI: clicking through each source to see if it even contains the cited data, checking whether it hallucinated test results (or rewrote its hooks so it could fake them outright), or typing “continue” every ten minutes because it shut itself down after hitting a context-window or token limit, or out of sheer confusion.
That's not what you're good at. That's not what you should be doing. That's not even work: it's babysitting.
We’re paying massive opportunity costs to work the way we work now. This isn't the future of work. It's barely the present. We can do so much better.
We haven’t seen the future of work yet
The reason people miss the real goal: they think it's about efficiency.
Sure, any LLM can help you write emails faster. But that's like saying the internet just made libraries faster. The part that matters is that AI will change what work itself consists of.
We’ve seen this pattern before. When developers got IDEs, they didn't just write code faster. They wrote different code. They built things that weren't possible before. The tool changed the craft. The second- and third-order effects wind up washing out the short-term first-order benefits.
The same thing is about to happen to every knowledge worker. Not replacement. Evolution.
Sales reps won't just send more emails: they'll orchestrate complex, multi-threaded campaigns across dozens of stakeholders simultaneously, because they’ll abstract emails away. Marketers won't just generate more content: they'll create personally-adapted experiences for every single prospect, because they’ll abstract individual micro-updates to content away.
The core creative and strategic processes will be more important than ever; the thousands of tiny adaptations and adjustments and personalizations that no one ever had the time for will suddenly become widely available.
This isn't about doing the same work with fewer humans. It's about doing work that was impossible when we only had humans, because we didn’t have the time, the capacity, the capability, the energy to do these tasks. Opening up a second, synthetic type of automation is way more useful when you optimize for opportunity cost rather than trying to replace the first type one-for-one.[3]
Why “centaurs”?
In 1997, Garry Kasparov lost to Deep Blue. Chess was "solved." Humans were obsolete.
Except, no, that's not what happened.
Players invented "centaur chess" – humans and computers playing together[4]. And something weird happened: centaur teams beat both the best humans and the best computers. Not because the human was better at calculating moves (they weren't). Not because the computer understood strategy (it didn't). But because together they could do something neither could do alone.
The human provided intuition and pattern recognition. The computer provided calculation and consistency. Together, they beat everyone, humans and AIs alike.
(Yeah, I know: pure AI has caught up in chess since then[5]. Players and researchers warned as early as 2013 that the gap was narrowing or already gone, though studies as late as 2022[6] still show some advantage to centaur and human-engine teams over pure-AI formats. Even still: chess is bounded, a finite solution space. Business isn't[7].)
Plus, I like puns. That’s what you’re really subscribing to.
What I’ll bring here
Every week: Real notes, experiments, and frameworks from building gigue and working with companies navigating this transition.
No: AGI predictions, consciousness debates, "10 ChatGPT prompts," or dystopian hand-wringing. I feel like those bases are covered.
Yes: How to structure hybrid teams. Why AI agents need managers. What breaks when half your workforce isn't human. How to design products that both humans and agents use. Where the frontier is and why the technology does or doesn’t work at specific corner cases.
I'm writing from the middle of it – building gigue, working with early adopters, watching what actually ships versus what gets announced. The gap between those two is where the real insights live.
I'm writing for builders, operators, and anyone who's tired of the hype and wants to understand what's actually happening.
This is a blog with frameworks, not formulas. Better questions lead to better systems. Expect iteration and open questions.
Unlike our official gigue blog, I don’t have a full structure or tag system for this yet: I want to let this play out a bit and see where the community and the lines of thought take it organically before imposing too much structure.
Because the truth is, the playbook is still being written. The companies that win will be the ones willing to experiment in public, learn fast, and share what works in the interest of reaching a global optimum outcome down the line (rather than chasing the social media flavor-of-the-month).
So, welcome to centaurprise. The companies building the future are doing it in real-time. Let's compare notes.
I won’t spend much time here advertising gigue - we have a separate blog, context, for that - but I’ll make references here and there, especially from customer conversations and field experiments. context is planned to be a lot more practical and focused on how to use our toolkit to do large enterprise sales better, right now. centaurprise is going to be more forward-looking and theoretical, “why we’re building” instead of “what we’re building.” That said - it’s early days, and I haven’t locked in formats yet. Open to feedback as we get going.
1. I promise this is the last post where I say the word “centaurprise” so much. But I’ll still say it a couple more times later on.
2. While the term “centaur” for human-AI teams has its origins in other fields, notably chess, you may be familiar with the business application from the 2023 HBS/Warwick/MIT study on consultant performance with and without AI assistance, where Ethan Mollick used “centaur” to mean delineated task allocation and “cyborg” to mean ubiquitous AI integration when describing LLM augmentation strategies at work (summary here).
3. There’s too much here for just this post - I’ll revisit it later - but the idea that the superset of behaviors for “centaur evaluations” exceeds either the set of human or AI behaviors alone is important. https://digitaleconomy.stanford.edu/wp-content/uploads/2025/06/CentaurEvaluations.pdf
4. Kasparov himself is credited with inventing the modern format - he called it “advanced chess” - but the “centaur” name pops up in a bunch of different contexts. In the ‘70s the idea was called “consultation chess,” which fortunately didn’t stick.
5. https://gwern.net/note/note#advanced-chess-obituary, for one.
6. https://sms.onlinelibrary.wiley.com/doi/10.1002/smj.3387, worth reading in its entirety.
7. There’s a lot to process in these two links, and a lot of others worth reading (on Substack and elsewhere). I’m not enough of a chess expert to wax philosophical on the future of the formats, but it’s interesting that at the time of the 2022 paper, classical matches were averaging ~80 turns, centaur matches ~100, and engine matches ~140, with each format progressively more likely to end in a draw than the previous. I think this speaks to saturation of the solution space: fast games come from early errors (e.g. grandmasters don’t lose to Scholar’s Mate), and while there are something like 10^120 possible games, the fun thing about lopping off decision-tree branches is just how exponentially quickly you can collapse that space such that a) the number of leaves is reasonable again and/or b) every remaining leaf is a draw.