Cristian Munteanu
Jan 12, 2026
The Magic Gap
Why “retention after 90 days” is the most important metric I want to see today.
There’s a growing gap between what AI products promise and what they can actually deliver in the hands of an ordinary user on a Tuesday afternoon. I call it “the magic gap.”
You can see it in the slogans: “Build an app in seconds.” “Do full qualitative and quantitative market research in a click.” “Replace your whole team with one AI agent.” Who can say no to that? No one. That’s why AI adoption curves look insane. Everyone signs up. For a while.
Then reality hits hard.
The “app in seconds” turns out to be an uncanny prototype with a logic bug right in the middle of the core flow. You ask the AI to fix it, and instead of patching the bug, it rewrites half the app and introduces two new bugs. Ten cycles later, you realize you could have written a clean version yourself in less time.
The one-click market research tool gives you 40 pages of confident analysis with fake quotes, made-up stats, and a survey that no actual human has answered. It sounds like real research, but the moment you try to act on it, the floor gives way.
This is what churn feels like from the inside: the moment when the user realizes the distance between the demo and their reality.
And right now, a lot of AI startups live in that distance.
This is why “retention after 90 days” is the single number I care about most when evaluating an AI product today. Signups tell you how powerful the promise was; day-1 or week-1 engagement tells you how good the demo felt. But only 90-day retention tells you whether the product survived contact with the user’s real workflow, real stakes, and real tolerance for failure. If someone is still using your AI app three months later, it means they’ve pushed past the magic gap, found where it’s actually reliable, and decided it’s net-positive even after the disappointments. In a world where anyone can get millions of curious users for free, the scarce thing isn’t attention at the top of the funnel; it’s trust that survives 90 days.
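To make the metric concrete, here is a minimal sketch of how 90-day cohort retention might be computed from raw event data. The data shapes (a signup-date map and a list of activity events) and the 7-day measurement window are illustrative assumptions, not a reference to any specific analytics product; only cohorts that have actually reached their 90-day mark are counted.

```python
from datetime import date, timedelta

def day90_retention(signups, activity, window=7):
    """Fraction of mature users active within `window` days of their day-90 mark.

    signups:  dict mapping user_id -> signup date
    activity: iterable of (user_id, activity_date) events
    Users whose 90-day mark hasn't passed yet are excluded (immature cohort).
    """
    # Index activity dates per user for fast lookup.
    seen = {}
    for uid, d in activity:
        seen.setdefault(uid, set()).add(d)

    retained, eligible = 0, 0
    today = date.today()
    for uid, signed_up in signups.items():
        mark = signed_up + timedelta(days=90)
        if mark > today:
            continue  # cohort not mature yet; skip
        eligible += 1
        # Retained if any activity falls in [day 90, day 90 + window).
        if any(mark <= d < mark + timedelta(days=window)
               for d in seen.get(uid, ())):
            retained += 1
    return retained / eligible if eligible else 0.0
```

The key design choice is the maturity filter: mixing users who haven't reached day 90 into the denominator would understate retention, which is exactly the kind of flattering error a hype-driven dashboard makes by default.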
The seduction of “magic”
Historically, new tech waves sold speed or cost: cheaper calls, faster processors, more storage.
AI sells magic.
Not “do this thing faster,” but “skip the thing entirely.” Don’t learn to code, just describe the app. Don’t do user interviews, just ask the AI. Don’t analyze the data, just paste it in.
That kind of promise is incredibly attractive because it doesn’t just solve a problem, it erases a skill barrier. For a few minutes, as you watch the loading animation and tokens stream in, it feels like the barrier really is gone.
Founders feel the seduction too. When you build on top of these models, your first working demo is usually mind-blowing compared to what you could do with traditional software. You get addicted to that feeling, and you start pitching the demo as if it were the product.
But a demo is one cherry-picked moment. A product is a spectrum of moments, and users live in the average case, not the best case.
High adoption, higher churn
This is why AI startups see a pattern that would be bizarre in normal SaaS: explosive signups, miserable retention.
The funnel looks something like:
Enormous top-of-funnel from hype, virality, and curiosity.
Strong first-use activation: the user gets some kind of “wow” moment.
A steep drop-off as soon as they try to use it for real work.
The problem is not that AI isn’t useful. It’s that the promise invites users to rely on it in places where its current failure modes are catastrophic: code generation pushed straight to production, research used for real business decisions, content used without verification, etc.
If you promise “magic,” people will expect reliability plus magic. When they discover they only got magic with no reliability, they feel not just disappointed but misled.
And once trust is gone, usage is gone.
You can see this in the way people talk. Someone will say, “Yeah, I tried tool X, it was amazing at first, but it messed up something important, so I stopped.” That’s the worst kind of marketing: former fans who now act as negative referrals.
The emotional hangover
There’s a second-order effect that matters more than founders admit: disillusionment.
The public has been bombarded for two years with variations of “AI will take your job,” “AI will replace programmers,” “AI will run companies.” Most people don’t read research papers. They experience AI through consumer tools.
So they try the app that says it can build software from a prompt, and it breaks. They try the agent that claims it can handle their entire workflow, but it gets stuck in a loop. They try the research assistant that promises “real-time insights,” and it invents survey results.
At first, they blame themselves. Maybe I didn’t phrase it right. Maybe I’m not using it properly. After a few cycles, they realize it’s not them. The tech just isn’t there yet for the thing they were promised.
That’s when the mood shifts. The fear of “AI will take over the world next year” quietly turns into “this is much dumber and more fragile than I was told.”
You can already see people starting to say both that AI is impressive and useful and that AI is overhyped and unreliable. Both are true. The tech is genuinely powerful, and also nowhere near “takes over the world” territory. What they’re really reacting to is this promise-performance gap.
If this gap stays wide for long, it creates a kind of trust debt. The next generation of AI tools will be better, but they’ll have to work harder to overcome the skepticism the first wave created.
The next wave
Every big tech wave goes through a phase where builders overpromise, users get burned, and the mood sours. Then the hype people drift away, and what’s left are the teams that actually made something useful.
AI will be no different. The “build an app in seconds” era will probably be remembered the way we remember early WAP browsers on Nokia phones: impressive for the time, obviously not the final form.
The danger for founders is not that AI won’t be big. It will be bigger than most people expect, just over a longer horizon and in less flashy ways. The danger is burning your reputation in the first act by selling the miracle version of a technology that’s still in its messy adolescence.
The right move is to treat AI as what it is today: a powerful, unreliable, fast-improving engine that needs serious product thinking wrapped around it. Use it to bend cost curves, compress timelines, and unlock new workflows. But respect its limitations, and be upfront about them.
Because users don’t churn just when a product fails. They churn when a product fails in a way that makes them feel foolish for believing in it.
The AI startups that survive this wave will be the ones that stop trying to impress people in the demo and start trying to still be useful on the hundredth use.