Thanks for sharing this post—I really enjoyed the conversation. Where I see room for nuance is in how we project the future. And this isn’t about “my truth vs. yours,” it’s just a different crystal ball. Because let’s be honest: the real answer to what’s coming is probably “who the heck knows?”
Here’s my take: the argument that AI avatars won’t be trusted seems rooted in today’s mindset. I don’t think it makes any of us feel younger to admit this, but I remember life before the internet. What builds trust for me isn’t necessarily what builds trust for Gen Z or Gen Alpha—they were born into this tech.
Back in the ’90s, people said online sales—especially for things like clothing—would never replace in-person shopping. I was one of those people who hesitated to put my credit card online because I grew up with cash and wallets. But here we are. So when we talk about future adoption, we have to consider how the generation making the decisions will think about these technologies. They might have a very different take.
The trust issue assumes companies won’t solve it. I doubt that—especially if it becomes a recurring barrier. Younger generations already trust AI enough to use it for homework, learning, even news—despite its known flaws. If the industry addresses trust (whether through actual accuracy or just perceived credibility), that could shift the forecast.
Will everything be automated and humans replaced? Not necessarily. But could the standards we use today to judge tomorrow’s tech evolve toward adoption? Historically, they have.
The most truthful answer is "I don't know," but where is the fun in that? :)
My take is that AI is here and already being used to create avatars of real people (I just reported on a Tim Draper AI avatar making the rounds of startups in my other newsletter, AI Daily). And if you get value out of the interaction, does it matter whether it's real or AI?
Totally agree on both counts: there's no fun in not taking risks...
And yes, value—objective or perceived—is what counts.