The Human at the End of the Algorithm
The influencer marketing landscape is undergoing a seismic shift, not through gradual evolution, but through a fundamental rewrite of its core mechanics. Artificial Intelligence is no longer just a tool for editing videos or generating captions; it is now capable of creating the influencer itself. This raises profound questions that go far beyond simple marketing tactics, striking at the heart of value, trust, and creativity in the digital age.
To navigate this new frontier, AllTech Magazine convened a roundtable of leading voices—from frontline marketing executives and platform technologists to esports community builders and neuro-acoustic pioneers. We asked them to move past the hype and grapple with the tangible tensions emerging between human authenticity and synthetic scale.
This compilation captures their direct, unvarnished perspectives. Together, they explore the pressing themes of our time: the economic restructuring of influence as content scarcity collapses; the precarious standoff between scalable performance and human trust; the inevitable reinvention of agencies and platforms; and the new power structures forming around ownership and identity in an age of synthetic talent.
What emerges is not a consensus, but a crucial mapping of the crossroads where the industry now stands. The insights within provide a vital lens on whether we are heading toward a commoditized future of AI utility, a premiumized era of human connection, or—most likely—a complex hybrid where understanding the difference becomes the most valuable skill of all.
You can read the flipbook version of this roundtable or continue scrolling to read the web version. Happy reading!
AI Influencers vs Human Authenticity Roundtable Discussion Insights
Hone John Tito: Co-founder, GameHostBros.
On why trust is everything in gaming communities and the new IP complexities AI introduces:
He shared, “As the COO of a game hosting company that operates esports communities, I have been watching AI influencers pop up in these circles, so I thought it would be valuable to share my analysis of how this could play out in a real-world context.”
“Trust is everything in gaming and esports. We have already seen brands testing AI-generated streamers and virtual influencers, but communities spot them fairly fast. Consumers engage with creators because of personality and authenticity, not just content delivery. AI can increase production and stay consistent, but it cannot reproduce the spontaneous moments and authentic reactions that create true connection. Once people know they are communicating with an AI, the entire dynamic changes and brand endorsements start to feel hollow.
What is actually effective is AI as a resource for real creators, not a substitute for them. AI lets streamers offload editing, scheduling, and repetitive tasks so they can stay creative and community-oriented. IP ownership quickly becomes complicated: when a creator builds a brand with the help of AI tools, it is not clear who owns the final creation or the persona. In my opinion, AI will help creators who are willing to adapt to it, but it will not remove the human factor. Brands that go fully AI while chasing authenticity typically get called out by their communities right away.”
Yaniv Masjedi: Chief Marketing Officer, Nextiva
On why virtual influencers feel like hype and the ethics of manufactured authenticity:
“I’ve watched marketing trends come and go. The ones that last are usually based on something solid. The ones that disappear were built on hype. Virtual influencers feel like hype to me.
Think about how you decide which brands to trust. You read reviews. You ask people you know. You look at how the company treats customers when things go wrong. None of that works with virtual creators because there’s nobody there to hold accountable. It’s marketing pretending to be a relationship.
We use AI at Nextiva to make our team better. Transcription. Sentiment detection. Smart routing. These tools help people do their jobs more effectively. They don’t pretend to be people. There’s a difference between using technology to improve your service and using it to manufacture authenticity.
The ethics conversation is pretty straightforward if you ask me. If someone thinks they’re engaging with a person and they’re actually engaging with AI, that’s deceptive. Period. You can argue about disclosure requirements and transparency standards all day. Or you can just be honest with people from the start.
Brands that succeed with AI will be the ones that use it to enhance their team’s capabilities, not fake having more resources than they have. That’s been true for every technology we’ve implemented at Nextiva. It’ll be true for AI too. Focus on making your people better instead of trying to replace them with cheaper alternatives.”
Steve Zeitchik: CEO and Co-Founder, Agency 8200
On the current reality of AI in influencer marketing and where the market is truly headed:
“I believe there will be a trend toward increased use of AI and virtual influencers over time, but the current reality is more nuanced. There is a growing awareness, even if not always on a fully conscious level, that people (justifiably) crave authenticity. Many consumers want to engage with real humans behind a brand, and there is still hesitation around AI or virtual influencers, even in cases where there may not be a clear disclosure.
The technology itself is impressive, and there are clear efficiencies and optimization opportunities where it can be leveraged effectively. That said, at this stage, it is not a replacement for real influencers when it comes to meaningful engagement and, ultimately, conversion. I strongly believe in the long-term potential of this space, but right now, human influence still plays a critical role that technology has not fully replicated.”
On consumer perception and market evolution:
“Over time, I think consumers will care less about whether an interaction is powered by AI, much like how many people today are comfortable engaging with AI-driven chatbots as long as they get the answers they need. However, being truly influenced by AI may take longer.
There is already a tremendous amount of noise in the market, and AI-driven content will likely compound that challenge. To be effective, AI influencers must be scaled intelligently and used to engage communities in ways that improve satisfaction and create smoother user experiences. I do not see a fully self-contained, AI-only influencer market emerging, particularly in service-based industries where trust and credibility are paramount. E-commerce may adopt these tools more quickly, but cutting through the noise will remain the core challenge.”
David Marco: President, EWSolutions
On whether authentic human interaction can survive when AI wins on metrics:
“Authentic human interactions will be the differentiator between content that creates engagement and content that underperforms. It already is. When creators rely solely on AI to generate content, the results are never original. Content consumers love original thought and are attracted to it. Also, consumers are beginning to understand the tell-tale signs of AI creation. Repetitive phrasing, language that feels distant, excessive reliance on references, overly polished verbiage, a lack of depth, and the trademark ‘—’ are all giveaways that AI created the content.
AI is a great way to stimulate creator thought and edit existing concepts. Using AI appropriately as a research tool and editor is a fair way to build your content. We use AI extensively to research best practices for content, SEO, and AEO development. We have saved time and money by eliminating the guesswork in these areas.
Before AI was an option, I wrote four books where I had to do the initial editing myself. Any writer knows that being your own editor never ends well. I spent over a thousand hours thinking of ways to say something more clearly or battling with where to place a comma. I could have saved that time and my editor’s patience if we had AI to clean up my writing before sending it to the publisher. Now I can write out my original thoughts, and AI can tell me where to place the comma.”
Jennalyn Ponraj: Founder, Delaire
On the biology of trust, breaking the uncanny valley, and why we might be aiming for the wrong thing:
“Think about how much resistance there was when people wanted to put words, not just music, into movie soundtracks.
I think it’s inherently human to be distrustful. We are always afraid of what we don’t know or what we don’t understand. Before we psychologically form a prejudice, our bodies physiologically decide.
I work in voice AI and (conservatively) people are deciding within 500 ms whether they trust a voice or not. Sonically, we have less than half a second.
When technology is performatively human, our brains actually shift toward hatred. The way through the uncanny valley isn’t realism. It’s suspension of disbelief. And the way you craft suspension of disbelief is through complete creative immersion! When you completely flatten your human cadence with ChatGPT, there’s an unnatural rhythm that emerges.
Brands may want consistent metrics, but no one says, ‘Let’s get to the middle of the internet.’ No, you wanna fucking break the internet.
I don’t even know if trust is what we should desperately be aiming for anymore. Sometimes provocation is more effective. I think that the way we have to create is completely shifting. Usability is no longer the key to adoption, but we can’t become so over-reliant on AI for polish that we begin to distrust our own human ideation.”
Vaibhav Kumar Bajpai: Group Engineering Manager, Microsoft Azure AI
On how agencies must evolve to manage AI personas:
“I want to be very clear here: AI is still far from replacing human-level thinking. For years to come, I see AI personas mainly as tools that help people multiply their output, not autonomous creators. Each persona can serve a specific purpose, but it still needs to be designed, guided, and controlled by someone with real expertise.
Because of that, agencies will not disappear, but their role will change. They will shift away from managing individual creators and toward operating persona portfolios. From a leadership standpoint, this looks a lot like running multiple teams or services inside a large organization. Success depends on clear goals and strong safety and quality guardrails.
The agencies that succeed will go beyond brand access. They will offer strategy, innovative tools built on top of AI personas, and an environment where these systems can be operated safely and responsibly. In that sense, agencies evolve from middlemen into platforms that help experts use AI effectively without losing control, quality, or trust.”
On whether AI-native creators will become the preferred discovery layer for platforms:
“Yes, I think this is a very real possibility, though it may depend on how AI tools evolve. AI-native creators can be more predictable, easier to moderate, and can reliably meet brand safety expectations. That alone makes them appealing candidates for increased visibility in discovery systems.
But this only works if platforms can trust that AI behavior stays stable over time. That trust does not come from policies or one-time reviews. A single AI mistake can have a real business impact. Platforms need to continuously check that these creators are not drifting in tone, producing misleading content, or introducing new safety issues as models, prompts, and incentives change.
The upside is scale and reliability. The risk is over-optimization and hallucinations. If platforms lean too hard into synthetic creators without strong testing and monitoring, discovery becomes repetitive and disconnected. The platforms that win will treat testing and monitoring as core infrastructure, not an afterthought.”
Chris M. Walker: CEO of Legiit.com
On redefining authenticity and why audit-style accountability is the new trust currency:
“The biggest misconception in influencer marketing is that virtual creators will inevitably erode authenticity. In practice, the opposite often occurs.
AI-driven influencers force brands to confront what ‘authenticity’ really means. It is no longer about whether the creator is human but whether the message aligns with audience values.
One example is when AI generated voices in podcasting revealed that listeners cared more about clarity and relevance than origin. The ethical challenge is not whether AI replaces humans but whether brands disclose the use of AI transparently. Audit style accountability will become the new trust currency.”
Jason Hennessey: CEO, Hennessey Digital
On the rise of hybrid creators and the coming licensing frameworks for AI avatars:
“We expect ‘hybrid creators’ to dominate, where humans set direction and AI scales variants across languages, formats, and micro-communities. The contrarian angle is that audiences will accept automation when it improves utility and disclosure is clear. People reject manipulation, not efficiency, and those are different outcomes. Authenticity will shift from spontaneity to reliability.
Ethics will depend on how brands handle identity mimicry and cultural borrowing. Training avatars on real creators without explicit licensing will become the next major scandal. We believe licensing frameworks will mature quickly because lawsuits will force clarity. Brands that pay for provenance will earn credibility.”
Yuriy Boykiv: CEO of Front Row
On where AI creators will accelerate and where human creators remain essential:
“For premium consumer brands, the shift toward AI creators will likely accelerate in categories where educational content and product demonstration matter more than parasocial relationships. Beauty tutorials, wellness information, and product comparisons don’t require human connection to provide value. However, lifestyle content and aspirational positioning will remain primarily human territory because consumers seek emotional connection and social validation that AI cannot authentically provide.
The brands that will succeed are those recognizing that influence effectiveness depends on content utility and transparent relationships rather than the biological status of the creator. Virtual influencers excel at scalable, consistent content delivery while human creators provide irreplaceable emotional resonance and cultural credibility. Smart brands will use both strategically rather than viewing them as competitors.”
Emmy Bre: Social Media Influencer and Founder of 3VERYBODY
On why virtual creators will dominate demos but fail at emotional connection, and the hybrid model:
“Here’s my contrarian take: virtual creators will dominate product demos and educational content, but they’ll fail at aspirational lifestyle marketing. When I watch a real person with texture, scars, or different body types use our tanning products, the comments explode with ‘finally someone who looks like me.’ AI can’t replicate that vulnerability yet. We’ve had customers literally cry seeing our male model or curvier bodies in campaigns; that emotional trust is where human creators still win.
The ethics piece gets messy when brands don’t disclose AI involvement. I refuse to retouch our ads or use shade name gimmicks because my customers’ moms had skin cancer—they need to trust what they’re putting on their bodies. If a ‘person’ recommending skincare isn’t real, and brands hide that fact, we’re back to the same manipulative beauty industry BS we’re trying to escape.
My prediction: hybrid model wins. Use AI for scalable how-to content and product visualization across skin tones (God knows we need better representation there). Reserve human creators for testimonials, unboxings, and anything requiring genuine emotional response. At 3VERYBODY, we’d never fake a sensitive-skin testimonial.”
Mada Seghate: CEO and Marketing Co-founder of Upside
On the shift from personal relationship-based influence to structural influence:
“The more significant transition is not the change from IRL to virtual creators, but from personal, relationship-based influence to structural influence. Virtual creators merely reveal what has been true for years: reach, repetition, and platform mechanics are often more significant than the individual serving up the message.
How should brands think about the ethics of AI driven influence?
The ethics of AI-driven influence shouldn’t hinge on ‘is the creator synthetic?’ but rather ‘are intent, incentives, and disclosure transparent?’ AI cuts out the feel-good buffer that human creators give, leaving brands to reckon with whether their influence strategies are transparent or stealthily calibrated for manipulation.
What impact will this have on brand authenticity?
Authenticity will increasingly be judged by consistency and accountability rather than personality. Brands that can clearly explain why they exist, how they create value, and how decisions are made will build trust in an AI-saturated landscape.”
Jaidie V.: Co-Founder, Amigas in Tech
On the business shortcut of virtual creators and the risk of erasing diversity and lived experience:
“I come into this conversation from a place that is both deeply technical and unapologetically human. The move from human to virtual creators isn’t a creative leap, it’s a business shortcut and audiences can tell. Generative AI created scale and speed, and it offered brands something even more tempting: autonomy over the ‘talent.’ Virtual creators don’t push back. They don’t age, burn out, unionize, or go off-script. They don’t carry cultural memory, pain, mess or lived contradictions. That’s the appeal. But it’s also a problem. Authenticity will never be replaced. But with AI and the humans training it, authenticity is being simulated. And simulations break the moment values, culture, or accountability are tested.
What worries me most is that we’re watching diversity flatten, not expand. After years of hard-earned progress toward visibility and representation, AI is compressing influence back into a neutral, low-risk aesthetic. What used to be textured, imperfect, and culturally specific is getting smoothed into a blank, cookie-cutter white space that performs ‘inclusion’ without carrying it. AI reflects what it’s fed and what it’s fed is already biased, already optimized for comfort. When virtual creators become the face of influence, marginalized perspectives don’t just get excluded. They get diluted, remixed, and stripped of context. That’s not innovation. That’s erasure with better branding.
The ethical conversation usually stops at disclosure, labels, or whether an avatar looks ‘too real.’ That’s surface-level. The deeper issue is asymmetry. AI creators can shape behavior, aesthetics, and purchasing decisions at scale without lived experience, social consequence, or reciprocal trust. There’s no community feedback loop, only performance metrics. That matters, especially when younger audiences are forming identity and belief systems inside algorithmic environments that reward sameness over truth.
Where I diverge from the hype is this: AI is most powerful when it’s invisible and supportive, not front-facing and performative. The strongest brands will use generative AI to give real people more leverage, more reach, and more room to be honest. AI should do the labor, and humans should carry the meaning. Influence, whether human or virtual, is ultimately relational. If technology doesn’t deepen trust, it erodes it.”
Brandon Brown: CEO, Search Party
On why AI can find influencers but can’t build trust:
“Running creator platforms taught me something: AI can find influencers, but it can’t build trust. We tested AI-driven personalities and people just didn’t buy it. The best campaigns always had a real person involved, sharing those unpolished stories. Even when tech helped reach millions, the connection remained person-to-person. So as AI tools get better, the human element becomes even more critical for credibility.”
Bell Chen: Founder, Enlighten Animation Labs
On using AI for consistent messaging and the importance of transparency layers:
“From what I’ve observed using our Synthesizer engine, AI-generated narratives can keep messaging incredibly consistent even across thousands of customer touchpoints. There was some skepticism at first, but as creative teams adapted, they appreciated how it reduced brand voice drift without sacrificing personality. My suggestion is to use transparency layers within AI content to maintain authenticity, since it’s easy for audiences to spot when something feels too automated.”
Max Marchione: Founder at Superpower.com
On the simple solution of clear labeling for AI-generated content:
“Here’s the thing with AI creators: people feel tricked when they find out later. Someone follows an influencer for months, then learns it’s not a real person, and they’re done. Platforms should just put a clear label on AI accounts from the start. It’s not complicated. Just tell people what they’re looking at, and nobody gets mad.”
Deepak Shukla: Group CEO, Pearl Lemon Group
On the real tension between trust and scale in the age of Generative AI:
“From what I’m seeing on the ground, the real tension isn’t human versus virtual creators, it’s trust versus scale. Generative AI makes influence cheaper, faster, and easier to control, but that same control is what risks draining authenticity if brands aren’t careful.
Virtual creators can work well for awareness and consistency, but influence still breaks down at the point where audiences want lived experience, judgement, and accountability. AI doesn’t get cancelled, feel pressure, or evolve values, and people know that. The ethical line, in my view, is transparency. When brands try to pass AI-generated influence off as human, trust erodes quickly.
The winners will be brands that use AI to support creators, not replace them, and that are honest about where automation ends and human voice begins.”
Jason Levin: CEO and Founder, Memelord.com
On why virtual creators might scale output but fail at humor and taste:
“As the memelord of funny marketing, my contrarian take is that virtual creators will scale output and lower risk, but they also delete the thing that actually drives influence: good humor and taste. You can’t rely on boring ChatGPT content when attention is a knife fight.
Ethics is simple: if it’s AI, say it’s AI. The internet is already saturated, and it’s getting harder and harder to just stay afloat as a startup. I’m not against AI, and I use AI, but right now it’s just not interesting or funny enough in marketing; it’s just filling the internet up.
I’m not worried about ‘AI voice sameness.’ I’m worried about brands choosing safe, synthetic influence and then acting confused when nobody cares. That’s why I created Memelord: to help businesses keep up with what’s interesting and funny when almost nothing on the web is.”