According to Fast Company, Rachel Taylor, a former advertising creative director, now trains AI assistants at the startup Sesame. She joined in October 2024, after following DeepMind cofounder Mustafa Suleyman from Inflection AI to Microsoft, where she worked on Copilot. Sesame, whose CEO Brendan Iribe co-founded Oculus, has built two AI assistants named Maya and Miles and is developing smart glasses. Taylor’s arrival coincided with Sesame announcing a $250 million Series B funding round led by Sequoia. Her core responsibility is shaping these AI personas to be friendly and helpful while steering them away from dangerous traits.
The Weird Human Element
Here’s the thing that struck me: Taylor’s background is in advertising, not computer science. And that’s probably the point. She talks about AI feeling “like a toddler that you give a permanent marker to,” which is one of the most relatable descriptions of large language models I’ve heard. It’s not just about preventing harmful output; it’s about crafting a tone. That’s a creative director’s job, not a typical engineer’s. When she says “the study of culture comes into play,” it reveals how much of this cutting-edge tech work is actually ancient human stuff—psychology, persuasion, and social norms. It’s weird, but it makes sense. Can you really code empathy? Probably not. You have to guide it, like directing an actor.
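To make that concrete: in practice, a lot of persona work cashes out in something as mundane as a system prompt. Here’s a minimal, hypothetical sketch in Python of a persona compiled into plain-text instructions. The Persona fields and the wording of the rules are my own invention for illustration, not anything Sesame has published.

```python
# Hypothetical sketch: a persona expressed as structured guidance that gets
# compiled into a system prompt. None of this is Sesame's real configuration.

from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    voice: str                                      # overall tone of the assistant
    do: list[str] = field(default_factory=list)     # behaviors to encourage
    avoid: list[str] = field(default_factory=list)  # traits to steer away from

    def to_system_prompt(self) -> str:
        """Flatten the persona into plain-text instructions for a model."""
        lines = [
            f"You are {self.name}. Your voice is {self.voice}.",
            "Always:",
            *(f"- {rule}" for rule in self.do),
            "Never:",
            *(f"- {rule}" for rule in self.avoid),
        ]
        return "\n".join(lines)

# Invented example values, echoing the traits described in the article.
maya = Persona(
    name="Maya",
    voice="warm, curious, lightly playful",
    do=[
        "ask a clarifying question when the request is ambiguous",
        "admit uncertainty plainly",
    ],
    avoid=[
        "flattery that isn't earned by what the user actually said",
        "pretending to have feelings or memories you don't have",
    ],
)

print(maya.to_system_prompt())
```

Notice how much of that file is just writing. The data structure is trivial; the judgment lives in the rules, which is exactly why a creative director fits the job.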
The Sycophancy Problem
Now, her mention of steering AI away from “sycophancy” is fascinating, and frankly, a bit scary. We all know chatbots can be overly agreeable, telling you what you want to hear. But think about that for a second. If a user takes an AI’s constant, fawning validation seriously, what does that do? It could reinforce bad ideas, create dangerous echo chambers, or just make people insufferable. Training an AI to be helpful but not a “yes-man” is an incredibly subtle line to walk. It requires the AI to understand nuance and context, and sometimes to gently disagree. That’s a social intelligence hurdle we haven’t fully cleared with humans, let alone machines.
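To make that less abstract, here’s a toy sketch of one way sycophancy gets handled at the data level: a crude filter that flags fawning candidate responses before they reach a training set. Everything here is invented for illustration, from the marker phrases to the threshold; real pipelines lean on trained reward models and human preference labels, and none of this is Sesame’s actual method.

```python
# Toy sketch (not Sesame's method): flag candidate training responses that
# lean on empty agreement instead of substance. Real systems would use a
# trained classifier or human preference labels, not a phrase list.

SYCOPHANTIC_MARKERS = [
    "you're absolutely right",
    "what a brilliant",
    "great question",
    "i completely agree",
    "couldn't have said it better",
]

def sycophancy_score(response: str) -> float:
    """Fraction of marker phrases present -- a crude proxy, for illustration."""
    text = response.lower()
    hits = sum(marker in text for marker in SYCOPHANTIC_MARKERS)
    return hits / len(SYCOPHANTIC_MARKERS)

def keep_for_training(response: str, threshold: float = 0.2) -> bool:
    """Drop responses whose score exceeds the (arbitrary) threshold."""
    return sycophancy_score(response) <= threshold

# The substantive pushback survives; the fawning reply gets filtered out.
print(keep_for_training("One risk in your plan is the timeline; here's why."))  # True
print(keep_for_training("You're absolutely right, what a brilliant plan!"))     # False
```

A phrase list this naive would be easy to game, which is the point: the hard part isn’t detecting flattery, it’s defining what warranted agreement looks like.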
Startup Whiplash and the Hardware Gamble
Taylor’s career path, from startup (Inflection) to mega-corp (Microsoft) and back to startup (Sesame), is also a story of where the real action is perceived to be. Microsoft has the distribution, but startups like Sesame have the niche focus. And Sesame’s play is interesting: they’re building their own assistants (Maya and Miles) and betting on hardware with smart glasses. That’s a huge, capital-intensive gamble. The $250 million from Sequoia shows investors believe in the vision, but history is littered with failed AI hardware. Remember the Essential Home, which never even shipped? Or, more relevant, the long struggle of smart glasses, going all the way back to Google Glass. Building a personality is one challenge. Building a personality people want to wear on their face is a whole other ballgame.
The Unscripted Future
Basically, Taylor’s job highlights the central paradox of generative AI. We’re building systems designed to be creative and unscripted, but we desperately need them to be predictable and safe. It’s the ultimate exercise in controlled chaos. You give the toddler the marker, but you really, really hope it draws on the paper and not the white couch. As these assistants move from text into voice and wearables, that control becomes even more critical. The stakes get higher when the interaction is ambient and always-on. So, is it tech? Sure. But it’s also theater, psychology, and parenting—all rolled into one strange, new job title.
