According to Business Insider, a shift is happening among U.S. lawmakers as they begin personally experimenting with AI. Senator Elizabeth Warren, a Massachusetts Democrat who was skeptical as recently as June, now finds ChatGPT “really valuable” for basic research, like demographic breakdowns for states. Other notable adopters include Republican Senator Josh Hawley of Missouri, who tested it with a “nerdy historical question,” and Democratic Senator Chris Murphy of Connecticut, who uses it “despite the fact that I think it’s going to destroy us.” At the highest levels, use is mixed: Vice President JD Vance is a self-declared “Grok guy,” while House Speaker Mike Johnson says he hasn’t had the “luxury of time.” The report also notes a strange incident where Democratic Rep. Jared Huffman of California got into an argument with Microsoft Copilot over a conspiracy theory.
The Skeptic Adoption Curve
Here’s the thing that’s fascinating about this. We’re watching the classic adoption curve play out in real time, but with a group of people who literally hold the power to regulate the technology. Their skepticism isn’t gone. Murphy’s quote is a perfect encapsulation: “I use it, despite the fact that I think it’s going to destroy us.” That’s a wild internal conflict to have! But it shows the utility is becoming undeniable, even to its biggest critics. Warren’s use case is telling: she’s not drafting legislation with it. She’s using it as a super-powered, conversational search engine for quick facts. That’s the entry point. Once you get comfortable with that, what’s next? Drafting constituent emails? Summarizing complex bills? The slope is slippery.
The Hallucination Problem Is Real
And they’re running into the core problems immediately. Warren mentions catching “the occasional hallucination.” But Huffman’s story about Copilot is next-level. The AI didn’t just get a fact wrong; it dug in and argued, insisting an event was a conspiracy theory. For a lawmaker, that’s not just an error—it’s a terrifying glimpse into how these systems could entrench misinformation. It’s one thing to read about hallucinations in a committee briefing. It’s another to have a machine fight with you about reality. That single, “freaking weird” experience probably did more to inform Huffman’s regulatory stance than a dozen expert testimonies.
The Political Tribalism of AI
Now, watch how even tool choice becomes politically tribal. JD Vance doesn’t just use an AI chatbot. He’s a “Grok guy” because it’s “the least woke.” Democratic Rep. Don Beyer of Virginia prefers Anthropic’s Claude because of its ethical constitution. We’re in the early days, but you can already see the framing: one side choosing tools perceived as anti-“woke,” the other side choosing tools marketed on safety and ethics. The underlying tech might be similar, but the branding and perceived values are already splitting along familiar cultural lines. That’s going to complicate any straightforward, bipartisan tech policy.
So What Does This Mean?
Basically, this hands-on experience is a double-edged sword. On one hand, it demystifies the tech. It’s harder to fear-monger about an abstract “AI” when you’ve used it to look up the population of Mississippi. That could lead to more nuanced, practical regulation. On the other hand, personal use might create a false sense of security. “It works for my research, so how bad can it be?” could be a dangerous takeaway. The real issue isn’t how it handles a senator’s historical query. It’s about scale, bias, labor impact, and existential risk. The fact that Speaker Johnson hasn’t touched it is almost as telling as the others’ adoption. It shows that for all the hype, at the very top, this still isn’t seen as an essential, can’t-miss tool. Yet.
