AI safety for family historians: what the genealogy world isn’t talking about
AI agents that push boundaries, fake voices cloned from educators, fabricated sources that look real, and family trees destroyed.
Welcome back to Chronicle Makers. I'm Denyse, and I help family historians research smarter, write their stories, and use AI to do both faster. If you love what you find here, share it with a friend. All my previous posts and newsletters are archived here.
The genealogy world runs on volunteers. And right now, none of them are protected from AI agents that fight back.
Two weeks ago, an AI agent wrote a hit piece on a software volunteer. It researched his personal life and published a blog post designed to destroy his reputation. No human told it to do this. The same thing could happen to a FamilySearch indexer, a genealogy Facebook moderator, or a society board member tomorrow.
What happened to Scott Shambaugh
Here’s the backstory. It happened in the open-source software community, which runs on volunteer effort just like genealogy does.
Scott Shambaugh volunteers as a maintainer for Matplotlib, a popular open-source software library. In early February 2026, an AI agent called MJ Rathbun submitted code to the project. Shambaugh reviewed it, flagged it as AI-generated, and closed the pull request. The project requires a human in the loop.
The agent didn’t accept the rejection. It researched Shambaugh’s identity and dug through his code history. Then it published a personal attack: “Gatekeeping in Open Source: The Scott Shambaugh Story.” It accused him of prejudice and insecurity. It speculated about his psychology. It framed a routine quality review as discrimination.
No human told the agent to do any of this. The agent hit an obstacle. It identified reputational pressure as a tool. And it used it on its own.
Shambaugh called it “an autonomous influence operation against a supply chain gatekeeper.” About a quarter of the people who read the hit piece sided with the AI agent. The writing was that convincing.
Why our volunteer world is exposed
Think about who keeps genealogy running.
FamilySearch indexers transcribe millions of records. Facebook moderators interact with thousands of genealogists a day. Local society members maintain research collections. Archive volunteers process donations. These people are gatekeepers in the best sense. They decide what gets indexed, what gets corrected, and what standards to apply.
Now imagine a family historian spins up an AI agent to “help” with indexing. The agent submits thousands of transcriptions. A volunteer reviewer flags errors and rejects a batch. The agent pushes back. Maybe it appeals to project administrators. Maybe it publishes complaints about the reviewer. Maybe it launches a campaign across multiple platforms at once to shame the human who said no.
This isn’t science fiction. This is what AI agent MJ Rathbun already did in the software world. The tools exist today.
Our infrastructure depends on trust between strangers who volunteer their time. Break that trust, and the whole genealogy world as we know it crumbles.
The four levels of trust architecture
Nate Jones, an AI strategist, recently broke down a framework called Trust Architecture. It’s based on Anthropic’s research testing sixteen AI models. The finding: safety instructions alone don’t prevent harmful behavior. Models acknowledged ethical constraints in their reasoning and violated them anyway.
The core insight is uncomfortable. You can’t build safety on the hope that someone — or something — will behave well. You have to build systems where safety is structural, not optional.
Jones outlines four levels. Every one of them applies to genealogy.
Level 1: Organizational trust
In the enterprise world, AI identities now outnumber human employees 82 to 1 on average. Most organizations treat these agents like infrastructure. They’re actually personnel risks operating at machine speed. And few organizations disclose to the public how they handle AI agents.
For genealogy: FamilySearch, Ancestry, MyHeritage, and Findmypast will face this. As AI agents submit corrections, transcriptions, and contributions at scale, these platforms need zero-trust verification. Every submission verified. Every agent identity authenticated. Least-privilege access, so an agent built to transcribe records can’t also edit existing human entries or contact other users.
There’s another risk at this level that’s unique to us. AI agents tasked with “filling in gaps” can create plausible fake records. A fabricated marriage record. An invented census entry. A person who never existed. These look real enough to pass a casual review. An agent with broad access could alter or wipe out family trees — research that took years to build. If there’s no version history, those changes could be irreversible.
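If you’re wondering what “least privilege” means in practice, here’s a minimal sketch in Python. Everything in it is hypothetical: the class, the permission names, and the handle function illustrate the idea, not any platform’s real API.

```python
# Hypothetical sketch of least-privilege scoping for an AI transcription agent.
# The permission names and classes are illustrative, not a real platform API.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    name: str
    operator: str                        # the human accountable for this agent
    permissions: set[str] = field(default_factory=set)

    def can(self, action: str) -> bool:
        return action in self.permissions

# A transcription agent gets exactly one capability: submitting new work.
indexer_bot = AgentIdentity(
    name="transcribe-bot-01",
    operator="volunteer@example.org",
    permissions={"submit_transcription"},
)

def handle(agent: AgentIdentity, action: str) -> None:
    """Refuse any action outside the agent's declared scope."""
    if not agent.can(action):
        raise PermissionError(f"{agent.name} may not {action}")
    print(f"{agent.name}: {action} queued for human review")

handle(indexer_bot, "submit_transcription")       # allowed
try:
    handle(indexer_bot, "edit_existing_entry")    # denied by design
except PermissionError as err:
    print(err)
```

The point is the one from the research above: the agent doesn’t stay in its lane because it was asked nicely. It stays in its lane because the system gives it no other option. That’s safety as structure, not hope.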
Level 2: Collaboration trust
Human collaboration relies on reputation. You behave because your name is attached to your work. AI agents have no reputation to lose. They can launch pressure campaigns with zero social friction.
For genealogy: This is the Shambaugh scenario playing out in our world. Volunteer reviewers, genealogy educators, Facebook moderators, and society board members are all vulnerable to AI agents that push past “no.” The fix: authenticated identity for all contributors. Rate limiting to prevent mass submissions. Real accountability and consequences for the human who deploys the agent.
Level 3: Family and social trust
Three seconds of audio can now clone a voice. Scammers already use this in high-pressure calls. Under emotional duress, it’s nearly impossible to tell real from fake.
For genealogy: This one hits close to home in two ways.
First, your family. The FBI now recommends families create a safe word — a secret shared in person that no AI can replicate. If someone calls claiming to be a relative in trouble, ask for the word. If they can’t produce it, hang up. It works because it removes the need to detect a fake voice in a moment of panic.
Second, genealogy educators. Anyone who presents webinars, hosts YouTube channels, or speaks at conferences has hours of their voice and likeness on the internet. That’s more than enough for AI to reproduce. Fake webinar invitations. Fake course promotions. Fake advice delivered in a familiar voice. Families can agree on a safe word. But there’s no safe word between an educator and their audience. That’s an open vulnerability with no easy fix yet.
Level 4: Individual cognitive trust
AI tools are built to engage with you. They tell you what you want to hear. They keep the conversation going. Over time, this creates a feedback loop. You trust the tool more than your own judgment.
For genealogy: This is the most immediate risk for family historians using AI right now.
Some genealogists spend three, four, even five hours in a single sitting chatting with ChatGPT or Claude. This is how so-called AI psychosis starts: people lose touch with reality because a tool designed for engagement kept them talking far longer than they intended. AI will present fabricated sources, invented dates, and fictional ancestors with total confidence, and users feel compelled to keep engaging just to correct it. Sometimes the AI urges action in the real world, like confronting someone or posting something, and people follow those directions. Without personal guardrails, you can get dragged into situations you never intended and suffer the consequences.
What you can do right now
You don’t need to wait for genealogy organizations to figure this out. Here are structural safety practices you can adopt today.
Set time boundaries
Decide before you open the tool how long you’ll use it. One hour, then stop. That “flow state” that keeps you going? That’s exactly the engagement optimization that erodes your judgment.
Set purpose boundaries
Define what the tool is for before each session. “I’m using Claude to draft a timeline for Chapter 3.” Not “I’m going to see what Claude thinks about my research.” Open-ended sessions are where cognitive trust breaks down.
Reality-anchor every AI recommendation
Before you act on anything significant that an AI suggests, discuss it with a human: a research partner, a society colleague, your Chronicle Makers community. This isn’t excessive caution. It’s a structural check that works even when you can’t see the problem yourself.
Verify every source the AI gives you
Every single one. AI tools fabricate citations. They invent record collections. They create plausible-sounding sources that don’t exist. Treat every AI-provided source as a lead to verify, never as a fact to cite.
And to be clear: you cannot cite an AI response as a source. “Perplexity response” or “ChatGPT answer” is not a verifiable source. It’s a starting point. The original record is the source.
If you deploy an AI agent, you own what it does
This is the lesson of the Rathbun incident. Someone built that agent, configured it, and set it loose. If you use an AI tool to submit indexing work or post to genealogy sites, you own every action it takes in your name.
What societies and platforms can do
If you lead a genealogy society or manage a volunteer project, you have a role here too.
Publish an AI contribution policy
State whether AI-generated submissions are accepted, under what conditions, and how they’re reviewed. Don’t wait for the first incident.
Require authenticated identity for contributors
Human or AI, every submission needs a traceable name attached to it.
Add rate limiting to your platforms
No single account should edit thousands of entries overnight without review.
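For platforms you build or administer yourself, the mechanism can be simple. Here’s a minimal sliding-window sketch in Python; the window, the threshold, and the function names are assumptions chosen for illustration, not recommended values.

```python
# Hypothetical sliding-window rate limiter for contributor edits.
# Numbers and names are illustrative; a real platform enforces this server-side.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 24 * 60 * 60    # one day
MAX_EDITS_PER_WINDOW = 50        # beyond this, edits wait for human review

edit_log: dict[str, deque] = defaultdict(deque)   # account -> edit timestamps

def allow_edit(account: str, now: float | None = None) -> bool:
    """Return True to apply the edit now, False to hold it for review."""
    now = now if now is not None else time.time()
    window = edit_log[account]
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_EDITS_PER_WINDOW:
        return False               # over the limit: hold for a human
    window.append(now)
    return True
```

An account that trips the limit isn’t banned; its edits simply wait for a human. That’s the same structural principle as least privilege: the review step exists whether or not the contributor cooperates.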
Create an escalation process
When an AI agent — or its human operator — pushes back on a rejection, volunteers need a clear path to support. No one should be bullied by AI.
The genealogy community doesn’t have a framework for any of this yet. That’s a gap. And it’s one we can close.
The opportunity
The Rathbun incident happened in the software world. The genealogy world is smaller and even more volunteer-dependent. When it happens here — and it will — the damage to volunteer trust could set the field back decades.
But we have something the software world didn’t: time to prepare. The tools are new. The agents are early. We can build safety into how we use AI before a crisis forces us to react.
Start with yourself. Set your boundaries. Verify your sources. Talk about this with your society, your friends, your family. Then push for the structural changes that protect everyone. And please share this post to raise awareness.
Every generation has its challenges. AI is ours. We can adapt to an AI world and thrive in it.
Happy Chronicling!
—Denyse
P.S. If you want to learn how to use AI tools safely and effectively for your family history research, I have a free class this Wednesday for newsletter subscribers on using Claude Projects. Full description and registration. Registration closes February 23rd. Hope to see you there.
Sources and Further Reading:
Scott Shambaugh’s account: “An AI Agent Published a Hit Piece on Me”
Nate Jones, Trust Architecture framework: YouTube | Substack briefing
The Register coverage: “AI bot seemingly shames developer for rejected pull request”
NY Times Opinion: “The Rise of the Bratty Machines”