The thing controlling the quality of what AI gives you that no one tells you about
What the context window is, why it controls everything, and why some people get AI output that sounds like them while others get robots
Welcome to Chronicle Makers. I'm Denyse, and I help family historians research smarter, write their stories faster, and use AI to do both. Every post here is designed to move you forward on your family history journey. Over 200 past newsletters are archived here.
“Why does her AI output sound like her, but mine sounds like it was written by a customer service bot?”
I hear versions of this all the time. Two people, same model, similar questions, wildly different results. One gets flat, bland text that screams “a computer wrote this.” The other gets something warm, specific, grounded in real research.
The difference isn’t the prompt.
It’s something happening behind the scenes that most family historians have never been told about. And once it clicks for you, it will reshape how AI works for everything you do.
The hidden stage most people skip right over
Here’s what most of us think happens when we send a prompt:
Type a question
AI thinks about it
Answer appears
Not wrong. But it skips the most important part.
Before the model generates a single word, it assembles a bundle of text to read. Everything the model can see while it works lives inside that bundle. It’s called the context window.
When new AI models are released, the companies talk up how big the context window is. People love it when the number goes up. But why does it matter?
Think of it like a workbench. System instructions, saved custom preferences, the full conversation history, uploaded files or search results, and the prompt itself all sit on that workbench.
That’s what the model reads. That’s all it reads. Nothing else exists in its world when you chat with it.
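If you like to peek under the hood, here is a tiny sketch of that bundle in Python. It is purely an illustration: no company’s real system looks exactly like this, and the name build_workbench is made up. But it shows what gets stacked together before the model answers.

```python
# Illustrative only: a rough picture of the "workbench" (context window)
# an AI model reads before it writes a single word.

def build_workbench(system_rules, custom_instructions, chat_history,
                    uploaded_text, new_prompt):
    """Everything the model can 'see' goes into one ordered bundle."""
    workbench = []
    workbench.append({"role": "system", "content": system_rules})         # the company's hidden rules
    workbench.append({"role": "system", "content": custom_instructions})  # your saved preferences
    workbench.extend(chat_history)                                        # every earlier turn in this chat
    if uploaded_text:
        workbench.append({"role": "user", "content": uploaded_text})      # census record, transcription, search results
    workbench.append({"role": "user", "content": new_prompt})             # the question you just typed
    return workbench  # this list is all the model reads; nothing else exists for it
```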
The workbench determines the output
Here’s the thing: AI doesn’t “know” anything in the human sense. It reads the workbench (what is in the context window) and generates a response based on what’s there. If the workbench holds nothing but a vague question, the model fills gaps with generic, statistically average text.
That’s where the robot voice comes from. Not a flaw in the model. The AI model is working exactly as designed.
When someone gets output that captures their voice, their research style, their family’s specific story, the workbench was loaded with the right material. Source documents. Writing samples. Specific constraints. The model had something to work with.
When someone gets output that sounds like every other AI response on the internet (the term for this is AI slop), the workbench was near-empty. The model had nothing to build from except patterns in its training data. So it produced the average of everything it’s ever seen.
That’s the difference. Not talent. Not a secret prompt. Context management.
What fills the workbench (and what pushes things off)
Most of us assume the prompt is everything. It’s not. The prompt is one item on the workbench. Here’s what else shows up:
System instructions. Rules the AI follows that aren’t visible to you. These control formatting, safety, and tool behavior. They are written and controlled by the company that created the AI model. The user cannot change them.
Custom instructions and memories. Saved preferences like “write in a warm, conversational tone” or “I research 18th-century Pennsylvania families.” Memories the AI model has chosen to save about you are read here too. These load before the prompt even arrives. You, as the user, can change both of these.
Conversation history. Every exchange in the current chat. This is where context management matters most. We’ll dive into this in a minute.
Tool outputs. If the AI searched the web or read an uploaded file, those results land on the workbench too. You can control the tools used.
All of it competes for space. The context window has a limit. When it fills up, older material gets compressed or dropped. The model starts working from summaries of earlier conversation rather than the actual words.
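Here is an equally rough sketch of what “compressed or dropped” can look like. Again, this is made up for illustration (the function name, the word budget, and the cutoff are all invented; real systems count tokens and often summarize old turns rather than deleting them), but the idea is the same: when the workbench overflows, the oldest conversation turns are the first to go.

```python
# Illustrative only: trimming an over-full workbench.
# Real systems measure tokens and may summarize old turns instead of deleting them.

def trim_workbench(workbench, max_words=3000):
    """Drop the oldest chat turns until the bundle fits a (made-up) word budget."""
    def total_words(items):
        return sum(len(item["content"].split()) for item in items)

    trimmed = list(workbench)
    # Keep the rules and saved preferences at the front, and the newest prompt
    # at the end; squeeze out the oldest conversation turns in between.
    while total_words(trimmed) > max_words and len(trimmed) > 3:
        trimmed.pop(2)  # the earliest conversation turn is pushed off first
    return trimmed
```

In plain terms: a fact you gave early in a long chat can be pushed off the workbench entirely, which is exactly the degradation described below.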
If at this point you wish you had a beginner course in AI, you are in luck! I am releasing a course on AI Fundamentals on YouTube. The first episode is out now.
Why long conversations with AI fall apart
This is the part most of us discover the hard way.
A fresh conversation starts clean. The workbench has room. The model reads everything and responds with precision.
Five or six exchanges in (commonly known as “turns”), the workbench is crowded. The model is reading its own previous responses alongside the original source material. Small drifts compound.
Ten exchanges in, the model may be working from compressed summaries of earlier turns. It’s responding to what it thinks was said, not what was said. The context has degraded.
That’s not a hallucination problem. It’s a context management problem. The workbench got cluttered, and the model lost track of what mattered.
For family historians working with precise facts (dates, names, places, relationships), this degradation is exactly what we need to manage. A birth date mentioned in exchange 3 might get altered by exchange 8. Not because the model is making things up, but because the original context got buried.
Why some people sound like themselves and others sound like a robot
This is the part that surprises most family historians.
When AI produces generic, flat writing, it’s not because the model can’t do better. It’s because the workbench gave it nothing personal to draw from. No voice samples. No style preferences. No examples of how this particular person writes or thinks.
The model defaulted to “average internet writing” because that’s what fills the gap when context is missing.
Family historians who get output that sounds like them have figured out how to load their voice onto the workbench. Custom instructions describing their writing style. Pasted examples of past work. Specific guidance about tone, sentence length, perspective.
The model isn’t creative. It’s a pattern-matching engine that adapts to what’s in front of it. Give it vague input, get generic output. Load it with a specific voice, and it reflects that voice back.
That’s context management at work.
The people getting great results share a pattern
Here’s what separates family historians who love working with AI from those who fight it.
They start fresh often. One research question, one conversation. When the focus shifts, the chat resets. The workbench stays clean.
They provide source material upfront. Instead of asking the model to recall or guess, they paste the census record, the death certificate, the transcription. The model works from real evidence because real evidence is on the workbench.
They use custom instructions. Saved preferences load before every conversation. Writing voice, research focus, formatting preferences. All arriving on the workbench before the first prompt.
They keep factual work short. Two or three exchanges for analysis. Maybe five for something complex. Then a fresh start.
They treat output as a draft. The workbench produces working documents. Verification happens elsewhere.
They go beyond the chat box. Each AI has advanced features which are easy to use and make a huge difference in output. (Check the P.S. below for a course series I’m offering on these.)
None of that requires being more technical. It’s about understanding that the quality of what comes off the workbench depends on what goes onto it.
Why this matters more than prompt engineering
The internet is full of advice about writing better prompts. I’ve given out plenty of advice on writing prompts too. Prompts matter. But a brilliant prompt on a cluttered (or empty) workbench still produces mediocre output.
Context management is the foundation. Get the workbench right, and even a simple prompt produces something worth working with. Ignore the workbench, and the most carefully crafted prompt in the world competes with noise from ten previous exchanges (or maybe even months of chat memories).
That’s what separates the family historian who gets a boring ancestor biography with incorrect facts from the one who gets a nuanced draft with proper citations that carries their own voice.
The mental model most of us need to update
Most of us started thinking of AI as a search engine that writes paragraphs. Ask a question, get a long answer.
That mental model leads to empty workbenches and disappointing results.
The better model: AI is a research assistant sitting at a workbench. What goes on the workbench determines the quality of the work. The assistant is capable, but only as good as the materials in front of it.
Context management is the skill that makes everything else work. It’s not glamorous. It’s not a hack. It’s the foundation that separates frustrating AI experiences from genuinely useful ones.
And most people have never been told it exists.
—Denyse
P.S. If you want to master the best AI tool for writing and research, I have a series of workshops on using Claude that will take you from beginner to expert. By the end of the series, you’ll have it integrated into your writing and research workflow and be able to finish and share your genealogy. Full details and registration here. (This course series is also a part of Chronicle Makers on Skool, included with the one-time purchase price.)




