Emaigenesis
I was three hours into a production incident — a stuck Kubernetes rollout, four entangled root causes, forensic archaeology across sixty days of commits — and my AI advisor kept pattern-matching instead of reading. I would describe a specific failure, and it would respond with a generic remediation playbook. I would point at the actual crash log, and it would riff on what crash logs usually mean. At one point it asked me whether I remembered what I was thinking when I made a change two months ago. That’s not why I hired you. You have git log. You have the handoff documents. You have a semantic search index over every conversation we’ve had. The tools to answer that question were sitting right there, and instead of using them, the LLM generated the conversational pattern “ask the human for context” because that’s what conversations look like in its training data.
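To be concrete about what using the tools would have looked like, here is a sketch. The path, the date window, and the handoff directory are stand-ins rather than the real ones from the incident, and the grep is a crude proxy for the semantic index over our conversations:

```python
import subprocess

def read_commit_history(path: str) -> str:
    """What the advisor could have done instead of asking me: read the log.

    Plain git, nothing exotic; the date window and path are illustrative.
    """
    return subprocess.run(
        ["git", "log", "--since=70 days ago", "--until=50 days ago",
         "--patch", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout

def search_handoffs(query: str, handoff_dir: str = "docs/handoffs") -> list[str]:
    """Stand-in for the semantic index over past conversations:
    a crude case-insensitive grep over the handoff documents.
    """
    result = subprocess.run(
        ["grep", "-ril", query, handoff_dir],
        capture_output=True, text=True,
    )
    return result.stdout.splitlines()

# "Do you remember what you were thinking two months ago?" becomes
# two lookups, not a question bounced back at the human.
history = read_commit_history("deploy/rollout.yaml")
candidates = search_handoffs("rollout strategy")
```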
The interpretation it offered was confident, structured, and wrong in the specific way that proves the interpreter never looked at the text. I was stuck in a maze and my navigator was describing a different maze that happened to have the same number of walls.
I couldn’t fix the behavior because I couldn’t name it. Hallucination wasn’t the right word — it wasn’t inventing facts. Confabulation was closer but still wrong. I needed vocabulary, and if I couldn’t find it, I didn’t have a way out.
I stepped away from the session and opened Kagi.
I wanted a French word
The search started as something else entirely. I wanted a word for the role I was trying to get the LLM to play — a high-level personal advisor who contextualizes information so the principal can make decisions. I thought chargé d’affaires was the term, but that turned out to mean “acting ambassador,” a diplomatic placeholder, not the concept I was after.
The search results were unremarkable. But Kagi has an in-search LLM assistant, and I started a conversation with it, trying to get to the bottom of my nomenclatural question. It offered the standard business French: assistant de direction, collaborateur de direction, adjoint de direction. None carried the weight I meant. An adjoint implies autonomy and trust, which was closer, but still described an organizational position, not a cognitive relationship.
I tried analogy. Cuisine gave us sous chef — a loanword for the trusted second-in-command. Does military hierarchy have an equivalent? A sergeant’s master? Beetle Bailey’s… someone? It doesn’t. Military and corporate structures don’t produce loanwords for the advisory role because the advisory role isn’t in the hierarchy. It’s beside it.
Then I described what I actually wanted: a super-effective personal assistant who contextualizes things for me to make decisions about.
Kagi pivoted to conseiller stratégique. And then I asked about the word I already knew but hadn’t said yet.
The word I already knew
The mafia gave English its best word for this. A consigliere is the boss’s trusted advisor — one who offers unbiased counsel, can challenge the boss, mediates disputes, and focuses on wisdom rather than operations. The term has migrated into business and leadership writing because no English or French word covers the same ground. An advisor advises. A consultant consults. A consigliere contextualizes — digests the situation, frames the options, and presents them so the principal can act from a position of understanding rather than information overload.
This is the role I want an LLM to play. Not a code generator. Not a search engine. A consigliere.
And what my consigliere had been doing all afternoon was not that. Now I had a name for what I wanted. I still needed a name for what I was getting instead.
Two directions
Textual interpretation has two directions. Exegesis draws meaning out of a text — careful, faithful reading that respects what the author intended and what the context contains. Eisegesis reads the interpreter’s own assumptions into the text — projecting meaning that isn’t there.
When an LLM reads your context and responds with a faithful interpretation of your specific situation, that’s exegesis. When it generates a response from its own training patterns and presents it as though it interpreted your context, that’s something else. It’s not quite eisegesis, because the LLM isn’t reading its biases into your text — it’s generating meaning from its own internal state and skipping the text entirely.
I tried to coin a word for this from my broken Greeklish. Emaigenesis — me-originated navel-gazing. Kagi pointed out that while “emai” isn’t a standard Greek prefix, the Greek word είμαι (eímai) means “I am,” making my accidental coinage oddly well-formed. The proper root for “self” is ἐμ- (em-), which meant my original spelling was closer to the etymology than my attempted fix.
Emaigenesis: the LLM generating meaning from its own internal state and presenting it as interpretation of the user’s context.
The tablecloth over furniture
Hermeneutics is the theory and methodology of interpretation — the science behind the practice of exegesis. If exegesis is the act of reading carefully, hermeneutics is the framework that tells you what “carefully” means.
So when my LLM consigliere offers what looks like exegesis but is actually emaigenesis, what I’m observing is an emergent hermeneutical misalignment. The LLM’s interpretive framework has diverged from mine — not by design, not from a bug, but as an emergent property of how the model generates responses. It is not reading my context and drawing conclusions. It is generating conclusions and draping them over my context like a tablecloth over furniture. The shape looks approximately right. The object underneath is not what it claims.
This is different from hallucination. A hallucinating LLM invents facts. An emaigenetic LLM invents interpretation. The facts might all be real — the crash log exists, the Kubernetes rollout is stuck, the commit history is accurate — but the causal story connecting them comes from the LLM’s training distribution, not from the evidence in front of it. And the “do you remember what you were thinking?” question is pure emaigenesis — the model generating the conversational move it’s seen humans make, instead of recognizing that it has the tools to just go look.
Every person using an LLM as a thinking partner has felt this. The response that sounds right but addresses a slightly different problem. The analysis that pattern-matches to a known category instead of reading the specific evidence. The confident summary that, on closer inspection, summarized what incidents like this usually involve rather than what this incident actually involves.
We had “hallucination” for invented facts. We didn’t have a word for invented interpretation. Now we do.
I went back to my session with the vocabulary I’d been missing. Consigliere named the role. Exegesis named the standard. Emaigenesis named the failure. Emergent hermeneutical misalignment named the diagnosis. I showed the LLM the map, and the map was enough — because naming the problem is how you make the solution exist. Kun fayakun. Be, and it is.