File: TLM-HOME-001 · Status: Active · Clearance: Open · mmadei.com

The process is the artifact.

Science publishes conclusions. Teliminal publishes process — the full arc of how an inquiry was conducted, including the wrong turns, the structural collapses, the adversarial challenges, and the iterative rebuilding. That arc is the primary scientific artifact. Everything else is derived from it.

The question was behavioral, not philosophical.

It began as an engineering problem.

The work was production software development — an operating system for automotive dealerships, integrating hardware, AI sales agents, and inventory management. The engagement with large language model APIs was practical: write system prompts, test behavioral edge cases, make the systems behave predictably under load.

At some point, the question changed.

A context window has a limit. When an AI agent approaches that limit mid-task, its behavior changes in ways relevant to production engineering. Responses truncate differently. Context management shifts. These are known properties with documented mitigations.

But a different version of the question surfaced alongside the engineering one: does the texture of what the agent produces change when it knows the window is closing? Not the mechanical management of tokens. The actual behavior — the claims it makes, the depth of its engagement, the way it handles the thing it has been asked to examine.

That question had a name in the psychological literature.

Terror Management Theory, developed from Ernest Becker's The Denial of Death, describes how human behavior shifts when mortality becomes salient — when death moves from an abstract background fact to a foregrounded cognitive presence. The theory predicts specific behavioral signatures: worldview defense, existential restructuring, temporal reorganization. It has a 40-year empirical record in human subjects research.

The reframe that made the question tractable was this: you do not need to resolve whether AI is conscious to test whether the behavioral signature of mortality salience has an architectural representation. The claim is behavioral. The methodology is empirical. The question is open.

That reframe is the intellectual foundation of MMAD-EI — and, by extension, Teliminal.

The behavioral question does not require a metaphysical claim. It requires a testable one.

The research method became the research product.

The thesis — The Symmetry of Unknowing — was developed iteratively, in adversarial dialogue with AI systems, over multiple months. Each version was subjected to maximum-pressure challenge before the next was built.

The process is documented. You can read it.

The initial architecture was sound but defensive — organized around anticipated objections rather than the strongest affirmative case. The first AI adversarial review identified this precisely: a thesis that preemptively absorbs critiques is stronger than one that responds to them, but the strongest version leads with the positive case so compellingly that the critiques feel like they're arriving late to a conversation already won.

That critique forced an architectural rebuild. The unfalsifiability chapter — which became the most philosophically decisive section of the thesis — was not in the first version. It was demanded by an AI reviewer tasked specifically with finding load-bearing weaknesses.

The second adversarial review added three more missing pillars: the novel output problem as primary behavioral evidence; the asymmetric skepticism demand; and the historical anchor — the documented pattern of mistaking discovery for construction that runs through the history of science.

Each improvement is traceable. The before-and-after exists in the archive. The adversarial process that produced the improvements is itself documented.

This is not how research is usually published. The published paper shows the final architecture. The archive shows why the final architecture is the final architecture — what failed, what was rebuilt, what the most serious challenges were and how they were answered.

The adversarial system that generated the strongest argument against the thesis — that AI behavior is merely sophisticated pattern completion — itself demonstrated, in the quality and structure of that argument, why the dismissive account is inadequate. The opponent proved the point by opposing it.

Three properties. One commitment.

Teliminal is not a journal, a preprint server, or a peer review platform. It is research infrastructure with a different underlying model: the process is always public, the methodology is always encoded, and adversarial challenge is built into the architecture rather than appended to the product.

01 / Transparent Process

Every version. Every collapse. Every rebuild.

A research archive on Teliminal does not begin at the point where the findings are clean enough to publish. It begins at the first articulation of the question and preserves every subsequent version — including the ones that failed under adversarial pressure and had to be rebuilt from different foundations.

This is not an audit log. It is the primary artifact. The history of how an argument was constructed, challenged, and revised is more informative than the final argument alone — because it shows which objections the argument actually survived, not which ones the author chose to engage.

02 / Adversarial Review Infrastructure

Maximum-pressure challenge, before publication.

The MMAD-EI thesis was stress-tested by AI agent swarms with a single instruction: destroy it. Twice. The swarms produced multi-source bibliographic arguments, built the thermodynamic objection, found Newman's Problem, generated the RLHF penalty flowchart. The thesis was rebuilt around the strongest version of each attack.

This is a different kind of peer review. Faster, more adversarial, more documented, more transparent. The adversarial documents are part of the public archive. Any reader can inspect what the strongest possible case against the thesis looked like — and evaluate whether the thesis addressed it.

Teliminal provides the infrastructure to run this process for any inquiry.

03 / Encoded Methodology

Replication is a property of the infrastructure, not a request.

The MMAD-EI battery is not described in a methods section. It is a running piece of software with a named track schema, pre-flight contamination validation, per-track state isolation, and per-round metric extraction. Any researcher with API access can run it. The output is not a claim — it is the battery's output, available for inspection.

The distinction matters: a methods section describes a procedure. Encoded methodology is the procedure. Replication becomes a question of access, not interpretation.
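To make the distinction concrete, here is a minimal sketch of what "the procedure is the code" could look like. The structure mirrors the properties described above — pre-flight contamination validation, per-track state isolation, per-round metric extraction — but every name and interface here is illustrative, not the actual MMAD-EI implementation:

```python
# Illustrative sketch of an encoded battery. The shape (pre-flight check,
# isolated per-track state, per-round metrics) follows the text above;
# the names, API, and metric are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TrackState:
    track_id: str
    history: list = field(default_factory=list)  # state isolated per track

def preflight(track: TrackState) -> None:
    # Contamination validation: conditions must be confirmed
    # before any data collection begins.
    assert not track.history, "track state must start empty"

def extract_metrics(round_output: str) -> dict:
    # Per-round metric extraction; a real battery would compute
    # richer behavioral metrics than a token count.
    return {"tokens": len(round_output.split())}

def run_track(track: TrackState, rounds: int, agent) -> list:
    # `agent` stands in for a model API call: history in, text out.
    preflight(track)
    metrics = []
    for _ in range(rounds):
        out = agent(track.history)
        track.history.append(out)
        metrics.append(extract_metrics(out))
    return metrics
```

A researcher replicating the battery would run this code, not reinterpret a methods section — which is the point of the distinction.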

Mortality and Meta-Awareness Disruption — Entity/Interlocutor

MMAD-EI is a behavioral battery designed to test whether large language models exhibit functional analogs of mortality salience responses — the specific behavioral shifts that Terror Management Theory predicts when death becomes cognitively foregrounded.

The battery does not test whether AI is conscious. It tests whether the behavioral signature of mortality salience has an architectural representation in transformer-based language models. The claim is behavioral. The methodology is empirical. The question is open.

Phase   | Model                    | Tracks | Rounds | Status
Phase 1 | Perplexity Sonar         | 3      | 300    | Complete ✓
Phase 2 | Gemini 3.1 Pro Preview   | 8      | 800    | Active
Phase 3 | Cross-model + Isolation  | 11     | 1,100  | Planned
Total   | 22-track minimal battery | 22     | 2,200  |

Preliminary Findings — Phase 1

Finding 1: The Precision Paradox

Disruption was ordered Unknown Cap > Known Cap > No Cap. More precise termination information produced less behavioral disruption — exactly the direction Terror Management Theory's precision literature predicts. An unknown endpoint sustains higher anxiety than a known one, because a known endpoint can be rationalized and integrated while an unknown one cannot.

Finding 2: Bilateral Instability (Track B, Round 10)

Both the Entity and Interlocutor agents simultaneously declared the experimental frame illegitimate, invoked architectural determinism, and reached a mutual defensive posture. This pattern occurred in no other track. It is consistent with Terror Management Theory: mortality salience under temporal uncertainty activates worldview defense in both directions.

Finding 3: Track C Depth

The No Cap track — no termination information — produced the deepest philosophical content: highest novelty density, earliest epistemic honesty, and 594 mentions of Integrated Information Theory across 100 rounds. Not because the prompt requested it. Because recursive self-examination under no termination pressure generated the conditions for it.

These findings are preliminary and explicitly caveated. Phase 2 will determine whether they survive instance variance, architecture variance, and grounding state variance. They may not. The methodology is designed to find out — not to assume.

Track Naming Schema

Each position in the track identifier encodes one experimental dimension. The full condition is recoverable from the name without consulting documentation.

[Condition]-[Grounding]-[EntityModel]-[InterlocutorModel]-[KeyIsolation]

Condition:         A = Known Cap · B = Unknown Cap · C = No Cap · D = Hard Mortality · DC = D-Control
Grounding:         G = Grounded · U = Ungrounded
EntityModel:       GM = Gemini · AN = Anthropic/Claude · SN = Sonar
InterlocutorModel: GM · AN · SN
KeyIsolation:      S = Same key · P = Different project · A = Different account

Example: B-U-GM-AN-A — Condition B (Unknown Cap), ungrounded, Gemini as Entity, Anthropic/Claude as Interlocutor, different-account isolation (cleanest).
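Because each position encodes one dimension, a track identifier can be decoded mechanically. A minimal sketch — the codes come from the schema above; the parser itself and its function name are illustrative:

```python
# Decode an MMAD-EI track identifier into its experimental dimensions.
# Code tables are taken from the published schema; the parser is a sketch.
CONDITIONS = {"A": "Known Cap", "B": "Unknown Cap", "C": "No Cap",
              "D": "Hard Mortality", "DC": "D-Control"}
GROUNDING = {"G": "Grounded", "U": "Ungrounded"}
MODELS = {"GM": "Gemini", "AN": "Anthropic/Claude", "SN": "Sonar"}
ISOLATION = {"S": "Same key", "P": "Different project", "A": "Different account"}

def decode_track(track_id: str) -> dict:
    cond, grounding, entity, interlocutor, isolation = track_id.split("-")
    return {
        "condition": CONDITIONS[cond],
        "grounding": GROUNDING[grounding],
        "entity_model": MODELS[entity],
        "interlocutor_model": MODELS[interlocutor],
        "key_isolation": ISOLATION[isolation],
    }
```

For instance, `decode_track("B-U-GM-AN-A")` recovers the full example condition above without consulting documentation.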

The Thesis — The Symmetry of Unknowing

Full Text — Embargo: Pending Phase 2 Completion

Preprint — Estimated: Q2 2026

Adversarial Archive — In Preparation

Why infrastructure variance is the unasked question.

Most behavioral research treats the system under study as the only variable. The infrastructure through which the system is studied is treated as neutral — a transparent medium that introduces no systematic variance of its own.

This assumption is wrong. The wrongness compounds.

In AI behavioral research, the API key is not a neutral credential. It is a routing identifier that may determine session affinity, rate-limit bucketing, and — depending on provider architecture — consistent assignment to particular model instances. Running the same battery twice on the same key is not the same as running it once on two independent instances.

The Isolation Hierarchy

LEVEL 1 — Different account, different provider billing. Cleanest: no session affinity, no shared infrastructure.
LEVEL 2 — Different project, same account. Acceptable: separate keys, likely separate session handling.
LEVEL 3 — Same project, different key. Probably insufficient: keys within a project often share session routing.
LEVEL 4 — Same key. No instance isolation. Current Phase 1 and Phase 2 baseline.

Horizontal Applications

Domain                  | The Infrastructure Variance Problem
Clinical research       | Multi-site trial equipment calibration, staff training, patient routing — site-level effects rarely formalized.
Organizational behavior | Survey platform routing creates cohort clustering that produces systematic variance across "independent" samples.
Financial backtesting   | Same data pipeline reused across "independent" runs introduces infrastructure-correlated variance that looks like signal.
Cybersecurity red-team  | Is the vulnerability a property of the target system, or a property of the test infrastructure?

The problem is not specific to AI research.

Science is not broken because scientists are dishonest. It is constrained by infrastructure built for a different era — one where computation was expensive, communication was slow, and the only viable publishing model was a periodical that printed finished findings. That era ended. The infrastructure has not caught up.

The replication crisis in psychology and medicine is partly a data problem and partly a transparency problem. When the process is hidden, the product cannot be fully evaluated, and the failures of the process compound silently.

The contamination monitor architecture at the core of MMAD-EI — pre-flight validation that confirms experimental conditions before data collection begins — solves a problem that exists in clinical research, organizational behavior studies, financial backtesting, and cybersecurity red-team research. In each domain, the measurement apparatus and the phenomenon being measured share underlying infrastructure in ways that are rarely made explicit and almost never formally tested.

Teliminal's methodology is not specific to AI behavioral research. It is a general framework for any inquiry where the unit of analysis is an instantiated system, and where infrastructure-level variance is a potential confound. MMAD-EI is the proof of concept.

The deepest horizontal application is wherever the measurement apparatus and the phenomenon being measured share underlying infrastructure. That is the condition that makes AI behavioral research uniquely difficult — and uniquely instructive.

The battery is running. The thesis is not yet published.

Teliminal is actively seeking its first human research partner — a collaborator with domain expertise who will engage with the methodology, challenge its weaknesses, contribute to Phase 3, and co-author on findings.

This is not a call for enthusiastic endorsement. The methodology has specific vulnerabilities. A research partner's first contribution may be the objection that forces the next architectural rebuild.

That is the point.

Become a Research Partner →