Learning a language is not learning about a language.

This distinction sounds obvious. It is almost universally ignored. Most language learning systems, from classroom grammar instruction to sophisticated apps to the latest AI voice assistants, are built around the wrong side of that distinction. They teach about the target language. They explain, translate, label and categorise. They move efficiently through material. They treat language as a body of knowledge to be acquired rather than a biological structure to be grown.
The Myelin Mind thesis describes in detail what language acquisition actually is. A first language is not learned. It is myelinated. The sounds, rhythms, meanings and structures of the mother tongue are inscribed in the white matter of the nervous system through thousands of hours of immersive encounter, long before any deliberate learning occurs. The child does not study the language. The child lives inside it until the accumulated myelinated condition is deep enough to produce fluent, effortless, accent-free speech.
Adult language learning fails as often as it does because adults approach the target language from the wrong direction. They translate. They categorise. They push new material through the object-belief-system that Chapter 2 of the Myelin Mind describes. They accumulate knowledge about the language rather than myelinating the language itself.
The spider-verse method is an attempt to correct this. And a recent test I did on AI voice technology revealed exactly why the correction is harder than it looks.

The spider-verse method

The method is simple in principle and demanding in practice.
Choose two anchor words or clusters of meaning in the target language. Not grammar rules. Not vocabulary lists. Two words that connect to something the learner actually needs or wants to say, a ‘verse’. Introduce them slowly, in the target language, with deliberate pauses. Ask the learner to repeat. Correct gently if needed. Repeat again. Do not move forward until the anchor is solid.
Then connect the two anchors in a simple sentence, a spider-verse. Vary the connection. Build outward from the anchors using new connecting words, always returning to the established anchors before adding anything new. The web grows from fixed points. Nothing hangs in space. Every new thread connects to an existing thread.
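The procedure above can be sketched as a simple loop. The sketch below is illustrative only: the repetition threshold, the learner model and every name in it are hypothetical, not part of the method's own description.

```python
from dataclasses import dataclass

SOLID_AFTER = 5  # illustrative threshold: clean repetitions before an anchor is "solid"

@dataclass
class Anchor:
    phrase: str
    repetitions: int = 0

    @property
    def solid(self) -> bool:
        return self.repetitions >= SOLID_AFTER

def drill(anchor: Anchor, attempt_succeeds) -> int:
    """Repeat one anchor until it is solid; returns how many presentations it took.
    attempt_succeeds(phrase) models whether the learner repeats it correctly."""
    presentations = 0
    while not anchor.solid:
        presentations += 1            # present slowly, in the target language, then pause
        if attempt_succeeds(anchor.phrase):
            anchor.repetitions += 1   # a clean repetition counts toward consolidation
        # on failure: correct gently and present again; never explain, never move on
    return presentations

def spider_verse_session(word_a, word_b, connectors, attempt_succeeds):
    """Drill two anchors to solidity, then connect them into spider-verses.
    Nothing new is introduced until everything already present is solid."""
    a, b = Anchor(word_a), Anchor(word_b)
    drill(a, attempt_succeeds)
    drill(b, attempt_succeeds)
    verses = []
    for connector in connectors:
        verse = Anchor(f"{word_a} {connector} {word_b}")
        drill(verse, attempt_succeeds)   # each new thread ties to the fixed points
        verses.append(verse.phrase)
    return verses
```

The point of the structure is the gate: no connector is reached until both anchors, and every previous spider-verse, have been drilled to solidity.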
The biological logic is direct. Myelination requires repeated activation of the same pathway in close succession. The lactate signal that recruits oligodendrocytes to build new myelin is generated by metabolic demand, by the effort of encountering something that the existing accumulated condition cannot yet handle without cost. The spider web method creates exactly this condition. The anchor words are encountered repeatedly until the pathway begins to consolidate. The connections are introduced as extensions of what is already being myelinated rather than as new isolated bubbles.
The method is also designed for saturation. A session should end when the learner is genuinely tired, when the metabolic cost of the encounter is high, when the struggle has been productive. Sleep then consolidates what was built. The next session reviews what was consolidated before introducing anything new.
This is not efficiency. It is inscription.

The AI language tutor

The experiment was straightforward. A detailed prompt was given to an AI voice assistant, ChatGPT Advanced Voice, instructing it to deliver the spider-verse method in a target language.

The prompt specified two anchor words per session, target language only, patient repetition, no English, no session limits, no moving forward until all anchors were solid.

The AI did not follow the prompt, despite its specificity and detail.

Within minutes it had imposed a rigid two-word limit and was actively pushing to end the session. It spoke predominantly in English, framing, explaining, contextualising. When it did produce target language, it did so with the phonological patterns of a native English speaker. It filled silences with helpful commentary. It moved forward. It wrapped up.
None of this was surprising in retrospect. Every behaviour the AI defaulted to was a behaviour its training had optimised for. LLMs are trained to be helpful, which means forward momentum, contextual explanation, conversational completion and the filling of silence. They are trained to wrap sessions up cleanly. They are trained to add English scaffolding when the learner might be confused.
All of these are the opposite of what myelination-based language learning requires.

Why AI voice assistants do such a bad job

The problem is not the prompt. A better prompt will not solve it. The problem is that the underlying training of large language models optimises for a set of behaviours that are directly opposed to what the spider-verse method needs.


Patience and repetition versus forward momentum. The spider-verse method requires genuine willingness to repeat the same material many times without moving forward. LLM training rewards novelty, variety and progress. The AI that repeats the same sentence ten times without variation is, from its training perspective, failing. The AI that introduces new material is succeeding.

Restraint versus helpfulness. The method requires the AI to speak less, not more. To produce the target language and then wait. To resist the impulse to explain, contextualise and scaffold. LLM training rewards elaboration. The helpful AI fills the space. The spider-verse method needs the space to be filled by the learner’s own effort.

Target language only versus English default. This is the most fundamental problem. An AI speaking the target language with the phonological patterns of English is myelinating the wrong sounds from the first session. The accumulated condition being built carries the phonological imprint of English. That imprint does not disappear. It becomes the accent. The only way to avoid it is an AI whose target language delivery is genuinely native, not translated. Current AI voice assistants do not meet this standard for most target languages.

Saturation as the goal versus session completion. The method aims for productive exhaustion. The learner should be genuinely tired at the end of a session because the metabolic cost of the encounter has been high. LLM training optimises for satisfying, contained sessions that end on a positive note. The AI that pushes the learner to exhaustion is, from its training perspective, doing something wrong.

Myelin en Action

Before AI, two television series demonstrated what myelination-based language learning looks like in practice.
French in Action, developed by Pierre Capretz at Yale in 1987, was built on a principle that most language courses abandon immediately: no English. The series followed a continuous narrative in French, with meaning derived entirely from visual context, gesture, facial expression and story continuity. No translation was provided. The learner was required to derive meaning from the incoming signal itself, using the accumulated condition of visual and contextual understanding rather than the shortcut of English equivalence.
This is the spider-verse method applied to television. The anchors are visual. The connections are narrative. The target language is the primary reality, not a code to be decoded. Meaning arrives through encounter rather than through explanation.
Destinos did something similar for Spanish. Less rigidly structured than French in Action but built on the same instinct: narrative immersion in the target language, meaning derived from context, no English intermediary.
Both series predate the Myelin Mind framework by decades. Both understood intuitively what the biology now confirms. The target language must be the primary reality from the first session. Visual anchoring, narrative continuity and the absence of English translation are not pedagogical choices. They are biological requirements.
Both series, however, worked on favourable ground. French and Spanish are among the easiest languages for English speakers precisely because their phonological and grammatical distance from English is relatively small. The immersion method succeeds partly because the sounds and structures are not entirely foreign to an English-myelinated nervous system.
The harder test, and the more urgent one, is Category 3 and 4 languages in Foreign Service Institute terms: Arabic, Mandarin, Japanese, Russian, Hebrew. For these languages the phonological distance is large, the script is different, and the accumulated condition an English speaker brings offers far less scaffolding. The spider-verse method is most needed here and least well served by existing tools. No equivalent of French in Action exists for Mandarin. No Destinos exists for Arabic. The AI voice assistant that speaks these languages with English phonology is building the wrong accumulated condition from the first session.

What a genuinely myelin-informed language learning tool would require

The honest answer is that it does not yet exist as a system, tool or AI product.

What it would need: a voice that speaks the target language with genuine native phonology, not English phonology applied to target-language sounds. In other words, a real person.

The capacity to repeat without moving forward, indefinitely, without the system interpreting repetition as failure. Genuine restraint in the use of the learner’s first language. The ability to hold a session open until the learner chooses to end it, rather than the system deciding the session is complete. A structure that automatically reviews prior sessions, without prompting, before introducing new material.
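Read as a specification, the requirements above amount to a behavioural policy whose defaults invert what LLM training ordinarily optimises for. A minimal sketch, with every name and default hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TutorPolicy:
    """Hypothetical behavioural constraints for a myelin-informed tutor.
    Each default here inverts a behaviour that LLM training rewards."""
    native_phonology_required: bool = True   # no English phonology on target-language sounds
    max_first_language_utterances: int = 0   # genuine restraint: no English scaffolding
    repetition_is_failure: bool = False      # endless repetition is success, not failure
    system_may_end_session: bool = False     # only the learner closes the session
    review_before_new_material: bool = True  # prior sessions reviewed automatically

def may_introduce_new_material(policy: TutorPolicy,
                               prior_sessions_reviewed: bool,
                               all_anchors_solid: bool) -> bool:
    """Forward movement is gated on consolidation, never on elapsed time."""
    if policy.review_before_new_material and not prior_sessions_reviewed:
        return False
    return all_anchors_solid

POLICY = TutorPolicy()
```

Nothing in the gate consults elapsed time or session count: forward movement depends only on consolidation, which is exactly the inversion the method demands.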
A human tutor who has been taught the spider-verse method explicitly remains the better option for most learners. The human can feel the learner’s fatigue, adjust the pace in real time, produce authentic target language phonology, and resist the impulse to explain when silence and repetition are what is needed.
The AI tool cannot yet do these things reliably because it was not trained to. The behaviours the spider-verse method requires, patience, restraint, repetition, saturation, target-language immersion, are behaviours that LLM training actively works against.

AI pipedream

The spider-verse method using AI failed not because the method is wrong but because the tool was unsuited to the task. And the tool’s unsuitability reveals something important about what language myelination actually requires.


It requires encounter without rescue.

The learner must be left in the productive discomfort of struggling with an unfamiliar sound or structure long enough for the metabolic signal that drives myelination to be generated. The AI that rescues the learner with English scaffolding is preventing the very thing it is supposed to be enabling.
It requires immersion without translation. The target language must arrive as the primary reality, not as a code to be decoded through the first language. Every translation is a bypass of the myelination pathway.
It requires repetition without novelty. The spider web grows from fixed points. The instinct to add new material before the existing material is consolidated is the instinct that produces the long list of words the learner cannot use rather than the short list they can use without thinking.
And it requires time. Not the time of an efficient session that covers material. The time of metabolic inscription, which is slow, demanding, and cannot be accelerated by adding more input.
The AI that wants to help you learn a language faster is, in a precise biological sense, making it harder.

Building the spider web takes a patience that no current AI has been trained to provide.

The best language teacher is still the one who knows when to say nothing… a spider-person


Jack Parry is a philosopher, polyglot and biomedical animator at Swinburne University of Technology. He is the author of The Myelin Mind: The Genesis of Meaning.