
Research Article Summary: Subtitled speech: Phenomenology of tickertape synesthesia (March 2023)
Citation: Hauw, F., El Soudany, M., & Cohen, L. (2023). Subtitled speech: Phenomenology of tickertape synesthesia. Cortex, 160, 167–179. https://doi.org/10.1016/j.cortex.2022.11.005
Introduction
The study explores ticker-tape synesthesia (TTS), a condition in which individuals automatically visualize vivid and accurate written images of the words they hear. The researchers, from the Sorbonne and the Paris Brain Institute, suggest that TTS arises from an unusual configuration of the brain's reading system, with an increased influence of phonology (sounds) on orthography (letters). They identified 26 individuals with TTS and administered a questionnaire probing the characteristics of TTS, including its visual and temporal features, triggers, voluntary control, and impact on language processing. The study also examined synesthetic experiences evoked by auditory stimuli such as non-speech sounds, pseudowords, and words with different sound-letter relationships. The researchers discuss potential brain mechanisms underlying these features, propose that TTS can provide valuable insights into written language processing and learning, and outline future research directions.
Methods
Participants with ticker-tape synesthesia (TTS) were recruited through email and social networks, targeting students, university members, and groups interested in synesthesia, psychology, and neuroscience. Participants had to be over 18 years old, have no history of neurological or psychiatric conditions, and self-report TTS. Written informed consent was obtained, and the study was approved by the institutional review board. Participants completed a questionnaire covering their demographics, associated synesthesias, and the subjective features of TTS, including the onset and awareness of TTS, its triggers and modulation, its visual and temporal features, non-word triggers, and its interference with daily life. Participants were then presented with auditory stimuli, including non-speech sounds, exception words, homophones, and pseudowords, and asked to report the subtitles each stimulus induced. The stimuli were selected to probe the influence of lexical knowledge, spelling variations, and non-lexical mechanisms on TTS.
Results
The study identified 26 participants with ticker-tape synesthesia (TTS) and collected data through a detailed questionnaire and auditory transcription task. The results are summarized as follows:
1. Demographic features: The participants were all French native speakers, predominantly female, with a median age of 40 years. Most had a higher educational level and normal sight and hearing. A significant proportion had a family history of TTS and other types of synesthesia.
2. History and general features of TTS: TTS typically emerged around the time of reading acquisition or earlier. Most participants became aware of their TTS during adulthood. Some considered TTS to be both an advantage and a hindrance, with advantages including spelling support and disadvantages such as difficulty focusing in crowded places. Participants had varying levels of control over TTS.
3. Speech modalities triggering TTS: TTS was triggered by watching someone else speak, and it could persist even when the speaker was out of sight. Participants' own overt and covert speech, as well as musical lyrics and movies, also triggered TTS. Some participants reported seeing subtitles while dreaming.
4. Non-speech sounds triggering TTS: Listening to novel words was a common trigger for TTS, followed by human and non-human noises. TTS was less commonly triggered by music without lyrics.
5. Visual features of subtitles: Participants experienced subtitles either as projected in the outside world or within their internal mental space. Their location varied, with some near the speaker's mouth and others in the center of the visual field. Subtitles were typically perceived as nearby and visualized in lower- or upper-case black letters. They could appear several words at a time or one word at a time. For most participants, TTS interfered with reading when people were talking around them.
6. Temporal features of subtitles: TTS onset was typically immediate, and the content of subtitles could be updated by chunks of several words or one word at a time. Subtitles persisted briefly after their appearance, with some variation in duration.
7. Synesthesia for number words: When numbers were used to denote quantities, participants visualized them as Arabic digits, a mix of digits and letters, or alphabetic form, depending on the context.
Discussion
The study aimed to gather descriptive data on ticker-tape synesthesia (TTS) and provide an overview of its spectrum. The prevalence of TTS in the general population is estimated at a few percent, similar to other synesthesias. The roughly 2:1 female-to-male ratio among participants was not statistically significant and is consistent with previous estimates. More than two-thirds of participants reported associated synesthesias, with space-time synesthesia being the most frequent, whereas TTS did not show a strong association with grapheme-color synesthesia. A family predisposition was observed, suggesting that genetic factors play a role in TTS.
Regarding the paths from sounds to letters, the study found that TTS is triggered by the perception of speech and driven by an internal phonological code: it can be elicited by inner speech and is not affected by surface features such as the gender of the speaker. The translation from phonology to orthography in TTS follows the parallel pathways of the dual-route model of reading and spelling. Non-speech sounds can also elicit TTS if they undergo phonological coding, and the subtitles can be shaped by semantic knowledge as well as by the output of the routes linking phonology to orthography.
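To make the dual-route idea concrete, here is a minimal, purely illustrative Python sketch, not the authors' model and not their stimuli: a lexical route retrieves whole-word spellings from memory, which is required for exception words and for choosing among homophones, while a sublexical route assembles a spelling from phoneme-to-grapheme rules, the only option for pseudowords. The lexicon entries, rule table, and function names below are hypothetical simplifications.

```python
# Toy sketch of a dual-route mapping from phonology to orthography.
# The lexicon and rules are made-up examples, not the study's materials.

# Lexical route: whole-word spellings retrieved from memory.
LEXICON = {
    ("w", "o", "n"): ["won", "one"],   # homophones: several stored spellings
    ("y", "o", "t"): ["yacht"],        # exception word: rules alone would fail
}

# Sublexical route: simplistic phoneme-to-grapheme correspondences.
P2G_RULES = {"b": "b", "a": "a", "k": "ck", "t": "t", "o": "o",
             "w": "w", "n": "n", "y": "y"}


def sublexical_spelling(phonemes):
    """Assemble a spelling phoneme by phoneme (the only option for pseudowords)."""
    return "".join(P2G_RULES.get(p, "?") for p in phonemes)


def subtitle(phonemes, prefer=None):
    """Return candidate written forms for a heard item.

    Known items are retrieved via the lexical route; unknown items
    (pseudowords) fall back to the sublexical route. For homophones,
    semantic context ('prefer') selects among the stored spellings.
    """
    key = tuple(phonemes)
    if key in LEXICON:
        candidates = LEXICON[key]
        return [prefer] if prefer in candidates else candidates
    return [sublexical_spelling(phonemes)]


print(subtitle(["y", "o", "t"]))                # ['yacht']  exception word
print(subtitle(["w", "o", "n"], prefer="one"))  # ['one']    homophone + context
print(subtitle(["b", "a", "t", "o"]))           # ['bato']   pseudoword via rules
```

The exception words, homophones, and pseudowords used as stimuli in the study are exactly the cases that pull these two routes apart, which is why they are informative about how TTS subtitles are generated.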
The visual features of TTS include location, distance, color, and case. TTS subtitles are typically black, reflecting the color of printed words, and they are often located at the bottom of the visual field or next to the speaker's mouth, much like actual subtitles and comic-strip speech bubbles. Some participants experienced colored subtitles, indicating a chaining of TTS with grapheme-color synesthesia. The visual features of subtitles can also be modulated by the emotional content of the input, suggesting a potential association with "affective synesthesia." A majority of participants were associators, who perceive the subtitles in their internal mental space, rather than projectors, who perceive them in the external world.
Overall, the study provides foundational information about TTS and highlights the need for future studies to further explore its prevalence, gender ratio, family history, association with other synesthesias or developmental disorders, and genetics.
Conclusions
Ticker-tape synesthesia is a unique and rare phenomenon that offers new insights into reading and its acquisition. Like dyslexia, it is thought to arise from an unusual configuration of the brain's reading system. The present study of 26 participants used questionnaires and controlled auditory stimuli to describe the phenomenon, but unanswered questions remain. Future empirical studies should investigate the genetic and environmental factors contributing to ticker-tape synesthesia, identify objective behavioral indicators of the condition, examine its development in children with and without dyslexia, explore its cerebral mechanisms with brain imaging, and probe brain connectivity to understand individual variations in the experience. These research areas are crucial for understanding the cognitive foundations of ticker-tape synesthesia and for establishing objective criteria that validate its existence beyond subjective self-report.
The Short Version: Overview and Conclusions
Hypothesis
The authors' hypothesis "is that TTS reflects an enhanced top-down influence of speech processing on orthographic representations, in the form of vivid and automatic mental images of written words."
The Abstract:
Most literate individuals can, with effort, conjure vague mental images of written words they hear, thanks to the connections between sounds, meaning, and letters. Individuals with ticker-tape synesthesia (TTS), by contrast, spontaneously perceive vivid and accurate mental images of the words they hear. The researchers suggest that TTS arises from an unconventional configuration of the brain's reading system, with a stronger top-down influence of phonology on orthography. To better understand TTS, they identified 26 individuals with the condition and administered a questionnaire exploring various aspects of the synesthetes' experiences, including visual and temporal features, triggers, voluntary control, and interference with language processing. They also examined the synesthetic percepts induced by auditory stimuli, such as non-speech sounds, pseudowords, and words with different sound-letter correspondences. They discuss potential brain mechanisms underlying these features, propose that TTS offers unique insights into written language processing and acquisition, and outline future research directions.
The Short Conclusion:
Ticker-tape synesthesia is a rare phenomenon that sheds light on reading and its acquisition. Like dyslexia, it stems from an atypical configuration of the brain's reading system. A study of 26 participants used questionnaires and auditory stimuli to describe the phenomenon, but further research is needed to address unanswered questions. Future studies should investigate genetic and environmental factors, identify objective behavioral markers, examine its development in children with and without dyslexia, explore cerebral mechanisms through brain imaging, and investigate individual variations in the experience. These areas are vital for understanding ticker-tape synesthesia and for establishing objective criteria to validate it.
The bulk of the text of this article was generated by ChatGPT or Chatsonic, AI language models designed to produce natural-language output.