Speech and Language Disabilities
Demographics
The prevalence of speech sound disorders in children varies by age group, ranging from 5% to 25%. In adults, the prevalence is 1% to 2% of the population.
Selective mutism affects 0.47% to 0.76% of the global population.
The global incidence of aphasia is not known, but it affects at least 2 million people in the United States and at least 250,000 people in Great Britain.
Medical Details
Speech disorders affect the way people make coherent vocal sounds that can be identified as words by other people. Language disorders concern the ability to form and share ideas, independent of the vocal medium. The two are often lumped together because having a language disorder almost always affects speech, while people with speech disorders do not necessarily have disordered language ability.
It's actually a pretty vast umbrella with many smaller umbrellas underneath it. Something like Huntington's disease can be referred to as a language disorder. But for the CPACC, we're going to zero in on four non-specific profiles: three speech disorders and one language disorder.
Organic Speech Sound Disorders
Organic speech sound disorders are speech impairments with an identifiable cause.
According to ASHA (linked in the Body of Knowledge), the explanation may fall into one of three categories: motor/neurological, structural, and perceptual.
Apraxia of Speech (AOS)
According to the NIH (a linked BoK source), apraxia of speech is a motor/neurological organic speech sound disorder. People with apraxia of speech know what words they want to say, but they cannot properly plan and sequence the required speech movements. There are two types: childhood apraxia of speech and acquired apraxia of speech. Symptoms range from mild to severe.
Dysarthria
Dysarthria is another motor/neurological organic speech sound disorder. According to ASHA, while apraxia interferes at the level of planning, dysarthria interferes at the level of execution: the muscles and nerves lack the coordination to perform the movements the brain is telling them to make. Dysarthria is especially hard to diagnose when it is comorbid with apraxia (which can indeed happen!).
Structural Organic Speech Sound Disorders
According to ASHA, these disorders can result from orofacial anomalies such as a cleft palate. They can also happen due to trauma or surgery. For example, some people must undergo a laryngectomy (surgical removal of the voice box). In relearning to speak with a technique like esophageal speech, their voice may sound raspy and potentially indistinct.
Perceptual Organic Speech Sound Disorders
Children who have severe hearing loss will naturally struggle to reproduce oral speech because they have no reference for it: they cannot acquire oral language naturally, and must resort to unconventional means of learning it, such as laying a hand on the throat to feel the vibrations, or learning the theory of how different sounds are produced. They can, however, acquire sign language naturally, and most people nowadays would recommend that approach.
It is often parents who decide whether a deaf child will learn to speak orally. Many children first learn language skills through sign language, after which they are able to learn oral languages like English more easily. Historically, though, parents following the advice of educators have forced their children into an oral-first approach. This is risky, as many children do not take well to it, and it often leads to language deprivation and language disorders.
Functional Speech Sound Disorders
Functional speech sound disorders are those with no identifiable cause (we straight-up don't know how they happen; how interesting!).
According to ASHA (linked in the BoK), functional speech sound disorders have historically been described under two different profiles: articulation disorders and phonological disorders. Errors in articulation disorders are random and involve substituted or distorted sounds. Errors in phonological disorders are predictable and rule-based (e.g. consistent deletion of final consonants).
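To make 'rule-based' concrete, a phonological process like final-consonant deletion can be modeled as a simple, predictable transformation. Here's a toy sketch (operating on spelling as a crude stand-in for sounds, with made-up example words):

```python
def final_consonant_deletion(word, vowels="aeiou"):
    """Toy model of one phonological process: drop a word-final consonant.

    Operates on spelling as a crude stand-in for sounds.
    """
    if word and word[-1] not in vowels:
        return word[:-1]
    return word

# The error pattern is consistent across words, unlike the random
# substitutions and distortions seen in articulation disorders.
for w in ["cat", "dog", "bus"]:
    print(w, "->", final_consonant_deletion(w))  # cat -> ca, dog -> do, bus -> bu
```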
No Speech
Also known medically as 'mutism.' Absence of speech can be caused by brain injury, in which case it is called 'neurogenic.'
This is in contrast to the 'psychogenic' type, where the causes are psychological. There are three types of psychogenic mutism: elective mutism (choosing not to speak), selective mutism (being able to speak only in certain situations), and total mutism (no speech at all).
According to ASHA, selective mutism is primarily seen in children, though it can follow one into adulthood.
Aphasia
Aphasia is caused by neurological injury and is therefore a neurogenic disorder. But unlike other neurogenic disorders such as dysarthria and apraxia of speech, which affect the production of oral speech, aphasia interferes with one's ability to wield language itself. People with aphasia experience deficits in comprehension, production, reading, and writing.
Most people who experience aphasia get it after a stroke, but it can also result from brain tumours, infections, head trauma, and other brain injuries.
According to the National Aphasia Association (linked in the BoK), there are nine distinct profiles of aphasia, distinguished by the answers to three questions:
- Is speech fluent?
- Can the person comprehend spoken messages?
- Can the person repeat words or phrases?
A person whose speech is fluent, who comprehends spoken messages, and who can repeat words or phrases has the mildest form of aphasia, characterized by a constant groping for words: 'a persistent inability to supply the words for the very things they want to talk about.' At the other extreme, aphasia can leave one with a total absence of speech and comprehension. It's a very broad spectrum of severity.
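As a toy illustration of how such a chart works: the three yes/no questions generate eight core combinations, with the NAA's nine profiles layering further distinctions on top. The syndrome names below follow the classic Boston-style taxonomy that the NAA chart resembles; the exact mapping is my own summary, so treat it as an assumption rather than the official chart:

```python
# Toy lookup for the three yes/no questions behind the chart.
# Syndrome names follow the classic Boston-style taxonomy (my own
# summary for illustration, not the NAA's official chart).
PROFILES = {
    # (fluent?, comprehends?, repeats?)
    (True,  True,  True):  "Anomic aphasia (mildest: word-finding trouble)",
    (True,  True,  False): "Conduction aphasia",
    (True,  False, True):  "Transcortical sensory aphasia",
    (True,  False, False): "Wernicke's aphasia",
    (False, True,  True):  "Transcortical motor aphasia",
    (False, True,  False): "Broca's aphasia",
    (False, False, True):  "Mixed transcortical aphasia",
    (False, False, False): "Global aphasia (most severe)",
}

print(PROFILES[(True, True, True)])     # the mild, word-groping presentation
print(PROFILES[(False, False, False)])  # total absence of speech and comprehension
```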
Accommodations
As we've just learned, there's a broad range of presentations for language and speech disorders. As always, each individual will have their own strategies that make life easier for them.
Supporting Speech
People who struggle with vocal speech benefit from more time, patience, and understanding in one-on-one communication. They also benefit when text-based alternatives to speaking are offered; for example, a business can offer a real-time text chat with employees for people who would rather not call in by phone.
According to the linked source Common Assistive Technologies for Speech Disorders, there are technologies called 'electronic fluency devices' that can apparently help people who stutter speak more fluently by playing the sound of the user's voice back at them. Their reception within the stuttering community on the subreddit r/Stutter seems to be mixed.
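The linked source doesn't spell out how these devices work internally, but they typically rely on altered auditory feedback, most commonly delayed auditory feedback (DAF): the speaker hears their own voice played back a fraction of a second late. Here's a minimal sketch of DAF in Python using the sounddevice library; the 75 ms delay and block size are arbitrary values I chose for illustration, not parameters from any real device:

```python
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16_000   # Hz
DELAY_MS = 75          # how late the speaker hears their own voice
BLOCK = 512            # frames per callback; must stay below the buffer size

delay_samples = SAMPLE_RATE * DELAY_MS // 1000  # 1200 samples of delay
buffer = np.zeros(delay_samples, dtype=np.float32)

def callback(indata, outdata, frames, time, status):
    """Play the microphone signal back, delayed by DELAY_MS."""
    global buffer
    outdata[:, 0] = buffer[:frames]                           # oldest audio out
    buffer = np.concatenate([buffer[frames:], indata[:, 0]])  # newest audio in

# Full-duplex stream: microphone in, delayed playback out.
with sd.Stream(samplerate=SAMPLE_RATE, blocksize=BLOCK,
               channels=1, callback=callback):
    input("Speak; you'll hear yourself 75 ms late. Press Enter to stop.")
```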
Augmentative and Alternative Communication
Those who do not have reliable access to speech and language may rely on AAC to get their point across. It was a little hard for me to find a consistent definition of AAC, but here's my understanding from my reading.
'AAC' is a descriptive term rather than a prescriptive one: a bunch of methods and instruments have popped up to solve one particular problem, and 'AAC' refers to these techniques collectively.
- AAC is 'alternative' communication in that it is specifically meant to serve those who cannot access mainstream forms of communication due to a speech or language disability.
- AAC may be 'augmentative' in that it brings in tools and techniques outside the scope of mainstream communication protocols.
In describing the kinds of AAC that are out there, I rely primarily on the Common Assistive Technologies for Speech Disorders (linked BoK resource) as well as ASHA's page on AAC.
Unaided AAC
This refers to methods that don't require any technology or props, such as gestures and facial expressions. If the person can make small vocalizations or knows a few words or signs, these can be incorporated as well.
Aided AAC
These methods require props or technology and are divided into 'low-tech' and 'high-tech' solutions. Sometimes the person just needs pen and paper to write on, or points to images on a cardboard picture board. High-tech solutions include Speech-Generating Devices (SGDs), tablet applications, and specialized devices.
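To make 'speech-generating device' concrete: at its core, an SGD maps a user's selection (a symbol, picture, or typed phrase) to synthesized speech. Here's a toy sketch using the pyttsx3 text-to-speech library; the phrase list is invented for the example:

```python
import pyttsx3

# A tiny "picture board": on a real device, each phrase would be paired
# with a symbol or image that the user taps. These phrases are my own
# invented examples.
PHRASES = {
    "1": "Hello, nice to meet you.",
    "2": "Yes.",
    "3": "No.",
    "4": "I need a break, please.",
}

engine = pyttsx3.init()  # uses the operating system's built-in voices

while True:
    choice = input("Pick a phrase (1-4, q to quit): ").strip()
    if choice == "q":
        break
    phrase = PHRASES.get(choice)
    if phrase:
        engine.say(phrase)   # queue the phrase
        engine.runAndWait()  # speak it aloud
```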
Mixed AAC
Often, individuals will use a combination of aided and unaided AAC. This is the approach I personally use when I have verbal shutdowns: I have a speech generator app on my phone, and I have very good miming skills. It's much easier, though, if the people I'm around know sign language.
Are signed languages a form of AAC?
Deaf communities have historically fought against their languages being considered 'tools' or 'instruments' that merely facilitate communication with the hearing world. For a long time, linguists seriously doubted that sign constituted a language of its own; William Stokoe first used the phrase 'American Sign Language' in 1965.
Calling sign language 'AAC' also downplays the sophistication of the world's sign languages. In all the techniques mentioned so far, the AAC user isn't speaking a different language from the people they're communicating with: if their companions speak English, an AAC user may point to English words. It would be a bit odd if their AAC system relied entirely on Japanese when no one in their community spoke that language.
But that's kind of the case for American Sign Language, which has a grammatical structure more similar to Japanese than to English, and which is heavily influenced by French Sign Language. Sign languages are languages in their own right, and people learn them for personal, non-accessibility-related reasons.
In summary, I would argue that sign language is not AAC for three reasons:
- Signed languages are sophisticated languages, not mere 'methods' or 'tools.'
- The primary aim of signed language isn't just to communicate with people who 'speak normally.'
- The users of signed languages don't consider it to be AAC.
However, there are artificial signing systems such as Makaton that are specifically designed by educators to function as AAC. Deaf and hard of hearing people are not the primary target of Makaton: it's aimed at people with developmental and intellectual disabilities. As the Wikipedia page puts it, 'Makaton is not a sign language.'
Accommodations for Aphasia
In addition to all the accommodation techniques listed above, people with aphasia (depending on the profile) may require support in comprehending speech. Offering plain-language materials and reducing the number of uncommon words in your speech are ways to support language reception.
They may also have trouble with writing, which can be supported through writing templates, organizational tools, word prediction, and spell checkers.
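As a rough illustration of the word-prediction piece: given the first few letters, the tool suggests likely completions, reducing how much the person has to type. A toy version (the frequency numbers are invented; real tools learn from large corpora and from the user's own writing):

```python
# Toy word predictor: rank completions of a prefix by how common they are.
# The frequency numbers are invented for illustration.
WORD_FREQUENCIES = {
    "the": 5000, "they": 1200, "there": 900, "therefore": 150,
    "special": 200, "speak": 120, "speech": 80,
}

def predict(prefix, limit=3):
    candidates = [w for w in WORD_FREQUENCIES if w.startswith(prefix)]
    candidates.sort(key=lambda w: WORD_FREQUENCIES[w], reverse=True)
    return candidates[:limit]

print(predict("the"))  # ['the', 'they', 'there']
print(predict("spe"))  # ['special', 'speak', 'speech']
```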