Selective auditory attention

TORIma Academy — Cognitive Psychology


Selective auditory attention, also known as selective hearing, constitutes a cognitive mechanism within the auditory system, enabling an individual to prioritize specific auditory stimuli for processing while concurrently disregarding others. This selective mechanism is crucial due to the inherent limitations in human cognitive processing and memory capacity. When individuals engage in selective hearing, environmental sounds are initially registered by the auditory system; however, only a subset of this auditory input is subsequently chosen for cerebral processing.

Typically, auditory attention is preferentially allocated to stimuli deemed most salient or relevant to the individual. Rather than representing a physiological disorder, selective hearing denotes a common human cognitive ability to filter out extraneous sounds and noise. Fundamentally, it involves the deliberate disregard of specific environmental inputs within the surrounding environment.

The Bottleneck Effect

Karns, Isbell, Giuliano, and Neville (2013) elucidated selective auditory attention through the lens of the bottleneck effect, a neural mechanism that constrains the simultaneous processing of multiple stimuli. For instance, a student concentrating on a teacher's lecture may disregard the ambient sounds of a boisterous classroom. Consequently, the instructional content is encoded into the student's long-term memory, while the distracting classroom stimuli go largely unprocessed, as if they had never occurred. The human brain cannot continuously assimilate all sensory data within a complex, real-world environment; thus, only the most pertinent information undergoes comprehensive processing.

Historical Context

The foundational investigations into selective auditory attention commenced in 1953 with Colin Cherry's introduction of the "cocktail party problem." During that era, air traffic controllers in control towers received pilot communications via loudspeakers, and the superimposition of multiple voices over a single loudspeaker made the controllers' task considerably more difficult. Cherry's experimental design, which simulated these challenges, required participants to attend to and vocalize one of two distinct messages presented concurrently from a single loudspeaker. This methodology subsequently became known as the dichotic listening task.

While Colin Cherry initially conceived the concept, Donald Broadbent is widely credited with the systematic application of dichotic listening tests in his pioneering research. Broadbent employed dichotic listening to investigate how individuals selectively attend to stimuli amidst an overload of auditory input, subsequently utilizing his findings to formulate the filter model of attention in 1958. He posited that the human information processing system operates with a limited-capacity "bottleneck," necessitating an "early selection" mechanism prior to the comprehensive processing of auditory data. Broadbent's theory suggested that auditory information initially enters an unlimited sensory buffer, from which a single stream is filtered and passed through the bottleneck for coherent processing, whereas unselected streams rapidly diminish in salience and remain unprocessed. However, Broadbent's model presents a contradiction with the cocktail party phenomenon, as it predicts that individuals would not respond to their names from unattended sources, given that unselected information is purportedly discarded pre-processing.

In 1963, Deutsch and Deutsch introduced their late selection model, which offered a competing theoretical framework to Broadbent's early selection model. Their model posits that all incoming information and sensory input receive attention and undergo processing for semantic content. Subsequently, during the processing routine and prior to entry into short-term memory, a filtering mechanism evaluates the semantic attributes of the information, allowing relevant stimuli to proceed to short-term memory while discarding irrelevant data. Regarding selective auditory attention, Deutsch and Deutsch's model proposes that a diminished response to unattended stimuli stems from an internal assessment of informational relevance, whereby more critical stimuli are prioritized for initial entry into working memory.

In 1964, Anne Treisman, a graduate student of Broadbent, refined Broadbent's theory by introducing her attenuation model. This model posits that unattended information is attenuated, or reduced in intensity, relative to attended information, yet it continues to undergo processing. For instance, if an individual is ordering a drink in a coffee shop amidst three extraneous sound sources—chatter, a coffee brewer, and music—Treisman's model suggests that these background sounds would still be perceived, albeit muffled, while primary attention is directed towards the cashier. Furthermore, Treisman proposed a threshold mechanism within selective auditory attention, whereby specific words from the unattended information stream can capture an individual's focus. Words possessing a low threshold, indicating high semantic significance or urgency, such as one's name or a warning like "watch out," can effectively redirect attention to critical stimuli.
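Treisman's attenuation-plus-threshold mechanism lends itself to a minimal sketch. The code below is purely illustrative, not a published implementation: the attenuation gain, the default recognition threshold, and the low-threshold word list are all assumed values chosen for the example, with entries like "anna" standing in for highly salient words such as one's own name.

```python
# Minimal sketch of Treisman's attenuation model (illustrative only).
# Unattended speech is processed at reduced gain; a word still reaches
# awareness if the gain clears that word's recognition threshold.

ATTENUATION = 0.3        # assumed gain on the unattended channel
DEFAULT_THRESHOLD = 0.5  # assumed threshold for ordinary words

# Hypothetical low-threshold words: semantically urgent items that are
# "primed" and need far less signal strength to break through.
LOW_THRESHOLDS = {"anna": 0.1, "watch out": 0.1}

def perceived(words, attended):
    """Return the words from a channel that reach awareness."""
    gain = 1.0 if attended else ATTENUATION
    return [w for w in words if gain >= LOW_THRESHOLDS.get(w, DEFAULT_THRESHOLD)]

# The attended cashier is heard in full; from the unattended chatter,
# only the low-threshold word "anna" captures attention.
print(perceived(["one", "tall", "latte", "please"], attended=True))
print(perceived(["nice", "weather", "anna", "today"], attended=False))
```

The contrast with Broadbent's early-selection filter is visible in the last line: instead of discarding the unattended stream outright, the attenuated stream is still evaluated, so a sufficiently salient word can surface.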

Developmental Trajectories in Youth

Selective auditory attention constitutes a critical element of broader auditory attention, encompassing arousal, orienting responses, and attention span. Investigating selective auditory attention proves more straightforward in children and adults than in infants, primarily due to infants' restricted capacity for comprehending and responding to verbal instructions. Consequently, much of the insight into auditory selection in infancy originates from related research domains, such as speech and language perception and discrimination. Nevertheless, limited instances of auditory selection have been documented in infants, demonstrating preferences for their mother's voice over that of another female, their native language over a foreign one, and infant-directed speech compared to adult-to-adult communication.

With advancing age, older children exhibit an enhanced capacity to detect and select auditory stimuli when contrasted with younger age groups. This observation implies that selective auditory attention is an age-dependent cognitive ability, improving in correlation with advancements in the automatic processing of information.

Given that younger children exhibit a diminished capacity to detect and select auditory stimuli compared to their older peers, their ability to differentiate relevant from irrelevant information is similarly reduced. The proficiency in allocating attention to a single message amidst competing stimuli progressively improves with age, notably between five and twelve years, subsequently stabilizing.

Contributing factors to these enhanced abilities include improved language proficiency and increased word familiarity, which both correlate with advancing age.

Furthermore, older children may possess a greater capacity to comprehend task requirements, including associated rewards or punishments, thereby enabling more effective elimination of extraneous stimuli. Research employing the incidental learning paradigm indicates that children aged 11 and above demonstrate a reduced propensity to process incidental stimuli, a development attributed to the emergence of strategies for prioritizing relevant information over irrelevant.

Ultimately, the challenges in filtering irrelevant information or effectively allocating attention to pertinent stimuli are indicative of developmentally immature attentional allocation mechanisms.

Functional Neuroimaging of Auditory Attention

Recent advancements in neuroimaging techniques, including Positron Emission Tomography (PET) and Functional Magnetic Resonance Imaging (fMRI), have significantly enhanced the investigation of neural operations, offering high spatial resolution. Specifically, fMRI has been instrumental in multiple studies demonstrating attentional effects within the auditory cortex. Furthermore, research utilizing classical dichotic selective listening paradigms has also yielded significant findings. These studies revealed more pronounced effects in the cortex contralateral to the attended direction, interpreted as a "selective tuning of the left or right auditory cortices according to the direction of attention."

Prevalence

The prevalence of selective hearing remains an under-researched area. Nevertheless, some researchers contend that its incidence is notably higher in males compared to females. A 2010 study by Ida Zündorf, Hans-Otto Karnath, and Jörg Lewald explored sex-based differences in the localization of auditory information. Their investigation employed a sound localization task, specifically focusing on the cocktail party effect, where male and female participants were required to identify sounds from a designated source amidst competing auditory stimuli. The findings indicated superior overall performance among male participants, while females experienced greater difficulty in pinpointing target sounds within a multi-source environment. Zündorf et al. posited that variations in attentional processes might account for these sex differences in locating target sounds within complex auditory fields. Despite these observed distinctions in selective auditory processing, both men and women encounter challenges with multitasking, particularly when concurrent tasks share similar characteristics.

Disorder Classification

Selective hearing is not recognized as a physiological or psychological disorder. The World Health Organization (WHO) defines hearing loss as a reduced ability to hear as well as someone with normal hearing, with deafness denoting profound or total loss of auditory function. Selective hearing, by contrast, does not entail deafness to specific auditory messages; rather, it represents an individual's capacity to selectively attend to particular sound messages. The entire auditory input is physically registered by the ear, but the brain actively filters out irrelevant information to prioritize salient components of the message. Consequently, selective hearing should not be misconstrued as a physiological hearing impairment. Selective auditory attention constitutes a normal sensory process of the brain; however, abnormalities in this process can manifest in individuals with sensory processing disorders, including autism, attention deficit hyperactivity disorder, post-traumatic stress disorder, schizophrenia, selective mutism, and isolated auditory processing disorders.

Targeted Speech Enhancement

The concept of targeted speech enhancement has been advanced for integration into hearable devices, such as headsets and hearing aids, to enable users to isolate and perceive a specific individual's voice within a crowded environment. This technology leverages real-time neural networks to acquire the unique vocal characteristics of a designated speaker. Subsequently, these learned characteristics are utilized to amplify the target speaker's voice while simultaneously suppressing other voices and ambient noise. The deep learning-powered device facilitates speaker enrollment by requiring the wearer to visually focus on the target speaker for a brief period, typically three to five seconds. Following enrollment, the hearable device can effectively cancel all other environmental sounds, rendering only the enrolled speaker's voice in real time, even if the listener changes position or orientation relative to the speaker. This innovation holds potential benefits for individuals experiencing hearing loss and those with sensory processing disorders.
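The enroll-then-enhance pipeline described above can be sketched in miniature. The sketch below is an assumption-laden stand-in for the real system: actual devices use learned neural speaker embeddings and spectrogram masking, whereas here the embeddings are hand-picked two-dimensional vectors, "frames" are toy arrays, and the similarity threshold is chosen arbitrarily for the example.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def enroll(embedding_frames):
    """Average the embeddings captured during the brief enrollment window
    while the wearer faces the target speaker."""
    return np.mean(embedding_frames, axis=0)

def enhance(frames, target, threshold=0.8):
    """Pass audio frames whose speaker embedding matches the enrolled
    target; zero out other talkers and ambient noise."""
    return [audio if cosine(emb, target) >= threshold else np.zeros_like(audio)
            for audio, emb in frames]

# Hypothetical embeddings: target speaker near [1, 0], another talker near [0, 1].
target = enroll([np.array([0.9, 0.1]), np.array([1.0, 0.0])])

mixture = [
    (np.array([0.5, 0.5]), np.array([0.95, 0.05])),  # target-speaker frame
    (np.array([0.3, 0.3]), np.array([0.05, 0.95])),  # other-talker frame
]
cleaned = enhance(mixture, target)  # second frame is suppressed to silence
```

Because the decision is made per frame against the enrolled embedding rather than against a fixed direction, the target speaker remains enhanced even as the listener moves relative to them, which mirrors the behavior described above.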

Acoustic Bubbles

The integration of neural networks with noise-canceling technology has led to the development of headsets featuring customizable auditory zones, termed 'acoustic bubbles.' These devices empower wearers to concentrate on speakers within a specified spatial region while simultaneously attenuating extraneous sounds. Central to this technology is a neural network engineered for real-time audio signal processing and analysis (within one-hundredth of a second) on resource-constrained headset platforms. This optimized network is subsequently trained to discern the quantity of sound sources both within and outside the acoustic bubble, isolate these distinct sounds, and approximate the distance of each source—a computational challenge considered demanding even for human cognition. The neural networks are integrated into noise-canceling headsets equipped with an array of microphones, thereby creating a system capable of generating an acoustic bubble with a programmable radius spanning from 1 to 2 meters. Such acoustic bubble headsets facilitate selective auditory focus on proximate sounds while suppressing those originating from greater distances.
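The bubble's final gating step can be illustrated with a short sketch. This is not the published system: the hard part, estimating each source's distance from microphone-array signals, is done there by a trained neural network, and is simply assumed as input below; the source names, distances, and radius are invented for the example.

```python
# Toy sketch of the "acoustic bubble" gating stage: given per-source
# distance estimates (assumed here; inferred by a neural network from a
# microphone array in the real system), keep sources inside a
# programmable radius and silence the rest.

BUBBLE_RADIUS_M = 1.5  # programmable radius; 1-2 m in the headsets described

def apply_bubble(sources, radius=BUBBLE_RADIUS_M):
    """Return a per-source gain: 1.0 inside the bubble, 0.0 outside."""
    return {name: (1.0 if dist <= radius else 0.0)
            for name, dist in sources.items()}

# Hypothetical scene with estimated distances in meters.
estimated = {
    "conversation partner": 0.8,
    "nearby phone": 1.4,
    "espresso machine": 3.0,
    "street traffic": 12.0,
}
gains = apply_bubble(estimated)  # only sources within 1.5 m keep unit gain
```

A real implementation would apply soft, frequency-dependent gains to separated source signals rather than a binary per-source switch, but the radius-based selection logic is the same.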

See also: Cognitive inhibition
