
Empirical data put the bandwidth of human hearing at $20 \; Hz$ to $20 \; kHz$. A cochlear implant stimulates the auditory (cochlear) nerve directly, so that hearing can be improved in cases where the stimulation mechanism upstream of the cochlear nerve has degraded.

Let us assume that the ear mechanism has not degraded (as in a young, healthy adult). Even in this case, a cochlear implant could plausibly improve hearing by increasing the bandwidth, i.e. by amplifying the effect of the eardrum's vibration (sensor actuation). However, the neurons connecting the cochlear nerve to the auditory region of the brain have an upper limit on their sampling (firing) rate on the order of $1 \; kHz$.

Does the Nyquist sampling theorem limit the superhuman hearing and sound localization capability made possible by a cochlear implant?

https://en.wikipedia.org/wiki/Hearing_range#/media/File:Animal_hearing_frequency_range.svg
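
To make the premise concrete, here is a minimal numpy sketch of the aliasing being asked about (the $1 \; kHz$ rate is the neuronal figure mentioned above; the tone frequency is an arbitrary illustration): a 20.3 kHz tone sampled at 1 kHz is indistinguishable, sample for sample, from a 300 Hz tone.

```python
import numpy as np

fs = 1000.0          # hypothetical neuronal "sampling rate" (Hz)
f_tone = 20300.0     # tone far above fs/2 = 500 Hz
f_alias = 300.0      # 20300 mod 1000 = 300, folded into [0, fs/2]

t = np.arange(100) / fs                     # 100 samples at 1 kHz
x_tone = np.cos(2 * np.pi * f_tone * t)     # sampled 20.3 kHz tone
x_alias = np.cos(2 * np.pi * f_alias * t)   # sampled 300 Hz tone

# The two sample sequences are numerically identical: the samples
# alone cannot tell a 20.3 kHz tone from a 300 Hz one.
print(np.allclose(x_tone, x_alias))  # True
```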

kbakshi314
  • I think it's just the physiology that gets in the way: there simply aren't sensors and neurons specialized for those frequencies (i.e. there is no internal clock). – a concerned citizen Apr 08 '21 at 07:13
  • This is better asked on biology.SE. A greater cochlear frequency wouldn't necessarily imply the rest of the involved components can actually process the signal for hearing, I'd imagine. – OverLordGoldDragon Apr 08 '21 at 08:16
  • The number of neurons allocated is dynamic. Blinded kids may reallocate visual neurons to learn better audio echolocation than sighted adults. – hotpaw2 Apr 08 '21 at 09:59
  • BTW, that's a very interesting plot; I didn't know the hearing range of so many species was known. I wonder, though: why does a cow need that dynamic range? – MBaz Apr 08 '21 at 13:30
  • @MBaz Evolution gets rid of things if they're harmful, and lets useless things degrade. If it provides a tiny advantage, this hearing range will tend to be preserved. – wizzwizz4 Apr 08 '21 at 13:51
  • @wizzwizz4 Indeed. To me, it's intriguing to think of the evolutionary paths that led to such a wide hearing range for, say, a guinea pig. Some hearing ranges are obviously adapted to specific scenarios (like a bat's for echolocation, or a cat's as one more tool in a killing machine's toolkit), but others are not so clear to me. – MBaz Apr 08 '21 at 14:05
  • @MBaz Evolution is not purposeful. – wizzwizz4 Apr 08 '21 at 14:06
  • @wizzwizz4 Evolution is not purposeful in the sense of there being an active agent directing it, but it's not random either. Natural selection does quite a good job of steering evolution toward better-adapted forms of life. – Davor Apr 08 '21 at 14:42
  • @wizzwizz4 Evolution can be considered an algorithm for finding optimal solutions in a vast search space. In that sense, it arguably has a purpose, or at least it is not random (over the long term), as Davor says. – MBaz Apr 08 '21 at 15:24
  • @hotpaw2: at the lower, peripheral levels, neurons have more specialization and less plasticity than at the higher functions. You can't reallocate neurons from the optic nerve to the cochlear nerve. For starters, they are in the wrong spot, so you can't "wire" them in. – Hilmar Apr 08 '21 at 17:07
  • @Davor It steers up gradients. If there's no gradient, it doesn't do much; it won't remove hearing ability just because it's not necessary, unless there's some energy cost (or something) in keeping it around. Evolution doesn't have any sense of "too complicated for the current requirements" if it already has something lying around that it can repurpose. – wizzwizz4 Apr 08 '21 at 18:56
  • @MBaz Evolution doesn't have goals. No designs it's trying to reach. It's not trying to make a cow; it's "trying" to produce something slightly different from what's currently alive, in a way that makes it better at reproduction. (Where reproduction includes surviving to reproduce, and perhaps ensuring that the offspring survive to reproduce.) There's no accurate simplification of evolution that isn't just a description of evolution. – wizzwizz4 Apr 08 '21 at 19:00
  • @wizzwizz4 I think you are misinterpreting what I have said. Let's leave it at that, since it is quite off-topic. – MBaz Apr 08 '21 at 23:15
  • This is a great plot. Fascinating to see that humans, compared to these other animals, are really good at perceiving very low frequencies. But then again, it means perceiving something as a tone rather than a repetition. Is it really better perception in that sense? – Kafein Apr 09 '21 at 10:18
  • This is a really cool chart. – robert bristow-johnson Apr 09 '21 at 17:00

2 Answers


Does the Nyquist frequency of the Cochlear nerve impose the fundamental limit on human hearing?

No.

A quick run-through of the human auditory system:

  1. The outer ear (pinna, ear canal) spatially "encodes" the sound's direction of incidence and funnels the sound pressure towards the
  2. ear drum, which converts sound into physical motion, i.e. mechanical energy.
  3. The middle ear (ossicles) is a mechanical transformer (with some protective limiting built in) that impedance-matches the air-loaded ear drum to the liquid-loaded oval window of the
  4. cochlea (inner ear). The vibration excites a bending wave on the basilar membrane. The membrane is highly resonant and transcodes frequency into location: the resonance peak for each frequency occurs at a different spot. High frequencies wiggle very close to the oval window, low frequencies towards the far end. This motion is picked up by the
  5. cochlear neurons, which transmit the intensity of the excitation at their location to the brain. About 20% of the neurons are efferent (they come out of the brain) and are used to actively tune the resonance with a feedback loop (which causes tinnitus if misadjusted).

So in essence the basilar membrane performs a sort of mechanical Fourier transform. The frequency selectivity of the neurons is NOT determined by their firing pattern but simply by their location. A neuron at the beginning of the basilar membrane is sensitive to high frequencies, and a neuron at the end detects low frequencies. But they are more or less the same type of neuron.
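
To illustrate the place-coding idea, here is a small sketch that uses a bank of bandpass filters as a crude stand-in for the basilar membrane (the channel center frequencies and half-octave bandwidths are made-up illustration values, not physiological ones): each "neuron" reports only the intensity in its own band, never the waveform.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000                      # audio sample rate (Hz)
t = np.arange(fs) / fs          # 1 second of signal
# Test signal: a 500 Hz tone plus a weaker 8 kHz tone
x = np.cos(2*np.pi*500*t) + 0.3*np.cos(2*np.pi*8000*t)

# Stand-in "basilar membrane": a few bandpass channels, one per "neuron".
centers = [250, 500, 1000, 2000, 4000, 8000, 16000]

for fc in centers:
    lo, hi = fc / 2**0.25, fc * 2**0.25       # half-octave band
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    y = sosfilt(sos, x)
    intensity = np.sqrt(np.mean(y**2))        # RMS level in this band
    # WHICH channel fires (fc) encodes frequency; HOW HARD it fires
    # (intensity) encodes level. No channel carries the raw waveform.
    print(f"{fc:5d} Hz channel: RMS = {intensity:.3f}")
```

Running this prints strong activity on the 500 Hz and 8 kHz channels and near-silence elsewhere, which is exactly the location-based encoding described above.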

The Nyquist criterion doesn't come into play at all, since no neuron is trying to pick up the original time-domain waveform. They couldn't anyway: human neurons have a maximum firing rate of less than 1000 Hz, and average firing rates are way below that. The firing rate of a cochlear neuron represents "intensity at a certain frequency", where that frequency is determined by the location of that specific neuron.

So you can think of it as a short-term Fourier transform. Instead of a single time-domain signal you get a parallel stream of frequency-domain signals, where each individual signal has a much lower bandwidth.
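
A sketch of that point, under the same made-up assumptions as above: the envelope of even a 10 kHz channel varies slowly, so it can be represented at a couple of hundred samples per second, comfortably within a sub-1 kHz firing rate; only the raw carrier would need > 20 kHz sampling.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

fs = 48000
t = np.arange(fs) / fs
# A 10 kHz carrier whose loudness is modulated at 8 Hz
x = (1 + 0.8*np.cos(2*np.pi*8*t)) * np.cos(2*np.pi*10000*t)

# Isolate the "10 kHz channel"
sos = butter(4, [9000, 11000], btype="bandpass", fs=fs, output="sos")
y = sosfilt(sos, x)

# Envelope: the slowly varying quantity a firing rate would track
env = np.abs(hilbert(y))

# Decimating the envelope to 200 samples/s loses almost nothing,
# because the envelope is only a few Hz wide, unlike the 10 kHz carrier.
env_slow = env[::240]            # 48000 / 240 = 200 samples/s
print(len(env), "->", len(env_slow), "samples")
```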

A cochlear implant basically does the short-term Fourier transform internally and then connects the output for each frequency range to the "matching" neurons in the cochlear nerve. Theoretically you could create ">20 kHz" hearing with an implant that can actually receive and process higher frequencies and simply routes them to existing neurons, i.e. you could feed 40 kHz activity to the 10 kHz neuron. The human would have a sensation when exposed to 40 kHz, but it's unclear what they could do with it: they would have to "relearn" how to hear. Aside from the highly questionable practical and ethical issues, it probably wouldn't be useful. In order to get to 40 kHz you'd have to give up some other frequencies, and chances are that evolution has chosen the current "normal" range for humans pretty carefully.
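
A hedged sketch of that routing idea (all frequencies and rates are illustrative; this is not how any real implant's firmware works): measure the energy in a hypothetical 35-45 kHz analysis band of a wide-band input and use it to drive the channel that normally encodes 10 kHz.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 192000                       # wide-band input rate, enough for 40 kHz
t = np.arange(fs // 10) / fs      # 100 ms of input
x = 0.5 * np.cos(2*np.pi*40000*t) # ultrasonic content at 40 kHz

# Energy in the (hypothetical) 35-45 kHz analysis band
sos = butter(4, [35000, 45000], btype="bandpass", fs=fs, output="sos")
band_energy = np.sqrt(np.mean(sosfilt(sos, x)**2))

# Route that energy to the electrode/neuron that normally encodes 10 kHz.
# The brain would register "something at 10 kHz" whose loudness tracks
# the 40 kHz activity -- a sensation, not true 40 kHz hearing.
stimulation = {"channel_10kHz": band_energy}
print(stimulation)
```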

Hilmar
  • "You could feed 40 kHz activity to the 10 kHz neuron" – isn't this how cochlear implants work in some cases? If a person can only hear in a narrow range of frequencies, they can get an implant or hearing aid that down-shifts the sound into that range? – eipi Apr 08 '21 at 16:38
  • Depends. Cochlear implants are most valuable if the cochlea itself or the cilia (hair cells) are damaged but the cochlear nerve is still mostly intact. In this case you would route the frequency bands to those nerve endings that are the most "natural" fit. If there is also damage to the cochlear nerve, you can indeed try to reroute the most important frequency bands (for speech intelligibility) to whatever nerve endings are still functional. That's tricky, though. – Hilmar Apr 08 '21 at 17:03
  • I'm always so amazed at how many parts there are to the auditory system; my best friend is a speech-language pathologist and we've talked about this before. Just a two-way conversation between two people, like hearing someone talk and saying something back, has so many parts to it, and so many ways for it to go wrong. We all take it for granted. It's such a fascinating crossover between biology and engineering. I would have loved to go into it as a DSP engineer. – eipi Apr 08 '21 at 19:09
  • @Hilmar thanks for the answer. Is it a mistake where the answer says "you could feed 40 kHz activity to the 10 kHz neuron", since you mentioned earlier that the highest frequency of neuronal firing is ~1 kHz? Second, the answer implies that although the sampling is occurring at ~1 kHz at the neuronal level, you can feed 40 kHz to it. Since this means aliased input is being provided to the brain, the natural corollary is that the brain can use aliased input for, say, sound localization, but it is subject to lower accuracy due to the aliasing. Please comment on the corollary. – kbakshi314 Apr 08 '21 at 19:20
  • @kb314 "The 10 kHz neuron" means the one that's triggered by the hair that responds to 10 kHz sounds. You could run your own Fourier transform and trigger that neuron for 40 kHz sounds instead. – Tavian Barnes Apr 08 '21 at 19:50
  • @TavianBarnes I see that you explained that the "10 kHz" neuron refers to the input frequency of the hair corresponding to the neuron we're addressing. So the implant, technically, can trigger the hair at 40 kHz or 80 kHz to increase the possible maximum analog frequency. However, the second question still stands (brief summary repeated next). Although the sampling is occurring at ~1 kHz at the neuronal level, you feed 40 kHz to it. Since aliased input is provided to the brain, the corollary is that the brain can use aliased signals for localization, subject to lower accuracy due to the aliasing. – kbakshi314 Apr 08 '21 at 19:59
  • @kb314: the firing rate of the neuron is NOT related to the frequency itself. It's related to the intensity at that frequency band. It's like the cone cells in your eye: the S-cell does not try to follow the actual light waveform (which is a whopping 714 THz) but encodes the intensity at that frequency. The firing rate of a neuron is typically in the tens to hundreds per second for strong stimulation, regardless of frequency. – Hilmar Apr 08 '21 at 19:59
  • @Hilmar I see. Does this mean that the way nature deals with the filtering issue is redundancy? Different hairs correspond to different frequencies, so that knowing which hair is stimulated and its amplitude provides sufficient information about the intensity as a function of frequency. However, if you had just one hair, the brain would have terrible prediction accuracy. Please comment on this in relation to the problem I'm indicating as follows: can an aliased signal be used for a binary classification task via an artificial neural network, or is it destined for poor prediction accuracy due to the aliasing? – kbakshi314 Apr 08 '21 at 20:09
  • @kb314: Redundancy is really the wrong word here. Each ear has about 4000 afferent neurons. It's a massively parallel processing architecture by design, and it works remarkably well by all quantitative metrics. Trying to compare this to what you can do with a single hair cell is pointless. If you have issues with a neural network, it may be best to ask a new question. – Hilmar Apr 08 '21 at 20:49
  • @Hilmar thanks for the comment. I think I understand. I agree that a separate post for a different question is best. – kbakshi314 Apr 09 '21 at 00:57
  • I don't see any ethical issues with someone reprogramming their own cochlear implant to hear ultrasonics, presumably with a toggle switch. A bit similar to the way people get magnet implants to sense magnetic fields. – user253751 Apr 09 '21 at 11:42
  • Great answer. So we basically have a whole lot of microphones in our ears which only respond to specific frequencies (which can be tuned?) and combine the volume of all those microphones into one impression. – Arsenal Apr 09 '21 at 13:43
  • Feeding a 40 kHz frequency bin to a 10 kHz neuron would essentially be a heterodyning operation... It's not quite the same as giving someone 40 kHz hearing. The brain would just hear it as 10 kHz, not 40 kHz, and you don't need a cochlear implant to do this; you can do it with a DSP processor and a pair of headphones... – stix Apr 09 '21 at 17:45

The auditory system encodes sound in the frequency domain: the activation level of an auditory nerve fiber represents the amplitude, or energy, in the frequency band assigned to that particular fiber. The ear itself does the transformation from the time domain to the frequency domain.

If you somehow modified the ear itself to be sensitive to higher frequencies, the output axons would have no problem handling that, because the frequency information is encoded in the selection of the axon, not in its firing rate. Modifying a functional ear is generally frowned upon, but if the ear is non-functional, all bets are off. For example, a cochlear implant can give you decent echolocation abilities if it has a wider input bandwidth and maps this wider frequency range onto the existing axons. With a suitable (even purely mechanical) clicker that produces short acoustic pulses, you can "see" with your ears. Humans have some rudimentary echolocation ability even without such aids; it requires practice to be able to use it, of course. Widening the ear's functional bandwidth improves the spatial resolution.
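
As a rough sanity check on that last claim, the standard pulse-echo rule of thumb (borrowed from sonar/radar, not specific to implants) says range resolution scales as $c/(2B)$, so widening the usable bandwidth $B$ directly sharpens the spatial picture:

```python
# Range resolution of a pulse-echo ("clicker") system: delta_r = c / (2*B).
# Illustrative numbers only.
c = 343.0                          # speed of sound in air, m/s

for B in (5e3, 20e3, 40e3):        # usable acoustic bandwidth, Hz
    delta_r = c / (2 * B)
    print(f"B = {B/1e3:5.0f} kHz -> resolution ~ {delta_r*100:.1f} cm")
```

Going from a 5 kHz to a 40 kHz usable bandwidth tightens the resolution from roughly 3.4 cm to about 0.4 cm, which is why wider-band input helps echolocation.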