
Professor Nai Ding’s group published in eLife: Low-frequency neural activity reflects high-level speech processing

Publisher: College of Biomedical Engineering and Instrument Science    Date: 2022-05-13

    From digital personal assistants like Siri and Alexa to customer service chatbots, computers are slowly learning to talk to us. But as anyone who has interacted with them will appreciate, the results are often imperfect.

    Each time we speak or write, we use grammatical rules to combine words in a specific order. These rules enable us to produce new sentences that we have never seen or heard before, and to understand the sentences of others. But computer scientists adopt a different strategy when training computers to use language. Instead of grammar, they provide the computers with vast numbers of example sentences and phrases. The computers then use this input to calculate how likely one word is to follow another in a given context. "The sky is blue", for example, is more common than "the sky is green".
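    To illustrate the probability-based approach described above, here is a minimal sketch (not taken from the study) that estimates next-word probabilities from a tiny invented corpus; the sentences and counts are made up purely for the example.

```python
from collections import Counter, defaultdict

# Toy corpus -- invented example sentences, not data from the study.
corpus = [
    "the sky is blue",
    "the sky is blue today",
    "the grass is green",
    "the sea is blue",
]

# Count how often each word follows each preceding word (bigram counts).
bigram_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigram_counts[prev][nxt] += 1

def next_word_probability(prev, nxt):
    """P(next word | previous word), estimated from raw bigram counts."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][nxt] / total if total else 0.0

# In this toy corpus, "blue" is a more likely continuation of "is" than "green".
print(next_word_probability("is", "blue"))   # 0.75
print(next_word_probability("is", "green"))  # 0.25
```

    Modern language models condition on much richer context than a single preceding word, but the underlying idea is the same: predict the next word from the statistics of previously seen text rather than from explicit grammatical rules.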

    But is it possible that the human brain also uses this approach? When we listen to speech, the brain shows patterns of activity that correspond to units such as sentences. But previous research has been unable to tell whether the brain is using grammatical rules to recognise sentences, or whether it relies on a probability-based approach like a computer.

    Using a simple artificial language, Professor Nai Ding's group have now managed to tease apart these alternatives. Healthy volunteers listened to lists of words while lying inside a brain scanner. The volunteers had to group the words into pairs, otherwise known as chunks, by following various rules that simulated the grammatical rules present in natural languages. Crucially, the volunteers' brain activity tracked the chunks, which differed depending on which rule had been applied, rather than the individual words. This suggests that the brain processes speech using abstract rules instead of word probabilities.

    While computers are now much better at processing language, they still perform worse than people. Understanding how the human brain solves this task could ultimately help to improve the performance of personal digital assistants.


This research was supported by the National Natural Science Foundation of China (Grant No. 1771248), Major Scientific Research Project of Zhejiang Lab (Grant No. 2019KB0AC02), Zhejiang Provincial Natural Science Foundation of China (Grant No. LY20C090008), and Fundamental Research Funds for the Central Universities. 

Jin, P., Lu, Y., and Ding, N.* (2020). Low-frequency Neural Activity Reflects Rule-based Chunking during Speech Listening. eLife 9:e55613. doi: 10.7554/eLife.55613