
HSE Scientists Uncover How Authoritativeness Shapes Trust

Researchers at the HSE Institute for Cognitive Neuroscience have studied how the brain responds to audio deepfakes—realistic fake speech recordings created using AI. The study shows that people tend to trust the current opinion of an authoritative speaker even when new statements contradict the speaker’s previous position. This effect also occurs when the statement conflicts with the listener’s internal attitudes. The research has been published in the journal NeuroImage.

Modern deepfakes are becoming increasingly difficult to distinguish from genuine recordings and are used ever more frequently to spread false information. In healthcare, the consequences of disinformation are particularly dangerous, as they pose a direct threat to public health.

Researchers from the HSE Institute for Cognitive Neuroscience (ICN) conducted an experiment to examine how people perceive audio deepfakes attributed to celebrities who speak either in favour of or against COVID-19 vaccination.

The study involved 61 participants. Half of them supported vaccination, while the other half opposed it. The participants listened to AI-generated audio recordings of well-known opinion leaders—a doctor who supports vaccination and a popular actress known for her anti-vaccination stance. While listening, the participants’ brain activity was recorded using electroencephalography (EEG). At a certain point, the speakers uttered statements that contradicted their real public positions: the doctor unexpectedly said that COVID vaccinations were unnecessary, while the actress, on the contrary, emphasised the need for vaccination. In these cases, the EEG recorded the N400 component—a brain response to semantic incongruity that occurs approximately 400 milliseconds after we see or hear an unexpected stimulus. The greater the incongruity, the stronger the signal.
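
The N400 analysis described above can be illustrated with a minimal sketch. This is not the authors' pipeline: it simulates single-trial EEG epochs for congruent and incongruent statements (all amplitudes, the sampling rate, and the 300–500 ms measurement window are assumptions for illustration) and compares the mean amplitude in the N400 window, where a larger negativity indicates a stronger incongruity response.

```python
import numpy as np

rng = np.random.default_rng(0)

sfreq = 250                                # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / sfreq)    # epoch from -200 ms to +800 ms

def simulate_epochs(n_trials, n400_amplitude_uv):
    """Return (n_trials, n_times) epochs: Gaussian noise plus a negative
    deflection peaking near 400 ms, scaled by n400_amplitude_uv."""
    noise = rng.normal(0.0, 2.0, size=(n_trials, times.size))
    # Gaussian-shaped negativity centred at 400 ms, ~60 ms wide
    n400 = -n400_amplitude_uv * np.exp(-((times - 0.4) ** 2) / (2 * 0.06 ** 2))
    return noise + n400

def n400_mean_amplitude(epochs):
    """Baseline-correct each trial against the pre-stimulus interval,
    average across trials, and return the mean amplitude (in µV) in the
    300-500 ms window conventionally used for the N400."""
    baseline = epochs[:, times < 0].mean(axis=1, keepdims=True)
    erp = (epochs - baseline).mean(axis=0)
    window = (times >= 0.3) & (times <= 0.5)
    return erp[window].mean()

# A semantically incongruent statement is modelled with a larger negativity.
congruent = simulate_epochs(n_trials=60, n400_amplitude_uv=1.0)
incongruent = simulate_epochs(n_trials=60, n400_amplitude_uv=5.0)

print(f"congruent N400 window:   {n400_mean_amplitude(congruent):.2f} µV")
print(f"incongruent N400 window: {n400_mean_amplitude(incongruent):.2f} µV")
```

In real studies the same logic is applied to recorded EEG after artifact rejection, and the difference in mean window amplitude between conditions is tested statistically across participants.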

Experiment Design
© E. Monahhova et al.; ERP correlates of semantic inconsistencies in deepfakes; NeuroImage, 2026

Data analysis showed that, regardless of their own attitudes, participants rated the doctor’s statements more highly across all measures: they found them more persuasive and authoritative, considered them more trustworthy, and were more willing to share the broadcast information with friends and acquaintances. The EEG recorded the N400 component when the doctor spoke out against COVID-19 vaccination; by contrast, this response was significantly weaker or entirely absent when contradictory statements came from the actress, who is less authoritative in medical matters.

‘Initially, we assumed that participants’ internal attitudes would influence how they perceived the audio recording. That is why we first established whether they supported vaccination or opposed it and divided them into two groups. In addition, we carried out special personality tests to assess their level of analytical thinking, need for cognition, and conformity. However, it turned out that when participants listened to the deepfakes, all these parameters were almost irrelevant. The decisive factor was the speakers’ authoritativeness in the medical field,’ explains Eliana Monahhova, first author of the article and Junior Research Fellow at the Centre for Cognition and Decision Making, HSE Institute for Cognitive Neuroscience.

The findings are important for understanding the mechanisms behind the spread of disinformation. They show that messages attributed to authoritative sources can have a strong impact on audiences even if they contain internal contradictions and diverge from the speaker’s public stance.

‘To the best of our knowledge, this is the first study to examine the neurocognitive mechanisms involved in processing semantic contradictions in deepfakes from the perspective of message and source credibility. Understanding these mechanisms makes it possible to develop more effective strategies to counter digital fraud and information manipulation,’ said Eliana Monahhova.

The study was carried out with the support of Russian Science Foundation grant No. 24-18-00432, ‘Neurophysiological Mechanisms of Perceiving Manipulative Information: Factors and Strategies of Resilience’.