People perceive silence much as they hear sounds, according to a study published in the Proceedings of the National Academy of Sciences, The New York Times reports.

While the experiments don’t reveal how the brain processes silence, the results suggest that people perceive silence as its own type of “sound,” rather than as a mere gap between sounds.

Chaz Firestone, a cognitive scientist at Johns Hopkins University and one of the authors of the study, said that if silence is “not really a sound, and yet it turns out that we can hear it, then evidently, hearing is about more than just sounds.”

So he, along with Rui Zhe Goh, a graduate student in cognitive science and philosophy at Johns Hopkins University, and philosopher Ian Phillips, wondered: Does the mind treat silence the same way it treats sounds?

The study relied on a series of auditory illusions. The first test compared one long sound with two short ones. Together, the two shorter sounds lasted exactly as long as the single longer one. Yet when people listened, they perceived the single long sound as lasting longer than the two short sounds combined.

To apply this illusion to silence, Rui Zhe Goh and his colleagues inverted the test: instead of sounds played against silence, they embedded silences in ambient noise. Using recordings of restaurants, markets, trains and playgrounds, the researchers inserted fragments of silence for participants to compare.

The researchers hypothesized that if people perceive silence as its own type of sound, then silence should be subject to the same illusion as sounds: one long silence should be perceived as lasting longer than two shorter silences combined. But if people perceive silence merely as the absence of sound, the illusion should not occur.

In further tests, silence was “placed” in different contexts to create additional auditory illusions. In each case, listeners experienced the illusion of a longer period of silence in the same way they would experience the illusion of a longer sound.
