What is sound?
Physics tells us that sound is a vibration that propagates as a mechanical pressure wave through a medium such as air or water. In human terms, sound is our brain’s perception of that wave based on its effect on our ears.
Frequency is the number of sound vibrations per second, measured in Hertz (Hz). The decibel (dB) is a measure of volume – the amplitude, or pressure, of the sound wave. A normal speaking voice is about 60 dB, while a shotgun blast might reach 140 dB.
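Because the decibel scale is logarithmic, the jump from 60 dB to 140 dB is far larger than the numbers suggest. A minimal sketch of the standard sound-pressure-level formula (the 20 µPa reference pressure and the ~0.02 Pa figure for conversational speech are common textbook values, not from this article):

```python
import math

P_REF = 20e-6  # reference pressure in pascals, roughly the threshold of human hearing

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in decibels for a given RMS pressure in pascals."""
    return 20 * math.log10(pressure_pa / P_REF)

# A normal speaking voice (~0.02 Pa) comes out near the 60 dB mentioned above:
print(round(spl_db(0.02)))  # → 60
```

Each additional 20 dB corresponds to a tenfold increase in pressure, so a 140 dB blast carries 10,000 times the pressure of a 60 dB voice.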
Acoustics is the study of these mechanical waves, including vibration, ultrasound (frequencies higher than the human ear can detect), and infrasound (frequencies lower than the human ear can detect). A scientist working in these areas might be called an acoustical engineer, while an audio engineer is concerned with recording, manipulating, and reproducing sound.
Acoustic applications are found in virtually every aspect of modern life, from ultrasound scanners to a child’s slide whistle. Acoustical engineers might also be concerned with reducing unwanted sound, or with eliminating the white noise produced by electrical interference.
How does sound work?
Sound waves are generated by an oscillating source, such as our vocal cords or the vibrating diaphragm of a stereo speaker. These vibrations travel through a surrounding medium – air, water, or even solid objects – as the wave displaces particles. While the average position of the particles doesn’t change, high-frequency or high-amplitude sounds can still damage or destroy certain objects, including components of the human ear. The pressure and velocity of sound also vary with the density of the medium through which it passes and with the distance from the source.
The propagation of sound waves is generally determined by:
1. The relative pressure and density between the sound wave and the medium
2. The relative motion of the medium and sound source (for example, sound in strong wind as opposed to still air, or sound from a passing vehicle vs a stationary source)
3. The resistance of the medium, usually negligible in air and water. Higher resistance means faster degradation of the sound wave.
When sound passes through a medium whose physical properties vary, it may be dispersed, as in sound-proofing ceiling tiles, or even focused into a powerful beam, as in the acoustic devices some police and military units use for crowd control. Sound cannot travel through a vacuum (an absence of any particles).
The speed of a sound wave is determined not by its volume (intensity) but by the medium through which it passes. In natural atmosphere at sea level, that’s approximately 767 mph (about 343 m/s).
How do we hear it?
Sound is a pressure wave, so our idea of sound is really an effect of the mind. Psychoacoustics is the study of this effect. Physical reception of sound is limited by our own ears: the normal range for humans is between 20 and 20,000 Hz. That range differs widely between species; dogs can hear higher frequencies than humans, but nothing below about 40 Hz.
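A hearing range is just a frequency interval, so comparing species is a simple bounds check. A toy sketch using the figures above (the dog’s 45 kHz upper limit is an assumption drawn from typical published estimates, not stated in this article):

```python
# Approximate, illustrative hearing ranges in Hz (low, high).
HEARING_RANGES_HZ = {
    "human": (20, 20_000),
    "dog": (40, 45_000),  # upper bound assumed from commonly cited figures
}

def can_hear(species: str, freq_hz: float) -> bool:
    """True if the frequency falls inside the species' approximate hearing range."""
    low, high = HEARING_RANGES_HZ[species]
    return low <= freq_hz <= high

print(can_hear("human", 30_000))  # → False: ultrasonic to us
print(can_hear("dog", 30_000))    # → True: well within a dog's range
```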
In virtually all species, sound is a major sense for detecting change in the outside world, sounding the alarm for potential danger, and communicating. Some species, such as bats and dolphins, can generate their own sound to aid in detecting objects through echolocation. Every species makes its own unique sound; only a few, such as humans and certain birds, can imitate the sounds of others.
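Echolocation rests on simple arithmetic: a pulse travels out, reflects, and returns, so the distance is the speed of sound times half the round-trip time. A minimal sketch (the 343 m/s figure is the sea-level speed of sound in air from earlier in the article; the example timing is hypothetical):

```python
SPEED_OF_SOUND_MPS = 343.0  # speed of sound in air at sea level, m/s

def echo_distance_m(round_trip_s: float, speed_mps: float = SPEED_OF_SOUND_MPS) -> float:
    """Distance to an object, given the time for an emitted pulse's echo to return."""
    # The pulse covers the distance twice (out and back), hence the halving.
    return speed_mps * round_trip_s / 2

# An echo returning after a tenth of a second implies an object about 17 m away:
print(round(echo_distance_m(0.1), 2))  # → 17.15
```

Bats and dolphins effectively perform this computation continuously, with much shorter pulses and far finer timing.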
There are six factors that determine how our brains interpret sound:
1. Pitch – perceived as the highness or lowness of a sound’s vibration. Pitch perception can vary widely between individuals when it comes to complex sounds, accounting for differences in musical taste. It is often shaped by personal experience, which also affects the pitch we use when making our own sounds. In music, and to some extent language, sounds are analyzed as a pattern of rising and falling pitch.
2. Duration – how long or short a single sound is, which relates to our nervous system’s on-off response. A sound’s perceived duration lasts from the moment we notice it until it ceases, or until we perceive it as having changed. In an environment such as a busy restaurant, where many conversations take place at once, all the stops and starts blur into a continuous stream of background noise that we learn to tune out.
3. Volume – our recognition of the “loudness” of a particular sound, determined by the total stimulation of the auditory nerve. A very short sound such as a firecracker can seem softer than a human shout, even though it actually has greater amplitude, simply because our brains have less time to perceive and interpret it.
4. Timbre – the perception of a sound’s distinct qualities, the “sonic footprint” our brains associate with it. The way a sound changes as we hear it gives our brains most of the information for determining timbre. For example, the same note played on a flute and a saxophone might have very similar loudness, pitch, and duration, but our brains interpret them as two entirely different timbres.
5. Texture – the number of sound sources and the interaction or pattern between them. A full symphony orchestra has far more texture than a string quartet, and a string quartet far more than a barking dog. This relates to our cognitive separation of the sounds – if that orchestra were composed entirely of trumpets, it would sound much more like unpleasant noise.
6. Location – spatial location within an environment has a lot to do with how we perceive sound. The relative timbre, angle, and distance of the sound source are characteristics our brains recognize and interpret. That’s why we can often focus on a single voice even in that busy restaurant.
Any sound we don’t like, or which we hear but can’t interpret – that’s just noise.