Digital Sound

Signal Converters (ADC & DAC)


The digital era of audio reproduction builds on the advances of the electrical era. It relies on several discoveries, two of the most important being the invention of the transistor and of audio signal converters.

Signal converters were basically circuits built around semiconductor chips that converted or compared incoming electric signals. Since the 1930s, sound waves had been converted into electricity and then back into sound; what signal converters allowed was for electricity to be converted into digital information. Many variations on signal converters exist, and no single person is wholly responsible for any one of them, but they typically come in two types: digital-to-analog converters (DACs) and analog-to-digital converters (ADCs). The first converts binary into electricity, and the second converts electricity into binary (Kester, 2015).
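To make the two directions concrete, here is a minimal sketch of that round trip in Python with NumPy; the ±1.0 full-scale range and 16-bit code width are illustrative assumptions rather than any particular chip’s specification:

```python
import numpy as np

def adc(voltage, full_scale=1.0, bits=16):
    """Quantize an 'analog' voltage in [-full_scale, +full_scale] to a signed integer code."""
    levels = 2 ** (bits - 1) - 1                 # 32767 for 16 bits
    clipped = np.clip(voltage, -full_scale, full_scale)
    return np.round(clipped / full_scale * levels).astype(np.int16)

def dac(code, full_scale=1.0, bits=16):
    """Map an integer code back to a voltage."""
    levels = 2 ** (bits - 1) - 1
    return code.astype(np.float64) / levels * full_scale

v_in = np.array([0.0, 0.25, -0.5, 0.99])  # incoming signal levels
codes = adc(v_in)                          # electricity -> binary
v_out = dac(codes)                         # binary -> electricity
print(codes)   # [     0   8192 -16384  32439]
print(v_out)   # matches v_in to within one quantization step
```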

This process was not without its limitations, however. Early quantization of information, particularly of sound, relied on hugely cumbersome and unwieldy machinery. The earliest computers were essentially modifications of the telegraph and telephone: machines with great potential but limited functionality, able to send only slightly more information than a telegraph or telephone could. As proofs of concept, however, they proved revolutionary. Crucially, electric signals in these early machines still relied on vacuum tubes for amplification, which consumed enormous amounts of power and forced designs that were relatively large and dangerous (Computer History: Timeline of Computers). All of this changed with the invention of the transistor.


The Transistor


A transistor performs the same function as a triode, without many of the downsides. It was discovered, like many other inventions, by accident. Drs. Bardeen and Brattain, two scientists at Bell Labs, happened to connect two gold-tipped wire ends of a circuit to a piece of germanium, and found that the metalloid amplified the current. The implications were enormous: unlike tube amplifiers, transistors drew only the power already in the circuit. Moreover, transistors could be made incredibly small, down to 7 nanometers, or 0.000000007 m, which directly contributed to the explosion in portable audio machines: from radios, to tape players, to boom boxes and keyboards (ETHW: Invention of the First Transistor) (NPR: IBM Announces Smallest Transistor).

The combination of signal converters with transistors made digital recording of sound a reality. By taking the known decay of some electric signal in a circuit, and sampling an audio signal at a rate governed by that signal’s decay, it became possible to store the information encoded in the vibrational energy of the air as a series of bytes, or chunks of sound. Where this differed from previous recording techniques is that analog sound (the sound you hear from a violin, or your own voice) exists as a continuum of information and energy with no theoretical upper limit: it has infinite resolution. Computers, unlike our brains and ears, operate on discrete, finite values, and something with infinite resolution is impossible to calculate, let alone sample or fathom. Fortunately for human beings, in order to enjoy a piece of music, we don’t need to: there are physiological and psychological limits to what our brain registers as music. It is a fact that the Moving Picture Experts Group, and to a lesser extent Sony and Philips, would take advantage of in standardizing the MP3 compression algorithm and the Compact Disc (Peek, 2010) (Stanford: The MP3).
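A minimal sketch of that storage step, again in Python (the 8 kHz rate, 440 Hz tone, and one-second duration are arbitrary illustrative choices), shows how sampling reduces a continuous waveform to a finite, countable series of bytes:

```python
import numpy as np

SAMPLE_RATE = 8000   # an arbitrary toy rate; CDs would later use 44,100 Hz
DURATION = 1.0       # seconds of audio

# A "continuous" 440 Hz tone, evaluated only at the discrete sample instants.
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
signal = np.sin(2 * np.pi * 440 * t)

# Quantize each sample to 16 bits and view the result as raw bytes.
samples = np.round(signal * 32767).astype(np.int16)
raw = samples.tobytes()

print(len(samples))  # 8000 samples stand in for the infinite continuum
print(len(raw))      # 16000 bytes: one second of sound as finite "chunks"
```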


Compact Disc


It was the research conducted at Philips and Sony that set the sample rate at which audio is now generally captured: 44.1 kHz, with 16 bits per sample. It is of course no coincidence that this allows playback of frequencies up to about 22 kHz, just over the upper limit of human hearing, since a digital recording can only reproduce frequencies up to half its sample rate (the Nyquist limit). The audio files on CDs, while of good quality, tended to be too large for the network bandwidth available at the time, which is when compression algorithms stepped in to offer a neat, but not uncontroversial, work-around (Peek, 2010).
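A back-of-the-envelope calculation makes the bandwidth problem concrete (the four-minute song and the 56 kbps dial-up modem are assumed figures for illustration):

```python
sample_rate = 44_100   # samples per second, per channel
bit_depth = 16         # bits per sample
channels = 2           # stereo

cd_bps = sample_rate * bit_depth * channels
print(cd_bps)                          # 1,411,200 bits/s, about 1.4 Mbps

song_bits = cd_bps * 4 * 60            # a four-minute song
print(song_bits / 8 / 1_000_000)       # ~42 MB uncompressed

modem_bps = 56_000                     # assumed late-1990s dial-up speed
print(song_bits / modem_bps / 60)      # ~100 minutes to download
```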

MPEG 1-2 (MP3)

MP3s were designed from the ground up to maximize audio quality while minimizing file size. They did this by using an algorithm to remove the perceptually non-essential parts of an audio track, like background fuzz, the squeak of a chair, a random crash, or the humming of the performer, so that a sound file would be small enough to send over telephone networks. Without MP3s there would be no Napster, no Spotify, and no iPod. Napster, an early form of what would become the model for the iTunes music store, relied on the internet and compression algorithms to share music files between users. Similarly, Spotify, a company that streams music digitally, depends on the reliable compression of audio files, without which the buffering time for individual songs would be closer to minutes than seconds. In fact, nearly all forms of modern digital audio playback rely in some way on compression (Menn, 2003).
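Real MP3 encoders rely on a psychoacoustic model, a filter bank, and Huffman coding, none of which is reproduced here; the toy Python sketch below only illustrates the core move of discarding a signal’s quietest components:

```python
import numpy as np

def toy_lossy_compress(samples, keep_fraction=0.1):
    """Keep only the strongest frequency components, zeroing the rest.
    A crude stand-in for perceptual coding, not the MP3 algorithm."""
    spectrum = np.fft.rfft(samples)
    magnitudes = np.abs(spectrum)
    cutoff = np.quantile(magnitudes, 1 - keep_fraction)
    spectrum[magnitudes < cutoff] = 0   # discard the "non-essential" parts
    return spectrum                     # mostly zeros, so cheap to store

def toy_decompress(spectrum, length):
    return np.fft.irfft(spectrum, length)

# A tone buried in low-level noise (the "background fuzz").
t = np.arange(4096) / 44_100
audio = np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.randn(4096)

restored = toy_decompress(toy_lossy_compress(audio), len(audio))
# The dominant tone survives; most of the fuzz does not.
```

Keeping roughly a tenth of the frequency components preserves the dominant tone while shedding most of the data, which is the trade-off at the heart of all lossy compression.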


It is not surprising, then, that compression and its drawbacks tend to be the pivot around which much of the debate over audio fidelity in the digital era turns. With the exception of the relatively niche demographic of 8-bit musicians and other sound artists, low-bitrate MP3s are widely regarded as sounding bad. In the early days of digital music distribution, the low bandwidth of most internet connections tended to force distributors to emphasize smaller file size over better audio quality. Indeed, Napster’s early downloads were infamously unpredictable and often full of encoding errors, in which significant sections of audio could be missing or jumbled, resulting in pops and crackles during playback. This situation prompted audio dilettantes like Joseph Plambeck of the New York Times to lament (oddly, ten years after Napster, in 2010) that music listening had fundamentally changed because of audio compression: “People used to listen to music… instead music is often carried from place to place, played in the background…” (Plambeck, 2010)

Aside from the fact that Plambeck willfully ignores aspects of audio playback beyond file size that are just as important to audio quality (is it being listened to over headphones? speakers? in a car?), it is the sentiment as a whole that is misplaced. The notion that our ability to appreciate music has somehow fundamentally changed because of audio compression relies on the same “heights and abyss” argument regarding the phonograph that Adorno made decades prior. It presumes an era in which we ‘really’ listened, in which ‘true’ sound existed (Adorno, 1990). We know, however, that this was never really the case: music is a highly contingent experience (Harper, 2012). Bit rate makes a difference in how a piece sounds, but only within certain limits. A more useful discussion would compare like with like: a low-quality phonograph record against a low-quality cassette tape and a low-quality MP3, or a 320 kbps MP3 against a Red Book CD; otherwise we are comparing apples and oranges.