Language and the Digital Age


April 2012
Ivan Obolensky

It is a digital age, but what does that mean? Everyone has heard about digital devices and their endless strings of ones and zeros. “Digital” has become synonymous with technology and progress. The world has moved from watches with hands to those with numbers. Analog has given way to digital.

But if digital is so good, why aren’t our hearing and our speech digital? Nature chose analog. What does this tell us about our capability to speak and process language?

To be able to answer this question, we have to understand the difference between analog and digital as well as why our world turned digital in the first place.

A good example of an analog system was a 1970s stereo that played vinyl records. Sound was recorded as tiny wiggles cut into the groove of a plastic disc. On playback, a turntable spun the record while a needle rode in the groove. The needle’s vibrations were converted into electrical signals, which were amplified and sent to the speakers. In those days the catchphrase was high fidelity: every nuance was to be faithfully reproduced.

When digital stereo equipment began to appear in the early ’80s, along with high-end compact disc players from Meridian*, audiophiles started debating which was better. At the very top of the market there was never any debate: analog was superior, but only there.

I remember listening to a stereo system of that time period that cost as much as an expensive car. It was a marvel. In a particular recording of a world-famous church choir, if one listened carefully one could hear the sound of a car starting outside and a bird whistling. I have never heard anything like it since. One did not listen to that system, one experienced it.

Digital gradually replaced the analog systems. Why? One could answer quickly with the word “cost” but that does not really get to the heart of the matter.

The world went digital for one reason: noise. Noise is the hiss one hears when one turns up the volume. It is the crackle on the radio as a thunderstorm approaches. It is extraneous sound that is not part of the signal or message being transmitted. Low-end analog stereo systems were notorious for the noise the equipment itself added to the sound. More expensive systems had filters that reduced the noise, but never completely, and the filtering itself detracted from the playback. Extremely high-end systems kept the noise contributed by the components to an absolute minimum at every step, going so far as to use gold connectors and gold cables between the amplifiers and the speakers.

Noise in communication became an issue in the 1800s with the advent of the telegraph. The farther a message was sent, the more noise interfered, rendering even simple Morse code unrecognizable. Relay stations were built to pass messages down the line when cross-country distances were involved. By the time transatlantic cables were laid, noise on long-distance lines was a serious problem. Even in the 1950s, one often had to shout over transatlantic calls to be understood.

Research into noise reduction became a major focus of long-distance telephone companies. Bell Labs (founded in 1925) was a leader in this research; its work led to the transistor and the laser, to name only two of its many discoveries.

World War II brought the need for complex calculations requiring many sequential steps that could be represented by a “tree” of different pathways. Each pathway was a series of tiny logical steps based on true or false decisions, represented by 1s and 0s. These steps were duplicated electronically by switches called “gates” that responded to the presence or absence of an electronic signal. By sequencing a large number of gates, very complex calculations could be undertaken. Initially the gate function was performed by vacuum tubes, then by transistors made out of silicon.
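To make the idea concrete, here is a minimal sketch in Python (software standing in for hardware; real gates are built from transistors) of how two such true-or-false gates can be combined into a tiny calculation: a “half adder” that adds two single bits and produces a sum bit and a carry bit.

    # A half adder built from two logic gates, sketched in Python.
    def AND(a, b):   # outputs 1 only when both inputs are 1
        return a & b

    def XOR(a, b):   # outputs 1 when exactly one input is 1
        return a ^ b

    def half_adder(a, b):
        """Add two single bits; return (sum_bit, carry_bit)."""
        return XOR(a, b), AND(a, b)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> sum {s}, carry {c}")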

Today a Nehalem-class microprocessor from Intel has approximately one billion of these individual gates on a single chip. For these gates to work accurately, there must be a system to reduce noise. Because there are so many interconnected switches, the network is extremely sensitive to extraneous signals. A gate could respond to a supposed input that was in reality a burst of static, creating false outputs and wrong calculations.1

To overcome this, a set of rules called the “static discipline” was adopted. The discipline holds the sender of a signal to a high standard: its output voltage may vary only within a narrow range. The receiver, on the other hand, works to a looser set of constraints and will accept any signal within, say, three times that range, so that random noise picked up along the way does not mask the signal.

This discipline produces a property called noise immunity: each gate cleans up the input it receives by transmitting a fresh, clean signal, which allows a billion gates, as on the Nehalem chip, to function smoothly regardless of any spurious signals along the way. Noise is virtually non-existent.2
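The voltage figures below are invented purely for illustration, but they sketch the shape of the rule in Python: senders must hold their output very close to a nominal level, receivers accept a much looser range, and every gate re-transmits a brand-new clean signal, so noise never accumulates from one stage to the next.

    import random

    # Hypothetical voltage thresholds, chosen only to illustrate the idea.
    V_OUT_HIGH, V_OUT_LOW = 5.0, 0.0   # senders must output (almost) exactly these
    V_IN_HIGH,  V_IN_LOW  = 3.5, 1.5   # receivers accept anything beyond these

    def transmit(bit):
        """A gate outputs a tightly controlled voltage for its bit."""
        return V_OUT_HIGH if bit else V_OUT_LOW

    def add_noise(voltage, spread=1.0):
        """Random interference picked up on the wire between two gates."""
        return voltage + random.uniform(-spread, spread)

    def receive(voltage):
        """A gate interprets a possibly noisy voltage using the looser thresholds."""
        if voltage >= V_IN_HIGH:
            return 1
        if voltage <= V_IN_LOW:
            return 0
        raise ValueError("forbidden zone: signal too ambiguous to interpret")

    # Pass one bit through a long chain of gates; each stage reads the noisy
    # wire, decides 0 or 1, and re-transmits a fresh, clean voltage.
    bit = 1
    for _ in range(100_000):
        bit = receive(add_noise(transmit(bit)))
    print("Bit after 100,000 noisy stages:", bit)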

So digital keeps noise to a minimum even with a mind-boggling number of parts, but there is a downside: inputs such as video and audio are not captured with perfect fidelity; they are digitized. A digital system does not precisely and continuously track every nuance of the input it receives. Instead, each input is broken down into little bits. For example, the human retina perceives color on a continuous scale, while a typical digitized image breaks that scale into 256 levels.

It is like comparing a staircase to a smoothly inclined ramp. The ramp contains every fractional change in height while the staircase has only as many different heights as there are steps. Fractional step heights are omitted.

This division into discrete levels is more than adequate to represent most visible colors, but it is still a reduction of the available input compared to an analog signal, even if one cannot consciously tell the difference.
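The staircase-and-ramp picture fits in a few lines of Python. In the sketch below the 0-to-1 scale is just a stand-in for any analog quantity; a smooth value is rounded onto the nearest of 256 levels, as in the color example above, and whatever fell between two steps is simply discarded.

    def quantize(x, levels=256):
        """Snap a continuous value in [0, 1] onto the nearest of `levels` steps."""
        step = round(x * (levels - 1))   # which stair we land on
        return step / (levels - 1)       # the height of that stair

    smooth = 0.123456789                 # the ramp: any fractional height at all
    digital = quantize(smooth)           # the staircase: the nearest step only
    print(smooth, "->", digital)         # the fractional difference is lost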

At the very high end, analog is a more faithful representation of the physical world than a digitized input. But a digitized input can be modified. With a digital recording of a voice or an instrument, one can push up the volume of the quieter portions of the sound without noise creeping in to distort it. This is called “compression”, and it is why films sound so loud whether one is hearing a whisper, an explosion, or a musical score.
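(“Compression” here means dynamic-range compression, not file compression.) The toy sketch below, with a made-up threshold and ratio, shows the basic move: quiet samples in a digital recording are pushed up toward a target level while loud ones are left alone, which is why a whisper can end up nearly as loud as an explosion.

    # Toy dynamic-range compression: boost quiet samples, leave loud ones alone.
    # The threshold and ratio are made-up values for illustration only.
    def compress(samples, threshold=0.1, ratio=4.0):
        out = []
        for s in samples:
            level = abs(s)
            if 0 < level < threshold:
                # shrink the gap between this quiet sample and the threshold
                boosted = threshold - (threshold - level) / ratio
                out.append(boosted if s > 0 else -boosted)
            else:
                out.append(s)
        return out

    whisper_and_blast = [0.01, -0.02, 0.05, 0.9, -0.95]
    print(compress(whisper_and_blast))   # the quiet samples come out much louder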

Humans have analog inputs, and with good reason. Analog is best, but only at the high end. Our need to hear, to speak, and to have language demands the highest fidelity from our senses. A great deal of speech processing takes place below conscious awareness, and our design makes every available sliver of input available to that processing. That is why we are analog creatures.


*A high-end manufacturer of disc players

1 MIT. (2012). Circuits and Electronics (6.002x) lecture series. Retrieved March 29, 2012, from https://6002x.mitx.mit.edu/

2 Agarwal, A. & Lang, J. (2005). Foundations of Analog and Digital Electronic Circuits. San Francisco, CA: Morgan Kaufmann.




© 2012 Ivan Obolensky. All rights reserved. No part of this publication can be reproduced without the written permission from the author.
