Telephone

telephone, an instrument designed for the simultaneous transmission and reception of the human voice. The telephone is inexpensive, is simple to operate, and offers its users an immediate, personal type of communication that cannot be obtained through any other medium. As a result, it has become the most widely used telecommunications device in the world, with billions of instruments in use.

This article describes the functional components of the modern telephone and traces the historical development of the telephone instrument. In addition it describes the development of what is known as the public switched telephone network (PSTN). For discussion of broader technologies, see the articles telecommunications system and telecommunications media. For technologies related to the telephone, see the articles mobile telephone, videophone, fax, and modem.

The telephone instrument

The word telephone, from the Greek roots tēle, “far,” and phonē, “sound,” was applied as early as the late 17th century to the string telephone familiar to children, and it was later used to refer to the megaphone and the speaking tube, but in modern usage it refers solely to electrical devices derived from the inventions of Alexander Graham Bell and others. Within 20 years of the 1876 Bell patent, the telephone instrument, as modified by Thomas Watson, Emil Berliner, Thomas Edison, and others, acquired a functional design that has not changed fundamentally in more than a century. Since the invention of the transistor in 1947, metal wiring and other heavy hardware have been replaced by lightweight and compact microcircuitry. Advances in electronics have improved the performance of the basic design, and they also have allowed the introduction of a number of “smart” features such as automatic redialing, call-number identification, wireless transmission, and visual data display. Such advances supplement, but do not replace, the basic telephone design. That design is described in this section, as is the remarkable history of the telephone’s development, from the earliest experimental devices to the modern digital instrument.

Working components of the telephone

As it has since its early years, the telephone instrument is made up of the following functional components: a power source, a switch hook, a dialer, a ringer, a transmitter, a receiver, and an anti-sidetone circuit. These components are described in turn below.

Power source

In the first experimental telephones the electric current that powered the telephone circuit was generated at the transmitter, by means of an electromagnet activated by the speaker’s voice. Such a system could not generate enough voltage to produce audible speech in distant receivers, so every transmitter since Bell’s patented design has operated on a direct current supplied by an independent power source. The first sources were batteries located in the telephone instruments themselves, but since the 1890s current has been generated at the local switching office. The current is supplied through a two-wire circuit called the local loop. The standard voltage is 48 volts.

Cordless telephones represent a return to individual power sources in that their low-wattage radio transmitters are powered by a small (e.g., 3.6-volt) battery located in the portable handset. When the telephone is not in use, the battery is recharged through contacts with the base unit. The base unit is powered by a transformer connection to a standard electric outlet.

Switch hook

The switch hook connects the telephone instrument to the direct current supplied through the local loop. In early telephones the receiver was hung on a hook that operated the switch by opening and closing a metal contact. This system is still common, though the hook has been replaced by a cradle to hold the combined handset, enclosing both receiver and transmitter. In some modern electronic instruments, the mechanical operation of metal contacts has been replaced by a system of transistor relays.

When the telephone is “on hook,” contact with the local loop is broken. When it is “off hook” (i.e., when the handset is lifted from the cradle), contact is restored, and current flows through the loop. The switching office signals restoration of contact by transmitting a low-frequency “dial tone”—actually two simultaneous tones of 350 and 440 hertz.
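The dial tone's composition can be illustrated with a short program. This is a sketch for illustration only: the function name is hypothetical, and the 8,000-hertz sampling rate is assumed from standard telephony practice.

```python
import math

SAMPLE_RATE = 8000  # samples per second, the standard telephony rate

def dial_tone(duration_s, f1=350.0, f2=440.0):
    """Sum of the two simultaneous dial-tone sine waves (350 and 440 Hz),
    returned as a list of floating-point samples."""
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * f1 * t / SAMPLE_RATE) +
            math.sin(2 * math.pi * f2 * t / SAMPLE_RATE)
            for t in range(n)]

samples = dial_tone(0.1)
print(len(samples))  # 800 samples for a tenth of a second
```

Because the two sine waves are simply summed, the combined signal never exceeds twice the amplitude of either component.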

Dialer

The dialer is used to enter the number of the party that the user wishes to call. Signals generated by the dialer activate switches in the local office, which establish a transmission path to the called party. Dialers are of the rotary and push-button types.

The traditional rotary dialer, invented in the 1890s, is rotated against the tension of a spring and then released, whereupon it returns to its position at a rate controlled by a mechanical governor. The return rotation causes a switch to open and close, producing interruptions, or pulses, in the flow of direct current to the switching office. Each pulse lasts approximately one-tenth of a second; the number of pulses signals the number being dialed.
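The pulse-per-digit scheme can be sketched as follows. The timings are illustrative approximations only (each pulse roughly a tenth of a second, with a hypothetical pause between digits); the convention that 0 is dialed as 10 pulses matches the rotary scheme described above.

```python
def pulses_for_digit(digit):
    """Number of loop interruptions produced when the dial returns
    from a given digit; 0 is dialed as 10 pulses."""
    if not 0 <= digit <= 9:
        raise ValueError("digit must be 0-9")
    return 10 if digit == 0 else digit

def dial_duration_ms(number, pulse_ms=100, interdigit_ms=700):
    """Rough time to pulse-dial a number: one ~100-ms pulse per count,
    plus a pause between digits (both timings are illustrative)."""
    digits = [int(c) for c in number if c.isdigit()]
    total = sum(pulses_for_digit(d) for d in digits) * pulse_ms
    total += (len(digits) - 1) * interdigit_ms
    return total

print(pulses_for_digit(0))      # 10
print(dial_duration_ms("911"))  # 11 pulses plus two pauses: 2500 ms
```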

In push-button dialing, introduced in the 1960s, the pressing of each button generates a “dual-tone” signal that is specific to the number being entered. Each dual tone is composed of a low frequency (697, 770, 852, or 941 hertz) and a high frequency (1,209, 1,336, or 1,477 hertz), which are sensed and decoded at the switching office. Unlike the low-frequency rotary pulses, dual tones can travel through the telephone system, so that push-button telephones can be used to activate automated functions at the other end of the line.
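The row-and-column layout of the dual-tone scheme can be made concrete with a short sketch: each keypad row selects one of the low frequencies, and each column selects one of the high frequencies. The function name is hypothetical; the frequency values are those given above.

```python
# Low- and high-frequency groups of the dual-tone scheme, in hertz.
LOW = (697, 770, 852, 941)    # one per keypad row
HIGH = (1209, 1336, 1477)     # one per keypad column

KEYPAD = [["1", "2", "3"],
          ["4", "5", "6"],
          ["7", "8", "9"],
          ["*", "0", "#"]]

def tones_for_key(key):
    """Return the (low, high) frequency pair generated by a button."""
    for r, row in enumerate(KEYPAD):
        if key in row:
            return (LOW[r], HIGH[row.index(key)])
    raise ValueError(f"unknown key: {key}")

print(tones_for_key("5"))  # (770, 1336)
print(tones_for_key("#"))  # (941, 1477)
```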

In both rotary and push-button systems, a capacitor and resistor prevent dialing signals from passing into the ringer circuit.

Ringer

The ringer alerts the user to an incoming call by emitting an audible tone or ring. Ringers are of two types, mechanical or electronic. Both types are activated by a 20-hertz, 75-volt alternating current generated by the switching office. The ringer is commonly activated in two-second pulses, with each pulse separated by a pause of four seconds.
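The two-seconds-on, four-seconds-off cadence can be modeled in a couple of lines (a sketch; the function name is hypothetical):

```python
def ringer_on(t_seconds, on_s=2.0, off_s=4.0):
    """True while ringing current is applied at time t, given the
    standard cadence of 2-second bursts separated by 4-second pauses."""
    return (t_seconds % (on_s + off_s)) < on_s

print(ringer_on(1.0))  # True: within the first 2-second burst
print(ringer_on(3.0))  # False: in the 4-second pause
print(ringer_on(6.5))  # True: the second burst has begun
```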

The traditional mechanical ringer was introduced with the early Bell telephones. It consists of two closely spaced bells, a metal clapper, and a magnet. Passage of alternating current through a coil of wire produces alternations in the magnetic attraction exerted on the clapper, so that it vibrates rapidly and loudly against the bells. Volume can be muted by a switch that places a mechanical damper against the bells.

In modern electronic ringers, introduced in the 1980s, the ringer current is passed through an oscillator, which adjusts the current to the precise frequency required to activate a piezoelectric transducer—a device made of a crystalline material that vibrates in response to an electric current. The transducer may be coupled to a small loudspeaker, which can be adjusted for volume.

The ringer circuit remains connected to the local loop even when the telephone is on hook. A larger voltage is necessary to activate the ringer because the ringer circuit is made with a high electrical impedance in order to avoid draining power from the transmitter-receiver circuit when the telephone is in use. A capacitor prevents direct current from passing through the ringer once the handset has been lifted off the switch hook.

Transmitter

The transmitter is essentially a tiny microphone located in the mouthpiece of the telephone’s handset. It converts the vibrations of the speaker’s voice into variations in the direct current flowing through the set from the power source.

In traditional carbon transmitters, developed in the 1880s, a thin layer of carbon granules separates a fixed electrode from a diaphragm-activated electrode. Electric current flows through the carbon against a certain resistance. The diaphragm, vibrating in response to the speaker’s voice, forces the movable electrode to exert a fluctuating pressure on the carbon layer. Fluctuations in the carbon layer create fluctuations in its electrical resistance, which in turn produce fluctuations in the electric current.
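The resistance-to-current relationship described above follows Ohm's law (I = V/R). The model below is a minimal illustration, assuming for simplicity a linear effect of diaphragm pressure on the granule layer's resistance; the specific numbers and the function name are hypothetical.

```python
def carbon_mic_current(voltage, base_resistance, pressure):
    """Idealized carbon transmitter: diaphragm pressure compresses the
    granules, lowering resistance and raising the loop current.
    'pressure' is a unitless 0-to-1 value (a modeling assumption)."""
    resistance = base_resistance * (1.0 - 0.5 * pressure)
    return voltage / resistance  # Ohm's law

quiet = carbon_mic_current(48.0, 300.0, 0.0)  # 48 V / 300 ohms = 0.16 A
loud = carbon_mic_current(48.0, 300.0, 0.4)   # compressed granules, more current
print(loud > quiet)  # True
```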

In modern electret transmitters, developed in the 1970s, the carbon layer is replaced by a thin plastic sheet that has been given a conductive metallic coating on one side. The plastic separates that coating from another metal electrode and maintains an electric field between them. Vibrations caused by speech produce fluctuations in the electric field, which in turn produce small variations in voltage. The voltages are amplified for transmission over the telephone line.

Receiver

The receiver is located in the earpiece of the telephone’s handset. Operating on electromagnetic principles that were known in Bell’s day, it converts fluctuating electric current into sound waves that reproduce human speech. Fundamentally, it consists of two parts: a permanent magnet, having pole pieces wound with coils of insulated fine wire, and a diaphragm driven by magnetic material that is supported near the pole pieces. Speech currents passing through the coils vary the attraction of the permanent magnet for the diaphragm, causing it to vibrate and produce sound waves.

Through the years the design of the electromagnetic system has been continuously improved. In the most common type of receiver, introduced in the Bell system in 1951, the diaphragm, consisting of a central cone attached to a ring-shaped armature, is driven as a piston to obtain efficient response over a wide frequency range. Telephone receivers are designed to have an accurate response to tones with frequencies of 350 to 3,500 hertz—a dynamic range that is narrower than the capabilities of the human ear but sufficient to reproduce normal speech.

Anti-sidetone circuit

The anti-sidetone circuit is an assemblage of transformers, resistors, and capacitors that perform a number of functions. The primary function is to reduce sidetone, which is the distracting sound of the speaker’s own voice coming through the receiver from the transmitter. The anti-sidetone circuit accomplishes this reduction by interposing a transformer between the transmitter circuit and the receiver circuit and by splitting the transmitter signals along two paths. When the divided signals, having opposite polarities, meet at the transformer, they almost entirely cancel each other in crossing to the receiver circuit. The speech signal coming from the other end of the line, on the other hand, arrives at the transformer along a single, undivided path and crosses the transformer unimpeded.
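The cancellation can be reduced to simple arithmetic: the local transmit signal reaches the receiver along two opposite-polarity paths that nearly cancel, while the far-end signal arrives along a single path. The sketch below is an idealized model (the 'imbalance' parameter, representing imperfect cancellation, is an illustrative assumption):

```python
def receiver_output(local_tx, far_end, imbalance=0.0):
    """Idealized anti-sidetone sum: the two split copies of the local
    signal cancel at the transformer (perfectly when imbalance is 0),
    while the far-end signal crosses unimpeded."""
    residual_sidetone = local_tx * (1.0 + imbalance) - local_tx
    return far_end + residual_sidetone

print(receiver_output(local_tx=1.0, far_end=0.5))                        # 0.5
print(round(receiver_output(local_tx=1.0, far_end=0.5, imbalance=0.1), 3))  # 0.6
```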

The anti-sidetone circuit also matches the low electrical impedance of the telephone instrument’s circuits to the higher electrical impedance of the telephone line. Impedance matching allows a more efficient flow of current through the system.

Development of the telephone instrument

Early sound transmitters

Beginning in the early 19th century, several inventors made a number of attempts to transmit sound by electric means. The first inventor to suggest that sound could be transmitted electrically was a Frenchman, Charles Bourseul, who indicated that a diaphragm making and breaking contact with an electrode might be used for this purpose. In the 1850s Italian American inventor Antonio Meucci had electrical devices in his home called telettrofoni that he used to communicate between rooms, though he did not patent his inventions. By 1861 Johann Philipp Reis of Germany had designed several instruments for the transmission of sound. The transmitter Reis employed consisted of a membrane with a metallic strip that would intermittently contact a metallic point connected to an electrical circuit. As sound waves impinged on the membrane, making the membrane vibrate, the circuit would be connected and interrupted at the same rate as the frequency of the sound. The fluctuating electric current thus generated would be transmitted by wire to a receiver, which consisted of an iron needle that was surrounded by the coil of an electromagnet and connected to a sounding box. The fluctuating electric current would generate varying magnetic fields in the coil, and these in turn would force the iron needle to produce vibrations in the sounding box. Reis’s system could thus transmit a simple tone, but it could not reproduce the complex waveforms that make up speech.

Gray and Bell: the transmission of speech

The first devices

In the 1870s two American inventors, Elisha Gray and Alexander Graham Bell, each independently, designed devices that could transmit speech electrically. Gray’s first device made use of a harmonic telegraph, the transmitter and receiver of which consisted of a set of metallic reeds tuned to different frequencies. An electromagnetic coil was located near each of the reeds. When a reed in the transmitter was vibrated by sound waves of its resonant frequency—for example, 400 hertz—it induced an electric current of corresponding frequency in its matching coil. This coil was connected to all the coils in the receiver, but only the reed tuned to the transmitting reed’s frequency would vibrate in response to the electric current. Thus, simple tones could be transmitted. In the spring of 1874 Gray realized that a receiver consisting of a single steel diaphragm in front of an electromagnet could reproduce any of the transmitted tones. Gray, however, was initially unable to conceive of a transmitter that would transmit complex speech vibrations and instead chose to demonstrate the transmission of tones via his telegraphic device in the summer of 1874.

Bell, meanwhile, also had considered the transmission of speech using the harmonic telegraph concept, and in the summer of 1874 he conceived of a membrane receiver similar to Gray’s. However, since Bell too had no transmitter, the membrane device was never constructed. Following some earlier experiments, Bell postulated that, if two membrane receivers were connected electrically, a sound wave that caused one membrane to vibrate would induce a voltage in the electromagnetic coil that would in turn cause the other membrane to vibrate. Working with a young machinist, Thomas Augustus Watson, Bell had two such instruments constructed in June 1875. The device was tested on June 3, 1875, and, although no intelligible words were transmitted, “speechlike” sounds were heard at the receiving end.

An application for a U.S. patent on Bell’s work was filed on February 14, 1876. Several hours later that same day, Gray filed a caveat on the concept of a telephone transmitter and receiver. A caveat was a confidential, formal declaration by an inventor to the U.S. Patent Office of an intent to file a patent on an idea yet to be perfected; it was intended to prevent the idea from being used by other inventors. At this point neither Gray nor Bell had yet constructed a working telephone that could convey speech. On the basis of its earlier filing time, Bell’s patent application was allowed over Gray’s caveat. On March 7, 1876, Bell was awarded U.S. patent 174,465. This patent is often referred to as the most valuable ever issued by the U.S. Patent Office, as it described not only the telephone instrument but also the concept of a telephone system.

The search for a successful transmitter

Gray had earlier come up with an idea for a transmitter in which a moving membrane was attached to an electrically conductive rod immersed in an acidic solution. Another conductive rod was immersed in the solution, and, as sound waves impinged on the membrane, the two rods would move with respect to each other. Variations in the distance between the two rods would produce variations in electric resistance and, hence, variations in the electric current. In contrast to the magnetic coil type of transmitter, the variable-resistance transmitter could actually amplify the transmitted sound, permitting use of longer cables between the transmitter and the receiver.

Again, Bell also worked on a similar “liquid” transmitter design; it was this design that permitted the first transmission of speech, on March 10, 1876, by Bell to Watson, which Bell transcribed in his lab notes as “Mr. Watson—come here—I want to see you.” The first public demonstrations of the telephone followed shortly afterward, featuring a design similar to the earlier magnetic coil membrane units described above. One of the earliest demonstrations occurred in June 1876 at the Centennial Exposition in Philadelphia. Further tests and refinement of equipment followed shortly afterward. On October 9, 1876, Bell conducted a two-way test of his telephone over a 3.2-km (2-mile) distance between Boston and Cambridgeport, Massachusetts. In May 1877 the first commercial application of the telephone took place with the installation of telephones in offices of customers of the E.T. Holmes burglar alarm company.

The poor performance of early telephone transmitters prompted a number of inventors to pursue further work in this area. Among them was Thomas Alva Edison, whose 1886 design for a voice transmitter consisted of a cavity filled with granules of carbonized anthracite coal. The carbon granules were confined between two electrodes through which a constant electric current was passed. One of the electrodes was attached to a thin iron diaphragm, and, as sound waves forced the diaphragm to vibrate, the carbon granules were alternately compressed and released. As the distance across the granules fluctuated, resistance to the electric current also fluctuated, and the resulting variations in current were transmitted to the receiver. Edison’s carbon transmitter was sufficiently simple, effective, cheap, and durable that it became the basis for standard telephone transmitter design through the 1970s.

Development of the modern instrument

The telephone instrument continued to evolve over time, as can be illustrated by the succession of American instruments described below. The concept of mounting both the transmitter and the receiver in the same handle appeared in 1878 in instruments designed for use by telephone operators in a New York City exchange. The earliest telephone instrument to see common use was introduced by Charles Williams, Jr., in 1882. Designed for wall mounting, this instrument consisted of a ringer, a hand-cranked magneto (for generating a ringing voltage in a distant instrument), a hand receiver, a switch hook, and a transmitter. Various versions of this telephone instrument remained in use throughout the United States as late as the 1950s. As is noted in the section Switching, the telephone dial originated with automatic telephone switching systems in 1896.

Desk instruments were first constructed in 1897. Patterned after the wall-mounted telephone, they usually consisted of a separate receiver and transmitter. In 1927, however, the American Telephone & Telegraph Company (AT&T) introduced the E1A handset, which employed a combined transmitter-receiver arrangement. The ringer and much of the telephone electronics remained in a separate box, on which the transmitter-receiver handle was cradled when not in use. The first telephone to incorporate all the components of the station apparatus into one instrument was the so-called combined set of 1937. Some 25 million of these instruments were produced until they were superseded by a new design in 1949. The 1949 telephone was totally new, incorporating significant improvements in audio quality, mechanical design, and physical construction. Push-button versions of this set became available in 1963.

Modern telephone instruments are largely electronic. Wire coils that performed multiple functions in older sets have been replaced by integrated circuits that are powered by the line voltage. Mechanical bell ringers have given way to electronic ringers. The carbon transmitter dating from Edison’s time has been replaced by electret microphones, in which sound waves cause a thin, metal-coated plastic diaphragm to vibrate, producing variations in an electric field across a tiny air gap between the diaphragm and an electrode. The telephone dial has given way to the keypad, which can usually be switched to generate either pulses similar to those of the dial mechanism or dual-tone signals as in AT&T’s Touch-Tone system. Finally, a number of other features have become available on the telephone instrument, including last-number recall and speed-dialing of multiple telephone numbers.

Cordless telephones

Cordless telephones are devices that take the place of a telephone instrument within a home or office and permit very limited mobility—up to 100 metres (330 feet). Because they communicate with a base unit that is plugged directly into an existing telephone jack, they essentially serve as a wireless extension to existing home or office wiring. The first cordless phones employed analog modulation methods and operated over a pair of frequencies, 1.7 megahertz and 49 megahertz. Beginning in the 1980s, cordless phones operated over a pair of frequencies in the 46- and 49-megahertz bands, and in the late 1990s phones operating in the 902–928-megahertz band began to appear. These phones employed either analog modulation, digital modulation, or spread-spectrum modulation. Some digital cordless telephones now operate in the gigahertz region—for example, 5.8 gigahertz. Generally speaking, each successive generation of cordless phones has offered improved quality and range to the consumer.

Personal communication systems

In a number of countries throughout the world, a wireless service called the personal communication system (PCS) is available. In the broadest sense, PCS includes all forms of wireless communication that are interconnected with the public switched telephone network, including mobile telephone and aeronautical public correspondence systems, but the basic concept includes the following attributes: ubiquitous service to roving users, low subscriber terminal costs and service fees, and compact, lightweight, and unobtrusive personal portable units.

The first PCS to be implemented was the second-generation cordless telephony (CT-2) system, which entered service in the United Kingdom in 1991. The CT-2 system was designed at the outset to serve as a telepoint system. In telepoint systems, a user of a portable unit might originate telephone calls (but not receive them) by dialing a base station located within several hundred metres. The base unit was connected to the PSTN and operated as a public (pay) telephone, charging calls to the subscriber. Because of its limited coverage, the CT-2 system went out of service, giving way to the popular GSM digital cellular system (see mobile telephone).

Meanwhile, the European Conference on Posts and Telecommunications (CEPT) had begun work on another personal communication system, known as DECT (Digital Enhanced Cordless Telecommunications, formerly Digital European Cordless Telephone). The DECT system was designed initially to provide cordless telephone service for office environments, but its scope soon broadened to include campus-wide communications and telepoint services. By 1999 DECT had reached 50 percent of the European cordless market.

In Japan a PCS based loosely on the DECT concepts, the Personal Handy-Phone System (PHS), was introduced to the public in 1994. The PHS became popular throughout urban areas as an alternative to cellular systems. Supporting data traffic at 32 and 64 kilobits per second, it could perform as a high-speed wireless modem for access to the Internet.

In the United States in 1994–95 the Federal Communications Commission (FCC) sold a number of licenses in the 1.85–1.99-gigahertz region for use in PCS applications.

The telephone network

In order to understand the many concepts represented in the public switched telephone network (PSTN), it is helpful to review the processes that take place in the making of a single call on a traditional wired telephone. To make a call, a telephone subscriber begins by taking the telephone “off-hook”—in the process, signaling the local central office that service is requested. The central office, which has been monitoring the telephone line continuously (a process known as attending), responds with a dial tone. Upon receiving the dial tone, the customer enters the called party’s telephone number. The central office stores the entered number, translates the number into an equipment location and a path to that location, and tests whether the called party’s line is already in use (or “busy”). The called party’s number may lie in the same central office (in which case the call is designated intraoffice), or it may lie in another central office (requiring an interoffice call). If the call is intraoffice, the central office switch will handle the entire call process. If the call is interoffice, it will be directed either to a nearby central office or to a distant central office via a long-distance network. In the case of interoffice calls, a separate signaling network is employed to coordinate the call progression through a multitude of switches and telephone trunks. Assuming, however, that the call is an intraoffice call, if the called party’s line is busy and does not have call waiting (in which the current call can be suspended), the telephone switch will return a busy signal until the calling party returns to the “on-hook” condition. If the called party’s line is not busy or does have call waiting, it will be alerted, or “rung.” At the same time that the line is rung, an audible signal will be returned to the calling party to indicate that ringing is taking place. 

If the called party answers by going off-hook, ringing will be discontinued and a voice path will be established through the switching system to both the calling and called parties. The voice path is maintained until either party goes back on-hook. At that moment the voice path is disconnected, and call charging is recorded.
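The call-setup sequence traced above can be summarized as a small event-sequence sketch. This is purely illustrative (the function and event names are hypothetical, not drawn from any signaling standard), covering the intraoffice case without call waiting:

```python
def place_call(called_line_busy, called_party_answers):
    """Return the ordered sequence of events for a simple intraoffice
    call, following the walkthrough above (illustrative model only)."""
    events = ["off-hook", "dial tone", "digits entered"]
    if called_line_busy:
        # Busy line, no call waiting: busy signal until caller hangs up.
        events += ["busy signal", "on-hook"]
        return events
    events += ["called line rung / audible ringback to caller"]
    if called_party_answers:
        events += ["voice path established", "on-hook",
                   "path released, call charged"]
    else:
        events += ["on-hook"]
    return events

print(place_call(called_line_busy=False, called_party_answers=True))
```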

From the example described above, it is evident that telephone systems consist of three major components:

  1. Switching, between telephone sets and between trunks, as required.
  2. Signaling, between the telephone sets and the central offices as well as between central offices when needed.
  3. Transmission, between the central switching office and subscribers’ telephone sets and also between central offices.

Each of these major components of a telephone system is discussed in turn in this section.

Switching

Switching systems

Manual switching

From the earliest days of the telephone, it was observed that it was more practical to connect different telephone instruments by running wires from each instrument to a central switching point, or telephone exchange, than it was to run wires between all the instruments. In 1878 the first telephone exchange was installed in New Haven, Connecticut, permitting up to 21 customers to reach one another by means of a manually operated central switchboard. The manual switchboard was quickly extended from 21 lines to hundreds of lines. Each line was terminated on the switchboard in a socket (called a jack), and a number of short, flexible circuits (called cords) with a plug on both ends of each cord were also provided. Two lines could thus be interconnected by inserting the two ends of a cord in the appropriate jacks.
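The jack-and-cord arrangement can be modeled as a toy program: each cord joins exactly two jacks, and a line already plugged into a cord cannot be connected again. The class and method names are hypothetical.

```python
class Switchboard:
    """Toy model of a manual switchboard: cords interconnect jacks."""

    def __init__(self, lines):
        self.lines = set(lines)
        self.jack_to_cord = {}  # line number -> cord id, while connected
        self.next_cord = 0

    def connect(self, a, b):
        """Plug the two ends of a free cord into jacks a and b."""
        if a in self.jack_to_cord or b in self.jack_to_cord:
            raise RuntimeError("line already in use")
        cord = self.next_cord
        self.next_cord += 1
        self.jack_to_cord[a] = cord
        self.jack_to_cord[b] = cord
        return cord

    def disconnect(self, cord):
        """Unplug both ends of a cord, freeing the two lines."""
        for line in [l for l, c in self.jack_to_cord.items() if c == cord]:
            del self.jack_to_cord[line]

board = Switchboard(range(21))  # the 21 customers of the 1878 exchange
cord = board.connect(3, 17)
print(3 in board.jack_to_cord)  # True: line 3 is now in use
```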

Electromechanical switching

The idea of automatic switching appeared as early as 1879, and the first fully automatic switch to achieve commercial success was invented in 1889 by Almon B. Strowger, the owner of an undertaking business in Kansas City, Missouri. The Strowger switch consisted of essentially two parts: an array of 100 terminals, called the bank, that were arranged 10 rows high and 10 columns wide in a cylindrical arc; and a movable switch, called the brush, which was moved up and down the cylinder by one ratchet mechanism and rotated around the arc by another, so that it could be brought to the position of any of the 100 terminals. The ratcheting action on the brush gave Strowger’s invention the common name step-by-step switch. The stepping movement was controlled directly by pulses from the telephone instrument. In the original systems, the caller generated the pulses by rapidly pushing a button switch on the instrument. Later, in 1896, Strowger’s associates devised a rotary dial for generating the necessary pulses. (The rotary dialing system is described below in Rotary dialing.)
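The two-motion geometry of the Strowger bank can be sketched in a few lines, assuming for illustration that the first train of pulses steps the brush vertically to a row and the second rotates it to a column (a simplified model of the mechanism, not an exact account of any particular selector):

```python
def strowger_position(pulse_trains):
    """Terminal reached by the brush in a 10-row-by-10-column bank:
    the first pulse train selects the row, the second the column.
    Each train carries 1 to 10 pulses (illustrative model only)."""
    row_pulses, col_pulses = pulse_trains
    row = row_pulses - 1          # 1..10 pulses map to rows 0..9
    col = col_pulses - 1
    return row * 10 + col         # index 0..99 among the 100 terminals

print(strowger_position((3, 7)))    # terminal 26: third row, seventh column
print(strowger_position((10, 10)))  # terminal 99, the last in the bank
```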

In 1913 J.N. Reynolds, an engineer with Western Electric (at that time the manufacturing division of AT&T), patented a new type of telephone switch that became known as the crossbar switch. The crossbar switch was a grid composed of five horizontal selecting bars and 20 vertical hold bars. Input lines were connected to the hold bars and output lines to the selecting bars.

The five selecting bars could be rotated either upward or downward to make connections with the hold bars, thus effectively providing the switch with 10 horizontal rows. With the appropriate movement of the hold and selecting bars, any column could be connected to any row, and up to 10 simultaneous connections could be provided by the switch. The first crossbar system was demonstrated by Televerket, the Swedish government-owned telephone company, in 1919. The first commercially successful system, however, was the AT&T No. 1 crossbar system, first installed in Brooklyn, N.Y., in 1938. A series of improved versions followed the No. 1 crossbar system, the most notable being the No. 5 system. First deployed in 1948, the No. 5 crossbar system became the workhorse of the Bell System and by 1978 accounted for the largest number of installed lines throughout the world. Originally designed to serve 27,000 lines, it was later upgraded to handle 35,000 voice circuits. Further revisions of the AT&T crossbar systems continued until 1974, by which time new switching systems had shifted from electromechanical to electronic technology.

Electronic switching

As telephone traffic continued to grow through the years, it was realized that large numbers of common control circuits would be required to switch this traffic and that switches of larger capacity would have to be created to handle it. Plans to provide new services via the telephone network also created a demand for innovative switch designs. With the advent of the transistor in 1947 and with subsequent advances in memory devices as well as other electronic devices and switches, it became possible to design a telephone switch that was based fundamentally on electronic components rather than on electromechanical switches.

Between 1960 and 1962 AT&T conducted field trials of a new electronic switching system (ESS) that would employ a variety of devices and concepts. The first commercial version, placed in service in 1965, became known as the No. 1 ESS. The No. 1 ESS employed a special type of reed switch known as a ferreed. Normally, a reed switch is constructed of two thin metal strips, or reeds, which are sealed in a glass tube. When an electromagnetic coil surrounding the tube is energized, the reeds close, making an electrical contact. In a ferreed a magnetic alloy known as Remendur is added to two sides of the reed relay. When the coil is energized, the Remendur material retains the magnetism and polarity, thus acting as a switch with a memory. In addition to this new switch device, the No. 1 ESS incorporated a new read-only memory device and a new random-access memory device. These innovations allowed the No. 1 system to serve as many as 65,000 two-way voice circuits, and it permitted hundreds of new features to be handled by the switching equipment. It underwent a number of revisions, including the adoption of semiconductor memory in 1977.

Digital switching

All the automatic telephone switches, both electromechanical and electronic, discussed up to this point are classified as space-division switches. Space-division switches are characterized by the fact that the speech path through a telephone switch is continuous throughout the exchange. That speech path is a metallic circuit, in the sense that it is provided entirely through the metallic contacts of the switch. Other forms of switching, however, are made possible by converting the fluctuating electric signal transmitted by the telephone instrument into digital format. In one of the first digital systems, known as time-division switching, the digitized speech information is sliced into a sequence of time intervals, or slots. Additional voice circuit slots, corresponding to other users, are inserted into this bit stream of data, in effect achieving a “time multiplexing” of several voice circuits. Switching essentially consists of interchanging the time position of one user’s slot with that of another user in a determined manner. Time-division switches may also employ space-division switching; an appropriate mixture of time-division and space-division switching is advantageous in various circumstances.
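
The slot-interchange operation described above can be sketched in a few lines of code. This is an illustrative toy, not the design of any particular switch; the frame size and the connection map are assumptions chosen for the example.

```python
# Sketch of time-slot interchange (TSI), the core of time-division
# switching: digitized speech samples arrive in numbered time slots,
# and "switching" means writing each incoming slot's sample into a
# different outgoing slot position.

def tsi_switch(frame, slot_map):
    """Return a new frame with each input slot moved to its mapped
    output slot. `frame` holds one sample per time slot;
    `slot_map[i] = j` means input slot i is connected to output slot j."""
    out = [None] * len(frame)
    for i, sample in enumerate(frame):
        out[slot_map[i]] = sample
    return out

# Four subscribers multiplexed into one frame; connect 0<->2 and 1<->3.
frame = ["A", "B", "C", "D"]           # one speech sample per subscriber
connections = {0: 2, 1: 3, 2: 0, 3: 1}
print(tsi_switch(frame, connections))  # ['C', 'D', 'A', 'B']
```

Exchanging slot contents once per frame, thousands of times per second, is what connects one subscriber's speech path to another's without any continuous metallic circuit.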

The first time-division switching system to be deployed in the United States was the AT&T-designed No. 4 ESS, placed into service in 1976. The No. 4 ESS was a toll system capable of serving a maximum of 53,760 two-way trunk circuits. It was soon followed by several other time-division systems for switching local calls. Among these was the AT&T No. 5 ESS, improved versions of which could handle 100,000 lines.

The switching network

As the telephone network evolved, it became necessary to organize it into a hierarchical system that would permit any customer to call any other customer. To support such a hierarchy, switching centres in the American telephone system were organized into three classes: local, tandem, and toll. A local office (or end office) was a switching centre that connected directly to the customers’ telephone instruments. A tandem office was one that served a cluster of local offices. A toll office was involved in switching traffic over long-distance (or toll) circuits.

During the 1990s the telephone network significantly changed, because of a combination of several trends: an increased amount of traffic due to new telephone subscribers and to use of the telephone network to access the Internet; the advent of new “packet-switching” techniques (described below); new protocols for voice traffic over data networks; and the availability of a tremendous amount of bandwidth in the long-distance network. As a result of these developments, the hierarchical telephone network of the 1950s and ’60s collapsed to mostly two levels of switching. End offices are now known as class 5 offices and are owned by the local service operators, or “local exchange carriers.” The old toll and tandem offices are now known as class 4 offices; they are owned by long-distance service providers, or “interexchange carriers.” Even this distinction between local and long-distance providers, however, became less clear with continued deregulation of the telephone industry.

While much telephone voice traffic continues to flow through the class 5 and class 4 switches, several alternatives have arisen for switching voice traffic through the telephone network. For instance, by digitizing, compressing, and packetizing voice signals, telephone traffic can be sent over conventional packet-switched data networks instead of dedicated circuits. Several approaches to packet switching are possible, based on whether variable-length or fixed-length packets are used. When variable-length packets are used and Internet protocol (IP) is the underlying protocol for the data network, the mechanism is called “voice over IP” (VoIP). In such a configuration, voice traffic is switched over the Internet using a router, a device consisting of input and output ports from the network, a switching fabric to switch between input and output, and a processor to execute the routing protocols and perform network management. When the digitized voice signal is packed into fixed-length packets and sent over an asynchronous transfer mode (ATM) network, the method is known as “voice over ATM” (VoATM). Within the network, ATM switches direct packets from source to destination.
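
The packetization step that makes VoIP possible can be illustrated with a toy packetizer. This is a minimal sketch under stated assumptions, not an implementation of RTP or of any real VoIP stack; the 20-millisecond packet size and the field names are illustrative choices.

```python
# Sketch: 8,000-samples-per-second PCM speech is cut into fixed 20-ms
# chunks (160 samples each), and every chunk is carried in a packet
# with a sequence number and timestamp so the receiver can reorder
# and play the chunks out in the right order.

SAMPLE_RATE = 8000        # samples per second (standard telephony PCM)
PACKET_MS = 20            # speech carried per packet, in milliseconds
SAMPLES_PER_PACKET = SAMPLE_RATE * PACKET_MS // 1000   # 160

def packetize(samples):
    """Split a PCM sample stream into sequence-numbered voice packets."""
    packets = []
    for seq, start in enumerate(range(0, len(samples), SAMPLES_PER_PACKET)):
        payload = samples[start:start + SAMPLES_PER_PACKET]
        packets.append({"seq": seq, "timestamp": start, "payload": payload})
    return packets

one_second = list(range(SAMPLE_RATE))  # stand-in for 1 s of PCM samples
pkts = packetize(one_second)
print(len(pkts))                       # 50 packets per second of speech
```

Each packet is then routed independently through the data network, in contrast to the dedicated end-to-end circuit of classical switching.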

Signaling

A major component of any telephone system is signaling, in which electric pulses or audible tones are used for alerting (requesting service), addressing (e.g., dialing the called party’s number at the subscriber set), supervision (monitoring idle lines), and information (providing dial tones, busy signals, and recordings).

In general, signaling may occur either within the subscriber loop—that is, within the circuit between the individual telephone instrument and the local office—or in circuits between offices.

Call-number dialing

Rotary dialing

The first automatic switching systems, based on the Strowger switch described in the section Electromechanical switching, were activated by a push button on the calling party’s telephone. The advent of the rotary dial in 1896 permitted more accurate dialing. A number of different dial designs were placed in service until 1910, when designs were standardized; after 1910 the design and operation of the rotary dial did not change in its essentials.

In a rotary dial, a number of pulses, or interruptions in current flow, are transmitted to the switching office in proportion to the rotation of the dial. When the dial is rotated, a spring is wound; when the dial is subsequently released, the spring causes the dial to rotate back to its original position. Inside the dial a governor device ensures a constant rate of return rotation, and a shaft on the governor turns a cam that opens and closes a switch contact. An open switch contact stops current from flowing into the telephone set, thereby creating a dial pulse. The number of pulses corresponds to the digit dialed—i.e., two pulses correspond to the digit 2 and three pulses to the digit 3, while the digit 0 is transmitted as 10 pulses.

The rotary dial was designed for operating an electromechanical switching system, so that the speed of operation of the dial was limited by the operating speed of the switches. Within the Bell System the dial pulse period is nominally one-tenth of a second long, permitting a rate of 10 pulses per second. Modern telephones are now wired for push-button dialing (see below), but even they can usually generate pulse signals when the push-button pad is operated in conjunction with electronic timing circuits.
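
The pulse scheme can be made concrete with a short sketch. The timings are the nominal Bell System values quoted above; the interdigit pause that real dials require is ignored for simplicity.

```python
# Dial-pulse dialing: each digit is sent as that many interruptions of
# loop current (the digit 0 is sent as ten pulses), at a nominal rate
# of 10 pulses per second.

PULSE_PERIOD = 0.100   # seconds per pulse (10 pulses per second)

def pulses_for_digit(d):
    """Number of loop-current interruptions for one dialed digit."""
    return 10 if d == 0 else d

def dial_duration(number):
    """Rough time (s) spent pulsing out a number, ignoring interdigit pauses."""
    return sum(pulses_for_digit(int(c)) for c in number) * PULSE_PERIOD

print(pulses_for_digit(0))   # 10
print(dial_duration("911"))  # about 1.1 seconds of pulsing
```

The 10-pulse-per-second ceiling explains why electromechanical switches set the pace of dialing: the office equipment had to step once per pulse.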

Push-button dialing

In the 1950s, after conducting extensive studies, AT&T concluded that push-button dialing was about twice as efficient as rotary dialing. Trials had already been conducted of special telephone instruments that incorporated mechanically vibrating reeds, but in 1963 an electronic push-button system, known as Touch-Tone dialing, was offered to AT&T customers. Touch-Tone soon became the standard U.S. dialing system, and eventually it became the standard worldwide.

The Touch-Tone system is based on a concept known as dual-tone multifrequency (DTMF). The 10 dialing digits (0 through 9) are assigned to specific push buttons, and the buttons are arranged in a grid with four rows and three columns. The pad also has two more buttons, bearing the star (*) and pound (#) symbols, to accommodate various data services and customer-controlled calling features. Each of the rows and columns is assigned a tone of a specific frequency, the columns having higher-frequency tones and the rows having tones of lower frequency. When a button is pushed, a dual-tone signal is generated that corresponds to the frequencies assigned to the column and row that intersect at that point. This signal is translated into a digit at the local office.
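
The DTMF grid translates naturally into code. The row and column frequencies below are the standard Touch-Tone assignments; the synthesis routine is a simplified illustration with no amplitude shaping or timing.

```python
# Dual-tone multifrequency (DTMF) keypad: pressing a key produces the
# sum of one low-group (row) tone and one high-group (column) tone.

import math

ROWS = [697, 770, 852, 941]    # low-group frequencies, Hz
COLS = [1209, 1336, 1477]      # high-group frequencies, Hz
KEYS = ["123", "456", "789", "*0#"]

def key_tones(key):
    """Return the (row_hz, col_hz) frequency pair for a push button."""
    for r, row in enumerate(KEYS):
        if key in row:
            return ROWS[r], COLS[row.index(key)]
    raise ValueError(f"not a DTMF key: {key}")

def dtmf_samples(key, n=160, rate=8000):
    """Synthesize n samples of the dual-tone signal for one key press."""
    lo, hi = key_tones(key)
    return [math.sin(2 * math.pi * lo * t / rate) +
            math.sin(2 * math.pi * hi * t / rate) for t in range(n)]

print(key_tones("5"))   # (770, 1336)
```

At the local office, the receiver performs the inverse operation, detecting which one low and which one high frequency are present and mapping the pair back to a digit.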

Interoffice signaling

Interoffice signaling also has undergone a notable evolution, changing over from simple “in-band” methods to fully digitized “out-of-band” methods.

In-band signaling

In the earliest days of the telephone network, signaling was provided by means of direct current (DC) between the telephone instrument and the operator. As long-distance circuits and automatic switching systems were placed into service, the use of DC became obsolete, since long-distance circuits could not pass the DC signals. Hence, alternating current (AC) began to be used over interoffice circuits. Until the mid-1970s, interoffice circuits employed what has become known as in-band signaling, in which the same circuits that were used to connect two telephone instruments and serve as the voice path were also used to transmit the AC signals that set up the switches employed in the circuit. Single-frequency tones were used in the switching network to signal availability of a trunk. Once a trunk line became available, multiple-frequency tones were used to pass the address information between switches. Multiple-frequency signaling employed pairs of tones drawn from a set of six frequencies, similar to the signaling used in Touch-Tone dialing.

Out-of-band signaling

Despite the simplicity of the in-band method, this type of signaling presented a number of problems. First, because the in-band signals by necessity fell within the bandwidth of speech signals, speech signals could at times interfere with the in-band signals. Second, in-band signaling did not always make efficient use of the available telephone circuits. For example, if a called party’s telephone instrument was in use, the called party’s central office would generate a busy signal that was carried by the already established voice path through the public switched telephone network to the calling party’s handset. Hence, a full voice-circuit path through the network would be tied up merely to convey a busy signal.

In order to overcome these issues and to speed the call set-up process in long-distance calls, another form of interoffice signaling, known as common channel signaling (CCS), was developed. In CCS an “out-of-band” circuit (that is, a separate circuit from that used to establish the voice connection) is dedicated to serve as a data link, carrying address information and certain other information signals between the microprocessors employed in telephone switches. The first version of CCS was developed between 1964 and 1968 by the International Telegraph and Telephone Consultative Committee (CCITT), a predecessor of the Telecommunication Standardization Sector of the International Telecommunication Union. The first system was standardized internationally as CCITT-6 signaling; within North America, CCITT-6 was modified by AT&T and became known as common channel interoffice signaling, CCIS. CCIS was first installed in the Bell System in 1976.

Although CCITT-6 was standardized by an international body, it was never universally deployed. Recognizing this shortcoming as well as the still-growing amount of international traffic within the worldwide telephone network, the CCITT between 1980 and 1991 developed a successor version known as CCITT-7. Within North America, CCITT-7 was implemented as Signaling System 7, or SS7.

Transmission

Development of long-distance transmission

From single-wire to two-wire circuits

The first telephone lines employed the same type of outdoor circuits as telegraph lines—namely, a single noninsulated iron or steel wire supported by wooden poles with glass insulators. Since electric signals require two wires, the second “wire” was a ground return through the earth. Unfortunately, the use of a single wire made the telephone circuit extremely susceptible to interference by other signals. This problem was addressed by the use of a two-wire, or “metallic,” circuit; the first demonstration of such a system occurred in 1881 on a telephone line between Providence, Rhode Island, and Boston.

As the distances between telephone instruments began to increase beyond those served by local exchange offices, a number of technical problems arose that had not been experienced in earlier telegraph systems. Even with the two-wire system, it soon became apparent that telephone signals could be transmitted only a fraction of the distance of telegraph signals, because of the greater attenuation in iron and steel of the higher frequencies of telephone signals. The principal difference between telegraph systems and the telephone system was that the frequencies of the signals carried by telephone lines were as much as 30 times greater than those of telegraph signals. Several individuals noted that copper wire greatly improved the situation, but manufacturing techniques produced brittle wire that was not self-supporting over the spans between poles. The problem was solved in 1877 with the invention of hard-drawn copper wire. In 1884 the first test of hard-drawn copper wire for long-distance telephone service was conducted between New York City and Boston.

Problems of interference and attenuation

Two-wire copper circuits did not solve all the problems of long-distance telephony, however. As the number of lines grew, interference (or cross talk) from adjacent lines on the same crossarm of the telephone pole became significant. It was found that transposing the wires by twisting them at specified intervals canceled the cross talk. Another major problem was caused by distance: over the lengths of long-distance lines, even the two-wire copper circuit attenuated the telephone signal significantly. In a series of theoretical papers published in book form in 1892, Oliver Heaviside, an English physicist, developed the theory behind the transmission of signals over two-wire circuits. In the United States, Michael I. Pupin of Columbia University in New York City and George A. Campbell of AT&T both read Heaviside’s papers and realized that introducing inductive coils (loading coils) at regular intervals along the length of the telephone line could significantly reduce the attenuation of signals within the voice band (i.e., at frequencies less than 3.5 kilohertz). Both Campbell and Pupin applied for a patent on the concept of loading coils; after extended patent interference proceedings, the patent was finally awarded to Pupin in 1904. The first long-distance application of loading coils occurred in 1900, over a 40-km (24-mile) circuit in Boston. It was followed later that year by a test over a 1,000-km (600-mile) circuit. By 1925 approximately 1.25 million loading coils were in use over 3 million km (1.8 million miles) of wire circuits.

Even with the use of loading coils, telephone communication across countries as large as the United States was not possible without some form of amplification. A mechanical amplifier, which made use of an electromagnet receiver and a carbon transmitter, was installed in a commercial circuit between New York City and Chicago in 1904, but it was not until the patenting of the vacuum tube by Lee de Forest in 1907 that truly transcontinental telephone communication was possible. In 1915 the first transcontinental line, between New York City and San Francisco, was placed in service. Although this system was commercially viable, its cost and limited capacity (only one two-way circuit) prevented substantial growth of transcontinental telephony until carrier multiplexing techniques were introduced beginning in 1918. With carrier multiplexing, four or more two-way voice channels could be transmitted simultaneously over two-wire or four-wire circuits. By 1927 more than 5 million km (3 million miles) of long-distance circuits covered the entire United States—more than 10 times the circuitry present in 1900.

From analog to digital transmission

Until the early 1980s the bulk of long-distance transmission was provided by analog systems in which individual telephone conversations were stacked in four-kilohertz intervals across the transmission band—a process known as frequency-division multiplexing (FDM). However, particularly with the development of fibre optics (see below), these analog systems were rapidly replaced by digital systems. In digital transmission, which may also be carried over the coaxial and microwave systems, the telephone signals are first converted from an analog format to a quantized, discrete time format. The signals are then multiplexed together using time-division multiplexing (TDM), a method in which each digitized telephone signal is assigned a specific slot within a fixed time frame. In order to provide standard interfaces between transmission and switching equipment, multiplexed signals are further combined or aggregated in hierarchical arrangements.
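
The time-division multiplexing just described amounts to interleaving one sample from each digitized signal into its assigned slot of a repeating frame. The channel contents below are illustrative values, not real speech samples.

```python
# Time-division multiplexing (TDM): one sample from each channel is
# placed into its fixed slot position within each frame of the stream.

def tdm_multiplex(channels):
    """Interleave equal-length channel sample lists into one TDM stream."""
    return [ch[i] for i in range(len(channels[0])) for ch in channels]

def tdm_demultiplex(stream, n_channels):
    """Recover each channel by taking every n-th slot of the stream."""
    return [stream[k::n_channels] for k in range(n_channels)]

a, b, c = [1, 2, 3], [10, 20, 30], [100, 200, 300]
stream = tdm_multiplex([a, b, c])
print(stream)                      # [1, 10, 100, 2, 20, 200, 3, 30, 300]
print(tdm_demultiplex(stream, 3))  # [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
```

Contrast this with FDM, where the channels are separated by frequency band rather than by time slot.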

Coaxial cable

Long-distance coaxial cable systems were introduced in the United States in 1946. Employing analog FDM methods, the first coaxial system could support 1,800 two-way voice circuits by bundling together three working pairs of cable, each pair transmitting 600 voice signals simultaneously. In the last analog coaxial system, deployed in 1978, each pair of cables transmitted 13,200 voice signals, and the cable bundle contained 10 working pairs; this combination supported 132,000 two-way voice circuits. Digital coaxial systems were introduced into the U.S. long-distance network beginning in 1962. The T4M system, a digital coaxial system first deployed in 1975, can support up to 40,320 two-way voice circuits over 10 working pairs of coaxial cable.

Microwave link

Long-distance transmission also has been provided by radio link in the form of point-to-point microwave systems. First employed in 1950, microwave transmission has the advantage of not requiring access to all contiguous land along the path of the system. Because microwave systems are line-of-sight media, radio towers must be spaced approximately every 42 km (25 miles) along the route. Point-to-point microwave systems generally operate in the frequency ranges of 3.7–4.2 gigahertz or 5.925–6.425 gigahertz; some systems operate at 11 or 18 gigahertz. Following the trend of coaxial cable systems, the first microwave links were analog systems. Early systems had a capacity of 2,400 two-way voice circuits, and later systems could support 61,800 two-way circuits. Beginning in 1981, digital microwave systems capable of supporting the wide range of digital services available over the PSTN were deployed in the U.S. network.

Optical-fibre cable

Because of their great bandwidth, reliability, and low cost, optical fibres became the preferred medium in both short-haul and long-haul transmission systems following their first deployment in 1979. Since 1990 there has been significant progress in the development of fibre optics, permitting transmission at ever higher data rates. Several different technologies have been essential in this development: so-called nonzero-dispersion optical fibres, which permit the transmission of multiple wavelengths of light at high data rates; erbium-doped fibre amplifiers, which use a laser pump source to amplify optical signals over long distances; and “tunable” lasers, which generate light at several frequencies, thereby permitting transmission of multiple wavelengths over a single optical fibre. Multiple wavelength transmission, known as wave division multiplexing (WDM), allows higher data rates to be achieved over a single fibre; when 40 or more different wavelengths are multiplexed, the technique is known as dense wave division multiplexing (DWDM). DWDM technology has permitted data transmission at rates of 400 gigabits per second, each wavelength supporting approximately 10 gigabits per second. These data rates are equivalent to some 6,000,000 voice circuits per fibre and 150,000 voice circuits per wavelength. Long-distance carriers in the developed world make use of optical fibre technology at a variety of data rates. Most systems employ the standardized hierarchy of digital transmission rates known as the synchronous optical network (SONET) or optical carrier (OC) in the United States and as the synchronous digital hierarchy (SDH) elsewhere, as shown in the table.
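
The circuit counts quoted above follow from the standard 64-kilobit-per-second digitized voice channel, a detail the text assumes; a quick calculation confirms them.

```python
# Checking the DWDM capacity figures: each voice circuit in the digital
# network occupies 64 kbps (standard PCM), so a 10-Gbps wavelength and
# a 40-wavelength fibre carry roughly the circuit counts in the text.

VOICE_CIRCUIT_BPS = 64_000   # standard 64-kbps PCM voice channel

per_wavelength = 10_000_000_000 // VOICE_CIRCUIT_BPS
per_fibre = 40 * per_wavelength

print(per_wavelength)  # 156250  (~150,000 circuits per wavelength)
print(per_fibre)       # 6250000 (~6,000,000 circuits per fibre)
```

The text's round figures of 150,000 and 6,000,000 are these values less the share of the bit stream consumed by framing and overhead.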

Standardized digital transmission rates for the synchronous digital hierarchy (SDH), the synchronous optical network (SONET), and the optical carrier (OC) hierarchy*
*SDH is the transmission hierarchy established by the International Telegraph and Telephone Consultative Committee (CCITT); SONET and OC are transmission hierarchies established by the American National Standards Institute (ANSI).
SDH system | SONET system | OC level | transmission rate | maximum voice channels per circuit
—          | STS-1        | OC-1     | 51.84 Mbps        | 783
STM-1      | STS-3        | OC-3     | 155.52 Mbps       | 2,349
STM-4      | STS-12       | OC-12    | 622.08 Mbps       | 9,396
—          | —            | OC-24    | 1,244.16 Mbps     | 18,792
STM-16     | STS-48       | OC-48    | 2.48832 Gbps      | 37,584
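
The SONET/OC rates and channel counts are exact multiples of the basic 51.84-Mbps STS-1 building block, which carries 783 voice channels; a short check confirms the pattern.

```python
# Each OC-n level is n times the STS-1 rate (51.84 Mbps) and carries
# n times the STS-1 voice-channel count (783 channels).

STS1_MBPS = 51.84
STS1_CHANNELS = 783

for n in (1, 3, 12, 24, 48):
    print(f"OC-{n}: {n * STS1_MBPS:.2f} Mbps, {n * STS1_CHANNELS:,} channels")
```

For example, OC-48 works out to 2,488.32 Mbps and 37,584 voice channels, matching the top row of the hierarchy.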

Overseas transmission

Terrestrial radio

The extension of telephone service to other countries and continents was a goal set in the earliest days of telephone systems. In North America, service to Canada and Mexico was a natural extension of the long-distance methods used within the United States, but transmission across the ocean to Europe called for a significant amount of ingenuity. While transatlantic telegraph cables had been in service since 1866, these same cables could not be used for voice transmission, because of bandwidth limitations. Instead, the first transatlantic telephone service made use of radio. Regular service via radio between the United States and Europe was first established in 1927 using long-wave frequencies in the range of 58.5 to 61.5 kilohertz. Within the first year this system supported 11,000 calls. By 1929 additional circuits were added in the range of 6–25 megahertz.

Undersea cable

It was soon realized that the number of transatlantic telephone calls would rapidly outgrow the available radio spectrum. Accordingly, transoceanic cable technology was developed that made use of amplifiers, or repeaters, placed at regular intervals along the length of the cable. Undersea telephone cables had been deployed as early as 1921, when a 184-km (114-mile) cable was laid between Cuba and Key West, Florida. The first transatlantic telephone cable, known as TAT-1, was laid in 1956 between Canada and Scotland—specifically, between Clarenville, Newfoundland, Canada, and Oban, Scotland, a distance of 3,584 km (2,226 miles). This system made use of two coaxial cables, one for each direction, and used analog FDM to carry 36 two-way voice circuits. With the availability of the cable system, transatlantic telephone traffic increased dramatically, from 1.7 million calls in 1955 to 3.7 million in 1960. Six additional coaxial cables, representing four successive generations of cable design, were laid across the Atlantic Ocean between 1956 and 1983. Each generation of cable system supported a greater number of voice circuits—the last supporting 4,200. In order to improve the voice-channel capacity of transoceanic cable systems, a method of voice data reduction known as time assignment speech interpolation, or TASI, was introduced. In TASI the natural pauses occurring in speech were used to carry other speech conversations. In this way a coaxial cable system designed for 4,200 two-way voice circuits could support 10,500 circuits.
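
The TASI gain can be read directly from the figures above. The 40 percent occupancy implied below is a derived value, not a measured speech-activity statistic from the text.

```python
# TASI concentration gain: 4,200 physical circuits supporting 10,500
# conversations implies each talker occupies a circuit only part of
# the time, the trunk being reassigned during pauses in speech.

physical_circuits = 4200
conversations = 10500

gain = conversations / physical_circuits
print(gain)               # 2.5
print(f"{1 / gain:.0%}")  # 40% average circuit occupancy per talker
```

This is the same statistical-sharing idea that later packet-voice systems exploit, applied here at the level of whole analog circuits.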

Developments in fibre optics also had a significant effect on the deployment of undersea cable. From 1989 to 2001 a total of 15 new transatlantic optical fibre cables were deployed, along with a similar number of transpacific cables. Many other short-segment undersea cables were deployed to connect various countries within a continent. Since 1996 many of these optical cables have employed erbium-doped fibre amplifiers and wave division multiplexing, permitting the highest-quality data transmission at very high rates. One of the more ambitious programs, the TAT-14, deployed in 2001, connects the United States, FranceGermanyDenmark, and the United Kingdom with a 15,428-km (9,581-mile) undersea cable. As deployed, the cable has four fibre pairs and has a protected capacity of 640 gigabits per second, corresponding to roughly 9.6 million voice circuits. Owing to such capacity, TASI is no longer needed to increase the number of voice circuits over undersea cable.

Satellite

About the same time that transatlantic cables were being installed, another transmission method, satellite communication, was being investigated. In 1962 AT&T in conjunction with the National Aeronautics and Space Administration (NASA) launched the communication satellite Telstar into an elliptical medium Earth orbit, its apogee, or farthest distance from Earth, being some 5,600 km (3,500 miles). Telstar 1 served as a repeater in the sky; that is, it simply translated all frequencies within its receiving bandwidth in the six-gigahertz band to frequencies in its four-gigahertz transmitting band. The 32-megahertz transmission bandwidth of Telstar 1 could support one one-way television signal or multiple two-way telephone conversations.

Because of its low orbit, Telstar was not always in view of the communications ground stations. This problem was solved in July 1963 with the launch of the first geostationary communication satellite, Syncom 2, which followed a circular path some 35,900 km (22,300 miles) above the Earth. Syncom 2 was followed by a series of geostationary satellites, each providing a capacity greater than the previous generation. For instance, the Intelsat 11 satellite, launched October 5, 2007, which orbits above the Equator at longitude 43° W (just east of Brazil), uses 12 active C-band transponders to relay digital data over most of North and South America and uses 18 Ku-band transponders primarily for relaying television broadcasts in Brazil.

Unfortunately, geostationary satellites, because of their great distance above the Earth, introduce a quarter-second signal delay, sometimes making two-way voice conversation difficult. For this reason, and also because of the availability of high-capacity undersea cables, geostationary satellites are no longer used for common-carrier telephone communication in much of the world. However, since optical-fibre connections are not available everywhere, geostationary satellites continue to be launched to support voice as well as data traffic.

David E. Borth

mobile telephone

Also known as: mobile phone

mobile telephone, portable device for connecting to a telecommunications network in order to transmit and receive voice, video, or other data. Mobile phones typically connect to the public switched telephone network (PSTN) through one of two categories: cellular telephone systems or global satellite-based telephony.

Cellular telephones

Cellular telephones, or simply cell phones, are portable devices that may be used in motor vehicles or by pedestrians. Communicating by radio waves, they permit a significant degree of mobility within a defined serving region that may range in area from a few city blocks to hundreds of square kilometres. The first mobile and portable subscriber units for cellular systems were large and heavy. With significant advances in component technology, though, the weight and size of portable transceivers have been significantly reduced. In this section, the concept of cell phones and the development of cellular systems are discussed.

Cellular communication

All cellular telephone systems exhibit several fundamental characteristics, as summarized in the following:

  1. The geographic area served by a cellular system is broken up into smaller geographic areas, or cells. Uniform hexagons most frequently are employed to represent these cells on maps and diagrams; in practice, though, radio waves do not confine themselves to hexagonal areas, so the actual cells have irregular shapes.
  2. All communication with a mobile or portable instrument within a given cell is made to a base station that serves the cell.
  3. Because of the low transmitting power of battery-operated portable instruments, specific sending and receiving frequencies assigned to a cell may be reused in other cells within the larger geographic area. Thus, the spectral efficiency of a cellular system (that is, the uses to which it can put its portion of the radio spectrum) is increased by a factor equal to the number of times a frequency may be reused within its service area.
  4. As a mobile instrument proceeds from one cell to another during the course of a call, a central controller automatically reroutes the call from the old cell to the new cell without a noticeable interruption in the signal reception. This process is known as handoff. The central controller, or mobile telephone switching office (MTSO), thus acts as an intelligent central office switch that keeps track of the movement of the mobile subscriber.
  5. As demand for the radio channels within a given cell increases beyond the capacity of that cell (as measured by the number of calls that may be supported simultaneously), the overloaded cell is “split” into smaller cells, each with its own base station and central controller. The radio-frequency allocations of the original cellular system are then rearranged to account for the greater number of smaller cells.

Frequency reuse between discontiguous cells and the splitting of cells as demand increases are the concepts that distinguish cellular systems from other wireless telephone systems. They allow cellular providers to serve large metropolitan areas that may contain hundreds of thousands of customers.
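
Although the article does not give the formulas, the standard hexagonal-cell analysis used in cellular engineering quantifies frequency reuse: valid cluster sizes are N = i² + ij + j² for nonnegative integers i and j, and co-channel cells are separated by a reuse distance D = R√(3N), where R is the cell radius. The cell radius below is an illustrative value.

```python
# Standard hexagonal-geometry reuse analysis (textbook cellular
# engineering, not specific to any one system in the article).

import math

def cluster_size(i, j):
    """Number of cells per reuse cluster for shift parameters i, j."""
    return i * i + i * j + j * j

def reuse_distance(radius_km, n):
    """Distance between nearest co-channel cell centres: R * sqrt(3N)."""
    return radius_km * math.sqrt(3 * n)

print(cluster_size(2, 1))                # 7, the classic 7-cell reuse pattern
print(round(reuse_distance(2.0, 7), 2))  # 9.17 km between co-channel cells
```

A smaller cluster size reuses each frequency more often (higher capacity) but places co-channel cells closer together (more interference), which is the engineering trade-off behind cell splitting.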

Development of cellular systems

In the United States, interconnection of mobile transmitters and receivers with the public switched telephone network (PSTN) began in 1946, with the introduction of mobile telephone service (MTS) by the American Telephone & Telegraph Company (AT&T). In the U.S. MTS system, a user who wished to place a call from a mobile phone had to search manually for an unused channel before placing the call. The user then spoke with a mobile operator, who actually dialed the call over the PSTN. The radio connection was simplex—i.e., only one party could speak at a time, the call direction being controlled by a push-to-talk switch in the mobile handset. In 1964 AT&T introduced the improved mobile telephone service (IMTS). This provided full duplex operation, automatic dialing, and automatic channel searching. Initially 11 channels were provided, but in 1969 an additional 12 channels were made available. Since at most 23 channels were available for all users of the system within a given geographic area (such as the metropolitan area around a large city), the IMTS system faced a high demand for a very limited channel resource. Moreover, each base-station antenna had to be located on a tall structure and had to transmit at high power in order to provide coverage throughout the entire service area. Because of these high power requirements, all subscriber units in the IMTS system were motor-vehicle-based instruments that carried large storage batteries.

During this time a truly cellular system, known as the advanced mobile phone system, or AMPS, was developed primarily by AT&T and Motorola, Inc. AMPS was based on 666 paired voice channels, spaced every 30 kilohertz in the 800-megahertz region. The system employed an analog modulation approach—frequency modulation, or FM—and was designed from the outset to support subscriber units for use both in automobiles and by pedestrians. It was publicly introduced in Chicago in 1983 and was a success from the beginning. At the end of the first year of service, there were a total of 200,000 AMPS subscribers throughout the United States; five years later there were more than 2,000,000. In response to expected service shortages, the American cellular industry proposed several methods for increasing capacity without requiring additional spectrum allocations. One analog FM approach, proposed by Motorola in 1991, was known as narrowband AMPS, or NAMPS. In NAMPS systems each existing 30-kilohertz voice channel was split into three 10-kilohertz channels. Thus, in place of the 832 channels available in AMPS systems, the NAMPS system offered 2,496 channels. A second approach, developed by a committee of the Telecommunications Industry Association (TIA) in 1988, employed digital modulation and digital voice compression in conjunction with a time-division multiple access (TDMA) method; this also permitted three new voice channels in place of one AMPS channel. Finally, in 1994 there surfaced a third approach, developed originally by Qualcomm, Inc., but also adopted as a standard by the TIA. This third approach used a form of spread spectrum multiple access known as code-division multiple access (CDMA)—a technique that, like the original TIA approach, combined digital voice compression with digital modulation. (For more information on the techniques of information compression, signal modulation, and multiple access, see telecommunications.) 
The CDMA system offered 10 to 20 times the capacity of existing AMPS cellular techniques. All of these improved-capacity cellular systems were eventually deployed in the United States, but, since they were incompatible with one another, they supported rather than replaced the older AMPS standard.
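
The capacity arithmetic behind NAMPS can be checked directly: splitting each 30-kilohertz channel into three 10-kilohertz channels triples the channel count of the expanded 832-channel AMPS allocation. A minimal sketch of that calculation:

```python
# NAMPS capacity: each 30 kHz AMPS channel splits into three 10 kHz channels.
AMPS_CHANNELS = 832        # expanded AMPS allocation
AMPS_WIDTH_KHZ = 30
NAMPS_WIDTH_KHZ = 10

splits_per_channel = AMPS_WIDTH_KHZ // NAMPS_WIDTH_KHZ   # 3
namps_channels = AMPS_CHANNELS * splits_per_channel

print(splits_per_channel)  # 3
print(namps_channels)      # 2496
```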

Although AMPS was the first cellular system to be developed, a Japanese system was the first cellular system to be deployed, in 1979. Other systems that preceded AMPS in operation include the Nordic mobile telephone (NMT) system, deployed in 1981 in Denmark, Finland, Norway, and Sweden, and the total access communication system (TACS), deployed in the United Kingdom in 1983. A number of other cellular systems were developed and deployed in many more countries in the following years. All of them were incompatible with one another. In 1988 a group of government-owned public telephone bodies within the European Community announced the digital global system for mobile communications, referred to as GSM, the first such system that would permit any cellular user in one European country to operate in another European country with the same equipment. GSM soon became ubiquitous throughout Europe.

Get Unlimited Access
Try Britannica Premium for free and discover more.

The analog cellular systems of the 1980s are now referred to as “first-generation” (or 1G) systems, and the digital systems that began to appear in the late 1980s and early ’90s are known as the “second generation” (2G). Since the introduction of 2G cell phones, various enhancements have been made in order to provide data services and applications such as Internet browsing, two-way text messaging, still-image transmission, and mobile access by personal computers. One of the most successful applications of this kind is iMode, launched in 1999 in Japan by NTT DoCoMo, the mobile service division of the Nippon Telegraph and Telephone Corporation. Supporting Internet access to selected Web sites, interactive games, information retrieval, and text messaging, iMode became extremely successful; within three years of its introduction, more than 35 million users in Japan had iMode-enabled cell phones.

Beginning in 1985, a study group of the Geneva-based International Telecommunication Union (ITU) began to consider specifications for Future Public Land Mobile Telephone Systems (FPLMTS). These specifications eventually became the basis for a set of “third-generation” (3G) cellular standards, known collectively as IMT-2000. The 3G standards are based loosely on several attributes: the use of CDMA technology; the ability eventually to support three classes of users (vehicle-based, pedestrian, and fixed); and the ability to support voice, data, and multimedia services. The world’s first 3G service began in Japan in October 2001 with a system offered by NTT DoCoMo. Soon 3G service was being offered by a number of different carriers in Japan, South Korea, the United States, and other countries. Several new types of service compatible with the higher data rates of 3G systems have become commercially available, including full-motion video transmission, image transmission, location-aware services (through the use of global positioning system [GPS] technology), and high-rate data transmission.

Increasing demand for mobile telephones to handle even more data than 3G could support led to the development of 4G technology. In 2008 the ITU set forward a list of requirements for what it called IMT-Advanced, or 4G; these requirements included data rates of 1 gigabit per second for a stationary user and 100 megabits per second for a moving user. In 2010 the ITU decided that two technologies, LTE-Advanced (an evolution of Long Term Evolution, or LTE) and WirelessMAN-Advanced (also called WiMAX), met the requirements. The Swedish telephone company TeliaSonera introduced the first 4G LTE network in Stockholm in 2009.

Airborne cellular systems

In addition to the terrestrial cellular phone systems described above, there also exist several systems that permit the placement of telephone calls to the PSTN by passengers on commercial aircraft. These in-flight telephones, known by the generic name aeronautical public correspondence (APC) systems, are of two types: terrestrial-based, in which telephone calls are placed directly from an aircraft to an en route ground station; and satellite-based, in which telephone calls are relayed via satellite to a ground station. In the United States the North American terrestrial system (NATS) was introduced by GTE Corporation in 1984. Within a decade the system was installed in more than 1,700 aircraft, with ground stations in the United States providing coverage over most of the United States and southern Canada. A second-generation system, GTE Airfone GenStar, employed digital modulation. In Europe the European Telecommunications Standards Institute (ETSI) adopted a terrestrial APC system known as the terrestrial flight telephone system (TFTS) in 1992. This system employs digital modulation methods and operates in the 1,670–1,675- and 1,800–1,805-megahertz bands. In order to cover most of Europe, the ground stations must be spaced every 50 to 700 km (30 to 435 miles).

Satellite-based telephone communication

In order to augment the terrestrial and aircraft-based mobile telephone systems, several satellite-based systems have been put into operation. The goal of these systems is to permit ready connection to the PSTN from anywhere on Earth’s surface, especially in areas not presently covered by cellular telephone service. A form of satellite-based mobile communication has been available for some time in airborne cellular systems that utilize Inmarsat satellites. However, the Inmarsat satellites are geostationary, remaining approximately 35,000 km (22,000 miles) above a single location on Earth’s surface. Because of this high-altitude orbit, Earth-based communication transceivers require high transmitting power, large communication antennas, or both in order to communicate with the satellite. In addition, such a long communication path introduces a noticeable delay, on the order of a quarter-second, in two-way voice conversations. One viable alternative to geostationary satellites would be a larger system of satellites in low Earth orbit (LEO). Orbiting less than 1,600 km (1,000 miles) above Earth, LEO satellites are not geostationary and therefore cannot provide constant coverage of specific areas on Earth. Nevertheless, by allowing radio communications with a mobile instrument to be handed off between satellites, an entire constellation of satellites can assure that no call will be dropped simply because a single satellite has moved out of range.
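
The delay difference between geostationary and low-Earth-orbit links follows directly from path length and the speed of light. A rough sketch of one-way propagation delay, assuming a straight vertical path (real slant paths are longer):

```python
# Rough one-way propagation delay: altitude divided by the speed of light.
C_KM_PER_S = 299_792.458   # speed of light in km/s

def one_way_delay_ms(altitude_km):
    return altitude_km / C_KM_PER_S * 1000

geo_ms = one_way_delay_ms(35_000)  # geostationary altitude from the text
leo_ms = one_way_delay_ms(1_600)   # upper LEO altitude from the text

# Up plus down is ~233 ms for GEO, the "quarter-second" delay noted above.
print(f"GEO one-way: {geo_ms:.0f} ms")
print(f"LEO one-way: {leo_ms:.1f} ms")
```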

The first LEO system intended for commercial service was the Iridium system, designed by Motorola, Inc., and owned by Iridium LLC, a consortium made up of corporations and governments from around the world. The Iridium concept employed a constellation of 66 satellites orbiting in six planes around Earth. They were launched from May 1997 to May 1998, and commercial service began in November 1998. Each satellite, orbiting at an altitude of 778 kilometres (483 miles), had the capability to transmit 48 spot beams to Earth. Meanwhile, all the satellites were in communication with one another via 23-gigahertz radio “crosslinks,” thus permitting ready handoff between satellites when communicating with a fixed or mobile user on Earth. The crosslinks provided an uninterrupted communication path between the satellite serving a user at any particular instant and the satellite connecting the entire constellation with the gateway ground station to the PSTN. In this way, the 66 satellites provided continuous telephone communication service for subscriber units around the globe. However, the service failed to attract sufficient subscribers, and Iridium LLC went out of business in March 2000. Its assets were acquired by Iridium Satellite LLC, which continued to provide worldwide communication service to the U.S. Department of Defense as well as business and individual users.

Another LEO system, Globalstar, consisted of 48 satellites that were launched about the same time as the Iridium constellation. Globalstar began offering service in October 1999, though it too went into bankruptcy, in February 2002; a reorganized Globalstar LP continued to provide service thereafter.

David E. Borth

telecommunications network

telecommunications network, electronic system of links and switches, and the controls that govern their operation, that allows for data transfer and exchange among multiple users.

When several users of telecommunications media wish to communicate with one another, they must be organized into some form of network. In theory, each user can be given a direct point-to-point link to all the other users in what is known as a fully connected topology (similar to the connections employed in the earliest days of telephony), but in practice this technique is impractical and expensive—especially for a large and dispersed network. Furthermore, the method is inefficient, since most of the links will be idle at any given time. Modern telecommunications networks avoid these issues by establishing a linked network of switches, or nodes, such that each user is connected to one of the nodes. Each link in such a network is called a communications channel. Wire, fibre-optic cable, and radio waves may be used for different communications channels.
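
The impracticality of a fully connected topology is easy to quantify: n users need n(n-1)/2 direct links, whereas connecting each user to a single switching node needs only n. A minimal illustration:

```python
def fully_connected_links(n):
    """Direct point-to-point links so every user reaches every other user."""
    return n * (n - 1) // 2

def switched_links(n):
    """Links needed when each user connects to one central switching node."""
    return n

for n in (10, 100, 10_000):
    print(n, fully_connected_links(n), switched_links(n))
# 10,000 users: 49,995,000 direct links versus 10,000 links to a switch
```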

Types of networks

Switched communications network

A switched communications network transfers data from source to destination through a series of network nodes. Switching can be done in one of two ways. In a circuit-switched network, a dedicated physical path is established through the network and is held for as long as communication is necessary. An example of this type of network is the traditional (analog) telephone system. A packet-switched network, on the other hand, routes digital data in small pieces called packets, each of which proceeds independently through the network. In a process called store-and-forward, each packet is temporarily stored at each intermediate node, then forwarded when the next link becomes available. In a connection-oriented transmission scheme, each packet takes the same route through the network, and thus all packets usually arrive at the destination in the order in which they were sent. Conversely, each packet may take a different path through the network in a connectionless or datagram scheme. Since datagrams may not arrive at the destination in the order in which they were sent, they are numbered so that they can be properly reassembled. The latter is the method that is used for transmitting data through the Internet.
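
The numbering-and-reassembly step for datagrams can be sketched in a few lines: packets carry sequence numbers, may arrive in any order after taking different routes, and are sorted back into order before the payload is rejoined. This is illustrative only; real protocols such as TCP must also handle loss and duplication.

```python
import random

def packetize(data, size):
    """Split data into (sequence_number, chunk) datagrams."""
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(packets):
    """Sort by sequence number and rejoin the payload."""
    return "".join(chunk for _, chunk in sorted(packets))

message = "packets may arrive out of order"
packets = packetize(message, 5)
random.shuffle(packets)            # simulate independent routes
assert reassemble(packets) == message
```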

Broadcast network

A broadcast network avoids the complex routing procedures of a switched network by ensuring that each node’s transmissions are received by all other nodes in the network. Therefore, a broadcast network has only a single communications channel. A wired local area network (LAN), for example, may be set up as a broadcast network, with one user connected to each node and the nodes typically arranged in a bus, ring, or star topology, as shown in the figure. Nodes connected together in a wireless LAN may broadcast via radio or optical links. On a larger scale, many satellite radio systems are broadcast networks, since each Earth station within the system can typically hear all messages relayed by a satellite.

Network access

Since all nodes can hear each transmission in a broadcast network, a procedure must be established for allocating a communications channel to the node or nodes that have packets to transmit and at the same time preventing destructive interference from collisions (simultaneous transmissions). This type of communication, called multiple access, can be established either by scheduling (a technique in which nodes take turns transmitting in an orderly fashion) or by random access to the channel.

Scheduled access

In a scheduling method known as time-division multiple access (TDMA), a time slot is assigned in turn to each node, which uses the slot if it has something to transmit. If some nodes are much busier than others, then TDMA can be inefficient, since no data are passed during time slots allocated to silent nodes. In this case a reservation system may be implemented, in which there are fewer time slots than nodes and a node reserves a slot only when it is needed for transmission.
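
A round-robin TDMA frame is simple to model: each node owns one slot per frame and transmits only if it has data queued, which makes the wasted-slot problem with silent nodes visible. A toy model, not a real medium-access implementation:

```python
# Toy TDMA frame: one slot per node; a silent node's slot goes idle.
def tdma_frame(queues):
    """queues maps node -> list of pending packets; returns slot activity."""
    frame = []
    for node, queue in queues.items():
        if queue:
            frame.append((node, queue.pop(0)))   # node uses its slot
        else:
            frame.append((node, None))           # slot is wasted
    return frame

queues = {"A": ["a1", "a2"], "B": [], "C": ["c1"]}
print(tdma_frame(queues))   # [('A', 'a1'), ('B', None), ('C', 'c1')]
```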

A variation of TDMA is the process of polling, in which a central controller asks each node in turn if it requires channel access, and a node transmits a packet or message only in response to its poll. “Smart” controllers can respond dynamically to nodes that suddenly become very busy by polling them more often for transmissions. A decentralized form of polling is called token passing. In this system a special “token” packet is passed from node to node. Only the node with the token is authorized to transmit; all others are listeners.
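
Token passing can be sketched as a loop in which the token visits each node in turn and only the current holder may transmit. A simplified model (real token-ring protocols also handle lost tokens and priorities):

```python
def token_ring(nodes, pending, rounds=2):
    """Pass a token around the ring; only the holder sends one packet."""
    sent = []
    for _ in range(rounds):
        for node in nodes:                    # token moves node to node
            queue = pending.get(node, [])
            if queue:
                sent.append((node, queue.pop(0)))
    return sent

pending = {"A": ["a1"], "B": ["b1", "b2"], "C": []}
print(token_ring(["A", "B", "C"], pending))
# [('A', 'a1'), ('B', 'b1'), ('B', 'b2')]
```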


Random access

Scheduled access schemes have several disadvantages, including the large overhead required for the reservation, polling, and token passing processes and the possibility of long idle periods when only a few nodes are transmitting. This can lead to extensive delays in routing information, especially when heavy traffic occurs in different parts of the network at different times—a characteristic of many practical communications networks. Random-access algorithms were designed specifically to give nodes with something to transmit quicker access to the channel. Although the channel is vulnerable to packet collisions under random access, various procedures have been developed to reduce this probability.

Carrier sense multiple access

One random-access method that reduces the chance of collisions is called carrier sense multiple access (CSMA). In this method a node listens to the channel first and delays transmitting when it senses that the channel is busy. Because of delays in channel propagation and node processing, it is possible that a node will erroneously sense a busy channel to be idle and will cause a collision if it transmits. In CSMA, however, the transmitting nodes will recognize that a collision has occurred: the respective destinations will not acknowledge receipt of a valid packet. Each node then waits a random time before sending again (hopefully preventing a second collision). This method is commonly employed in packet networks with radio links, such as the system used by amateur radio operators.
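
The random-backoff step can be sketched with a toy collision model: two nodes that collide each draw a random backoff slot and retry until their draws differ. This is a simplified illustration with a fixed contention window; real CSMA schemes typically grow the window after each collision.

```python
import random

def rounds_until_clear(n_nodes=2, window=8, max_attempts=10, seed=1):
    """Collided nodes draw random slots; count rounds until all differ."""
    rng = random.Random(seed)
    for attempt in range(1, max_attempts + 1):
        slots = [rng.randrange(window) for _ in range(n_nodes)]
        if len(set(slots)) == n_nodes:   # all distinct: no repeat collision
            return attempt
    return None                          # still colliding after max_attempts

print(rounds_until_clear())
```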

It is important to minimize the time that a communications channel spends in a collision state, since this effectively shuts down the channel. If a node can simultaneously transmit and receive (usually possible on wire and fibre-optic links but not on radio links), then it can stop sending immediately upon detecting the beginning of a collision, thus moving the channel out of the collision state as soon as possible. This process is called carrier sense multiple access with collision detection (CSMA/CD), a feature of the popular wired Ethernet. (For more information on Ethernet, see computer: Local area networks.)

Spread-spectrum multiple access

Since collisions are so detrimental to network performance, methods have been developed to allow multiple transmissions on a broadcast network without necessarily causing mutual packet destruction. One of the most successful is called spread-spectrum multiple access (SSMA). In SSMA simultaneous transmissions will cause only a slight increase in bit error probability for each user if the channel is not too heavily loaded. Error-free packets can be obtained by using an appropriate control code. Disadvantages of SSMA include wider signal bandwidth and greater equipment cost and complexity compared with conventional CSMA.

Open systems interconnection

Different communication requirements necessitate different network solutions, and these different network protocols can create significant problems of compatibility when networks are interconnected with one another. In order to overcome some of these interconnection problems, the open systems interconnection (OSI) model was approved in 1983 as an international standard for communications architecture by the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT). The OSI model, as shown in the figure, consists of seven layers, each of which is selected to perform a well-defined function at a different level of abstraction. The bottom three layers provide for the timely and correct transfer of data, and the top four ensure that arriving data are recognizable and useful. While all seven layers are usually necessary at each user location, only the bottom three are normally employed at a network node, since nodes are concerned only with timely and correct data transfer from point to point.

Data recognition and use

The application layer is difficult to generalize, since its content is specific to each user. For example, distributed databases used in the banking and airline industries require several access and security issues to be solved at this level. Network transparency (making the physical distribution of resources irrelevant to the human user) also is handled at this level. The presentation layer, on the other hand, performs functions that are requested sufficiently often that a general solution is warranted. These functions are often placed in a software library that is accessible by several users running different applications. Examples are text conversion, data compression, and data encryption.

User interface with the network is performed by the session layer, which handles the process of connecting to another computer, verifying user authenticity, and establishing a reliable communication process. This layer also ensures that files which can be altered by several network users are kept in order. Data from the session layer are accepted by the transport layer, which separates the data stream into smaller units, if necessary, and ensures that all arrive correctly at the destination. If fast throughput is needed, the transport layer may establish several simultaneous paths in the network and send different parts of the data over each path. Conversely, if low cost is a requirement, then the layer may time-multiplex several users’ data over one path through the network. Flow control is also regulated at this level, ensuring that data from a fast source will not overrun a slow destination.

Data transfer

The network layer breaks data into packets and determines how the packets are routed within the network, which nodes (if any) will check packets for errors along the route, and whether congestion control is needed in a heavily loaded network. The data-link layer transforms a raw communications channel into a line that appears essentially free of transmission errors to the network layer. This is done by breaking data up into data frames, transmitting them sequentially, and processing acknowledgment frames sent back to the source by the destination. This layer also establishes frame boundaries and implements recovery procedures from lost, damaged, or duplicated frames. The physical layer is the transmission medium itself, along with various electric and mechanical specifications.
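
The data-link layer's frame-and-acknowledge cycle can be sketched as stop-and-wait: each frame carries a sequence number and a checksum, and the receiver acknowledges only intact frames. This is illustrative only, with a deliberately trivial checksum; real link protocols use CRCs, timers, and sliding windows.

```python
def checksum(payload):
    """Trivial additive checksum, for the sketch only."""
    return sum(payload.encode()) % 256

def make_frame(seq, payload):
    """Attach a sequence number and checksum to establish frame contents."""
    return {"seq": seq, "payload": payload, "check": checksum(payload)}

def receive(frame):
    """ACK an intact frame; NAK one damaged in transit."""
    if checksum(frame["payload"]) == frame["check"]:
        return ("ACK", frame["seq"])
    return ("NAK", frame["seq"])

frame = make_frame(0, "hello")
print(receive(frame))              # ('ACK', 0)
frame["payload"] = "hellx"         # simulate a transmission error
print(receive(frame))              # ('NAK', 0)
```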

Robert K. Morrow

VoIP

communications
Also known as: Internet telephone service, voice over IP, voice over Internet protocol
In full:
Voice over Internet Protocol
Also called:
IP telephony
Related Topics:
Skype
protocol

VoIP, communications technology for carrying voice telephone traffic over a data network such as the Internet. VoIP uses the Internet Protocol (IP)—one half of the Transmission Control Protocol/Internet Protocol (TCP/IP), a global addressing system for sending and receiving packets of data over the Internet.

VoIP works by converting sound into a digital signal, which is then sent over a data network such as the Internet. The conversion is done by a device, such as a personal computer (PC) or special VoIP phone, that has a high-speed, or broadband, Internet connection. The digital signal is routed through the network to its destination, where a second VoIP device converts the signal back to sound. Because of the digital nature of VoIP, call quality over a good broadband connection can equal or exceed that of a standard telephone. Another advantage is that VoIP frequently costs less than standard telephone and long-distance service.
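
The sound-to-digital conversion at the heart of VoIP is, at its simplest, sampling and quantization. A toy sketch of 8-bit quantization of a tone at the 8,000-samples-per-second telephone rate (real VoIP codecs such as G.711 add companding, and the samples are then grouped into packets):

```python
import math

def quantize_8bit(sample):
    """Map a sample in [-1.0, 1.0] to an unsigned 8-bit level (0-255)."""
    return round((sample + 1.0) / 2.0 * 255)

RATE = 8000        # samples per second (telephone quality)
TONE_HZ = 440

samples = [math.sin(2 * math.pi * TONE_HZ * n / RATE) for n in range(8)]
digital = [quantize_8bit(s) for s in samples]
print(digital)     # eight 8-bit levels, ready to be packetized
```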

Initially, there were problems with VoIP, not the least of which was how VoIP connected to 911 emergency systems. Because of this, the U.S. Federal Communications Commission (FCC) required VoIP providers to provide connections to 911, although these systems sometimes worked differently from conventional 911 systems. Another, more persistent, problem that sometimes arises is that VoIP systems will often not work during a power outage.

Several companies provide VoIP services that allow people to use their PCs or a special phone with the service. Larger organizations sometimes handle their own VoIP traffic.

This article was most recently revised and updated by Erik Gregersen.

Alexander Graham Bell

American inventor
Quick Facts
Born:
March 3, 1847, Edinburgh, Scotland
Died:
August 2, 1922, Beinn Bhreagh, Cape Breton Island, Nova Scotia, Canada (aged 75)
Awards And Honors:
Hall of Fame for Great Americans (1950)

Alexander Graham Bell (born March 3, 1847, Edinburgh, Scotland—died August 2, 1922, Beinn Bhreagh, Cape Breton Island, Nova Scotia, Canada) was a Scottish-born American inventor, scientist, and teacher of the deaf whose foremost accomplishments were the invention of the telephone (1876) and the refinement of the phonograph (1886).

Alexander (“Graham” was not added until he was 11) was born to Alexander Melville Bell and Eliza Grace Symonds. His mother was almost deaf, and his father taught elocution to the deaf, influencing Alexander’s later career choice as teacher of the deaf. At age 11 he entered the Royal High School at Edinburgh, but he did not enjoy the compulsory curriculum, and he left school at age 15 without graduating. In 1865 the family moved to London. Alexander passed the entrance examinations for University College London in June 1868 and matriculated there in the autumn. However, he did not complete his studies, because in 1870 the Bell family moved again, this time immigrating to Canada after the deaths of Bell’s younger brother Edward in 1867 and older brother Melville in 1870, both of tuberculosis. The family settled in Brantford, Ontario, but in April 1871 Alexander moved to Boston, where he taught at the Boston School for Deaf Mutes. He also taught at the Clarke School for the Deaf in Northampton, Massachusetts, and at the American School for the Deaf in Hartford, Connecticut.

One of Bell’s students was Mabel Hubbard, daughter of Gardiner Greene Hubbard, a founder of the Clarke School. Mabel had become deaf at age five as a result of a near-fatal bout of scarlet fever. Bell began working with her in 1873, when she was 15 years old. Despite a 10-year age difference, they fell in love and were married on July 11, 1877. They had four children: Elsie (1878–1964), Marian (1880–1962), and two sons who died in infancy.


While pursuing his teaching profession, Bell also began researching methods to transmit several telegraph messages simultaneously over a single wire—a major focus of telegraph innovation at the time and one that ultimately led to Bell’s invention of the telephone. In 1868 Joseph Stearns had invented the duplex, a system that transmitted two messages simultaneously over a single wire. Western Union Telegraph Company, the dominant firm in the industry, acquired the rights to Stearns’s duplex and hired the noted inventor Thomas Edison to devise as many multiple-transmission methods as possible in order to block competitors from using them. Edison’s work culminated in the quadruplex, a system for sending four simultaneous telegraph messages over a single wire. Inventors then sought methods that could send more than four; some, including Bell and his great rival Elisha Gray, developed designs capable of subdividing a telegraph line into 10 or more channels. These so-called harmonic telegraphs used reeds or tuning forks that responded to specific acoustic frequencies. They worked well in the laboratory but proved unreliable in service.

A group of investors led by Gardiner Hubbard wanted to establish a federally chartered telegraph company to compete with Western Union by contracting with the Post Office to send low-cost telegrams. Hubbard saw great promise in the harmonic telegraph and backed Bell’s experiments. Bell, however, was more interested in transmitting the human voice. Finally, he and Hubbard worked out an agreement that Bell would devote most of his time to the harmonic telegraph but would continue developing his telephone concept.

From harmonic telegraphs transmitting musical tones, it was a short conceptual step for both Bell and Gray to transmit the human voice. Bell filed a patent describing his method of transmitting sounds on February 14, 1876, just hours before Gray filed a caveat (a statement of concept) on a similar method. On March 7, 1876, the Patent Office awarded Bell what is said to be one of the most valuable patents in history. It is most likely that both Bell and Gray independently devised their telephone designs as an outgrowth of their work on harmonic telegraphy. However, the question of priority of invention between the two has been controversial from the very beginning.

Despite having the patent, Bell did not have a fully functioning instrument. He first produced intelligible speech on March 10, 1876, when he summoned his laboratory assistant, Thomas A. Watson, with words that Bell transcribed in his lab notes as “Mr. Watson—come here—I want to see you.” Over the next few months, Bell continued to refine his instrument to make it suitable for public exhibition. In June he demonstrated his telephone to the judges of the Philadelphia Centennial Exhibition, a test witnessed by Brazil’s Emperor Pedro II and the celebrated Scottish physicist Sir William Thomson. In August of that year, he was on the receiving end of the first one-way long-distance call, transmitted from Brantford to nearby Paris, Ontario, over a telegraph wire.


Gardiner Hubbard organized a group that established the Bell Telephone Company in July 1877 to commercialize Bell’s telephone. Bell was the company’s technical adviser until he lost interest in telephony in the early 1880s. Although his invention rendered him independently wealthy, he sold off most of his stock holdings in the company early and did not profit as much as he might have had he retained his shares. Thus, by the mid-1880s his role in the telephone industry was marginal.

By that time, Bell had developed a growing interest in the technology of sound recording and playback. Although Edison had invented the phonograph in 1877, he soon turned his attention to other technologies, especially electric power and lighting, and his machine, which recorded and reproduced sound on a rotating cylinder wrapped in tinfoil, remained an unreliable and cumbersome device. In 1880 the French government awarded Bell the Volta Prize, given for achievement in electrical science. Bell used the prize money to set up his Volta Laboratory, an institution devoted to studying deafness and improving the lives of the deaf, in Washington, D.C. There he also devoted himself to improving the phonograph. By 1885 Bell and his colleagues (his cousin Chichester A. Bell and the inventor Charles Sumner Tainter) had a design fit for commercial use that featured a removable cardboard cylinder coated with mineral wax. They called their device the Graphophone and applied for patents, which were granted in 1886. The group formed the Volta Graphophone Company to produce their invention. Then in 1887 they sold their patents to the American Graphophone Company, which later evolved into the Columbia Phonograph Company. Bell used his proceeds from the sale to endow the Volta Laboratory.

Bell undertook two other noteworthy research projects at the Volta Laboratory. In 1880 he began research on using light as a means to transmit sound. In 1873 British scientist Willoughby Smith discovered that the element selenium, a semiconductor, varied its electrical resistance with the intensity of incident light. Bell sought to use this property to develop the photophone, an invention he regarded as at least equal to his telephone. He was able to demonstrate that the photophone was technologically feasible, but it did not develop into a commercially viable product. Nevertheless, it contributed to research into the photovoltaic effect that had practical applications later in the 20th century.

Bell’s other major undertaking was the development of an electrical bullet probe, an early version of the metal detector, for surgical use. The origin of this effort was the shooting of U.S. President James A. Garfield in July 1881. A bullet lodged in the president’s back, and doctors were unable to locate it through physical probing. Bell decided that a promising approach was to use an induction balance, a by-product of his research on canceling out electrical interference on telephone wires. Bell determined that a properly configured induction balance would emit a tone when a metal object was brought into proximity with it. At the end of July, he began searching for Garfield’s bullet, but to no avail. Despite Garfield’s death in September, Bell later successfully demonstrated the probe to a group of doctors. Surgeons adopted it, and it was credited with saving lives during the Boer War (1899–1902) and World War I (1914–18).

In September 1885 the Bell family vacationed in Nova Scotia, Canada, and immediately fell in love with the climate and landscape. The following year, Bell bought 50 acres of land near the village of Baddeck on Cape Breton Island and began constructing an estate he called Beinn Bhreagh, Scots Gaelic for “Beautiful Mountain.” The Scottish-born inventor had been an American citizen since 1882, but the Canadian estate became the family’s summer retreat and later permanent home.

During the 1890s Bell shifted his attention to heavier-than-air flight. Starting in 1891, inspired by the research of American scientist Samuel Pierpont Langley, he experimented with wing shapes and propeller blade designs. He continued his experiments even after Wilbur and Orville Wright made the first successful powered, controlled flight in 1903. In 1907 Bell founded the Aerial Experiment Association, which made significant progress in aircraft design and control and contributed to the career of pioneer aviator Glenn Hammond Curtiss.

Throughout his life, Bell sought to foster the advance of scientific knowledge. He supported the journal Science, which later became the official publication of the American Association for the Advancement of Science. He succeeded his father-in-law, Gardiner Hubbard, as president of the National Geographic Society (1898–1903). In 1903 his son-in-law, Gilbert H. Grosvenor, became editor in chief of the National Geographic Magazine, and Bell encouraged Grosvenor to make the magazine a more popular publication through more photographs and fewer scholarly articles. Bell died at his Nova Scotia estate, where he was buried.

David Hochfelder

cell phone

communications
Also known as: cellular phone, cellular telephone, mobile cellular phone, mobile phone

cell phone, wireless telephone that permits telecommunication within a defined area that may include hundreds of square miles, using radio waves in the 800–900 megahertz (MHz) band. To implement a cell-phone system, a geographic area is broken into smaller areas, or cells, usually mapped as uniform hexagons but in fact overlapping and irregularly shaped. Each cell is equipped with a low-powered radio transmitter and receiver that permit propagation of signals between cell-phone users. See also mobile telephone and smartphone.

The Editors of Encyclopaedia Britannica. This article was most recently revised and updated by Tara Ramanathan.

telecommunication

Also known as: electronic communication

telecommunication, science and practice of transmitting information by electromagnetic means. Modern telecommunication centres on the problems involved in transmitting large volumes of information over long distances without damaging loss due to noise and interference. The basic components of a modern digital telecommunications system must be capable of transmitting voice, data, radio, and television signals. Digital transmission is employed in order to achieve high reliability and because the cost of digital switching systems is much lower than the cost of analog systems. In order to use digital transmission, however, the analog signals that make up most voice, radio, and television communication must be subjected to a process of analog-to-digital conversion. (In data transmission this step is bypassed because the signals are already in digital form; most television, radio, and voice communication, however, use the analog system and must be digitized.) In many cases, the digitized signal is passed through a source encoder, which employs a number of formulas to reduce redundant binary information. After source encoding, the digitized signal is processed in a channel encoder, which introduces redundant information that allows errors to be detected and corrected. The encoded signal is made suitable for transmission by modulation onto a carrier wave and may be made part of a larger signal in a process known as multiplexing. The multiplexed signal is then sent into a multiple-access transmission channel. After transmission, the above process is reversed at the receiving end, and the information is extracted.

This article describes the components of a digital telecommunications system as outlined above. For details on specific applications that utilize telecommunications systems, see the articles telephone, telegraph, fax, radio, and television. Transmission over electric wire, radio wave, and optical fibre is discussed in telecommunications media. For an overview of the types of networks used in information transmission, see telecommunications network.

Analog-to-digital conversion

In transmission of speech, audio, or video information, the object is high fidelity—that is, the best possible reproduction of the original message without the degradations imposed by signal distortion and noise. The basis of relatively noise-free and distortion-free telecommunication is the binary signal. The simplest possible signal of any kind that can be employed to transmit messages, the binary signal consists of only two possible values. These values are represented by the binary digits, or bits, 1 and 0. Unless the noise and distortion picked up during transmission are great enough to change the binary signal from one value to another, the correct value can be determined by the receiver so that perfect reception can occur.

If the information to be transmitted is already in binary form (as in data communication), there is no need for the signal to be digitally encoded. But ordinary voice communications taking place by way of a telephone are not in binary form; neither is much of the information gathered for transmission from a space probe, nor are the television or radio signals gathered for transmission through a satellite link. Such signals, which continually vary among a range of values, are said to be analog, and in digital communications systems analog signals must be converted to digital form. The process of making this signal conversion is called analog-to-digital (A/D) conversion.

Sampling

Analog-to-digital conversion begins with sampling, or measuring the amplitude of the analog waveform at equally spaced discrete instants of time. The fact that samples of a continually varying wave may be used to represent that wave relies on the assumption that the wave is constrained in its rate of variation. Because a communications signal is actually a complex wave—essentially the sum of a number of component sine waves, all of which have their own precise amplitudes and phases—the rate of variation of the complex wave can be measured by the frequencies of oscillation of all its components. The difference between the maximum rate of oscillation (or highest frequency) and the minimum rate of oscillation (or lowest frequency) of the sine waves making up the signal is known as the bandwidth (B) of the signal. Bandwidth thus represents the maximum frequency range occupied by a signal. In the case of a voice signal having a minimum frequency of 300 hertz and a maximum frequency of 3,300 hertz, the bandwidth is 3,000 hertz, or 3 kilohertz. Audio signals generally occupy about 20 kilohertz of bandwidth, and standard video signals occupy approximately 6 million hertz, or 6 megahertz.

The concept of bandwidth is central to all telecommunication. In analog-to-digital conversion, there is a fundamental theorem that the analog signal may be uniquely represented by discrete samples spaced no more than one over twice the bandwidth (1/2B) apart. This theorem is commonly referred to as the sampling theorem, and the sampling interval (1/2B seconds) is referred to as the Nyquist interval (after the Swedish-born American electrical engineer Harry Nyquist). As an example of the Nyquist interval, in past telephone practice the bandwidth, commonly fixed at 3,000 hertz, was sampled at least every 1/6,000 second. In current practice 8,000 samples are taken per second, in order to increase the frequency range and the fidelity of the speech representation.
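The Nyquist arithmetic above is easy to check in a few lines of Python. This is a sketch; the 1 kHz test tone and the eight-sample window are illustrative choices, not values from the text.

```python
import math

def nyquist_rate(bandwidth_hz):
    """Minimum sampling rate (samples/s) per the sampling theorem: 2B."""
    return 2 * bandwidth_hz

# A voice signal occupying 300-3,300 Hz has a 3,000 Hz bandwidth,
# so samples must be spaced no more than 1/6,000 second apart.
voice_bandwidth = 3300 - 300
print(nyquist_rate(voice_bandwidth))  # 6000 samples per second minimum

# Sample one millisecond of a 1 kHz tone at the modern telephone
# rate of 8,000 samples per second.
fs = 8000
samples = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(8)]
print(len(samples))  # 8 samples cover 1 ms of signal
```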

Quantization

In order for a sampled signal to be stored or transmitted in digital form, each sampled amplitude must be converted to one of a finite number of possible values, or levels. For ease in conversion to binary form, the number of levels is usually a power of 2—that is, 8, 16, 32, 64, 128, 256, and so on, depending on the degree of precision required. In digital transmission of voice, 256 levels are commonly used because tests have shown that this provides adequate fidelity for the average telephone listener.

The input to the quantizer is a sequence of sampled amplitudes for which there are an infinite number of possible values. The output of the quantizer, on the other hand, must be restricted to a finite number of levels. Assigning infinitely variable amplitudes to a limited number of levels inevitably introduces inaccuracy, and inaccuracy results in a corresponding amount of signal distortion. (For this reason quantization is often called a “lossy” system.) The degree of inaccuracy depends on the number of output levels used by the quantizer. More quantization levels increase the accuracy of the representation, but they also increase the storage capacity or transmission speed required. Better performance with the same number of output levels can be achieved by judicious placement of the output levels and the amplitude thresholds needed for assigning those levels. This placement in turn depends on the nature of the waveform that is being quantized. Generally, an optimal quantizer places more levels in amplitude ranges where the signal is more likely to occur and fewer levels where the signal is less likely. This technique is known as nonlinear quantization. Nonlinear quantization can also be accomplished by passing the signal through a compressor circuit, which amplifies the signal’s weak components and attenuates its strong components. The compressed signal, now occupying a narrower dynamic range, can be quantized with a uniform, or linear, spacing of thresholds and output levels. In the case of the telephone signal, the compressed signal is uniformly quantized at 256 levels, each level being represented by a sequence of eight bits. At the receiving end, the reconstituted signal is expanded to its original range of amplitudes. This sequence of compression and expansion, known as companding, can yield an effective dynamic range equivalent to 13 bits.
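The companding sequence described above can be sketched as follows. The text does not name a specific compressor formula, so the μ-law characteristic used in North American telephony (with μ = 255) is assumed here for illustration.

```python
import math

MU = 255  # mu-law parameter (an assumed choice; the text names no formula)

def compress(x):
    """Amplify weak components, attenuate strong ones (mu-law curve)."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Inverse of compress: restore the original range of amplitudes."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def quantize(x, levels=256):
    """Uniformly quantize a compressed amplitude in [-1, 1] to one level."""
    step = 2.0 / levels
    index = min(int((x + 1.0) / step), levels - 1)
    return -1.0 + (index + 0.5) * step

# Weak and strong signals alike keep a small *relative* error, which is
# the point of nonlinear quantization.
for amp in (0.01, 0.5):
    reconstructed = expand(quantize(compress(amp)))
    print(round(abs(reconstructed - amp) / amp, 4))  # relative error
```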

Bit mapping

In the next step in the digitization process, the output of the quantizer is mapped into a binary sequence; an encoding table assigns each quantization level its own binary code word. It is apparent that 8 levels require three binary digits, or bits; 16 levels require four bits; and 256 levels require eight bits. In general, 2^n levels require n bits.

In the case of 256-level voice quantization, where each level is represented by a sequence of 8 bits, the overall rate of transmission is 8,000 samples per second times 8 bits per sample, or 64,000 bits per second. All 8 bits must be transmitted before the next sample appears. In order to use more levels, more binary samples would have to be squeezed into the allotted time slot between successive signal samples. The circuitry would become more costly, and the bandwidth of the system would become correspondingly greater. Some transmission channels (telephone wires are one example) may not have the bandwidth capability required for the increased number of binary samples and would distort the digital signals. Thus, although the accuracy required determines the number of quantization levels used, the resultant binary sequence must still be transmitted within the bandwidth tolerance allowed.
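The bit-rate arithmetic in this paragraph can be verified directly:

```python
import math

def bits_per_sample(levels):
    """n bits represent 2**n quantization levels."""
    return math.ceil(math.log2(levels))

assert bits_per_sample(8) == 3
assert bits_per_sample(16) == 4
assert bits_per_sample(256) == 8

# Telephone voice: 8,000 samples/s x 8 bits/sample = 64,000 bits/s.
rate = 8000 * bits_per_sample(256)
print(rate)  # 64000
```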

Source encoding

As is pointed out in analog-to-digital conversion, any available telecommunications medium has a limited capacity for data transmission. This capacity is commonly measured by the parameter called bandwidth. Since the bandwidth of a signal increases with the number of bits to be transmitted each second, an important function of a digital communications system is to represent the digitized signal by as few bits as possible—that is, to reduce redundancy. Redundancy reduction is accomplished by a source encoder, which often operates in conjunction with the analog-to-digital converter.

Huffman codes

In general, fewer bits on the average will be needed if the source encoder takes into account the probabilities at which different quantization levels are likely to occur. A simple example will illustrate this concept. Assume a quantizing scale of only four levels: 1, 2, 3, and 4. Following the usual standard of binary encoding, each of the four levels would be mapped by a two-bit code word. But assume also that level 1 occurs 50 percent of the time, that level 2 occurs 25 percent of the time, and that levels 3 and 4 each occur 12.5 percent of the time. A more efficient mapping can be achieved with variable-length code words, assigning the shortest word to the most frequent level. Such an encoding rule would use only one bit 50 percent of the time, two bits 25 percent of the time, and three bits 25 percent of the time. On average it would use 1.75 bits per sample rather than the 2 bits per sample used in the standard code.

Thus, for any given set of levels and associated probabilities, there is an optimal encoding rule that minimizes the number of bits needed to represent the source. This encoding rule is known as the Huffman code, after the American electrical engineer David A. Huffman, who created it in 1952. Even more efficient encoding is possible by grouping sequences of levels together and applying the Huffman code to these sequences.
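The four-level example above can be made concrete. The code words 0, 10, 110, and 111 used here are one valid Huffman assignment for the stated probabilities, chosen for illustration:

```python
# Code words for the article's example: level 1 occurs 50 percent of
# the time, level 2 occurs 25 percent, levels 3 and 4 each 12.5 percent.
code = {1: "0", 2: "10", 3: "110", 4: "111"}
probs = {1: 0.5, 2: 0.25, 3: 0.125, 4: 0.125}

avg_bits = sum(probs[lvl] * len(word) for lvl, word in code.items())
print(avg_bits)  # 1.75, versus 2.0 bits for fixed two-bit code words

# The code is prefix-free, so a bit stream decodes unambiguously.
def decode(bits):
    inverse = {w: lvl for lvl, w in code.items()}
    out, word = [], ""
    for b in bits:
        word += b
        if word in inverse:
            out.append(inverse[word])
            word = ""
    return out

print(decode("0101100"))  # [1, 2, 3, 1]
```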

The Lempel-Ziv algorithm

The design and performance of the Huffman code depends on the designers’ knowing the probabilities of different levels and sequences of levels. In many cases, however, it is desirable to have an encoding system that can adapt to the unknown probabilities of a source. A very efficient technique for encoding sources without needing to know their probable occurrence was developed in the 1970s by the Israelis Abraham Lempel and Jacob Ziv. The Lempel-Ziv algorithm works by constructing a codebook out of sequences encountered previously. For example, the codebook might begin with a set of four 12-bit code words representing four possible signal levels. If two of those levels arrived in sequence, the encoder, rather than transmitting two full code words (of length 24), would transmit the code word for the first level (12 bits) and then an extra two bits to indicate the second level. The encoder would then construct a new code word of 12 bits for the sequence of two levels, so that even fewer bits would be used thereafter to represent that particular combination of levels. The encoder would continue to read quantization levels until another sequence arrived for which there was no code word. In this case the sequence without the last level would be in the codebook, but not the whole sequence of levels. Again, the encoder would transmit the code word for the initial sequence of levels and then an extra two bits for the last level. The process would continue until all 4,096 possible 12-bit combinations had been assigned as code words.

In practice, standard algorithms for compressing binary files use code words of 12 bits and transmit 1 extra bit to indicate a new sequence. Using such a code, the Lempel-Ziv algorithm can compress transmissions of English text by about 55 percent, whereas the Huffman code compresses the transmission by only 43 percent.
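A minimal sketch of the dictionary-building idea follows, using the LZW variant of the Lempel-Ziv algorithm on a character string rather than on quantization levels. This is an illustrative simplification, not the exact 12-bit codebook scheme described above:

```python
def lzw_compress(data):
    """Build a codebook of previously seen sequences on the fly (LZW)."""
    # Seed the codebook with single symbols, much as the article's
    # example seeds it with the four possible signal levels.
    codebook = {chr(i): i for i in range(256)}
    next_code = 256
    seq, out = "", []
    for ch in data:
        if seq + ch in codebook:
            seq += ch                       # extend the known sequence
        else:
            out.append(codebook[seq])       # emit code for known prefix
            codebook[seq + ch] = next_code  # learn the longer sequence
            next_code += 1
            seq = ch
    if seq:
        out.append(codebook[seq])
    return out

codes = lzw_compress("ababababab")
print(len(codes))  # 6 codes for 10 symbols: repeats collapse
```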

Run-length codes

Certain signal sources are known to produce “runs,” or long sequences of only 1s or 0s. In these cases it is more efficient to transmit a code for the length of the run rather than all the bits that represent the run itself. One source of long runs is the fax machine. A fax machine works by scanning a document and mapping very small areas of the document into either a black pixel (picture element) or a white pixel. The document is divided into a number of lines (approximately 100 per inch), with 1,728 pixels in each line (at standard resolution). If all black pixels were mapped into 1s and all white pixels into 0s, then the scanned document would be represented by 1,857,600 bits (for a standard American 11-inch page). At older modem transmission speeds of 4,800 bits per second, it would take 6 minutes 27 seconds to send a single page. If, however, the sequence of 0s and 1s were compressed using a run-length code, significant reductions in transmission time would be made.

The code for fax machines is actually a combination of a run-length code and a Huffman code; it can be explained as follows: A run-length code maps run lengths into code words, and the codebook is partitioned into two parts. The first part contains symbols for runs of lengths that are a multiple of 64; the second part is made up of runs from 0 to 63 pixels. Any run length would then be represented as a multiple of 64 plus some remainder. For example, a run of 205 pixels would be sent using the code word for a run of length 192 (3 × 64) plus the code word for a run of length 13. In this way the number of bits needed to represent the run is decreased significantly. In addition, certain runs that are known to have a higher probability of occurrence are encoded into code words of short length, further reducing the number of bits that need to be transmitted. Using this type of encoding, typical compressions for facsimile transmission range between 4 to 1 and 8 to 1. Coupled with higher modem speeds, these compressions reduce the transmission time of a single page to between 48 seconds and 1 minute 37 seconds.
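The makeup-plus-remainder split and the run-length idea itself can both be sketched in a few lines. The helper names are illustrative, not part of the fax standard:

```python
def fax_run_code(run_length):
    """Split a run into a multiple-of-64 'makeup' part plus a 0-63
    remainder, as in the fax codebook (e.g. 205 -> 192 + 13)."""
    return (run_length // 64) * 64, run_length % 64

print(fax_run_code(205))  # (192, 13)

def run_lengths(pixels):
    """Collapse a scan line into a list of alternating run lengths."""
    runs, count = [], 1
    for prev, cur in zip(pixels, pixels[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return runs

# A mostly white 1,728-pixel scan line compresses to three numbers.
line = [0] * 1700 + [1] * 20 + [0] * 8
print(run_lengths(line))  # [1700, 20, 8]
```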

Channel encoding

As described in Source encoding, one purpose of the source encoder is to eliminate redundant binary digits from the digitized signal. The strategy of the channel encoder, on the other hand, is to add redundancy to the transmitted signal—in this case so that errors caused by noise during transmission can be corrected at the receiver. The process of encoding for protection against channel errors is called error-control coding. Error-control codes are used in a variety of applications, including satellite communication, deep-space communication, mobile radio communication, and computer networking.

There are two commonly employed methods for protecting electronically transmitted information from errors. One method is called forward error control (FEC). In this method information bits are protected against errors by the transmitting of extra redundant bits, so that if errors occur during transmission the redundant bits can be used by the decoder to determine where the errors have occurred and how to correct them. The second method of error control is called automatic repeat request (ARQ). In this method redundant bits are added to the transmitted information and are used by the receiver to detect errors. The receiver then signals a request for a repeat transmission. Generally, the number of extra bits needed simply to detect an error, as in the ARQ system, is much smaller than the number of redundant bits needed both to detect and to correct an error, as in the FEC system.

Repetition codes

One simple, but not usually implemented, FEC method is to send each data bit three times. The receiver examines the three transmissions and decides by majority vote whether a 0 or 1 represents a sample of the original signal. In this coded system, called a repetition code of block-length three and rate one-third, three times as many bits per second are used to transmit the same signal as are used by an uncoded system; hence, for a fixed available bandwidth only one-third as many signals can be conveyed with the coded system as compared with the uncoded system. The gain is that now at least two of the three coded bits must be in error before a reception error occurs.
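The majority vote of the rate one-third repetition code can be sketched directly:

```python
def encode_repetition(bits):
    """Rate-1/3 repetition code: send every information bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode_repetition(coded):
    """Majority vote over each block of three received bits."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

sent = encode_repetition([1, 0, 1])
sent[1] = 0                      # a single channel error in one block...
print(decode_repetition(sent))   # ...is voted away: [1, 0, 1]
```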

The Hamming code

Another simple example of an FEC code is known as the Hamming code. This code is able to protect a four-bit information signal from a single error on the channel by adding three redundant bits to the signal. Each sequence of seven bits (four information bits plus three redundant bits) is called a code word. The first redundant bit is chosen so that the sum of ones in the first three information bits plus the first redundant bit amounts to an even number. (This calculation is called a parity check, and the redundant bit is called a parity bit.) The second parity bit is chosen so that the sum of the ones in the last three information bits plus the second parity bit is even, and the third parity bit is chosen so that the sum of ones in the first, second, and fourth information bits and the last parity bit is even. This code can correct a single channel error by recomputing the parity checks. A parity check that fails indicates an error in one of the positions it covers, and the pattern of failed and passed checks, by process of elimination, determines the precise location of the error. The Hamming code thus can correct any single error that occurs in any of the seven positions. If a double error occurs, however, the decoder will choose the wrong code word.
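A sketch of this code, following the three parity rules above literally; the code-word layout of four information bits followed by three parity bits is an assumed convention:

```python
def encode(d):
    """Hamming (7,4) per the article's rules; d = [d1, d2, d3, d4]."""
    p1 = (d[0] + d[1] + d[2]) % 2   # even parity over d1, d2, d3
    p2 = (d[1] + d[2] + d[3]) % 2   # even parity over d2, d3, d4
    p3 = (d[0] + d[1] + d[3]) % 2   # even parity over d1, d2, d4
    return d + [p1, p2, p3]

def decode(c):
    """Correct any single-bit error by recomputing the parity checks."""
    s1 = (c[0] + c[1] + c[2] + c[4]) % 2
    s2 = (c[1] + c[2] + c[3] + c[5]) % 2
    s3 = (c[0] + c[1] + c[3] + c[6]) % 2
    # Each error position produces a unique pattern of failed checks.
    position = {(1, 0, 1): 0, (1, 1, 1): 1, (1, 1, 0): 2, (0, 1, 1): 3,
                (1, 0, 0): 4, (0, 1, 0): 5, (0, 0, 1): 6}.get((s1, s2, s3))
    if position is not None:
        c[position] ^= 1
    return c[:4]

word = encode([1, 0, 1, 1])
word[2] ^= 1                     # flip one bit on the channel
print(decode(word))              # [1, 0, 1, 1] recovered
```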

Convolutional encoding

The Hamming code is called a block code because information is blocked into bit sequences of finite length to which a number of redundant bits are added. When k information bits are provided to a block encoder, n − k redundancy bits are appended to the information bits to form a transmitted code word of n bits. The entire code word of length n is thus completely determined by one block of k information bits. In another channel-encoding scheme, known as convolutional encoding, the encoder output is not naturally segmented into blocks but is instead an unending stream of bits. In convolutional encoding, memory is incorporated into the encoding process, so that the preceding M blocks of k information bits, together with the current block of k information bits, determine the encoder output. The encoder accomplishes this by shifting among a finite number of “states,” or “nodes.” There are several variations of convolutional encoding, but the simplest example may be seen in what is known as the (n,1) encoder, in which the current block of k information bits consists of only one bit. At each given state of the (n,1) encoder, when the information bit (a 0 or a 1) is received, the encoder transmits a sequence of n bits assigned to represent that bit when the encoder is at that current state. At the same time, the encoder shifts to one of only two possible successor states, depending on whether the information bit was a 0 or a 1. At this successor state, in turn, the next information bit is represented by a specific sequence of n bits, and the encoder is again shifted to one of two possible successor states. In this way, the sequence of information bits stored in the encoder’s memory determines both the state of the encoder and its output, which is modulated and transmitted across the channel. At the receiver, the demodulated bit sequence is compared to the possible bit sequences that can be produced by the encoder. 
The receiver determines the bit sequence that is most likely to have been transmitted, often by using an efficient decoding algorithm called Viterbi decoding (after its inventor, Andrew J. Viterbi). In general, the greater the memory (i.e., the more states) used by the encoder, the better the error-correcting performance of the code—but only at the cost of a more complex decoding algorithm. In addition, the larger the number of bits (n) used to transmit information, the better the performance—at the cost of a decreased data rate or larger bandwidth.
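A minimal (n,1) encoder with n = 2 and two bits of memory can be sketched as follows. The generator taps 111 and 101 are a common textbook choice, not taken from the text:

```python
def conv_encode(bits, memory=2):
    """Rate-1/2 convolutional encoder: each information bit yields two
    output bits that depend on the bit itself and the M = 2 preceding
    bits held in the encoder's memory (its 'state')."""
    state = [0] * memory                # the M previous information bits
    out = []
    for b in bits:
        out.append((b + state[0] + state[1]) % 2)  # generator taps 111
        out.append((b + state[1]) % 2)             # generator taps 101
        state = [b, state[0]]           # shift the new bit into memory
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```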

Coding and decoding processes similar to those described above are employed in trellis coding, a coding scheme used in high-speed modems. However, instead of the sequence of bits that is produced by a convolutional encoder, a trellis encoder produces a sequence of modulation symbols. At the transmitter, the channel-encoding process is coupled with the modulation process, producing a system known as trellis-coded modulation. At the receiver, decoding and demodulating are performed jointly in order to optimize the performance of the error-correcting algorithm.

Modulation

In many telecommunications systems, it is necessary to represent an information-bearing signal with a waveform that can pass accurately through a transmission medium. This assigning of a suitable waveform is accomplished by modulation, which is the process by which some characteristic of a carrier wave is varied in accordance with an information signal, or modulating wave. The modulated signal is then transmitted over a channel, after which the original information-bearing signal is recovered through a process of demodulation.

Modulation is applied to information signals for a number of reasons, some of which are outlined below.

  1. Many transmission channels are characterized by limited passbands—that is, they will pass only certain ranges of frequencies without seriously attenuating them (reducing their amplitude). Modulation methods must therefore be applied to the information signals in order to “frequency translate” the signals into the range of frequencies that are permitted by the channel. Examples of channels that exhibit passband characteristics include alternating-current-coupled coaxial cables, which pass signals only in the range of 60 kilohertz to several hundred megahertz, and fibre-optic cables, which pass light signals only within a given wavelength range without significant attenuation. In these instances frequency translation is used to “fit” the information signal to the communications channel.
  2. In many instances a communications channel is shared by multiple users. In order to prevent mutual interference, each user’s information signal is modulated onto an assigned carrier of a specific frequency. When the frequency assignment and subsequent combining is done at a central point, the resulting combination is a frequency-division multiplexed signal, as is discussed in Multiplexing. Frequently there is no central combining point, and the communications channel itself acts as a distributed combiner. An example of the latter situation is the broadcast radio bands (from 540 kilohertz to 600 megahertz), which permit simultaneous transmission of multiple AM radio, FM radio, and television signals without mutual interference as long as each signal is assigned to a different frequency band.
  3. Even when the communications channel can support direct transmission of the information-bearing signal, there are often practical reasons why this is undesirable. A simple example is the transmission of a three-kilohertz (i.e., voiceband) signal via radio wave. In free space the wavelength of a three-kilohertz signal is 100 kilometres (60 miles). Since an effective radio antenna is typically as large as half the wavelength of the signal, a three-kilohertz radio wave might require an antenna up to 50 kilometres in length. In this case translation of the voice frequency to a higher frequency would allow the use of a much smaller antenna.

Analog modulation

As is noted in analog-to-digital conversion, voice signals, as well as audio and video signals, are inherently analog in form. In most modern systems these signals are digitized prior to transmission, but in some systems the analog signals are still transmitted directly without converting them to digital form. There are two commonly used methods of modulating analog signals. One technique, called amplitude modulation, varies the amplitude of a fixed-frequency carrier wave in proportion to the information signal. The other technique, called frequency modulation, varies the frequency of a fixed-amplitude carrier wave in proportion to the information signal.

Digital modulation

In order to transmit computer data and other digitized information over a communications channel, an analog carrier wave can be modulated to reflect the binary nature of the digital baseband signal. The parameters of the carrier that can be modified are the amplitude, the frequency, and the phase.

Amplitude-shift keying

If amplitude is the only parameter of the carrier wave to be altered by the information signal, the modulating method is called amplitude-shift keying (ASK). ASK can be considered a digital version of analog amplitude modulation. In its simplest form, a burst of radio frequency is transmitted only when a binary 1 appears and is stopped when a 0 appears. In another variation, the 0 and 1 are represented in the modulated signal by a shift between two preselected amplitudes.

Frequency-shift keying

If frequency is the parameter chosen to be a function of the information signal, the modulation method is called frequency-shift keying (FSK). In the simplest form of FSK signaling, digital data is transmitted using one of two frequencies, whereby one frequency is used to transmit a 1 and the other frequency to transmit a 0. Such a scheme was used in the Bell 103 voiceband modem, introduced in 1962, to transmit information at rates up to 300 bits per second over the public switched telephone network. In the Bell 103 modem, frequency pairs of 1,070 and 1,270 hertz and of 2,025 and 2,225 hertz were used to send binary data in the two directions.

Phase-shift keying

When phase is the parameter altered by the information signal, the method is called phase-shift keying (PSK). In the simplest form of PSK a single radio frequency carrier is sent with a fixed phase to represent a 0 and with a 180° phase shift—that is, with the opposite polarity—to represent a 1. PSK was employed in the Bell 212 modem, which was introduced about 1980 to transmit information at rates up to 1,200 bits per second over the public switched telephone network.
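The opposite-polarity relationship of the two PSK symbols can be illustrated directly. The carrier frequency, sampling rate, and samples-per-bit values here are arbitrary illustrative choices:

```python
import math

def psk(bits, fc=1200.0, fs=9600.0, samples_per_bit=8):
    """Simplest PSK: a 0 keeps the carrier phase, a 1 shifts it 180
    degrees, i.e. flips the polarity of the waveform."""
    wave = []
    for i, b in enumerate(bits):
        polarity = -1.0 if b else 1.0
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / fs
            wave.append(polarity * math.cos(2 * math.pi * fc * t))
    return wave

# The waveform for a 1 is exactly the negative of the waveform for a 0.
w0, w1 = psk([0]), psk([1])
print(all(abs(a + b) < 1e-12 for a, b in zip(w0, w1)))  # True
```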

Advanced methods

In addition to the elementary forms of digital modulation described above, there exist more advanced methods that result from a superposition of multiple modulating signals. An example of the latter form of modulation is quadrature amplitude modulation (QAM). QAM signals actually transmit two amplitude-modulated signals in phase quadrature (i.e., 90° apart), so that four or more bits are represented by each shift of the combined signal. Communications systems that employ QAM include digital cellular systems in the United States and Japan as well as most voiceband modems transmitting above 2,400 bits per second.

A form of modulation that combines convolutional codes with QAM is known as trellis-coded modulation (TCM), which is described in Channel encoding. Trellis-coded modulation forms an essential part of most of the modern voiceband modems operating at data rates of 9,600 bits per second and above, including V.32 and V.34 modems.

Multiplexing

Because of the installation cost of a communications channel, such as a microwave link or a coaxial cable link, it is desirable to share the channel among multiple users. Provided that the channel’s data capacity exceeds that required to support a single user, the channel may be shared through the use of multiplexing methods. Multiplexing is the sharing of a communications channel through local combining of signals at a common point. Two types of multiplexing are commonly employed: frequency-division multiplexing and time-division multiplexing.

Frequency-division multiplexing

In frequency-division multiplexing (FDM), the available bandwidth of a communications channel is shared among multiple users by frequency translating, or modulating, each of the individual users onto a different carrier frequency. Assuming sufficient frequency separation of the carrier frequencies that the modulated signals do not overlap, recovery of each of the FDM signals is possible at the receiving end. In order to prevent overlap of the signals and to simplify filtering, each of the modulated signals is separated by a guard band, which consists of an unused portion of the available frequency spectrum. Each user is assigned a given frequency band for all time.

While each user’s information signal may be either analog or digital, the combined FDM signal is inherently an analog waveform. Therefore, an FDM signal must be transmitted over an analog channel. Examples of FDM are found in some of the old long-distance telephone transmission systems, including the American N- and L-carrier coaxial cable systems and analog point-to-point microwave systems. In the L-carrier system a hierarchical combining structure is employed in which 12 voiceband signals are frequency-division multiplexed to form a group signal in the frequency range of 60 to 108 kilohertz. Five group signals are multiplexed to form a supergroup signal in the frequency range of 312 to 552 kilohertz, corresponding to 60 voiceband signals, and 10 supergroup signals are multiplexed to form a master group signal. In the L1 carrier system, deployed in the 1940s, the master group was transmitted directly over coaxial cable. For microwave systems, it was frequency modulated onto a microwave carrier frequency for point-to-point transmission. In the L4 system, developed in the 1960s, six master groups were combined to form a jumbo group signal of 3,600 voiceband signals.

Time-division multiplexing

Multiplexing also may be conducted through the interleaving of time segments from different signals onto a single transmission path—a process known as time-division multiplexing (TDM). Time-division multiplexing of multiple signals is possible only when the available data rate of the channel exceeds the data rate of the total number of users. While TDM may be applied to either digital or analog signals, in practice it is applied almost always to digital signals. The resulting composite signal is thus also a digital signal.

In a representative TDM system, data from multiple users are presented to a time-division multiplexer. A scanning switch then selects data from each of the users in sequence to form a composite TDM signal consisting of the interleaved data signals. Each user’s data path is assumed to be time-aligned or synchronized to each of the other users’ data paths and to the scanning mechanism. If only one bit were selected from each of the data sources, then the scanning mechanism would select the value of the arriving bit from each of the multiple data sources. In practice, however, the scanning mechanism usually selects a slot of data consisting of multiple bits of each user’s data; the scanner switch is then advanced to the next user to select another slot, and so on. Each user is assigned a given time slot for all time.
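The scanning switch described above can be pictured as a simple round-robin interleaver. The function below is an illustrative model only, not a real multiplexer: it assumes equal-length, pre-synchronized bit streams and a fixed slot size.

```python
def tdm_multiplex(sources, slot_bits):
    """Interleave equal-length bit streams by scanning a fixed-size
    slot of bits from each source in turn (a sketch of the scanning
    switch described above)."""
    frame_count = len(sources[0]) // slot_bits
    out = []
    for f in range(frame_count):
        for src in sources:                      # scan each user in sequence
            start = f * slot_bits
            out.extend(src[start:start + slot_bits])
    return out

a = [0, 0, 0, 0]
b = [1, 1, 1, 1]
print(tdm_multiplex([a, b], 2))  # [0, 0, 1, 1, 0, 0, 1, 1]
```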

Most modern telecommunications systems employ some form of TDM for transmission over long-distance routes. The multiplexed signal may be sent directly over cable systems, or it may be modulated onto a carrier signal for transmission via radio wave. Examples of such systems include the North American T carriers as well as digital point-to-point microwave systems. In T1 systems, introduced in 1962, 24 voiceband signals (or the digital equivalent) are time-division multiplexed together. The voiceband signal is a 64-kilobit-per-second data stream consisting of 8-bit symbols transmitted at a rate of 8,000 symbols per second. The TDM process interleaves 24 8-bit time slots together, along with a single frame-synchronization bit, to form a 193-bit frame. The 193-bit frames are formed at the rate of 8,000 frames per second, resulting in an overall data rate of 1.544 megabits per second. For transmission over more recent T-carrier systems, T1 signals are often further multiplexed to form higher-data-rate signals—again using a hierarchical scheme.
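The T1 frame arithmetic can be verified directly from the figures given above (24 channels, 8-bit slots, one framing bit, 8,000 frames per second):

```python
# T1 frame arithmetic, using the parameters described above.
CHANNELS = 24            # voiceband signals per frame
BITS_PER_SLOT = 8        # one 8-bit symbol per channel per frame
FRAMING_BITS = 1         # single frame-synchronization bit
FRAMES_PER_SECOND = 8000

frame_bits = CHANNELS * BITS_PER_SLOT + FRAMING_BITS  # 193 bits per frame
line_rate = frame_bits * FRAMES_PER_SECOND            # overall data rate, bits/s
voice_rate = BITS_PER_SLOT * FRAMES_PER_SECOND        # per-channel rate, bits/s

print(frame_bits, line_rate, voice_rate)  # 193 1544000 64000
```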

Multiple access

Multiplexing is defined as the sharing of a communications channel through local combining at a common point. In many cases, however, the communications channel must be efficiently shared among many users that are geographically distributed and that sporadically attempt to communicate at random points in time. Three schemes have been devised for efficient sharing of a single channel under these conditions; they are called frequency-division multiple access (FDMA), time-division multiple access (TDMA), and code-division multiple access (CDMA). These techniques can be used alone or together in telephone systems, and they are well illustrated by the most advanced mobile cellular systems.

Frequency-division multiple access

In FDMA the goal is to divide the frequency spectrum into slots and then to separate the signals of different users by placing them in separate frequency slots. The difficulty is that the frequency spectrum is limited and that there are typically many more potential communicators than there are available frequency slots. In order to make efficient use of the communications channel, a system must be devised for managing the available slots. In the advanced mobile phone system (AMPS), the cellular system employed in the United States, different callers use separate frequency slots via FDMA. When one telephone call is completed, a network-managing computer at the cellular base station reassigns the released frequency slot to a new caller. A key goal of the AMPS system is to reuse frequency slots whenever possible in order to accommodate as many callers as possible. Locally within a cell, frequency slots can be reused when corresponding calls are terminated. In addition, frequency slots can be used simultaneously by multiple callers located in separate cells. The cells must be far enough apart geographically that the radio signals from one cell are sufficiently attenuated at the location of the other cell using the same frequency slot.
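The base station's reassignment of released frequency slots can be pictured with a toy slot manager. This is a hypothetical sketch of the bookkeeping only, not the actual AMPS control protocol:

```python
class FrequencySlotPool:
    """Toy model of a base station's slot manager: free frequency
    slots are handed to new callers and returned to the pool when
    a call ends (a sketch, not the AMPS control protocol)."""

    def __init__(self, n_slots):
        self.free = list(range(n_slots))
        self.in_use = {}            # caller -> slot

    def assign(self, caller):
        if not self.free:
            return None             # all slots busy: the call is blocked
        slot = self.free.pop(0)
        self.in_use[caller] = slot
        return slot

    def release(self, caller):
        self.free.append(self.in_use.pop(caller))

pool = FrequencySlotPool(2)
pool.assign("A")
pool.assign("B")
print(pool.assign("C"))   # None: no slot free, call blocked
pool.release("A")
print(pool.assign("C"))   # 0: the slot released by A is reused
```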

Time-division multiple access

In TDMA the goal is to divide time into slots and separate the signals of different users by placing the signals in separate time slots. The difficulty is that requests to use a single communications channel occur randomly, so that on occasion the number of requests for time slots is greater than the number of available slots. In this case information must be buffered, or stored in memory, until time slots become available for transmitting the data. The buffering introduces delay into the system. In the IS-54 cellular system, three digital signals are interleaved using TDMA and then transmitted in a 30-kilohertz frequency slot that would be occupied by one analog signal in AMPS. Buffering digital signals and interleaving them in time causes some extra delay, but the delay is so brief that it is not ordinarily noticed during a call. The IS-54 system uses aspects of both TDMA and FDMA.

Code-division multiple access

In CDMA, signals are sent at the same time in the same frequency band. Signals are either selected or rejected at the receiver by recognition of a user-specific signature waveform, which is constructed from an assigned spreading code. The IS-95 cellular system employs the CDMA technique. In IS-95 an analog speech signal that is to be sent to a cell site is first quantized and then organized into one of a number of digital frame structures. In one frame structure, a frame of 20 milliseconds’ duration consists of 192 bits. Of these 192 bits, 172 represent the speech signal itself, 12 form a cyclic redundancy check that can be used for error detection, and 8 form an encoder “tail” that allows the decoder to work properly. These bits are formed into an encoded data stream. After interleaving of the encoded data stream, bits are organized into groups of six. Each group of six bits indicates which of 64 possible waveforms to transmit. Each of the waveforms to be transmitted has a particular pattern of alternating polarities and occupies a certain portion of the radio-frequency spectrum. Before one of the waveforms is transmitted, however, it is multiplied by a code sequence of polarities that alternate at a rate of 1.2288 megahertz, spreading the bandwidth occupied by the signal and causing it to occupy (after filtering at the transmitter) about 1.23 megahertz of the radio-frequency spectrum. At the cell site one user can be selected from multiple users of the same 1.23-megahertz bandwidth by its assigned code sequence.

CDMA is sometimes referred to as spread-spectrum multiple access (SSMA), because the process of multiplying the signal by the code sequence causes the power of the transmitted signal to be spread over a larger bandwidth. Frequency management, a necessary feature of FDMA, is eliminated in CDMA. A new user simply is assigned a code and begins transmitting immediately, rather than waiting for a frequency slot to open.
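The selection of one user's signal by its signature waveform can be illustrated with orthogonal ±1 spreading codes. The sketch below uses rows of a small Hadamard matrix as simplified stand-ins for the assigned code sequences: two users transmit at the same time in the same band, and each user's bits are recovered by correlating the combined signal with that user's own code.

```python
import numpy as np

# Orthogonal +/-1 spreading codes (rows of a 4x4 Hadamard matrix);
# a simplified stand-in for the assigned code sequences.
codes = np.array([[1,  1,  1,  1],
                  [1, -1,  1, -1],
                  [1,  1, -1, -1],
                  [1, -1, -1,  1]])

def spread(bits, code):
    """Map each bit to +/-1 and multiply it chip-by-chip by the code."""
    symbols = np.where(np.array(bits) == 1, 1, -1)
    return np.repeat(symbols, len(code)) * np.tile(code, len(bits))

def despread(signal, code):
    """Correlate each code-length chunk with the code; a positive
    correlation recovers a 1, a negative correlation a 0."""
    chips = signal.reshape(-1, len(code))
    corr = chips @ code
    return (corr > 0).astype(int).tolist()

# Two users transmit simultaneously in the same frequency band.
combined = spread([1, 0], codes[1]) + spread([0, 1], codes[2])
print(despread(combined, codes[1]))  # [1, 0]: user 1's bits recovered
print(despread(combined, codes[2]))  # [0, 1]: user 2's bits recovered
```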

James S. Lehnert, Wayne Eric Stark, and David E. Borth

Communication

communication, the exchange of meanings between individuals through a common system of symbols.

This article treats the functions, types, and psychology of communication. For a treatment of animal communication, see animal behaviour. For further treatment of the basic components and techniques of human communication, see language; speech; writing. For technological aspects, including communications devices and information systems, see broadcasting; dictionary; encyclopaedia; information processing; information theory; library; printing; publishing, history of; telecommunications media; telecommunications network; telecommunications system.

The subject of communication has concerned scholars since the time of ancient Greece. Until modern times, however, the topic was usually subsumed under other disciplines and taken for granted as a natural process inherent to each. In 1928 the English literary critic and author I.A. Richards offered one of the first—and in some ways still the best—definitions of communication as a discrete aspect of human enterprise:

Communication takes place when one mind so acts upon its environment that another mind is influenced, and in that other mind an experience occurs which is like the experience in the first mind, and is caused in part by that experience.

Richards’s definition is both general and rough, but its application to nearly all kinds of communication—including those between humans and animals (but excluding machines)—separated the contents of messages from the processes in human affairs by which these messages are transmitted. More recently, questions have been raised concerning the adequacy of any single definition of the term communication as it is currently employed. The American psychiatrist and scholar Jurgen Ruesch identified 40 varieties of disciplinary approaches to the subject, including architectural, anthropological, psychological, political, and many other interpretations of the apparently simple interaction described by Richards. In total, if such informal communications as sexual attraction and play behaviour are included, there exist at least 50 modes of interpersonal communication that draw upon dozens of discrete intellectual disciplines and analytic approaches. Communication may therefore be analyzed in at least 50 different ways.


Interest in communication has been stimulated by advances in science and technology, which, by their nature, have called attention to humans as communicating creatures. Among the first and most dramatic examples of the inventions resulting from technological ingenuity were the telegraph and telephone, followed by others like wireless radio and telephoto devices. The development of popular newspapers and periodicals, broadcasting, motion pictures, and television led to institutional and cultural innovations that permitted efficient and rapid communication between a few individuals and large populations; these media have been responsible for the rise and social power of the new phenomenon of mass communication. (See also information theory; information processing; telecommunication system.)

Since roughly 1920 the growth and apparent influence of communications technology have attracted the attention of many specialists who have attempted to isolate communication as a specific facet of their particular interest. Psychologists, in their studies of behaviour and mind, have evolved concepts of communication useful to their investigations as well as to certain forms of therapy. Social scientists have identified various forms of communication by which myths, styles of living, mores, and traditions are passed either from generation to generation or from one segment of society to another. Political scientists and economists have recognized that communication of many types lies at the heart of the regularities in the social order. Under the impetus of new technology—particularly high-speed computers—mathematicians and engineers have tried to quantify and measure components of communicated information and to develop methods for translating various types of messages into quantities or amounts amenable to both their procedures and instruments. Numerous and differently phrased questions have been posed by artists, architects, artisans, writers, and others concerning the overall influences of various types of communication. Many researchers, working within the relevant concerns of their disciplines, have also sought possible theories or laws of cause and effect to explain the ways in which human dispositions are affected by certain kinds of communication under certain circumstances, and the reasons for the change.


In the 1960s a Canadian educator, Marshall McLuhan, drew the threads of interest in the field of communication into a view that associated many contemporary psychological and sociological phenomena with the media employed in modern culture. McLuhan’s often repeated idea, “the medium is the message,” stimulated numerous filmmakers, photographers, artists, and others, who adopted McLuhan’s view that contemporary society had moved (or was moving) from a “print” culture to a “visual” one. The particular forms of greatest interest to McLuhan and his followers were those associated with the sophisticated technological instruments for which young people in particular display enthusiasm—namely, motion pictures, television, and sound recordings.

In the late 20th century the main focus of interest in communication drifted away from McLuhanism and began to centre on (1) the mass communication industries, the people who run them, and the effects they have upon their audiences, (2) persuasive communication and the use of technology to influence dispositions, (3) processes of interpersonal communication as mediators of information, (4) dynamics of verbal and nonverbal (and perhaps extrasensory) communication between individuals, (5) perception of different kinds of communications, (6) uses of communication technology for social and artistic purposes, including education in and out of school, and (7) development of relevant criticism for artistic endeavours employing modern communications technology.

In short, a communication expert may be oriented to any of a number of disciplines in a field of inquiry that has, as yet, neither drawn for itself a conclusive roster of subject matter nor agreed upon specific methodologies of analysis.

Models of communication

Fragmentation and problems of interdisciplinary outlook have generated a wide range of discussion concerning the ways in which communication occurs and the processes it entails. Most speculation on these matters admits, in one way or another, that the communication theorist’s task is to answer as clearly as possible the question, “Who says what to whom with what effect?” (This query was originally posed by the U.S. political scientist Harold D. Lasswell.) Obviously, all the critical elements in this question may be interpreted differently by scholars and writers in different disciplines.

Linear models

One of the most productive schematic models of a communications system that has been proposed as an answer to Lasswell’s question emerged in the late 1940s, largely from the speculations of two American mathematicians, Claude Shannon and Warren Weaver. The simplicity of their model, its clarity, and its surface generality proved attractive to many students of communication in a number of disciplines, although it is neither the only model of the communication process extant nor is it universally accepted. As originally conceived, the model contained five elements—an information source, a transmitter, a channel of transmission, a receiver, and a destination—all arranged in linear order. Messages (electronic messages, initially) were supposed to travel along this path, to be changed into electric energy by the transmitter, and to be reconstituted into intelligible language by the receiver. In time, the five elements of the model were renamed so as to specify components for other types of communication transmitted in various manners. The information source was split into its components (both source and message) to provide a wider range of applicability. The six constituents of the revised model are (1) a source, (2) an encoder, (3) a message, (4) a channel, (5) a decoder, and (6) a receiver. For some communication systems, the components are as simple to specify as, for instance, (1) a person on a landline telephone, (2) the mouthpiece of the telephone, (3) the words spoken, (4) the electrical wires along which the words (now electrical impulses) travel, (5) the earpiece of another telephone, and (6) the mind of the listener. In other communication systems, the components are more difficult to isolate—e.g., the communication of the emotions of a fine artist by means of a painting to people who may respond to the message long after the artist’s death.

Begging a multitude of psychological, aesthetic, and sociological questions concerning the exact nature of each component, the linear model appeared, from the commonsense perspective, at least, to explain in general terms the ways in which certain classes of communication occurred. It did not indicate the reason for the inability of certain communications—obvious in daily life—to fit its neat paradigm.

Entropy, negative entropy, and redundancy

Another concept, first called by Shannon a noise source but later associated with the notion of entropy (a principle derived from physics), was imposed upon the communication model. Entropy is analogous in most communication to audio or visual static—that is, to outside influences that diminish the integrity of the communication and, possibly, distort the message for the receiver. Negative entropy may also occur in instances in which incomplete or blurred messages are nevertheless received intact, either because of the ability of the receiver to fill in missing details or to recognize, despite distortion or a paucity of information, both the intent and content of the communication.

Although rarely shown on diagrammatic models of this version of the communication process, redundancy—the repetition of elements within a message that prevents the failure of communication of information—is the greatest antidote to entropy. Most written and spoken languages, for example, are roughly half-redundant. If 50 percent of the words of this article were taken away at random, there would still remain an intelligible—although somewhat peculiar—essay. Similarly, if one-half of the words of a radio news commentator are heard, the broadcast can usually be understood. Redundancy is apparently involved in most human activities, and, because it helps to overcome the various forms of entropy that tend to turn intelligible messages into unintelligible ones (including psychological entropy on the part of the receiver), it is an indispensable element for effective communication.
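The thinning of a half-redundant message can be mimicked crudely in code. The sketch below simply drops every other word; it illustrates only the mechanics of loss, since the intelligibility of what survives depends on the reader filling in from context.

```python
def thin(text, keep_every=2):
    """Keep one word out of every keep_every: a crude illustration
    of how a roughly half-redundant message can survive heavy loss."""
    words = text.split()
    return " ".join(w for i, w in enumerate(words) if i % keep_every == 0)

msg = "the quick brown fox jumps over the lazy sleeping dog"
print(thin(msg))  # "the brown jumps the sleeping"
```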

Messages are therefore susceptible to considerable modification and mediation. Entropy distorts, while negative entropy and redundancy clarify; as each occurs differentially in the communication process, the chances of the message being received and correctly understood vary. Still, the process (and the model of it) remains conceptually static, because it is fundamentally concerned with messages sent from point to point and not with their results or possible influences upon sender and receiver.

Feedback

To correct this flaw, the principle of feedback was added to the model and provided a closer approximation of interpersonal human interaction than was known theretofore. This construct was derived from the studies of Norbert Wiener, the so-called father of the science of cybernetics. Wiener’s cybernetic models, some of which provide the basis for current computer technology, were designed to be responsive to their own behaviour; that is, they audited their own performances mathematically or electronically in order to avoid errors of entropy, unnecessary redundancy, or other simple hazards.

Certain types of common communications—holiday greeting cards, for instance—usually require little feedback. Others, particularly interactions between human beings in conversation, cannot function without the ability of the message sender to weigh and calculate the apparent effect of his words on his listener. It is largely the aspect of feedback that provides for this model the qualities of a process, because each instance of feedback conditions or alters the subsequent messages.

Dynamic models

Other models of communication processes have been constructed to meet the needs of students of communication whose interests differ from those of quantitatively oriented theorists like Shannon, Weaver, and Wiener. While the model described above displays some generality and shows simplicity, it lacks some of the predictive, descriptive, and analytic powers found in other approaches. A psychologist, Theodore M. Newcomb, for example, has articulated a more fluid system of dimensions to represent the individual interacting in his environment. Newcomb’s model and others similar to it are not as precisely mathematical (quantitative) as Shannon’s and thus permit more flexible accounts of human behaviour and its variable relationships. They do not deny the relevance of linear models to Shannon and Weaver’s main concerns—quanta of information and the delivery of messages under controlled conditions—but they question their completeness and utility in describing cognitive, emotional, and artistic aspects of communication as they occur in sociocultural matrices.

Students concerned mainly with persuasive and artistic communication often centre attention upon different kinds, or modes, of communication (i.e., narrative, pictorial, and dramatic) and theorize that the messages they contain, including messages of emotional quality and artistic content, are communicated in various manners to and from different sorts of people. For them the stability and function of the channel or medium are more variable and less mechanistically related to the process than they are for followers of Shannon and Weaver and psychologists like Newcomb. (McLuhan, indeed, asserts that the channel actually dictates, or severely influences, the message—both as sent and received.) Many analysts of communication, linguistic philosophers, and others are concerned with the nature of messages, particularly their compatibility with sense and emotion, their style, and the intentions behind them. They find both linear and geometric models of process of little interest to their concerns, although considerations related to these models, particularly those of entropy, redundancy, and feedback, have provided significant and productive concepts for most students of communication.

Applications of formal logic and mathematics

Despite the numerous types of communication or information theory extant today—and those likely to be formulated tomorrow—the most rationally and experimentally consistent approaches to communication theory so far developed follow the constructions of Shannon and others described above. Such approaches tend to employ the structural rigours of logic rather than the looser syntaxes, grammars, and vocabularies of common languages, with their symbolic, poetic, and inferential aspects of meaning.

Cybernetic theory and computer technology require rigorous but straightforward languages to permit translation into nonambiguous, special symbols that can be stored and utilized for statistical manipulations. The closed system of formal logic proved ideal for this need. Premises and conclusions drawn from syllogisms according to logical rules may be easily tested in a consistent, scientific manner, as long as all parties communicating share the rational premises employed by the particular system.

That this logical mode of communication drew its frame of discourse from the logic of the ancient Greeks was inevitable. Translated into an Aristotelian manner of discourse, meaningful interactions between individuals could be transferred to an equally rational closed system of mathematics: an arithmetic for simple transactions, an algebra for solving certain well-delimited puzzles, a calculus to simulate changes, rates and flows, and a geometry for purposes of illustration and model construction. This progression has proved quite useful for handling those limited classes of communications that arise out of certain structured, rational operations, like those in economics, inductively oriented sociology, experimental psychology, and other behavioral and social sciences, as well as in most of the natural sciences.


The basic theorem of information theory rests, first, upon the assumption that the message transmitted is well organized, consistent, and characterized by relatively low and determinable degrees of entropy and redundancy. (Otherwise, the mathematical structure might yield only probability statements approaching random scatters, of little use to anyone.) Under these circumstances, by devising proper coding procedures for the transmitter, it becomes possible to transmit symbols over a channel at an average rate that is nearly C/H symbols per second, where C is the capacity of the channel in units per second and H is the entropy of the information source in units per symbol, but never at rates in excess of C/H, no matter how expertly the symbols are coded. As simple as this notion seems, upon determining the capacity of the channel and by cleverly coding the information involved, precise mathematical models of information transactions (similar to electronic frequencies of energy transmissions) may be evolved and employed for complex analyses within the strictures of formal logic. They must, of course, take into account as precisely as possible levels of entropy and redundancy as well as other known variables.
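With assumed numbers, the bound is easy to compute: for a channel of capacity C bits per second and a memoryless source of entropy H bits per symbol, C/H limits the achievable symbol rate. The snippet below is an illustrative calculation under those assumptions, not a coding procedure:

```python
from math import log2

def source_entropy(probs):
    """Entropy H of a memoryless source, in bits per symbol."""
    return -sum(p * log2(p) for p in probs if p > 0)

C = 8000                                        # assumed channel capacity, bits/s
H = source_entropy([0.25, 0.25, 0.25, 0.25])    # uniform 4-symbol source: 2 bits/symbol

# Maximum achievable symbol rate is C/H symbols per second.
print(H, C / H)  # 2.0 4000.0
```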

The internal capacities of the channel studied and the sophistication of the coding procedures that handle the information limit the usefulness of the theorem presented above. At present such procedures, while they may theoretically offer broad prospects, are restricted by formal encoding procedures that depend upon the capacities of the instruments in which they are stored. Although such devices can handle quickly the logic of vast amounts of relatively simple information, they cannot match the flexibility and complexity of the human brain, still the prime instrument for managing the subtleties of most human communication.

Types of communication

Nonvocal communication

Signals, signs, and symbols, three related components of communication processes found in all known cultures, have attracted considerable scholarly attention because they do not relate primarily to the usual conception of words or language. Each is apparently a successively more complex modification of the one before, and each was probably developed in the depths of prehistory before, or at the start of, early human experiments with vocal language.

Signals

A signal may be considered as an interruption in a field of constant energy transfer. An example is the dots and dashes that open and close the electromagnetic field of a telegraph circuit. Such interruptions do not require the construction of a man-made field; interruptions in nature (e.g., the tapping of a pencil in a silent room, or puffs of smoke rising from a mountaintop) may produce the same result. The basic function of such signals is to provide the change of a single environmental factor in order to attract attention and to transfer meaning. A code system that refers interruptions to some form of meaningful language may easily be developed with a crude vocabulary of dots, dashes, or other elemental audio and visual articulations. Taken by themselves, the interruptions have a potential breadth of meaning that seems extremely small; they may indicate the presence of an individual in a room, an impatience, agreement, or disagreement with some aspect of the environment, or, in the case of a scream for help, a critical situation demanding attention. Coded to refer to spoken or written language, their potential to communicate language is extremely great.

Signs

While signs are usually less germane to the development of words than signals, most of them contain greater amounts of meaning of and by themselves. Ashley Montagu, an anthropologist, has defined a sign as a “concrete denoter” possessing an inherent specific meaning, roughly analogous to the sentence “This is it; do something about it!” The most common signs encountered in daily life are pictures or drawings, although a human posture like a clenched fist, an outstretched arm, or a hand posed in a “stop” gesture may also serve as signs. The main difference between a sign and a signal is that a sign (like a policeman’s badge) contains meanings of an intrinsic nature; a signal (like a scream for help) is merely a device by which one is able to formulate extrinsic meanings. Their difference is illustrated by the observation that many types of animals respond to signals while only a few intelligent and trained animals (usually dogs and apes) are competent to respond to even simple signs.

All known cultures utilize signs to convey relatively simple messages swiftly and conveniently. The meaning of signs may depend on their form, setting, colour, or location. In the United States, traffic signs, uniforms, badges, and barber poles are frequently encountered signs. Taken en masse, any society’s lexicon of signs makes up a rich vocabulary of colourful communications.

Symbols

Symbols are more difficult than signs to understand and to define, because, unlike signs and signals, they are intricately woven into an individual’s ongoing perceptions of the world. They appear to contain a dimly understood capacity that (as one of their functions), in fact, defines the very reality of that world. The symbol has been defined as any device with which an abstraction can be made. Although far from being a precise construction, it leads in a profitable direction. The abstractions of the values that people imbue in other people and in things they own and use lie at the heart of symbolism. Here is a process, according to the British philosopher Alfred North Whitehead, whereby

some components of [the mind’s] experience elicit consciousness, beliefs, emotions, and usages respecting other components of experience.

In Whitehead’s opinion, symbols are analogues or metaphors (that may include written and spoken language as well as visual objects) standing for some quality of reality that is enhanced in importance or value by the process of symbolization itself.

Almost every society has evolved a symbol system whereby, at first glance, strange objects and odd types of behaviour appear to the outside observer to have irrational meanings and seem to evoke odd, unwarranted cognitions and emotions. Upon examination, each symbol system reflects a specific cultural logic, and every symbol functions to communicate information between members of the culture in much the same way as, but in a more subtle manner than, conventional language. Although a symbol may take the form of as discrete an object as a wedding ring or a totem pole, symbols tend to appear in clusters and depend upon one another for their accretion of meaning and value. They are not a language of and by themselves; rather they are devices by which ideas too difficult, dangerous, or inconvenient to articulate in common language are transmitted between people who have acculturated in common ways. It does not appear possible to compile discrete vocabularies of symbols, because they lack the precision and regularities present in natural language that are necessary for explicit definitions.

Icons

Rich clusters of related and unrelated symbols are usually regarded as icons. They are actually groups of interactive symbols, like the White House in Washington, D.C., a funeral ceremony, or an Impressionist painting. Although, in examples such as these, there is a tendency to isolate icons and individual symbols for examination, symbolic communication is so closely allied to all forms of human activity that it is generally and nonconsciously used and treated by most people as the most important aspect of communication in society. With the recognition that spoken and written words and numbers themselves constitute symbolic metaphors, their critical roles in the worlds of science, mathematics, literature, and art can be understood. In addition, with these symbols, an individual is able to define his own identity.

Gestures

Professional actors and dancers have known since antiquity that body gestures may also generate a vocabulary of communication more or less unique to each culture. Some American scholars have tried to develop a vocabulary of body language, called kinesics. The results of their investigations, both amusing and potentially practical, may eventually produce a genuine lexicon of American gestures similar to one prepared in detail by François Delsarte, a 19th-century French teacher of pantomime and gymnastics who described the ingenious and complex language of contemporary face and body positions for theatrical purposes.

Proxemics

Of more general, cross-cultural significance are the theories involved in the study of proxemics developed by an American anthropologist, Edward Hall. Proxemics involves the ways in which people in various cultures utilize both time and space as well as body positions and other factors for purposes of communication. Hall’s “silent language” of nonverbal communications consists of such culturally determined interactions as the physical distance or closeness maintained between individuals, the body heat they give off, odours they perceive in social situations, angles of vision they maintain while talking, the pace of their behaviour, and the sense of time appropriate for communicating under differing conditions. By comparing matters like these in the behaviour of different social classes (and in varying relationships), Hall elaborated and codified a number of sophisticated general principles that demonstrate how certain kinds of nonverbal communication occur. Although Hall’s most impressive arguments are almost entirely empirical and many of them are open to question, the study of proxemics does succeed in calling attention to major features of communication dynamics rarely considered by linguists and symbologists. Students of words have been more interested in objective formal vocabularies than in the more subtle means of discourse unknowingly acquired by the members of a culture.

Vocal communication

Significant differences between nonvocal and vocal communication are matters more of degree than of kind. Signs, signals, symbols, and possibly icons may, at times, be easily verbalized, although most people tend to think of them as visual means of expression. Kinesics and proxemics may also, in certain instances, involve vocalizations as accompaniments to nonverbal phenomena or as somehow integral to them. Be they grunts, words, or sentences, their function is to help in forwarding a communication that is fundamentally nonverbal.

Although there is no shortage of speculation on the issue, the origins of human speech remain obscure at present. It is plausible that man is born with an instinct for speech. A phenomenon supporting this belief is the presence of unlearned cries and gurgles of infants operating as crude vocal signs directed to others of whom the baby cannot yet be aware. Some anthropologists claim that within the vocabularies of kinesics and proxemics are the virtual building blocks of spoken language; they postulate that primitive humans made various and ingenious inventions (including speech) as a result of their need to communicate with others in order to pool their intellectual and physical resources. Other observers suggest similar origins of speech, including the vocalization of physical activity, imitation of the sounds of nature, and sheer serendipity. Scientific proof of any of these speculations is at present impossible.

Not only is the origin of speech disputed among experts, but the precise reasons for the existence of the numerous languages of the world are also far from clear. In the 1920s an American linguistic anthropologist, Edward Sapir, and later Benjamin Lee Whorf, centred attention upon the various methods of expression found in different cultures. Drawing their evidence primarily from the languages of primitive societies, they made some very significant observations concerning spoken (and probably written) language. First, human language reflects in subtle ways those matters of greatest relevance and importance to the value system of each particular culture. Thus, language may be said to reflect culture, or, in other words, people seem to find ways of saying what they need to say. A familiar illustration is the many words (or variations of words) that Eskimos use to describe whale blubber in its various states—e.g., on the whale, ready to eat, raw, cooked, rancid. Another example is the observation that drunk possesses more synonyms than any other term in the English language. Apparently, this is the result of a psychological necessity to euphemize a somewhat nasty, uncomfortable, or taboo matter, a device also employed for other words that describe seemingly important but improper behaviour or facets of culture.

Adaptability of language

Other observations involve the discovery that any known language may be employed, without major modification, to say almost anything that may be said in any other language. A high degree of circumlocution and some nonverbal vocalization may be required to accomplish this end, but, no matter how alien the concept to the original language, it may be expressed clearly in the language of another culture. Students of linguistic anthropology have been able to describe adequately in English the esoteric linguistic propositions of primitive societies, just as it has been possible for anthropologists to describe details of Western technology to persons in remote cultures. Understood as an artifact of culture, spoken language may therefore be considered as a universal channel of communication into which various societies dip differentially in order to expedite and specify the numerous points of contact between individuals.

Language remains, however, an only partially understood phenomenon used to transact several types of discourse. Language has been classified on the basis of several criteria. One scheme established four categories on the basis of informative, dynamic, emotive, and aesthetic functions. Informative communication deals largely with narrative aspects of meaning; dynamic discourse concerns the transaction of dispositions such as opinions and attitudes; the emotive employment of language involves the evocation of feeling states in others in order to impel them to action; and aesthetic discourse, usually regarded as a poetic quality in speech, conveys stylistic aspects of expression.

Laughter

Although most vocal sounds other than words are usually considered prelinguistic language, the phenomenon of laughter as a form of communication is in a category by itself, with its closest relative being its apparent opposite, crying. Twentieth-century ethnologists, like Konrad Lorenz, attempted to associate laughter with group behaviour among animals in instances in which aggression is thwarted and laughlike phenomena seem to result among herds. Lorenz’s metaphors, while apparently reasonable, cannot be verified inductively. They seem less reasonable to many than the more common notions of the Austrian neurologist Sigmund Freud and others that laughter either results from or is related to the nonconscious reduction of tensions or inhibitions. Developed as a form of self-generated pleasure in the infant and rewarded both physically and psychologically by feelings of gratification, laughter provides a highly effective, useful, and contagious means of vocal communication. It deals with a wide range of cultural problems, often more effectively than speech, in much the same manner that crying, an infantile and probably instinctive reaction to discomfort, communicates an unmistakable emotional state to others.

The reasons for laughter in complex social situations are another question and are answered differently by philosophers and psychologists. The English novelist George Meredith proposed a theory, resulting from his analysis of 18th-century French court comedies, that laughter serves as an enjoyable social corrective. The two best-known modern theories of the social wellsprings of laughter are the philosopher Henri Bergson’s hypothesis that laughter is a form of rebellion against the mechanization of human behaviour and nature and Freud’s concept of laughter as repressed sexual feeling. The writer Arthur Koestler regarded laughter as a means of individual enlightenment, revelation, and subsequent freedom from confusion or misunderstanding concerning some part of the environment.

The human vocal instrument as a device of communication represents an apex of physical and intellectual evolution. It can express the most basic instinctual demands as well as a range of highly intellectual processes, including the possible mastery of numerous complex languages, each with an enormous vocabulary. Because of the imitative capacity of the vocal mechanism (including its cortical directors), suitably talented individuals can simulate the sounds of nature in song, can communicate in simple ways with animals, and can indulge in such tricks as ventriloquism and the mimicry of other voices. Recent tape recording techniques have even extended this flexibility into new domains, allowing singers to accompany their own voices in different keys to produce effects of duets or choruses composed electronically from one person’s voice.

Mass and public communication

Prerequisites for mass communication

The technology of modern mass communication results from the confluence of many types of inventions and discoveries, some of which (the printing press, for instance) actually preceded the Industrial Revolution. Technological ingenuity of the 19th and 20th centuries developed the newer means of mass communication, particularly broadcasting, without which the present near-global diffusion of printed words, pictures, and sounds would have been impossible. The steam printing press, radio, motion pictures, television, and sound recording—as well as systems of mass production and distribution—were necessary before public communication in its present form might occur.

Technology was not, however, the only prerequisite for the development of mass communication in the West. A large public of literate citizens was necessary before giant publishing and newspaper empires might employ extant communications technology to satisfy widespread desires or needs for popular reading materials. Affluence and interest were (and are) prerequisites for the maintenance of the radio, television, cinema, and recording industries, institutions that are most highly developed in wealthy, industrial nations. Even in countries in which public communication is employed largely for government propaganda, certain minimal economic and educational standards must be achieved before this persuasion is accepted by the general public.

Control of mass communication

Over the years, control of the instruments of mass communication has fallen into the hands of relatively small (some claim diminishing) numbers of professional communicators who seem, as populations expand and interest widens, to reach ever-increasing numbers of people. In the United States, for example, far fewer newspapers currently serve more readers than ever before, and a handful of book publishers produce the majority of the best sellers.

Public communicators are not entirely free to follow their own whims in serving the masses, however. As is the case in any market, consumer satisfaction (or the lack of it) limits the nature and quantity of the material produced and circulated. Mass communicators are also restricted in some measure by laws governing libel, slander, and invasion of privacy and, in most countries, by traditions of professionalism that entail obligations on those who maintain access to the public’s eyes and ears. In almost every modern nation, privileges to use broadcasting frequencies are circumscribed either loosely or rigidly by government regulations. In some countries, national agencies exercise absolute control of all broadcasting, and in certain areas print and film media operate under strict government control. Written and film communications may be subject to local legal restraints in regard to censorship and have restrictions similar to those of other private businesses. Traditions of decorum and self-censorship, however, apply variably to publishers and filmmakers, depending usually upon the particular markets to which their fare is directed.

Effects of mass communication

Lively controversy centres on the effect of public communication upon audiences, not only in matters concerning public opinion on political issues but in matters of personal lifestyles and tastes, consumer behaviour, the sensibilities and dispositions of children, and possible inducements to violence. Feelings regarding these matters vary greatly. Some people construe the overall effects of mass communication as generally harmless to both young and old. Many sociologists follow the theory that mass communication seems to influence attitudes and behaviour only insofar as it confirms the status quo—i.e., it influences values already accepted and operating in the culture. Numerous other analysts, usually oriented to psychological or psychiatric disciplines, believe that mass communications provide potent sources of informal education and persuasion. Their conclusions are drawn largely from observations that many, or most, people in technological societies form their personal views of the social realities beyond their immediate experience from messages presented to them through public communication.

To assume that public communication is predominantly reflective of current values, morals, and attitudes denies much common experience. Fashions, fads, and small talk are too obviously and directly influenced by material in the press, in films, and in television to support this view. The success of public communication as an instrument of commercial advertising has also been constant and noticeable. Present evidence indicates that various instruments of mass communication produce varying effects upon different segments of the audience. These effects seem too numerous and short-lived to be measured effectively with currently available instruments. Much of the enormous output on television and radio and in print is probably simply regarded as “play” and of little consequence in affecting adult dispositions, although many psychologists believe that the nature of children’s play experiences is critical to their maturation.

The role of newspapers, periodicals, and television in influencing political opinion is fairly well established in the voting behaviour of the so-called undecided voters. Numerous studies have shown that, while the majority of citizens in the United States cast their votes along party lines and according to social, educational, and economic determinants, middle-of-the-road voters often hold the balance of power that determines the outcomes of elections. Politicians have become sensitive to their television images and have devised much of their campaign strategy with the television audience in mind. Advertising agencies familiar with television techniques have been brought into the political arena to plan campaigns and develop their clients’ images. The effectiveness of television campaigning cannot yet be determined reliably.

Public communication is a near-ubiquitous condition of modernity. Most reliable surveys show that the majority of the people of the world (including those of totalitarian countries) are usually satisfied with the kind of mass communication available to them. Lacking alternatives to the communication that they easily and conveniently receive, most people seem to accept what they are given without complaint. Mass communication is but one facet of life for most individuals, whose main preoccupations centre on the home and on daily employment. Public communication is an inexpensive addendum to living, usually directed to low common denominators of taste, interest, and refinement of perception. Although mass communication places enormous potential power in the hands of relatively few people, traditional requirements for popular approval and assent generally have prevented its use for overt subversion of culturally sanctioned institutions. Fear of such subversion is sometimes expressed by critics.

The psychology of communication

Contemporary psychologists have, since World War II, shown considerable interest in the ways in which communications occur. Behaviourists have been prone to view communication in terms of stimulus-response relationships between sources of communications and individuals or groups that receive them. Those who subscribe to Freud’s analysis of group psychology and ego theory tend to regard interactions in communication as reverberations of family group dynamics experienced early in life.

By the mid-1950s, psychological interest settled largely on the persuasive aspects of various types of messages. Psychologists have attempted to discover whether a general factor of personality called “persuasibility” might be identified in people at large. It would appear, though with qualifications, that individuals are indeed variably persuasible and that, at times, factors of personality are related to this quality.

Other psychologists have studied the recipients of communication, evolving concepts of “selective perception,” “selective attention,” and “selective retention” in order to explain not only the ways in which communication changed attitudes but also the reasons for resistance to change. Among their interests were the dynamics of the communication of rumours, the effects of “scare messages,” the degree of credulity that sources of prestige value provide, and the pressure of group consensus upon individual perceptions of communications.

Some of the suggestions that emerged from the work of certain modern psychologists may be subsumed under a theory of what is called “cognitive dissonance,” which is based upon the observation that most people cannot tolerate more than a specific degree of inconsistency in the environments they perceive. An example of cognitive dissonance may involve a person who considers himself a superb bowler but who on one occasion earns an extremely low score. The dissonant or inconsistent elements include the bowler’s knowledge of his skill and the fact of his poor score. This produces tension. To reduce this tension—dissonance—the bowler may change his behaviour or misinterpret or reinterpret the dissonant elements in order to lessen the discrepancy between them. For example, he may blame his performance on the bowling ball, the alley, or the temperature of the room. Thus he seeks a psychological equilibrium.

This modification of an individual’s perception of reality is of fundamental interest to the psychologist of communications. Because the agreement or disagreement of a communication with an individual’s cognitive structure affects not only behaviour but perception as well, the major criterion for the psychological analysis of communication is neither the message nor the medium but the expectation of the person receiving the message.

It must not be assumed that any of the theories of audience psychology offered to date (including those of Gestaltists, Freudians, behaviourists, and others) lack relevance to an understanding of communication processes. None, however, seems to account fully for all the effects of communications upon people. The many facets of communication offer substantial problems for future psychological experimentation and theorizing.

Disclaimer

This content has been reposted from Britannica.com for informational purposes only.