Videoconferencing

Videoconferencing is well over 50 years old. Today, it is fast and virtually free over the Internet. However, aside from extremely formal or informal events, videoconferencing has largely failed to catch on.

Background

AT&T introduced videoconferencing at the 1964 World’s Fair. People in New York waited in line to walk into a booth and spend a few minutes talking, with voice and video, to a stranger at Disneyland in California.

Simultaneously, AT&T tried to commercialize the service by installing devices in Washington, D.C., and Chicago. The device was called the AT&T Picturephone Mod I. However, the most common pricing plan cost $80 for 15 minutes of voice/video chat (about $660 adjusted to 2019); three minutes of videoconferencing cost $16 ($130 adjusted to 2019). In the first six months of service, only 71 customers paid for calls, and volumes declined from there.

Besides the high prices, the video was tiny: black-and-white screens measured 13 cm x 12 cm (about 5×5 inches), and connections often dropped.

Eventually, AT&T released a Picturephone with the same internal parts but a more attractive plastic case. Potential buyers still passed; the value video added did not justify the extra cost in either money or convenience.

AT&T Keeps Trying

Between 1966 and 1973, AT&T invested more than half a billion dollars developing and marketing the videophone. They renamed it the Picturephone Mod II and targeted it at the corporate market. Nobody was interested.

By 1982, they created the Picturephone Meeting Service. The equipment and call costs were exorbitant, and again nobody was interested.

Finally, in January 1992, AT&T released the VideoPhone 2500, a phone with a small color video screen. At the initial price of about $1,500, the phone attracted almost no sales. AT&T reduced the price to $1,000 and allowed people to rent the phone for $30 per day. Buyers refused even at these lower prices.

By the late 1990s, free videoconferencing appeared on the web but, even as it evolved, it remained largely a niche product. Even at a price of zero, many customers prefer texting or a plain voice call to videoconferencing. One exception is in certain office situations, where high-end videoconferencing systems can reduce the cost of in-person travel.

Long Playing (LP) Records & Talking Movies

Long-playing records play for a long time, enabling records with more than one song.

Background

As Edison’s phonograph evolved, the recordings eventually migrated to small disks played at 78 revolutions per minute (rpm). Each disk held about three minutes of music per side.

Filmmakers wanted to add sound to their movies. Until then, films ran silently while a musician, typically on a piano or organ, played along in the theater. Lee de Forest’s Audion amplifying tube made movie sound possible, but the three-minute recordings were too short.

In the early 1920s, de Forest tried to create his own extended-play sound system, but it never worked well. In response, Bell Labs created a longer-playing disk. Significantly larger disks spun more slowly, at 33 1/3 rpm, allowing each side to play about 23 minutes. They branded it the “Vitaphone” sound system for movies.

The Vitaphone system was in use from 1926 to 1931. Eventually, optically encoded soundtracks, printed directly onto the film, replaced the LP movie soundtrack. Optical encoding made it easier to synchronize sound to picture and could run for any length.

LPs, from B2B to B2C

However, the long-playing record caught on as a consumer product. It was impossible to fit entire classical pieces on 78 rpm records. Additionally, pop records held only one song per side, and the maximum song length was about three minutes. To sell multiple songs, the small records were bound together into a book, called an album, a term still in use.

Over the years, various recording technologies attempted to challenge the dominance of the LP record, but none succeeded except cassette tapes, which could play in cars. The introduction of the digital Compact Disc (CD) in 1982 eventually sent the LP into obsolescence.

Interestingly, vinyl LP records are becoming popular again. Starting in 2014, vinyl record sales climbed steadily, driven by a perception of better-quality sound. In 2018, vinyl accounted for 9.7 million album sales, up 12% from 8.6 million in 2017. In contrast, CD sales fell 41% over the same period. In 2018, CDs accounted for about 70% and vinyl for about 30% of physical music sales. However, by 2018 physical music overall – as opposed to downloads or streaming – made up only about 10% of the industry total.

Speech Recognition

Speech recognition is the ability of a computer to recognize the spoken word.

“Alexa: read me something interesting from Innowiki.”

“Duh human, everything on Innowiki is interesting or it wouldn’t be there.”

Today, inexpensive pocket-sized phones connect to centralized servers and understand the spoken word in countless languages. Not so long ago, that was science fiction.

Background

Star Trek in 1966, the HAL 9000 of 2001: A Space Odyssey in 1968, Westworld in 1973, and Star Wars in 1977 all assumed computers would understand the spoken word. What they missed is that people would become so fast at other input devices, especially keyboards, that speaking would come to feel like an inefficient input method.

The first real speech recognition systems actually predate those science-fiction portrayals. In 1952, three Bell Labs scientists created “Audrey,” a system that recognized spoken digits. A decade later, IBM researchers demonstrated “Shoebox,” which recognized 16 English words.

In 1971, DARPA stepped in with the “Speech Understanding Research” (SUR) program, aimed at a system that could understand 1,000 English words. Researchers at Carnegie Mellon created “Harpy,” which understood a vocabulary comparable to that of a three-year-old child.

Research continued. In the 1980s, the “Hidden Markov Model” (HMM) proved a major breakthrough. Computer scientists realized computers need not understand what a person was saying; they only need to listen to the sounds and find the most likely pattern. By the 1990s, faster and less expensive CPUs brought speech recognition to the masses with software like Dragon Dictate. BellSouth created the voice-portal phone-tree system which, unfortunately, frustrates and annoys people to this day.
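To make the pattern-matching idea concrete, here is a minimal sketch in C of the Viterbi algorithm that sits at the heart of HMM-based recognition. The two “phonemes,” the three quantized acoustic symbols, and all the probabilities are invented purely for illustration; real recognizers use far larger models trained on recorded speech.

/* Toy Viterbi decoder: given a sequence of quantized acoustic symbols,
 * find the most likely hidden state (phoneme) sequence. All model values
 * are invented for illustration. Compile with: cc viterbi.c -lm */
#include <stdio.h>
#include <math.h>

#define N_STATES  2   /* hypothetical phonemes: 0 = "s", 1 = "oh" */
#define N_SYMBOLS 3   /* quantized acoustic observations */
#define T         5   /* length of the observation sequence */

int main(void) {
    const char *state_name[N_STATES] = { "s", "oh" };
    double start[N_STATES] = { 0.6, 0.4 };
    double trans[N_STATES][N_STATES] = { { 0.7, 0.3 }, { 0.4, 0.6 } };
    double emit[N_STATES][N_SYMBOLS] = { { 0.5, 0.4, 0.1 },
                                         { 0.1, 0.3, 0.6 } };
    int obs[T] = { 0, 1, 2, 2, 1 };   /* the "sounds" we heard */

    double logp[T][N_STATES];         /* best log-probability so far */
    int back[T][N_STATES];            /* best predecessor state */

    /* Initialization: log probabilities avoid numeric underflow. */
    for (int s = 0; s < N_STATES; s++)
        logp[0][s] = log(start[s]) + log(emit[s][obs[0]]);

    /* Recursion: at each step, keep the best way to reach each state. */
    for (int t = 1; t < T; t++) {
        for (int s = 0; s < N_STATES; s++) {
            double best = -INFINITY;
            int best_prev = 0;
            for (int p = 0; p < N_STATES; p++) {
                double cand = logp[t - 1][p] + log(trans[p][s]);
                if (cand > best) { best = cand; best_prev = p; }
            }
            logp[t][s] = best + log(emit[s][obs[t]]);
            back[t][s] = best_prev;
        }
    }

    /* Backtrack from the most likely final state. */
    int path[T];
    path[T - 1] = (logp[T - 1][0] > logp[T - 1][1]) ? 0 : 1;
    for (int t = T - 1; t > 0; t--)
        path[t - 1] = back[t][path[t]];

    printf("Most likely phoneme sequence:");
    for (int t = 0; t < T; t++)
        printf(" %s", state_name[path[t]]);
    printf("\n");
    return 0;
}

The computer never “understands” anything; it simply picks the state sequence with the highest probability of producing the sounds it heard.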

DARPA stepped back in during the 2000s, sponsoring multi-language speech recognition systems.

Rapid Advancement

However, a major breakthrough came from the private sector. Google released a service called “Google 411,” allowing people to dial Google and look up telephone numbers for free. Callers spoke to a computer that guessed what they said; an operator then checked the computer’s accuracy and delivered the phone number. The real purpose of the system was to train the computers on a myriad of voices, including difficult-to-decipher names. Eventually, this work evolved into Google’s voice recognition software still in use today.

Speech recognition continues to advance in countless languages. Especially for English, the systems are nearing perfection. They are fast, accurate, and require relatively little computer processing power.

In 2019, anybody can speak to a computer, though unless their hands are busy doing something else, most people prefer not to.

Mass Market Broadband Internet (DSL & Cable Modems)

Broadband definitions continually change, but in 2017 the US definition of broadband was 25 Mbps (megabits per second) downstream and 3 Mbps upstream. This is fast enough to stream music and movies, surf the web, and read blurbs on innowiki.

Background

Early internet users connected with slow dial-up modems. The last mass-produced dial-up modem ran at 56 Kbps, about 1/450th the speed of broadband. In other words, a file that takes a minute to download on broadband took about 450 minutes (7.5 hours) to download on the fastest dial-up modem.
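The arithmetic behind that comparison is sketched in the short C program below. The 56 Kbps and 25 Mbps figures come from the text; the file size is simply whatever downloads in one minute on broadband.

/* Compare dial-up and broadband download times for a file that takes
 * exactly one minute to download at the 2017 US broadband definition. */
#include <stdio.h>

int main(void) {
    double broadband_mbps = 25.0;     /* 25 Mbps downstream */
    double dialup_mbps    = 0.056;    /* 56 Kbps modem */

    /* Size, in megabits, of a file that takes one minute on broadband. */
    double file_megabits = broadband_mbps * 60.0;

    double dialup_minutes = file_megabits / dialup_mbps / 60.0;

    printf("Broadband: 1.0 minute\n");
    printf("Dial-up:   %.0f minutes (%.1f hours), about %.0fx slower\n",
           dialup_minutes, dialup_minutes / 60.0,
           broadband_mbps / dialup_mbps);
    return 0;
}

Run it and the dial-up figure comes out to roughly 446 minutes, the “about 450 minutes” cited above.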

Businesses, including the early independent internet service providers, connected via faster leased lines called T1s and T3s. T1s carry data at 1.5 Mbps and T3s at about 45 Mbps. T1s were extremely expensive in the late 1980s, and T3s even more so.

One major problem with T1s and T3s is that they require dedicated lines, and the vast majority of homes lacked such connections. This lack of infrastructure is the “last-mile problem”: figuring out how to economically bring high-speed internet from a nearby switching station the last mile to an individual house.

ADSL

Asymmetric Digital Subscriber Line (ADSL) was an early enabler of broadband and remains a popular option in regions with copper phone lines but without cable television. Bellcore engineer Joseph Lechleider invented DSL, yet he received little public recognition for it until his 2015 obituary.

Lechleider found that most users download data, pulling information from the internet, far more than they upload. Therefore, partitioning the line to prioritize downloading could greatly increase perceived speed.
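The sketch below illustrates that insight with invented numbers: given the same total line capacity, an asymmetric split finishes a typical download-heavy task faster than a symmetric one. The 8 Mbps capacity, the 7/1 split, and the file sizes are all hypothetical.

/* Why asymmetric partitioning helps: with a fixed total line capacity and
 * a download-heavy workload, devoting most of the capacity to the
 * downstream direction shortens the wait users actually notice. */
#include <stdio.h>

/* Time to pull a page or file down plus push a small request up. */
static double task_seconds(double down_megabits, double up_megabits,
                           double down_mbps, double up_mbps) {
    return down_megabits / down_mbps + up_megabits / up_mbps;
}

int main(void) {
    double down_megabits = 40.0;   /* e.g. a 5 MB page or file */
    double up_megabits   = 0.4;    /* e.g. a 50 KB request */

    /* Symmetric split of a hypothetical 8 Mbps line: 4 Mbps each way. */
    double sym  = task_seconds(down_megabits, up_megabits, 4.0, 4.0);
    /* Asymmetric (ADSL-style) split of the same line: 7 Mbps down, 1 up. */
    double asym = task_seconds(down_megabits, up_megabits, 7.0, 1.0);

    printf("Symmetric  (4/4 Mbps): %.1f seconds\n", sym);
    printf("Asymmetric (7/1 Mbps): %.1f seconds\n", asym);
    return 0;
}

With these made-up numbers, the symmetric line takes about 10.1 seconds and the asymmetric line about 6.1 seconds, even though both carry the same total capacity.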

Cable Modems

Rouzbeh Yassini theorized that data and television could coexist on the same cable; other engineers thought carrying both over the same line was impossible. He invented the cable modem and launched a company that Bay Networks acquired for $59 million in 1996, just as the Internet was becoming popular (at the time of the sale, there were only about 25,000 cable modems in use). He then joined Bay Networks as an employee.

Especially in the US, cable modems became wildly popular. In 2010, there were about 73 million US cable broadband subscribers; by 2018, that number had grown to about 100 million.

Mobile Phone

Mobile phones allow calls from anywhere within range of a tower. They vastly increase productivity and convenience, lower the risk of a missed call, and they’re fun. Mobile phones work by beaming voice (and, later, data) to a tower, seamlessly switching from tower to tower as the caller moves.

Cooper Creates the Mobile Phone

Battery-operated gadget maker Motorola invested $100 million between 1968 and 1983 to develop the mobile phone. Martin Cooper led the development effort. This culminated in the release of the $3,995 ($9,900 in 2019) Motorola DynaTAC 8000 in 1984. The world’s first mobile phone weighed 28 oz., offered about 20 minutes of talk time and took ten hours to charge. “The battery life wasn’t really a problem because you couldn’t hold that phone up for that long,” Cooper famously quipped.

The first public mobile call was from Cooper to Joel Engel, his competitor at AT&T who was also working on a mobile phone. In front of reporters, Cooper called Engel and said “Joel, this is Marty. I’m calling you from a cell phone, a real handheld portable cell phone.” Engel conceived of the idea for a cellular phone network, with switching from tower to tower, as a Bell Labs employee in 1970.

Mobile Mania

In 1980 consulting powerhouse McKinsey famously predicted there would be about 900,000 worldwide mobile subscribers by the year 2000. Instead, there were 109 million.

Due to mobile phones, Motorola’s revenue skyrocketed. A decade later, in 1994, revenues of $22 billion put the firm 23rd on the Fortune 500 list.

Finnish company Nokia overtook Motorola in 1997 by retooling for digital, rather than analog, phone calls. Rather than focus on digital mobile phones, Motorola bet on the Iridium satellite phone.

Motorola Stumbles

CEO Chris Galvin later focused on a small digital phone, the Razr, but was fired in 2003, before it launched, and replaced by Sun Microsystems COO Ed Zander. Galvin’s Razr proved a mega-hit, boosting Motorola’s market cap to $42 billion in 2004.

Eventually, Zander struck a deal with Steve Jobs’ Apple to release an iTunes-enabled phone, the Rokr, in 2005. Surprisingly, the iPod/iTunes phone flopped. However, as part of the development process, Motorola taught Apple about mobile phone technology and the mobile phone business. Simultaneously, Samsung adopted a blue-ocean strategy, manufacturing good-enough phones at lower cost.

Then, in 2007, Apple released the iPhone. Motorola had no smartphone, either in development or in the sales channel. In 2008, Carl Icahn purchased 6 percent of Motorola and demanded it be broken into parts and sold. Motorola kept trying to compete, building an early Android phone. By 2012, however, Samsung had built a better Android phone at lower cost.

On August 15, 2011, Motorola was sold to Google for $12.5 billion, a 63 percent premium over its then market capitalization. Twenty months later, Google sold what little remained of Motorola, besides the patents, to Lenovo for $2.9 billion.

Today, Motorola – innovator of the mobile phone – is essentially nothing more than a brand in the mobile phone world.

C Programming Language

Dennis Ritchie went on to shape multiple aspects of modern computing culture. Indeed, odd hours, obsessive screen time, sloppy dress, funky naming conventions, and – most importantly – the attributes tied to brilliant and useful code all trace back to Ritchie.

Somewhere between Ritchie, the software engineer straight-laced enough to gain employment at Bell Labs, and Ritchie, the bearded software guru, is Ritchie, the inventor of the most significant programming language ever created, C. Hinting at why he later favored interesting names, C is the successor to the language B, itself descended from BCPL. No sooner did Ritchie release C than it spread like kudzu. Eventually, every professional programmer knew C, even if it was not their preferred language.

C remains in wide use. It was succeeded by the ubiquitous C++, where ++ is the C notation for incrementing a variable by one. Java is arguably the next iteration; Java creator James Gosling likely avoided naming his language anything tied to F in the scheme of grades.

Besides being ubiquitous, C enabled program portability. Programmers wrote applications in C, which compilers then translated into the machine language each computer understood. This allowed programmers to write one program that ran on many different computers.
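As a minimal illustration of both points, the standard C program below compiles unchanged with any standard C compiler, whatever the underlying machine, and uses the ++ operator mentioned above. It is only a sketch of the portability idea, not anything drawn from Ritchie’s original code.

/* A minimal standard C program. Because it relies only on the standard
 * library, the same source compiles unchanged on very different machines;
 * each platform's compiler translates it into that machine's own code. */
#include <stdio.h>

int main(void) {
    int count = 0;
    count++;   /* the "++" operator that later named C++: increment by one */
    printf("count is now %d\n", count);
    return 0;
}

Compiled with, say, cc count.c, the same source file produces native machine code on whatever computer runs the compiler.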

Unix inventor Ken Thompson was also instrumental in the development of C.

Unix

Unix is a computer operating system. Among other things, it allows a computer to do many things at once. Derivatives of the original Unix include Linux, MacOS, and BSD. You’re reading this right now due to a server running Unix derivative Linux.

Background

Dennis Ritchie and Ken Thompson worked at Bell Labs. Thompson worked on a powerful but extremely complex operating system called Multics, which never entirely worked. Eventually, Bell Labs abandoned Multics but Thompson remained intrigued with the technology.

Thompson started developing a vastly simplified version on his own, using a stray DEC PDP-7. No sooner did he develop basic functionality than he was joined by his friend and collaborator, computer scientist Dennis Ritchie. Together, along with others who eventually joined, they finished the operating system.

Nobody remembers exactly who coined the name Unix but there was a broad consensus that the new operating system worked great. It was fast, reliable, stable, and relatively easy to program.

Unix Takes Off

Unix use grew organically. As Bell Labs purchased more DEC computers, operators chose to run Unix rather than DEC’s operating system, RT-11.

Version 2 of Unix included the C programming language, enabling the operating system to be more easily ported to various other computers. Because all software runs on an operating system, this enabled cross-platform software development. Specifically, software developers could write to an operating system that ran on a multitude of computers, a concept called system portability. This became vital in later years.

Unix also started an unspoken but very real war between the “hippies” – represented by Ritchie and Thompson – and the suits, represented by Intel, HP, DEC, and most other computer companies. The success of Unix led to the idea that wild-eyed, long-haired software engineers, working with little or no planning, could produce something as good as or better than the work of their buttoned-down, professionally dressed counterparts.

Thompson & Ritchie

Charge-Coupled Device (CCD)

1969

William Boyle
George Smith

“We are the ones who started this profusion of little cameras all over the world.”

William Boyle

Charge-Coupled Devices (CCDs) are a special type of chip that reacts to light. They are inexpensive and especially useful in imaging, enabling digital photography and video.

William Boyle and George Smith worked for Bell Labs. Their research on “Charge Bubble Devices” advanced slowly. Eventually, they were told that within a week they would be reassigned to the more promising memory division.

In 1969, faced with losing their funding and lab, the two brainstormed. In one hour, they outlined the idea which became the CCD, the sensor driving all early digital video and photography. Forty years later their one-hour invention won them the 2009 Nobel Prize.

The device itself is made up of “charge bubbles”: a series of metal-oxide-semiconductor (MOS) capacitors. Incoming photons are converted into electrons, which accumulate in the capacitors. Boyle and Smith figured out how to quickly read out each row of capacitors to create still and moving pictures.

The MOS sensor, like most imaging chips, captures only brightness, not color. However, by filtering for red, green, and blue and then combining the results electronically, the technology produces color images. Cameras then typically compress and enhance the images into smaller, more manageable files, usually JPEGs.
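The toy C program below sketches the readout idea described above: accumulated charge is shifted down one row per clock tick into a readout register, measured, and the rows are reassembled into an image. The 4×4 “sensor” and its charge values are invented for illustration.

/* Toy model of CCD readout: charge collected in a grid of capacitors is
 * shifted out one row at a time into a readout register and measured.
 * The 4x4 "sensor" and its electron counts are invented. */
#include <stdio.h>

#define ROWS 4
#define COLS 4

int main(void) {
    /* Electrons accumulated in each photosite after an exposure. */
    int sensor[ROWS][COLS] = {
        { 12, 40,  90, 200 },
        { 15, 60, 120, 180 },
        { 10, 55, 110, 150 },
        {  8, 30,  70, 130 },
    };
    int image[ROWS][COLS];

    /* Each cycle: measure the bottom row, then shift every row down. */
    for (int shift = 0; shift < ROWS; shift++) {
        for (int c = 0; c < COLS; c++)
            image[ROWS - 1 - shift][c] = sensor[ROWS - 1][c]; /* read out */
        for (int r = ROWS - 1; r > 0; r--)                    /* shift down */
            for (int c = 0; c < COLS; c++)
                sensor[r][c] = sensor[r - 1][c];
        for (int c = 0; c < COLS; c++)
            sensor[0][c] = 0;                                 /* top row now empty */
    }

    /* Print the reconstructed brightness image. */
    for (int r = 0; r < ROWS; r++) {
        for (int c = 0; c < COLS; c++)
            printf("%4d", image[r][c]);
        printf("\n");
    }
    return 0;
}

A real sensor repeats this for millions of photosites, and a color camera runs the same readout behind red, green, and blue filters before combining the three channels.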

Boyle and Smith’s CCD soon became ubiquitous, famously used to capture and send images from the moon back to earth where other equipment would have been too heavy and bulky.

Other Bell Labs researchers are critical, arguing that Boyle and Smith stumbled upon CCDs accidentally and did not conceive of, or use, the technology for imaging. “They wouldn’t know an imaging device if it stared them in the face,” said Eugene Gordon who, along with Michael Tompsett, applied CCDs to imaging.

“I can clearly remember the day that George and I developed the concept for the CCD,” answered Boyle. “It’s pretty firm in my mind. I’ve documentation that disproves most of what they’re saying, and the rest of what they’re saying is not at all logical.” Smith simply called them “liars.”

CCD chips powered early video equipment and digital cameras. However, CMOS chips eventually overtook CCDs for most imaging applications. The advantage of CMOS is that each pixel can be read directly from the chip, rather than line by line, making it faster and ultimately less expensive.

Communication Satellite

“This satellite must be high enough to carry messages from both sides of the world, which is, of course, an essential requirement for peace…”

President Kennedy, July 23, 1962

Communication satellites bring the world closer together, enabling instant communication. They are especially important for broadcasting, beaming information from one central place to many others. For example, they can transmit a TV signal to many stations at once. Communication satellites also have important military uses.

The first communication satellite, SCORE, was launched in 1958. It was more a public relations ploy than a useful instrument, transmitting a Christmas message before burning up in reentry after a few days.

“Telstar 1,” created by Bell Labs, was the first real communication satellite. It used solar panels for power and relayed television, telephone, and telegraph signals.

Like early intercontinental telephone lines, Telstar 1 was expensive and flaky. It worked poorly when it functioned, which was seldom. However, the satellite did manage limited live television broadcasts between the US and Europe. Eventually, on February 21, 1963, about seven months after launch, the satellite permanently failed.

The satellite’s lead engineer was John Robinson Pierce. He went on to work as a professor and researcher at Caltech, the Jet Propulsion Laboratory, and Stanford.

LASER

LASERs allow light to be intensely focused. There are many uses, from reading digital media at low power to cutting at higher power. Countless applications rely on LASER technology.

In 1957, Arthur Schawlow and Charles Townes, of Bell Labs, worked on an infrared LASER, called an “optical MASER.” They patented the invention in 1958.

In 1960, Theodore Maiman of Hughes Research Lab created the first visible-light LASER. It was based on Schawlow and Townes’s (and, arguably, Gould’s) work.

Gordon Gould also claimed credit for inventing the LASER, having notarized earlier notes he had shown to Schawlow. However, Gould could not obtain patents at the time because the US government classified the work, and Gould, a communist sympathizer, could not get the clearance to pursue it. He spent 30 years fighting for laser patents and eventually won. By then, however, he had sold 80% of the royalties; even so, he collected several million dollars.

Townes shared the 1964 Nobel Prize in Physics for his work on the MASER and LASER. Schawlow shared the 1981 Nobel Prize in Physics for his contributions to laser spectroscopy.